On the Stability of Feature Selection in the Presence of Feature Correlations

Abstract

Feature selection is central to modern data science. The ‘stability’ of a feature selection algorithm refers to the sensitivity of its choices to small changes in the training data — in effect, the robustness of the chosen feature set. This paper considers the estimation of stability when we expect strong pairwise correlations between features, otherwise known as feature redundancy. We demonstrate that existing measures are inappropriate in this setting, as they systematically underestimate the true stability, giving an overly pessimistic view of a feature set. We propose a new statistical measure which overcomes this issue and generalises previous work.
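To make the setting concrete, the kind of existing measure the abstract critiques can be sketched as the mean pairwise set similarity (here Jaccard, one common choice) between feature subsets selected on resampled training data. The selector, data, and parameter names below are illustrative assumptions, not the paper's method; note how two near-duplicate features can swap in and out across bootstraps, dragging the score down even though the selections are functionally equivalent — the pessimism the abstract describes.

```python
import numpy as np

def select_top_k(X, y, k):
    # Illustrative selector: score each feature by |correlation| with the target.
    scores = np.abs([np.corrcoef(X[:, j], y)[0, 1] for j in range(X.shape[1])])
    return set(np.argsort(scores)[-k:])

def jaccard(a, b):
    # Set-overlap similarity between two selected feature subsets.
    return len(a & b) / len(a | b)

def pairwise_jaccard_stability(X, y, k=3, n_boot=20, seed=0):
    """Mean pairwise Jaccard similarity of feature sets chosen on bootstraps.

    A stability estimate of the 'existing measure' kind: it treats any
    difference between subsets as instability, even when the differing
    features are highly correlated (redundant) copies of each other.
    """
    rng = np.random.default_rng(seed)
    n = X.shape[0]
    sets = []
    for _ in range(n_boot):
        idx = rng.integers(0, n, n)          # bootstrap resample of the rows
        sets.append(select_top_k(X[idx], y[idx], k))
    sims = [jaccard(sets[i], sets[j])
            for i in range(n_boot) for j in range(i + 1, n_boot)]
    return float(np.mean(sims))

# Synthetic data with a redundant feature pair: feature 1 is a noisy copy of
# feature 0, so the selector may pick either across bootstraps.
rng = np.random.default_rng(42)
n = 300
X = rng.normal(size=(n, 8))
X[:, 1] = X[:, 0] + 0.05 * rng.normal(size=n)   # near-duplicate of feature 0
y = X[:, 0] + X[:, 2] + X[:, 3] + 0.5 * rng.normal(size=n)

print(pairwise_jaccard_stability(X, y))          # value in [0, 1]
```

Because the measure only compares feature indices, a swap between features 0 and 1 counts fully against stability; a correlation-aware measure, as the paper proposes, would not penalise it.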

Publication
European Conference on Machine Learning and Principles and Practice of Knowledge Discovery in Databases (ECML-PKDD). Acceptance rate 130/734 (18%)
