RT - Journal Article
T1 - An Overview of the New Feature Selection Methods in Finite Mixture of Regression Models
JF - JIRSS
YR - 2011
JO - JIRSS
VO - 10
IS - 2
UR - http://jirss.irstat.ir/article-1-164-en.html
SP - 201
EP - 235
K1 - EM algorithm
K1 - mixture models
K1 - mixture of regression models
K1 - penalized likelihood
K1 - regularization
K1 - variable selection
AB - Variable (feature) selection has attracted much attention in contemporary statistical learning and recent scientific research, mainly due to rapid advances in modern technology that allow scientists to collect data of unprecedented size and complexity. One type of statistical problem in such applications involves modeling an output variable as a function of a small subset of a large number of features. In certain applications, the data samples may come from multiple subpopulations; in these cases, selecting the correct predictive features (variables) for each subpopulation is crucial. Classical best subset selection methods are computationally too expensive for many modern statistical applications. Over the last decade, new variable selection methods have been developed to handle large numbers of variables; they are designed to simultaneously select important variables and estimate their effects in a statistical model. In this article, we present an overview of recent developments in theory, methods, and implementations for the variable selection problem in finite mixture of regression models.
LA - eng
UL - http://jirss.irstat.ir/article-1-164-en.html
ER -