Volume 10, Issue 2 (November 2011)                   JIRSS 2011, 10(2): 201-235

Khalili A. An Overview of the New Feature Selection Methods in Finite Mixture of Regression Models. JIRSS. 2011; 10(2): 201-235.
URL: http://jirss.irstat.ir/article-1-164-en.html
Abstract:
Variable (feature) selection has attracted much attention in contemporary statistical learning and recent scientific research, mainly due to rapid advances in modern technology that allow scientists to collect data of unprecedented size and complexity. One common statistical problem in such applications is modeling an output variable as a function of a small subset of a large number of features. In some applications, the data may even come from multiple subpopulations, and in these cases selecting the correct predictive features (variables) for each subpopulation is crucial. Classical best-subset selection methods are computationally too expensive for many modern statistical applications. Over the last decade, new variable selection methods have been developed to handle large numbers of variables; they are designed to simultaneously select important variables and estimate their effects in a statistical model. In this article, we present an overview of recent developments in theory, methods, and implementations for the variable selection problem in finite mixture of regression models.
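To illustrate the setting the abstract describes, the sketch below simulates data from two subpopulations, each with its own sparse regression coefficient vector, and applies an L1-penalized (lasso) fit within each subpopulation to recover the relevant variables. This is a simplified, hypothetical example: the component labels, coefficient values, and penalty level are all assumptions for illustration, and the methods surveyed in the article handle the harder case where subpopulation membership is unknown (typically via penalized mixture likelihoods and EM-type algorithms).

```python
import numpy as np
from sklearn.linear_model import Lasso

rng = np.random.default_rng(0)

# Simulate n observations on p candidate features.
n, p = 200, 10
X = rng.normal(size=(n, p))

# Two subpopulations with different sparse coefficient vectors
# (only 2 of the 10 features are active in each component).
beta1 = np.zeros(p); beta1[[0, 1]] = [3.0, -2.0]
beta2 = np.zeros(p); beta2[[2, 3]] = [2.5, 1.5]
labels = rng.integers(0, 2, size=n)  # known labels, for simplicity
y = np.where(labels == 0, X @ beta1, X @ beta2) + rng.normal(scale=0.5, size=n)

# L1-penalized regression within each (here, known) subpopulation:
# the penalty shrinks irrelevant coefficients to exactly zero.
supports = {}
for k in (0, 1):
    fit = Lasso(alpha=0.1).fit(X[labels == k], y[labels == k])
    supports[k] = set(np.flatnonzero(np.abs(fit.coef_) > 1e-6))
    print(f"component {k}: selected features {sorted(supports[k])}")
```

With strong signals and moderate noise, the selected sets should contain the true active features ({0, 1} for the first component, {2, 3} for the second), showing how penalized estimation performs selection and estimation in one step.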
Full-Text [PDF 1033 kb]

Received: 2011/11/7 | Accepted: 2015/09/12 | Published: 2011/11/15

Rights and permissions
This work is licensed under a Creative Commons Attribution-NonCommercial 4.0 International License.

© 2015 All Rights Reserved | Journal of The Iranian Statistical Society