Volume 10, Number 2 (November 2011) | JIRSS 2011, 10(2): 201-235





Khalili A. An Overview of the New Feature Selection Methods in Finite Mixture of Regression Models. JIRSS. 2011; 10(2): 201-235.
URL: http://jirss.irstat.ir/article-1-164-en.html

Abstract:
Variable (feature) selection has attracted much attention in contemporary statistical learning and recent scientific research, largely because advances in modern technology allow scientists to collect data of unprecedented size and complexity. One type of statistical problem in such applications is concerned with modeling an output variable as a function of a small subset of a large number of features. In certain applications, the data samples may even come from multiple subpopulations, in which case selecting the correct predictive features (variables) for each subpopulation is crucial. Classical best subset selection methods are computationally too expensive for many modern statistical applications. Over the last decade, new variable selection methods have been developed that handle large numbers of variables by simultaneously selecting important variables and estimating their effects in a statistical model. In this article, we present an overview of recent developments in theory, methods, and implementations for the variable selection problem in finite mixture of regression models.
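
To make the penalized approach sketched in the abstract concrete, the following toy example (written in Python/NumPy purely as an illustration, not taken from the article) runs an EM algorithm for a K-component Gaussian mixture of linear regressions in which the M-step applies a lasso-type soft-thresholding update to each component's coefficients. Because the L1 penalty can shrink coefficients exactly to zero, variable selection and estimation happen simultaneously, which is the idea the abstract refers to. The function names, the fixed penalty level lam, and the single coordinate-descent sweep per M-step are simplifying assumptions of this sketch, not the specific methods or tuning procedures reviewed in the article.

    import numpy as np
    from scipy.stats import norm

    def soft_threshold(z, gamma):
        # Soft-thresholding operator used in coordinate-descent lasso updates.
        return np.sign(z) * np.maximum(np.abs(z) - gamma, 0.0)

    def penalized_em_fmr(X, y, K=2, lam=0.1, n_iter=100, seed=0):
        # Toy penalized EM for a K-component Gaussian mixture of linear regressions.
        # The L1 penalty level lam is fixed here for illustration; in practice it
        # would be chosen by a data-driven criterion (e.g., a BIC-type rule).
        rng = np.random.default_rng(seed)
        n, p = X.shape
        pi = np.full(K, 1.0 / K)                    # mixing proportions
        beta = rng.normal(scale=0.1, size=(K, p))   # component regression coefficients
        sigma = np.full(K, np.std(y))               # component noise standard deviations

        for _ in range(n_iter):
            # E-step: posterior probability that observation i belongs to component k.
            dens = np.stack(
                [pi[k] * norm.pdf(y, loc=X @ beta[k], scale=sigma[k]) for k in range(K)],
                axis=1,
            )
            tau = dens / dens.sum(axis=1, keepdims=True)

            # M-step: weighted, L1-penalized updates for each component.
            for k in range(K):
                w = tau[:, k]
                pi[k] = w.mean()
                r = y - X @ beta[k]                 # full residual for component k
                for j in range(p):                  # one coordinate-descent sweep
                    r += X[:, j] * beta[k, j]       # partial residual excluding feature j
                    z = np.sum(w * X[:, j] * r)
                    denom = np.sum(w * X[:, j] ** 2) + 1e-12
                    beta[k, j] = soft_threshold(z, lam) / denom   # exact zeros give selection
                    r -= X[:, j] * beta[k, j]
                resid = y - X @ beta[k]
                sigma[k] = max(np.sqrt(np.sum(w * resid ** 2) / w.sum()), 1e-3)
        return pi, beta, sigma

On simulated data with sparse component coefficients, a call such as penalized_em_fmr(X, y, K=2, lam=0.5) should return beta matrices with exact zeros in the irrelevant positions for a suitably chosen lam; how to tune the penalty and what theoretical guarantees the resulting estimators enjoy are among the issues surveyed in the article.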
Full-Text [PDF 1033 kb]
Subject: 60: Probability theory and stochastic processes
Received: 2011/11/7 | Accepted: 2015/09/12

