
Group Member: Peng Zhao, Department of Statistics, University of California, Berkeley.
Bin Yu, Professor, Department of Statistics, University of California, 367 Evans Hall #3860, Berkeley, CA 94720-3860. Homepage: www.stat.berkeley.edu/~binyu. Research interests: machine learning (boosting and support vector machines), classification and unmixing in remote sensing.
We make this assessment by investigating Lasso's model selection consistency under linear models, that is, when given a large amount of data, under what conditions Lasso does and does not choose the true model. The setting is the linear regression model $Y_n = X_n \beta_n + \varepsilon_n$, where $\varepsilon_n = (\varepsilon_1, \ldots, \varepsilon_n)^T$ is a vector of i.i.d. random variables with mean $0$ and variance $\sigma^2$, and $Y_n$ is an $n \times 1$ response vector.
Sign consistency is stronger than the usual selection consistency, which only requires the zeros to be matched, but not the signs. The reason for using sign consistency is technical: it is needed for proving the necessity of the Irrepresentable Condition (to be defined) for Lasso's model selection consistency.
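To make this concrete, here is a small simulation sketch (our own illustration in Python with numpy and scikit-learn, not code from the paper). It checks the Strong Irrepresentable Condition, which bounds $|C_{21} C_{11}^{-1} \mathrm{sign}(\beta_{(1)})|$ elementwise by 1 on the empirical design, and then verifies that Lasso recovers the true sign pattern; the dimensions, correlation level, noise scale, and regularization weight are illustrative assumptions.

```python
# Illustrative simulation (not from the paper): check the Strong
# Irrepresentable Condition on an empirical design, then verify that
# Lasso recovers the signs of the true coefficients.
# Assumed setup: n, p, q, the 0.3 correlation, and alpha = 0.1 are
# arbitrary illustrative choices.
import numpy as np
from sklearn.linear_model import Lasso

rng = np.random.default_rng(0)
n, p, q = 1000, 6, 3                      # samples, predictors, relevant predictors
beta = np.array([3.0, -2.0, 1.5, 0.0, 0.0, 0.0])

# Equicorrelated design: unit variances, pairwise correlation 0.3.
cov = 0.3 * np.ones((p, p)) + 0.7 * np.eye(p)
X = rng.multivariate_normal(np.zeros(p), cov, size=n)
y = X @ beta + rng.normal(size=n)

# Strong Irrepresentable Condition: |C21 C11^{-1} sign(beta_(1))| < 1
# elementwise, where C = X'X/n is split into relevant (first q columns)
# and irrelevant blocks.
C = X.T @ X / n
C11, C21 = C[:q, :q], C[q:, :q]
irrep = np.abs(C21 @ np.linalg.solve(C11, np.sign(beta[:q])))
print("Irrepresentable Condition holds:", bool(np.all(irrep < 1)))

# With the condition satisfied, Lasso at a suitable penalty level matches
# the sign pattern: nonzeros keep their signs, the rest are exactly zero.
fit = Lasso(alpha=0.1).fit(X, y)
print("signs match:", np.array_equal(np.sign(fit.coef_), np.sign(beta)))
```

When the condition fails, for example under strong correlation between relevant and irrelevant predictors, the same script typically reports a sign mismatch at every penalty level, which is the necessity direction of the result.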
In this paper, we propose the BLasso algorithm that ties the FSF (ε-Boosting) algorithm with the Lasso method that minimizes the L1 penalized L2 loss. BLasso is derived as a coordinate descent method with a fixed step size applied to the general Lasso loss function (L1 penalized convex loss). It consists of both a forward step and a backward step.
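As one concrete reading of the forward/backward scheme, here is a minimal BLasso sketch for squared-error loss (our own illustration, not the authors' reference implementation); the step size eps, tolerance xi, and the stopping rule when the implied regularization weight reaches zero are illustrative choices.

```python
# BLasso sketch (squared-error loss): fixed-step coordinate moves on the
# L1-penalized loss, alternating forward (fitting) and backward (shrinking)
# steps. eps, xi, and the stopping rule are illustrative assumptions.
import numpy as np

def blasso(X, y, eps=0.01, xi=1e-6, steps=2000):
    n, p = X.shape
    loss = lambda b: np.sum((y - X @ b) ** 2) / (2 * n)
    beta = np.zeros(p)

    # Initial forward step: best single coordinate move of size +/- eps.
    l0 = loss(beta)
    trials = [(loss((s * eps) * np.eye(p)[j]), j, s)
              for j in range(p) for s in (1, -1)]
    l_new, j, s = min(trials)
    beta[j] = s * eps
    lam = (l0 - l_new) / eps              # implied regularization weight

    for _ in range(steps):
        # Backward step: undo eps of an active coordinate if that lowers
        # the penalized loss  L(beta) + lam * ||beta||_1  by more than xi.
        best = None
        for j in np.flatnonzero(beta):
            cand = beta.copy()
            cand[j] -= np.sign(beta[j]) * eps
            delta = loss(cand) - loss(beta)   # penalty drops by lam * eps
            if delta - lam * eps < -xi and (best is None or delta < best[0]):
                best = (delta, cand)
        if best is not None:
            beta = best[1]
            continue
        # Otherwise a forward step: best +/- eps move, relaxing lam if needed.
        cands = [(loss(b), b) for j in range(p) for s in (1, -1)
                 for b in [beta + (s * eps) * np.eye(p)[j]]]
        l_new, cand = min(cands, key=lambda t: t[0])
        lam = min(lam, (loss(beta) - l_new) / eps)
        beta = cand
        if lam <= 0:                      # path fully relaxed; stop
            break
    return beta, lam
```

Forward steps mimic FSF/boosting moves that reduce the empirical loss, while backward steps shrink active coefficients when that pays off under the current penalty; as the step size shrinks, the traced path is intended to approximate the Lasso regularization path.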
[0909.0411] The Composite Absolute Penalties Family for Grouped and Hierarchical Variable Selection
September 2, 2009. In this paper, we combine different norms including L1 to form an intelligent penalty in order to add side information to the fitting of a regression or classification model to obtain reasonable estimates.
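As a toy sketch of how such composite penalties are assembled (our own illustration; the groups and the norm exponent gamma below are assumptions, not values from the paper), the penalty sums groupwise L_gamma norms, and nesting one group inside another encodes hierarchy:

```python
# Evaluating a composite absolute penalty: an L1-style sum of groupwise
# L_gamma norms. Group definitions and gamma here are illustrative.
import numpy as np

def cap_penalty(beta, groups, gamma=2.0):
    """Sum of L_gamma norms over (possibly overlapping) index groups.

    With gamma > 1, coefficients inside a group tend to enter or leave
    the model together; nesting one group inside another encodes
    hierarchy, e.g. an interaction activating only after its main effects.
    """
    return sum(np.linalg.norm(beta[list(g)], ord=gamma) for g in groups)

beta = np.array([1.0, -0.5, 0.0, 2.0])
groups = [(0, 1), (2, 3), (0, 1, 2, 3)]   # the last group nests the first two
print(cap_penalty(beta, groups))
```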
On Model Selection Consistency of Lasso - Journal of Machine Learning Research
Peng Zhao, Bin Yu. Year: 2006, Volume: 7, Issue: 90, Pages: 2541−2563. Sparsity or parsimony of statistical models is crucial for their proper interpretations, as in sciences and social sciences. Model selection is a commonly used method to find such models, but usually involves a computationally heavy combinatorial search.
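To see why that combinatorial search is heavy, here is a small best-subset illustration (our own sketch, not from the paper): it scores every nonempty subset of p predictors by BIC, a count of $2^p - 1$ models that doubles with each added variable, whereas Lasso replaces the search with a single convex optimization.

```python
# Exhaustive best-subset selection scored by BIC: 2^p - 1 least-squares
# fits (255 at p = 8, about 10^9 at p = 30). Data here are synthetic.
from itertools import combinations
import numpy as np

rng = np.random.default_rng(1)
n, p = 50, 8
X = rng.normal(size=(n, p))
y = 2.0 * X[:, 0] - X[:, 1] + rng.normal(size=n)

def rss(cols):
    # Residual sum of squares of the least-squares fit on a column subset.
    coef, *_ = np.linalg.lstsq(X[:, cols], y, rcond=None)
    return np.sum((y - X[:, cols] @ coef) ** 2)

best = min(
    (n * np.log(rss(list(s)) / n) + len(s) * np.log(n), s)
    for k in range(1, p + 1)
    for s in combinations(range(p), k)
)
print("best subset by BIC:", best[1])
```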