A New Bayesian LASSO

ISBA 2012
University of Alabama at Birmingham
Regularization methods for coefficient estimation and variable selection in linear regression, especially those based on the LASSO of Tibshirani (1996), have enjoyed wide applicability in recent years. The LASSO estimate corresponds to the posterior mode when independent Laplace priors are assigned to the regression coefficients. Park and Casella (2008) proposed the Bayesian LASSO, using scale mixtures of normal priors for the coefficients with independent exponential priors on their variances. In this paper, we propose a new hierarchical representation of the Bayesian LASSO. The new representation is based on the characterization of the Laplace distribution as a scale mixture of uniform distributions, the mixing distribution being a particular gamma distribution. Our method inherits the major advantages of hierarchical models. First, as a Bayesian method, the estimates are easily interpreted, simplifying statistical inference. Second, it provides valid standard errors as well as interval estimates (Bayesian credible intervals). We consider a fully Bayesian treatment that leads to a new, efficient Gibbs sampler for MCMC estimation with tractable full conditional distributions. Furthermore, we develop a fast expectation-maximization (EM) algorithm to estimate the coefficients, extending the latent weighted least squares (LWLS) estimator (Walker et al., 1997) to the penalized regression setting. Finally, we show how our approach can be extended to generalized linear models and to Bayesian versions of other regularization methods, e.g., the bridge estimator and the adaptive LASSO. We compare the performance of our method with the existing Bayesian LASSO as well as with its frequentist counterpart using simulations and real data.
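The characterization underlying the hierarchy can be checked directly by simulation: if s ~ Gamma(shape 2, rate λ) and β | s ~ Uniform(−s, s), then marginally β ~ Laplace(0, 1/λ). A minimal Monte Carlo sketch (the rate λ and sample size are illustrative choices, not values from the paper):

```python
# Monte Carlo check of the scale-mixture-of-uniforms representation:
# s ~ Gamma(2, rate=lam), beta | s ~ Uniform(-s, s)  =>  beta ~ Laplace(0, 1/lam).
# lam and n are assumed demo values, not taken from the paper.
import numpy as np

rng = np.random.default_rng(0)
lam = 1.5          # Laplace rate parameter (illustrative)
n = 200_000        # number of Monte Carlo draws

s = rng.gamma(shape=2.0, scale=1.0 / lam, size=n)  # Gamma(2, rate=lam) mixing
beta = rng.uniform(-s, s)                          # Uniform(-s, s) given s

# For Laplace(0, scale=1/lam): E|beta| = 1/lam and Var(beta) = 2/lam**2.
print("mean |beta|:", abs(beta).mean(), "theory:", 1.0 / lam)
print("variance:   ", beta.var(), "theory:", 2.0 / lam**2)
```

The empirical mean absolute value and variance match the Laplace(0, 1/λ) targets, confirming that integrating the uniform against the Gamma(2, λ) mixing density recovers the Laplace prior.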
LASSO, Bayesian LASSO, Scale Mixture of Uniforms, Hierarchical Model, Penalized Regression
Nengjun Yi
Wednesday, June 27, 2012 - 18:30