Bayesian News Feeds

keep the werewolves at Bayes…

Xian's Og - Sat, 2014-02-01 19:14

Following a light post by Rasmus Bååth on choosing a mascot for Bayesian statistics (even though the comments did not remain that light!), suggesting puppies because of John Kruschke's book, I commented that werewolves would be more appropriate! (Just to be perfectly clear, I do not think Bayesian analysis needs a mascot, and I even object to the use of Reverend Bayes' somewhat controversial picture.) To which Rasmus retaliated by sending me a customised cup with the cover of John Kruschke's book. Rising to the challenge, I designed my own cup by Googling a werewolf image and adding the text "keep the werewolves at Bayes… by sticking to your prior beliefs". In retrospect, I should instead have used "to keep the werewolves at Bayes… run, little Markov chain, run!". Maybe another cup is in the making!


Filed under: Kids, pictures, Statistics, University life Tagged: customised cup, mascot, puppies, werewolf
Categories: Bayesian Bloggers

my week at War[wick]

Xian's Og - Fri, 2014-01-31 19:14

This was a most busy and profitable week in Warwick as, in addition to meeting with local researchers and students on a wide range of questions and projects, giving an extended seminar to MASDOC students, attending as many seminars as humanly possible (!), and preparing for a 5k race by running in the Warwickshire countryside (in the dark and in the rain), I received visits from Kerrie Mengersen, Judith Rousseau and Jean-Michel Marin, with whom I made some progress on papers we are writing together. In particular, Jean-Michel and I wrote the skeleton of a paper we (still) plan to submit to COLT 2014 next week. And Judith, Kerrie and I drafted new if paradoxical connections between empirical likelihood and model selection. Jean-Michel and Judith also gave talks at the CRiSM seminar, Jean-Michel presenting the latest developments on the convergence of our AMIS algorithm, and Judith summarising several papers on the analysis of empirical Bayes methods in non-parametric settings.


Filed under: pictures, Running, Statistics, Travel, Uncategorized Tagged: ABC, AMIS, Bayesian asymptotics, COLT2014, empirical Bayes methods, empirical likelihood, MASDOC, University of Warwick, Warwickshire, Zeeman building
Categories: Bayesian Bloggers

Bayesian indirect inference

Xian's Og - Thu, 2014-01-30 19:14

The paper with the above title by Chris Drovandi and Tony Pettitt was presented by Chris Drovandi at a seminar in Warwick last week (when I was not there yet), but the talk made me aware of the paper. It is mostly a review of existing work on ABC and indirect inference, which was already considered (as an alternative) in Fearnhead's and Prangle's Read Paper, with simulations to illustrate the differences. In particular, it seems to draw from the recent preprint by Gleim and Pigorsch (a preprint that I need to read now!), which establishes a classification of indirect inference versions of ABC.

Indirect inference and its connections with ABC have been on my radar for quite a while, even though I never went farther than thinking about it, as it was developed by colleagues (and former teachers) at CREST, Christian Gouriéroux, Alain Monfort, and Eric Renault, in the early 1990s. Since it relies on an auxiliary model, which is somewhat arbitrary, indirect inference is rather delicate to incorporate within a Bayesian framework. In an ABC setting, indirect inference provides summary statistics (as estimators or scores) and possibly a distance. In their comparison, Drovandi and Pettitt analyse the impact of increasing the pseudo-sample size in the simulated data. Rather unsurprisingly, the performance of ABC when comparing true data of size n with synthetic data of size m>n is not great. However, there exists another way of reducing the variance in the synthetic data, namely by repeating simulations of samples of size n and averaging the indicators of proximity, resulting in a frequency rather than a 0-1 estimator. See e.g. Del Moral et al. (2009). In this sense, increasing the computing power reduces the variability of the ABC approximation. (And I thus fail to see the full relevance of Result 1.)
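A minimal sketch of this frequency estimator, replacing the 0-1 proximity indicator by an average over repeated pseudo-samples of the same size n as the data (the function names, the toy normal model, and the mean-based distance below are illustrative assumptions, not taken from the paper):

```python
import numpy as np

rng = np.random.default_rng(0)

def abc_frequency_weight(theta, x_obs, simulate, distance, eps, m=100):
    """Estimate the ABC acceptance probability of theta by averaging
    proximity indicators over m pseudo-samples of size len(x_obs)."""
    hits = 0
    for _ in range(m):
        x_sim = simulate(theta, len(x_obs), rng)
        if distance(x_sim, x_obs) < eps:
            hits += 1
    return hits / m  # a frequency in [0, 1] rather than a 0-1 indicator

# toy example: normal location model, sample mean as summary statistic
simulate = lambda theta, n, rng: rng.normal(theta, 1.0, size=n)
distance = lambda x, y: abs(x.mean() - y.mean())

x_obs = rng.normal(0.5, 1.0, size=50)
w = abc_frequency_weight(0.5, x_obs, simulate, distance, eps=0.1, m=200)
```

Increasing m only reduces the Monte Carlo variability of the weight, which is the sense in which more computing power improves the ABC approximation here.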

The authors make several assumptions of unicity that I find somewhat unclear. While assuming that the MLE for the auxiliary model is unique could make sense (Assumption 2), I do not understand why this estimator (of the auxiliary parameter) is indexed by the generating (model) parameter θ: it should only depend on the generated/simulated data x. The notion of a noisy mapping is just confusing to me. The assumption that the auxiliary score function is unique at the auxiliary MLE, both for the observed data and for a simulated dataset (Assumption 3), proceeds from the same spirit. I however fail to see why it matters so much. If the auxiliary MLE is the result of a numerical optimisation algorithm, the numerical algorithm may return local modes. This only adds to the approximation error of the ABC-I schemes. Given that the paper does not produce convergence results for those schemes, unless the auxiliary model contains the genuine model, such theoretical assumptions do not feel that necessary. The paper uses normal mixtures as an auxiliary model: the multimodality of this model should not be such a hindrance (and reordering is transparent, i.e. does not “reduce the flexibility of the auxiliary model”, and does not “increase the difficulty of implementation”, as stated p.16).

The paper concludes from a numerical study that the Bayesian indirect inference of Gallant and McCulloch (2009) is superior. This approach simply replaces the true likelihood with the auxiliary-model likelihood evaluated at the auxiliary MLE derived from a simulated dataset. (This is somehow similar to our use of the empirical likelihood in the PNAS paper.) It is however moderated by the cautionary provision that “the auxiliary model [should] describe the data well”. As for empirical likelihood, I would suggest resorting to this Bayesian indirect inference as a benchmark, providing a quick if possibly dirty reference against which to test more elaborate ABC schemes. Or other approximations, like empirical likelihood or Wood’s synthetic likelihood.
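A rough sketch of this surrogate-likelihood idea, assuming (as an illustration only, not as the authors' actual implementation) a normal auxiliary model with closed-form MLE: fit the auxiliary model to data simulated under θ, then evaluate the fitted auxiliary likelihood at the observed data.

```python
import numpy as np

rng = np.random.default_rng(1)

def aux_mle(x):
    # auxiliary model: normal with unknown mean and variance (closed-form MLE)
    return x.mean(), x.var()

def aux_loglik(x, mu, var):
    """Log-likelihood of data x under the fitted auxiliary normal model."""
    return -0.5 * len(x) * np.log(2 * np.pi * var) - 0.5 * ((x - mu) ** 2).sum() / var

def bii_loglik(theta, x_obs, simulate, m=10):
    """Surrogate log-likelihood: fit the auxiliary model on data simulated
    under theta, then evaluate it at the observed data."""
    x_sim = simulate(theta, m * len(x_obs), rng)  # larger pseudo-sample stabilises the MLE
    mu, var = aux_mle(x_sim)
    return aux_loglik(x_obs, mu, var)

# toy usage: the true model is itself normal, so the surrogate peaks near the truth
simulate = lambda theta, n, rng: rng.normal(theta, 1.0, size=n)
x_obs = rng.normal(0.3, 1.0, size=100)
ll_near = bii_loglik(0.3, x_obs, simulate)
ll_far = bii_loglik(3.0, x_obs, simulate)
```

Plugging bii_loglik into a standard Metropolis-Hastings sampler then yields the quick-and-possibly-dirty benchmark posterior suggested above; the quality of the benchmark hinges on the auxiliary model describing the data well.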


Filed under: Statistics, University life Tagged: ABC, auxiliary model, indirect inference, score function, University of Warwick
Categories: Bayesian Bloggers

Bayesian Adaptive Smoothing Splines using Stochastic Differential Equations

Yu Ryan Yue, Daniel Simpson, Finn Lindgren and Håvard Rue
Categories: Bayesian Analysis

Bayesian Analysis of Functional-Coefficient Autoregressive Heteroscedastic Model

Xin-Yuan Song, Jing-Heng Cai, Xiang-Nan Feng, and Xue-Jun Jiang
Categories: Bayesian Analysis

Local-Mass Preserving Prior Distributions for Nonparametric Bayesian Models

Juhee Lee, Steven N. MacEachern, Yiling Lu and Gordon B. Mills
Categories: Bayesian Analysis

Matrix-Variate Dirichlet Process Priors with Applications

Zhihua Zhang, Dakan Wang, Guang Dai, and Michael I. Jordan
Categories: Bayesian Analysis

A Bayesian Nonparametric Approach for Time Series Clustering

Luis E. Nieto-Barajas and Alberto Contreras-Cristan
Categories: Bayesian Analysis

Regularized Bayesian Estimation of Generalized Threshold Regression Models

Friederike Greb, Tatyana Krivobokova, Axel Munk, and Stephan von Cramon-Taubadel
Categories: Bayesian Analysis

Zero Variance Differential Geometric Markov Chain Monte Carlo Algorithms

Theodore Papamarkou, Antonietta Mira and Mark Girolami
Categories: Bayesian Analysis

Chain Event Graphs for Informed Missingness

Lorna M. Barclay, Jane L. Hutton and Jim Q. Smith
Categories: Bayesian Analysis

On Divergence Measures Leading to Jeffreys and Other Reference Priors

Ruitao Liu, Arijit Chakrabarti, Tapas Samanta, Jayanta K. Ghosh, and Malay Ghosh
Categories: Bayesian Analysis

Statistical frontiers (course in Warwick)

Xian's Og - Wed, 2014-01-29 19:14

Today I am teaching my yearly class at Warwick, a short introduction to computational techniques for approximating Bayes factors, aimed at MASDOC and PhD students in the Statistical Frontiers seminar and gathering several talks from past years. Here are my slides:


Filed under: Books, Statistics, University life Tagged: Bayes factors, MASDOC, Monte Carlo Statistical Methods, Statistical frontiers, University of Warwick
Categories: Bayesian Bloggers