## Bayesian News Feeds

### Two-sample Bayesian Nonparametric Hypothesis Testing

**Chris C. Holmes**,

**François Caron**,

**Jim E. Griffin**,

**David A. Stephens**.

**Source:** Bayesian Analysis, Volume 10, Number 2, 297--320.

**Abstract:**

In this article we describe Bayesian nonparametric procedures for two-sample hypothesis testing. Namely, given two sets of samples $\mathbf{y}^{(1)}\stackrel{\text{iid}}{\sim}F^{(1)}$ and $\mathbf{y}^{(2)}\stackrel{\text{iid}}{\sim}F^{(2)}$, with $F^{(1)},F^{(2)}$ unknown, we wish to evaluate the evidence for the null hypothesis $H_{0}:F^{(1)}\equiv F^{(2)}$ versus the alternative $H_{1}:F^{(1)}\neq F^{(2)}$. Our method is based upon a nonparametric Pólya tree prior centered either subjectively or using an empirical procedure. We show that the Pólya tree prior leads to an analytic expression for the marginal likelihood under the two hypotheses and hence an explicit measure of the probability of the null, $\mathrm{Pr}(H_{0}\mid\{\mathbf{y}^{(1)},\mathbf{y}^{(2)}\})$.
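The test described in the abstract can be sketched in a few lines. In this illustrative version (the centering distribution $G=N(0,1)$, the truncation depth, and $c=1$ are arbitrary choices, not the paper's recommendations), data are mapped to $(0,1)$ through $G$'s CDF, and the Pólya tree marginal likelihood is a product of Beta-binomial terms over a truncated dyadic partition with $\mathrm{Beta}(cj^{2},cj^{2})$ branch priors at level $j$; the Bayes factor compares the pooled marginal against the product of separate marginals.

```python
import math

def norm_cdf(x):
    """CDF of the standard normal centering distribution G."""
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def log_beta(a, b):
    return math.lgamma(a) + math.lgamma(b) - math.lgamma(a + b)

def pt_log_marginal(u, c=1.0, depth=8):
    """Log marginal likelihood of points u in (0,1) under a canonical Polya tree."""
    total = 0.0
    stack = [(0.0, 1.0, 1, list(u))]          # (lo, hi, level, points in bin)
    while stack:
        lo, hi, j, pts = stack.pop()
        if j > depth or not pts:
            continue
        mid = 0.5 * (lo + hi)
        left = [x for x in pts if x < mid]
        right = [x for x in pts if x >= mid]
        a = c * j * j                         # Beta(a, a) branch prior at level j
        total += log_beta(a + len(left), a + len(right)) - log_beta(a, a)
        stack.append((lo, mid, j + 1, left))
        stack.append((mid, hi, j + 1, right))
    return total

def prob_null(y1, y2, c=1.0, depth=8, prior_h0=0.5):
    """Pr(H0 | y1, y2): pooled marginal likelihood vs. product of separate ones."""
    u1 = [norm_cdf(x) for x in y1]
    u2 = [norm_cdf(x) for x in y2]
    log_m0 = pt_log_marginal(u1 + u2, c, depth)                   # H0: F1 = F2
    log_m1 = pt_log_marginal(u1, c, depth) + pt_log_marginal(u2, c, depth)
    odds = math.exp(log_m0 - log_m1) * prior_h0 / (1.0 - prior_h0)
    return odds / (1.0 + odds)
```

Two nearly identical samples should yield a larger `prob_null` than two samples separated by a large location shift.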

### Sensitivity Analysis for Bayesian Hierarchical Models

**Małgorzata Roos**,

**Thiago G. Martins**,

**Leonhard Held**,

**Håvard Rue**.

**Source:** Bayesian Analysis, Volume 10, Number 2, 321--349.

**Abstract:**

Prior sensitivity examination plays an important role in applied Bayesian analyses. This is especially true for Bayesian hierarchical models, where the interpretability of parameters in the deeper layers of the hierarchy becomes challenging. In addition, lack of information together with identifiability issues may mean that the prior distributions for such models have an undesired influence on the posterior inference. Despite its importance, prior sensitivity analysis is currently carried out mostly through informal approaches, which require repeated re-fits of the model with ad hoc modifications of the base prior parameter values. Formal approaches to prior sensitivity analysis have remained unpopular in practice, mainly because of their high computational cost and the absence of software implementations. We propose a novel formal approach to prior sensitivity analysis that is fast and accurate: it quantifies sensitivity without the need for a model re-fit. Through a series of examples we show how our approach can be used to detect high prior sensitivity of some parameters, as well as identifiability issues, in possibly over-parametrized Bayesian hierarchical models.
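The general idea of a formal sensitivity quantity can be conveyed with a toy computation: measure how far a perturbed prior moves the posterior using a distance between the two posterior densities. The Hellinger distance on a grid of Gaussian densities below is an illustrative stand-in only, not the paper's calibrated, re-fit-free measure.

```python
import math

def normal_grid_density(mu, sd, grid):
    """N(mu, sd^2) density tabulated on a grid of points."""
    return [math.exp(-0.5 * ((x - mu) / sd) ** 2) / (sd * math.sqrt(2.0 * math.pi))
            for x in grid]

def hellinger(p, q, dx):
    """Hellinger distance between two densities tabulated on the same grid."""
    bc = sum(math.sqrt(a * b) for a, b in zip(p, q)) * dx  # Bhattacharyya coefficient
    return math.sqrt(max(0.0, 1.0 - bc))
```

Identical posteriors give a distance near 0; essentially disjoint ones approach 1, so the value flags parameters whose posterior reacts strongly to a prior perturbation.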

### Scaling It Up: Stochastic Search Structure Learning in Graphical Models

**Hao Wang**.

**Source:** Bayesian Analysis, Volume 10, Number 2, 351--377.

**Abstract:**

Gaussian concentration graph models and covariance graph models are two classes of graphical models that are useful for uncovering latent dependence structures among multivariate variables. In the Bayesian literature, graphs are often determined through the use of priors over the space of positive definite matrices with fixed zeros, but these methods present daunting computational burdens in large problems. Motivated by the superior computational efficiency of continuous shrinkage priors for regression analysis, we propose a new framework for structure learning that is based on continuous spike and slab priors and uses latent variables to identify graphs. We discuss model specification, computation, and inference for both concentration and covariance graph models. The new approach produces reliable estimates of graphs and efficiently handles problems with hundreds of variables.
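The core device, a continuous spike-and-slab prior with latent edge indicators, can be illustrated in isolation (this is not Wang's full block Gibbs sampler; the values `v0`, `v1`, and `pi_edge` are illustrative). Each off-diagonal precision element gets a narrow spike $N(0,v_{0}^{2})$ or a wide slab $N(0,v_{1}^{2})$, and the latent indicator's conditional inclusion probability is a two-component ratio:

```python
import math

def normal_pdf(x, sd):
    return math.exp(-0.5 * (x / sd) ** 2) / (sd * math.sqrt(2.0 * math.pi))

def edge_inclusion_prob(omega_ij, v0=0.05, v1=1.0, pi_edge=0.5):
    """P(z_ij = 1 | omega_ij): does this precision entry look like an edge?"""
    slab = pi_edge * normal_pdf(omega_ij, v1)        # wide component: edge present
    spike = (1.0 - pi_edge) * normal_pdf(omega_ij, v0)  # narrow component: no edge
    return slab / (slab + spike)
```

Entries near zero are attributed to the spike (no edge), while clearly nonzero entries get inclusion probability near one; in the full sampler this conditional drives the latent-variable graph updates.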

### Predictions Based on the Clustering of Heterogeneous Functions via Shape and Subject-Specific Covariates

**Garritt L. Page**,

**Fernando A. Quintana**.

**Source:** Bayesian Analysis, Volume 10, Number 2, 379--410.

**Abstract:**

We consider a study of players employed by teams in the National Basketball Association, where the units of observation are functional curves: realizations of production measurements taken over the course of a player’s career. The observed functional output displays large amounts of between-player heterogeneity, in the sense that some individuals produce curves that are fairly smooth while others are (much) more erratic. We argue that this variability in curve shape is a feature that can be exploited to guide decision making, learn about the processes under study, and improve prediction. In this paper we develop a methodology that takes advantage of this feature when clustering functional curves. Individual curves are flexibly modeled using Bayesian penalized B-splines, while a hierarchical structure allows the clustering to be guided by the smoothness of the individual curves. In a sense, the hierarchical structure balances the desire to fit individual curves well against the need to produce meaningful clusters that guide prediction. We seamlessly incorporate available covariate information into the nonparametric clustering of curves through a product partition model prior on a random partition of individuals. Clustering based on curve smoothness and subject-specific covariate information is particularly important in the two types of prediction of interest: completing a partially observed curve of an active player, and predicting the entire career curve of a player yet to play in the National Basketball Association.

### Approximate Bayesian Computation by Modelling Summary Statistics in a Quasi-likelihood Framework

**Stefano Cabras**,

**Maria Eugenia Castellanos Nueda**,

**Erlis Ruli**.

**Source:** Bayesian Analysis, Volume 10, Number 2, 411--439.

**Abstract:**

Approximate Bayesian Computation (ABC) is a useful class of methods for Bayesian inference when the likelihood function is computationally intractable. In practice, the basic ABC algorithm may be inefficient in the presence of discrepancy between prior and posterior, so more elaborate methods, such as ABC with Markov chain Monte Carlo (ABC-MCMC), should be used. However, designing a proposal density for MCMC is a delicate issue, and especially difficult in the ABC setting, where the likelihood is intractable. We discuss an automatic proposal distribution useful for ABC-MCMC algorithms. This proposal is inspired by the theory of quasi-likelihood (QL) functions and is obtained by modelling the distribution of the summary statistics as a function of the parameters. Essentially, given a real-valued vector of summary statistics, we reparametrize the model by means of a regression function of the statistics on the parameters, obtained by sampling from the original model in a pilot-run simulation study. The QL theory is well established for a scalar parameter, and we show that when the conditional variance of the summary statistic is assumed constant, the QL has a closed-form normal density. This idea for constructing proposal distributions is extended to non-constant variance and to real-valued parameter vectors. The method is illustrated by several examples and by an application to a real problem in population genetics.
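For readers new to ABC-MCMC, here is the generic algorithm the paper's quasi-likelihood proposal plugs into, on a toy model (estimating a normal mean, with the sample mean as summary statistic). A plain Gaussian random walk stands in for the QL-derived proposal; the tolerance, step size, and the choice to start the chain at the observed summary are all illustrative.

```python
import random
import statistics

def simulate_summary(theta, rng, n=50):
    """Sample mean of n draws from N(theta, 1): the summary statistic."""
    return statistics.fmean(rng.gauss(theta, 1.0) for _ in range(n))

def abc_mcmc(s_obs, n_iter=2000, eps=0.1, step=0.5, seed=1):
    rng = random.Random(seed)
    theta = s_obs                      # start near the data to skip burn-in
    chain = []
    for _ in range(n_iter):
        prop = theta + rng.gauss(0.0, step)   # symmetric random-walk proposal
        # Flat prior + symmetric proposal: the MH ratio is 1, so we accept
        # exactly when the simulated summary lands within eps of the observed
        # one; otherwise the chain stays at the current value.
        if abs(simulate_summary(prop, rng) - s_obs) < eps:
            theta = prop
        chain.append(theta)
    return chain
```

The chain concentrates near parameter values whose simulated summaries match the observed one; a better-shaped proposal, such as the QL-based one, mainly improves acceptance rates and mixing.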

### Bayesian Analysis, Volume 10, Number 2 (2015)

Contents:

**Zhihua Zhang**, **Jin Li**. Compound Poisson Processes, Latent Shrinkage Priors and Bayesian Nonconvex Penalization. 247--274.

**Stanley I. M. Ko**, **Terence T. L. Chong**, **Pulak Ghosh**. Dirichlet Process Hidden Markov Multiple Change-point Model. 275--296.

**Chris C. Holmes**, **François Caron**, **Jim E. Griffin**, **David A. Stephens**. Two-sample Bayesian Nonparametric Hypothesis Testing. 297--320.

**Małgorzata Roos**, **Thiago G. Martins**, **Leonhard Held**, **Håvard Rue**. Sensitivity Analysis for Bayesian Hierarchical Models. 321--349.

**Hao Wang**. Scaling It Up: Stochastic Search Structure Learning in Graphical Models. 351--377.

**Garritt L. Page**, **Fernando A. Quintana**. Predictions Based on the Clustering of Heterogeneous Functions via Shape and Subject-Specific Covariates. 379--410.

**Stefano Cabras**, **Maria Eugenia Castellanos Nueda**, **Erlis Ruli**. Approximate Bayesian Computation by Modelling Summary Statistics in a Quasi-likelihood Framework. 411--439.

### Overall Objective Priors

**James O. Berger**,

**Jose M. Bernardo**,

**Dongchu Sun**.

**Source:** Bayesian Analysis, Volume 10, Number 1, 189--221.

**Abstract:**

In multi-parameter models, reference priors typically depend on the parameter or quantity of interest, and it is well known that this is necessary to produce objective posterior distributions with optimal properties. There are, however, many situations where one is simultaneously interested in all the parameters of the model or, more realistically, in functions of them that include aspects such as prediction, and it would then be useful to have a single objective prior that could safely be used to produce reasonable posterior inferences for all the quantities of interest. In this paper, we consider three methods for selecting a single objective prior and study, in a variety of problems including the multinomial problem, whether or not the resulting prior is a reasonable overall prior.
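For concreteness on the multinomial case mentioned in the abstract: one standard candidate for a single overall prior is the Jeffreys prior $\mathrm{Dirichlet}(1/2,\ldots,1/2)$. It is conjugate, so every cell probability has the closed-form posterior mean computed below. (This is a textbook baseline, not the paper's specific recommendation.)

```python
def multinomial_posterior_means(counts, a=0.5):
    """Posterior means of cell probabilities under a Dirichlet(a, ..., a) prior.

    With counts x_1..x_m and n = sum(x_i), the posterior is
    Dirichlet(x_1 + a, ..., x_m + a), so E[p_i | x] = (x_i + a) / (n + m*a).
    """
    n, m = sum(counts), len(counts)
    return [(x + a) / (n + m * a) for x in counts]
```

Note that an empty cell still receives positive posterior mass, one practical reason a single default prior can serve many quantities of interest at once.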

### Comment on Article by Berger, Bernardo, and Sun

**Siva Sivaganesan**.

**Source:** Bayesian Analysis, Volume 10, Number 1, 223--226.

### Comment on Article by Berger, Bernardo, and Sun

**Manuel Mendoza**,

**Eduardo Gutiérrez-Peña**.

**Source:** Bayesian Analysis, Volume 10, Number 1, 227--231.

### Comment on Article by Berger, Bernardo, and Sun

**Judith Rousseau**.

**Source:** Bayesian Analysis, Volume 10, Number 1, 233--236.

### Comment on Article by Berger, Bernardo, and Sun

**Gauri Sankar Datta**,

**Brunero Liseo**.

**Source:** Bayesian Analysis, Volume 10, Number 1, 237--241.

### Rejoinder

**James O. Berger**,

**Jose M. Bernardo**,

**Dongchu Sun**.

**Source:** Bayesian Analysis, Volume 10, Number 1, 243--246.

### Bayesian Analysis, Volume 10, Number 1 (2015)

Contents:

**Trevelyan J. McKinley**, **Michelle Morters**, **James L. N. Wood**. Bayesian Model Choice in Cumulative Link Ordinal Regression Models. 1--30.

**Fumiyasu Komaki**. Asymptotic Properties of Bayesian Predictive Densities When the Distributions of Data and Target Variables are Different. 31--51.

**Harold Bae**, **Thomas Perls**, **Martin Steinberg**, **Paola Sebastiani**. Bayesian Polynomial Regression Models to Fit Multiple Genetic Models for Quantitative Traits. 53--74.

**Dimitris Fouskakis**, **Ioannis Ntzoufras**, **David Draper**. Power-Expected-Posterior Priors for Variable Selection in Gaussian Linear Models. 75--107.

**A. Mohammadi**, **E. C. Wit**. Bayesian Structure Learning in Sparse Gaussian Graphical Models. 109--138.

**Cyr Emile M’lan**, **Ming-Hui Chen**. Objective Bayesian Inference for Bilateral Data. 139--170.

**Fernando V. Bonassi**, **Mike West**. Sequential Monte Carlo with Adaptive Weights for Approximate Bayesian Computation. 171--187.

**James O. Berger**, **Jose M. Bernardo**, **Dongchu Sun**. Overall Objective Priors. 189--221.

**Siva Sivaganesan**. Comment on Article by Berger, Bernardo, and Sun. 223--226.

**Manuel Mendoza**, **Eduardo Gutiérrez-Peña**. Comment on Article by Berger, Bernardo, and Sun. 227--231.

**Judith Rousseau**. Comment on Article by Berger, Bernardo, and Sun. 233--236.

**Gauri Sankar Datta**, **Brunero Liseo**. Comment on Article by Berger, Bernardo, and Sun. 237--241.

**James O. Berger**, **Jose M. Bernardo**, **Dongchu Sun**. Rejoinder. 243--246.