## Xi'an's Og

### Overfitting Bayesian mixture models with an unknown number of components

**D**uring my Czech vacations, Zoé van Havre, Nicole White, Judith Rousseau, and Kerrie Mengersen posted on arXiv a paper on overfitting mixture models to estimate the number of components. This is directly related to Judith and Kerrie’s 2011 paper and to Zoé’s PhD topic. The paper also returns to the vexing (?) issue of label switching! I very much like the paper, and not only because the authors are good friends, but also because it brings a solution to an approach I briefly attempted with Marie-Anne Gruet in the early 1990s, just before learning about the reversible jump MCMC algorithm of Peter Green at a workshop in Luminy and considering we were not going to “beat the competition”! Hence not publishing the output of our over-fitted Gibbs samplers, which were nicely emptying extra components… It also brings a rebuke of a later assertion of mine at an ICMS workshop on mixtures, where I defended the notion that over-fitted mixtures could not be detected, a notion that was severely disputed by David MacKay…

What is so fantastic in Rousseau and Mengersen (2011) is that a simple constraint on the Dirichlet prior on the mixture weights suffices to guarantee that, asymptotically, superfluous components will empty out and signal that they are truly superfluous! The authors here combine the over-fitted mixture with a tempering strategy, which seems somewhat redundant (the number of extra components being a sort of temperature) but eliminates the need for fragile RJMCMC steps. Label switching is obviously even more of an issue with a larger number of components, and identifying empty components seems to require an absence of label switching for some components to remain empty!
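The mechanism at work can be glimpsed by simulating the symmetric Dirichlet prior itself. Here is a minimal Python sketch (a toy illustration of my own, not the authors' code): with ten components, a concentration parameter well below one makes most prior weights essentially zero, which is what gives superfluous components the chance to empty out a posteriori.

```python
import random

def dirichlet(alpha, k, rng):
    """One draw from the symmetric Dirichlet(alpha, ..., alpha) on the k-simplex,
    via normalised Gamma(alpha, 1) variates."""
    g = [rng.gammavariate(alpha, 1.0) for _ in range(k)]
    s = sum(g)
    return [x / s for x in g]

def mean_near_empty(alpha, k=10, threshold=0.01, n_draws=2000, seed=42):
    """Average number of weights below `threshold` under Dirichlet(alpha, ..., alpha)."""
    rng = random.Random(seed)
    return sum(sum(w < threshold for w in dirichlet(alpha, k, rng))
               for _ in range(n_draws)) / n_draws

sparse = mean_near_empty(0.05)  # small concentration: most weights near zero
flat = mean_near_empty(1.0)     # uniform on the simplex: few weights near zero
print(sparse, flat)
```

With the small concentration, most of the ten weights fall below 0.01 on average, against around one under the uniform prior; the Rousseau and Mengersen condition on the Dirichlet hyperparameter exploits precisely this sparsity-favouring behaviour.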

When reading through the paper, I came upon the condition that *only* the priors of the weights are allowed to vary between temperatures. Distinguishing the weights from the other parameters does make perfect sense, as some representations of a mixture work without those weights. Still I feel a bit uncertain about the fixed prior constraint, even though I can see the rationale in not allowing for complete freedom in picking those priors. More fundamentally, I am less and less happy with independent identical or exchangeable priors on the components.

Our own recent experience with almost-zero-weight mixtures (with Judith, Kaniav, and Kerrie) suggests not relying solely on a Gibbs sampler there, as it exhibits poor mixing. And even poorer label switching. The current paper does not seem to meet the same difficulties, maybe thanks to (prior) tempering.

The paper proposes a strategy called *Zswitch* to resolve label switching, which amounts to identifying a MAP for each possible number of components and applying a subsequent relabelling, even though I do not entirely understand how the permutation is constructed. I wonder in particular about the cost of the relabelling.

Filed under: Statistics Tagged: component of a mixture, Czech Republic, Gibbs sampling, label switching, Luminy, mixture estimation, Peter Green, reversible jump, unknown number of components

### Is Jeffreys’ prior unique?

*“A striking characterisation showing the central importance of Fisher’s information in a differential framework is due to Cencov (1972), who shows that it is the only invariant Riemannian metric under symmetry conditions.”* N. Polson, PhD Thesis, University of Nottingham, 1988

**F**ollowing a discussion on Cross Validated, I wondered whether the affirmation that Jeffreys’ prior is *the only prior construction rule that remains invariant* under arbitrary (if smooth enough) reparameterisation actually holds. In the discussion, Paulo Marques mentioned Nikolaj Nikolaevič Čencov’s book, *Statistical Decision Rules and Optimal Inference*, a Russian book from 1972, of which I had not heard previously and which seems too theoretical [from Paulo’s comments] to explain why this rule would be the sole one. As I kept looking for Čencov’s references on the Web, I found Nick Polson’s thesis and the above quote. So maybe Nick could tell us more!

However, my uncertainty about the uniqueness of Jeffreys’ rule stems from the following: if I decide on a favourite or reference parametrisation (as Jeffreys indirectly does when selecting the parametrisation associated with a constant Fisher information) and on a prior derived from the sampling distribution for this parametrisation, I have produced a parametrisation-invariant principle. Possibly silly and uninteresting from a Bayesian viewpoint, but nonetheless invariant.

Filed under: Books, Statistics, University life Tagged: cross validated, Harold Jeffreys, Jeffreys priors, NIck Polson, Nikolaj Nikolaevič Čencov, Russian mathematicians

### market static

*[Heard in the local market, while queuing for cheese:]*

– You took too much!

– Maybe, but remember your sister is staying for two days.

– My sister…, as usual, she will take a big serving and leave half of it!

– Yes, but she will make sure to finish the bottle of wine!

Filed under: Kids, Travel Tagged: farmers' market, métro static

### trans-dimensional nested sampling and a few planets

**T**his morning, in the train to Dauphine (a train even more delayed than usual!), I read a recent arXival of Brendon Brewer and Courtney Donovan. Entitled Fast Bayesian inference for exoplanet discovery in radial velocity data, the paper suggests associating Matthew Stephens’ (2000) birth-and-death MCMC approach with nested sampling to infer the number N of exoplanets in an exoplanetary system. The paper is somewhat sparse in its description of the suggested approach, but states that the birth-and-death moves involve adding a planet with parameters simulated from the prior and removing a planet at random, both being accepted under a likelihood constraint associated with nested sampling. I actually wonder if this is the birth-and-death version of Peter Green’s (1995) RJMCMC rather than the continuous-time birth-and-death process version of Matthew’s…
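To fix ideas on what such a constrained move could look like, here is a toy Python sketch. Everything in it is hypothetical (the flat-amplitude "signal", the uniform prior, the fixed threshold); it only illustrates the mechanics of adding or removing a component drawn from the prior and accepting under a nested-sampling-style hard likelihood constraint, not Brewer and Donovan's actual model.

```python
import random

def log_lik(y, comps):
    """Toy log-likelihood: the observed signal is the sum of the component
    amplitudes (a crude stand-in for summed planetary signals), with unit
    Gaussian noise."""
    s = sum(comps)
    return -0.5 * sum((yi - s) ** 2 for yi in y)

def birth_death_step(y, comps, threshold, rng, max_comps=15):
    """One birth-or-death move, accepted only when the proposal satisfies the
    hard constraint log_lik > threshold (as in constrained nested sampling)."""
    prop = list(comps)
    if rng.random() < 0.5 and len(prop) < max_comps:
        prop.append(rng.random())           # birth: amplitude drawn from the U(0,1) prior
    elif prop:
        prop.pop(rng.randrange(len(prop)))  # death: remove one component at random
    return prop if log_lik(y, prop) > threshold else comps

rng = random.Random(1)
y = [2.0 + rng.gauss(0, 0.1) for _ in range(20)]  # data generated with total signal 2
comps = [0.5]                                     # start with a single component
threshold = log_lik(y, comps)                     # a fixed toy likelihood threshold
for _ in range(5000):
    comps = birth_death_step(y, comps, threshold, rng)
print(len(comps), round(sum(comps), 2))
```

The chain wanders over model dimensions while never dropping below the likelihood threshold, which is exactly the hard-constraint acceptance rule described above.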

*“The traditional approach to inferring N also contradicts fundamental ideas in Bayesian computation. Imagine we are trying to compute the posterior distribution for a parameter a in the presence of a nuisance parameter b. This is usually solved by exploring the joint posterior for a and b, and then only looking at the generated values of a. Nobody would suggest the wasteful alternative of using a discrete grid of possible a values and doing an entire Nested Sampling run for each, to get the marginal likelihood as a function of a.”*

This criticism has merit when there is a huge number of possible values of N, even though I see no fundamental contradiction with my ideas about Bayesian computation. However, it is more debatable when there are only a few possible values for N, given that the exploration of the augmented space by an RJMCMC algorithm is often very inefficient, in particular when the proposed parameters are generated from the prior. All the more so when nested sampling is involved and simulations are run under the likelihood constraint! In the astronomy examples given in the paper, N never exceeds 15… Furthermore, by merging all N’s together, it is unclear how the evidences associated with the various values of N can be computed. At least, those are not reported in the paper.

The paper also omits to provide the likelihood function, so I do not completely understand where “label switching” occurs therein. My first impression was that this is not a mixture model; however, if the observed signal (from an exoplanetary system) is the sum of N signals corresponding to N planets, label switching makes more sense.

Filed under: Books, Statistics, Travel, University life Tagged: birth-and-death process, Chamonix, exoplanet, label switching, métro, nested sampling, Paris, RER B, reversible jump, Université Paris Dauphine

### ice-climbing Niagara Falls

**I** had missed the news that a frozen portion of the Niagara Falls had been ice-climbed, by Will Gadd on Jan. 27. This is obviously quite impressive given the weird and dangerous nature of the ice there, which is mostly frozen foam from the nearby waterfall. (I once climbed an easy route on such ice at the Chutes Montmorency, near Québec City, and it felt quite strange…) He even had a special ice hook designed for that climb, as he did not trust the usual ice screws. Will Gadd has however climbed much more difficult routes, like Helmcken Falls in British Columbia, which may be the hardest mixed route in the world!

Filed under: Mountains, pictures Tagged: British Columbia, Canada, Helmcken Falls, ice climbing, Niagara Falls, Niagara-on-the-Lake, USA

### Ubuntu issues

**I**t may be that weekends are the wrong time to tamper with computer OS… Last Sunday, I noticed my Bluetooth icon had a “turn off” option and since I only use Bluetooth for my remote keyboard and mouse when in Warwick, I turned it off, thinking I would turn it on again next week. This alas led to a series of problems, maybe as a coincidence since I also updated the Kubuntu 14.04 system over the weekend.

- I cannot turn Bluetooth on again! My keyboard and mouse are no longer recognised or detected. No Bluetooth adapter is found by the system settings. Similarly, *sudo modprobe bluetooth* shows nothing. I have installed a new interface called Blueman, but to no avail. The fix suggested on forums, running *rfkill unblock bluetooth*, does not work either… Actually, *rfkill list all* only returns the wireless device, which is working fine.
- My webcam vanished as well. It was working fine before the weekend.
- Accessing some webpages, including all New York Times articles, now takes forever on Firefox! Somewhat less so on Chrome.

Is this a curse of sorts?!

As an aside, I also found this week that I cannot update Adobe Reader from version 9 to version 11, as Adobe no longer supports Linux versions… Another bummer, if one wants to stick to Acrobat.

**Update [03/02]**

Thanks to Ingmar and Thomas, I got both my problems solved! The Bluetooth restarted after I shut down my *unplugged* computer, in connection with a USB over-current protection. And Thomas figured out that my keyboard had a key to turn the webcam off and on, a key that I had pressed when trying to restart the Bluetooth device. Et voilà!

Filed under: Kids, Linux Tagged: Bluetooth, Kubuntu, Linux, Ubuntu 14.04

### je suis Avijit Roy

**আমরা শোকাহত**

**কিন্তু আমরা অপরাজিত**

[“We mourn but we are not defeated”]

Filed under: Uncategorized Tagged: atheism, Bangladesh, blogging, fanaticism, fascism, Mukto-Mona

### Unbiased Bayes for Big Data: Path of partial posteriors [a reply from the authors]

*[Here is a reply by Heiko Strathmann to my post of yesterday. Along with the slides of a talk in Oxford mentioned in the discussion.]*

Thanks for putting this up, and thanks for the discussion. Christian, as already exchanged via email, here are some answers to the points you make.

First of all, we don’t claim a free lunch — and are honest with the limitations of the method (see negative examples). Rather, we make the point that we *can* achieve computational savings in certain situations — essentially exploiting redundancy (what Michael called “tall” data in his note on subsampling & HMC) leading to fast convergence of posterior statistics.

Dan is of course correct in noticing that if the posterior statistic does not converge nicely (i.e., all data counts), then the truncation time is “mammoth”. It is also correct that it might be questionable to aim for an unbiased Bayesian method in the presence of such redundancies. However, these are the two extreme perspectives on the topic. The message that we want to get across is that there is a trade-off between these extremes. In particular, the GP examples illustrate this nicely, as we are able to reduce MSE in a regime where posterior statistics have *not* yet stabilised, see e.g. figure 6.

*“And the following paragraph is further confusing me as it seems to imply that convergence is not that important thanks to the de-biasing equation.”*

To clarify, the paragraph refers to the *additional* convergence issues induced by alternative Markov transition kernels of mini-batch-based full posterior sampling methods by Welling, Bardenet, Dougal & co. For example, Firefly MC’s mixing time is increased by a factor of 1/q, where qN is the mini-batch size. Mixing of stochastic gradient Langevin gets worse over time. This is *not* true for our scheme, as we can use standard transition kernels. It is still essential for the partial posterior Markov chains to converge (*if* MCMC is used). However, as this is a well-studied problem, we omit the topic in our paper and refer to standard tools for diagnosis. All this is independent of the debiasing device.

**About MCMC convergence.**

Yesterday in Oxford, Pierre Jacob pointed out that if MCMC is used for estimating partial posterior statistics, the overall result is *not* unbiased. We had a nice discussion of how this bias could be addressed via a two-stage debiasing procedure: debiasing the MC estimates as described in the “Unbiased Monte Carlo” paper by Agapiou et al., and then plugging those into the path estimators, though it is not (yet) so clear how (and whether) this would work in our case.

In the current version of the paper, we do not address the bias present due to MCMC (we have a paragraph on this in Section 3.2). Rather, we start from the premise that full posterior MCMC samples are a gold standard. Furthermore, the framework we study is not necessarily linked to MCMC: it could be that the posterior expectation is available in closed form, but simply costly in N. In this case, we can still unbiasedly estimate this posterior expectation; see the GP regression example.

*“The choice of the tail rate is thus quite delicate to validate against the variance constraints (2) and (3).”*

It is true that the choice is crucial in order to control the variance. However, provided that partial posterior expectations converge at a rate n^{-β}, with n the size of a mini-batch, the computational complexity can be reduced to N^{1-α} (α&lt;β) without the variance exploding. There is a trade-off: the faster the posterior expectations converge, the more computation can be saved; β is in general unknown, but can be roughly estimated with the “direct approach”, as we describe in the appendix.

**About the “direct approach”**

It is true that for certain classes of models and φ functionals, the direct averaging of expectations for increasing data sizes yields good results (see the log-normal example), and we state this. However, the GP regression experiments show that direct averaging gives a larger MSE than debiasing does. This is exactly the trade-off mentioned earlier.

I also wonder what people think about the comparison to stochastic variational inference (GP for Big Data), as this hasn’t appeared in discussions yet. It is the comparison to “non-unbiased” schemes that Christian and Dan asked for.

Filed under: Statistics, University life Tagged: arXiv, bias vs. variance, big data, convergence assessment, de-biasing, Firefly MC, MCMC, Monte Carlo Statistical Methods, telescoping estimator, unbiased estimation

### Unbiased Bayes for Big Data: Path of partial posteriors

*“Data complexity is sub-linear in N, no bias is introduced, variance is finite.”*

**H**eiko Strathmann, Dino Sejdinovic and Mark Girolami have arXived a few weeks ago a paper on the use of a telescoping estimator to achieve an unbiased estimator of a Bayes estimator relying on the entire dataset, while using only a small proportion of the dataset. The idea is that a convergent sequence of estimators φt can be turned into an unbiased estimator of its limit by a random truncation time T, in that

\[\hat\varphi=\sum_{t=1}^{T} \frac{\varphi_t-\varphi_{t-1}}{\mathbb{P}(T\ge t)}\qquad(\varphi_0=0)\]

is indeed unbiased. In a “Big Data” framework, the components φt are MCMC versions of posterior expectations based on a proportion αt of the data, and the truncation time T cannot go beyond the index where αt=1. The authors further propose to replicate this unbiased estimator R times on R parallel processors. They further claim a reduction in the computing cost that makes it sub-linear in N. However, the gain in computing time comes at the price of a higher variance than for the full MCMC solution:

*“It is clear that running an MCMC chain on the full posterior, for any statistic, produces more accurate estimates than the debiasing approach, which by construction has an additional intrinsic source of variance. This means that if it is possible to produce even only a single MCMC sample (…), the resulting posterior expectation can be estimated with less expected error. It is therefore not instructive to compare approaches in that region.”*

I first got a “free lunch” impression when reading the paper, namely that it sounded like using a random stopping rule was enough to achieve unbiasedness and overcome large-data jams. This is not the message of the paper, but I remain both intrigued by the possibilities unbiasedness offers *and* bemused by the claims therein, for several reasons:

- the above estimator requires computing T MCMC (partial) estimators φt in parallel. All of those estimators have to be associated with Markov chains in a stationary regime, and they are all associated with independent chains. While addressing the convergence of a single chain, the paper does not truly cover the *simultaneous* convergence assessment of a group of T parallel MCMC sequences. And the paragraph below further confused me, as it seemed to imply that convergence is not that important thanks to the de-biasing equation. In fact, further discussion with the authors (!) led me to understand that this relates to the existing alternatives for handling large data, like Firefly Monte Carlo: convergence to stationarity remains essential (and somewhat problematic) for all the partial estimators.
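For concreteness, here is a toy Python sketch of the debiasing device in its standard single-term form with a geometric truncation time (an assumption on my part; the paper's exact truncation distribution may differ). The sequence φt below is a deterministic stand-in for converging partial posterior statistics, and averaging many replicates recovers the limit while most replicates only ever evaluate φt at small t.

```python
import random

def debiased_estimate(phi, p_geo, rng, t_max=60):
    """Single-term debiasing estimator: draw T geometric with success p_geo,
    so P(T >= t) = (1 - p_geo)**(t - 1), and return
    sum_{t=1}^{T} (phi(t) - phi(t-1)) / P(T >= t), with phi(0) = 0."""
    t_trunc = 1
    while rng.random() > p_geo and t_trunc < t_max:
        t_trunc += 1
    est, prev = 0.0, 0.0
    for t in range(1, t_trunc + 1):
        cur = phi(t)
        est += (cur - prev) / (1 - p_geo) ** (t - 1)
        prev = cur
    return est

phi = lambda t: 1.0 - 0.5 ** t   # toy converging 'partial posterior statistic', limit 1

rng = random.Random(7)
reps = [debiased_estimate(phi, 0.3, rng) for _ in range(20000)]
print(sum(reps) / len(reps))     # close to the limit 1.0
```

The tail of T must be heavy enough relative to the increments φt − φt−1 for the variance to stay finite, which is exactly the tension behind the variance conditions discussed above.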


### Bayesian filtering and smoothing [book review]

**W**hen in Warwick last October, I met Simo Särkkä, who told me he had published an IMS monograph on Bayesian filtering and smoothing the year before. I thought it would be an appropriate book to review for CHANCE and tried to get a copy from Oxford University Press, unsuccessfully. I thus bought my own copy, which I received two weeks ago, and took the opportunity of my Czech vacations to read it… *[A warning pre-empting accusations of self-plagiarism: this is a preliminary draft of a review to appear in CHANCE under my true name!]*

*“From the Bayesian estimation point of view both the states and the static parameters are unknown (random) parameters of the system.” (p.20)*

Bayesian filtering and smoothing is an introduction to the topic that essentially starts from ground zero. Chapter 1 motivates the use of filtering and smoothing through examples and highlights the naturally Bayesian approach to the problem(s). Two graphs illustrate the difference between filtering and smoothing by plotting, for the same series of observations, the successive confidence bands. The performances are obviously poorer with filtering, but it should be stressed that those intervals are point-wise rather than joint, i.e., that the graphs do not provide a simultaneous confidence band. (The exercise section of that chapter is superfluous in that it suggests re-reading Kalman’s original paper and rephrases the Monty Hall paradox in a story unconnected with filtering!) Chapter 2 gives an introduction to Bayesian statistics in general, with a few pages on Bayesian computational methods. A first remark is that the above quote is both correct and mildly confusing, in that the parameters can be consistently estimated while the latent states cannot. A second remark is that justifying the MAP as associated with the 0-1 loss is incorrect in continuous settings. The third chapter deals with the batch updating of the posterior distribution, i.e., the fact that the posterior at time t is the prior at time t+1, with applications to state-space systems including the Kalman filter. The fourth to sixth chapters concentrate on this Kalman filter and its extensions, and I find them somewhat unsatisfactory in that the collection of such filters is overwhelming for a neophyte. No assessment of the estimation error when the model is misspecified appears at this stage. And, as usual, I find the unscented Kalman filter hard to fathom! The same feeling applies to the smoothing chapters, from Chapter 8 to Chapter 10, which mimic the earlier ones.
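As a reference point for the filtering chapters, here is a minimal Python sketch of the scalar Kalman filter for the local-level model (a toy example of my own, not taken from the book): the recursion alternates a predict step and an update step, and the filtering variance settles to a steady state.

```python
import random

def kalman_filter(ys, q, r, m0=0.0, p0=1.0):
    """Scalar Kalman filter for the local-level model
        x_t = x_{t-1} + w_t,  w_t ~ N(0, q)
        y_t = x_t     + v_t,  v_t ~ N(0, r).
    Returns the filtering means and variances."""
    m, p = m0, p0
    means, variances = [], []
    for y in ys:
        p = p + q                # predict
        k = p / (p + r)          # Kalman gain
        m = m + k * (y - m)      # update mean
        p = (1 - k) * p          # update variance
        means.append(m)
        variances.append(p)
    return means, variances

rng = random.Random(3)
x, ys = 0.0, []
for _ in range(200):
    x += rng.gauss(0, 0.1)       # latent random walk
    ys.append(x + rng.gauss(0, 0.5))
means, variances = kalman_filter(ys, q=0.01, r=0.25)
print(round(means[-1] - x, 3), round(variances[-1], 3))
```

The point-wise filtering bands mentioned above are exactly the intervals built from these per-time variances, which is why they say nothing joint about the whole trajectory.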

*“The degeneracy problem can be solved by a resampling procedure.” (p.123)*

By comparison, the seventh chapter, on particle filters, appears too introductory from my biased perspective. For instance, the above motivation for resampling in sequential importance (re)sampling is not clear enough: as stated, it sounds too much like a trick, with no mention of the fast decrease in the number of first-generation ancestors as the number of generations grows, and thus of the need for either increasing the number of particles fast enough or checking for quick forgetting. Chapter 11 is the equivalent of the above for particle smoothing. I would have liked more details on the full posterior smoothing distribution, instead of the marginal posterior smoothing distribution at a given time t. And more of a discussion of the comparative merits of the different algorithms.
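The degeneracy issue can be made concrete with a short Python sketch (again a toy of my own, not from the book): run sequential importance sampling on the same local-level model with and without multinomial resampling, tracking the effective sample size (ESS) of the weights. Without resampling the ESS collapses as generations accumulate; resampling keeps it healthy.

```python
import math, random

def ess(weights):
    """Effective sample size of normalised importance weights."""
    return 1.0 / sum(w * w for w in weights)

def sis_ess(ys, n, resample, rng, sigma_x=0.1, sigma_y=0.5):
    """Sequential importance sampling with the prior as proposal on a
    local-level model; returns the ESS recorded at each time step."""
    xs = [0.0] * n
    ws = [1.0 / n] * n
    trace = []
    for y in ys:
        xs = [x + rng.gauss(0, sigma_x) for x in xs]   # propagate from the prior
        lik = [math.exp(-0.5 * ((y - x) / sigma_y) ** 2) for x in xs]
        ws = [w * l for w, l in zip(ws, lik)]
        total = sum(ws) or 1e-300                      # guard against underflow
        ws = [w / total for w in ws]
        trace.append(ess(ws))
        if resample:                                   # multinomial resampling
            xs = rng.choices(xs, weights=ws, k=n)
            ws = [1.0 / n] * n
    return trace

data_rng = random.Random(5)
x, ys = 0.0, []
for _ in range(50):
    x += data_rng.gauss(0, 0.1)
    ys.append(x + data_rng.gauss(0, 0.5))
no_rs = sis_ess(ys, 200, False, random.Random(11))
with_rs = sis_ess(ys, 200, True, random.Random(11))
print(round(no_rs[-1], 1), round(with_rs[-1], 1))
```

This is the quantitative version of the ancestor-depletion argument: multiplying fifty sets of likelihood weights leaves a handful of particles carrying almost all the mass unless the cloud is refreshed.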

Chapter 12 is much longer than the other chapters, as it caters to the much more realistic issue of parameter estimation. The chapter borrows at times from Cappé, Moulines and Rydén (2005), where I contributed to the Bayesian estimation chapter. This is actually the first time in Bayesian filtering and smoothing that MCMC is mentioned, including references to adaptive MCMC and HMC. The chapter also covers some EM versions, and pMCMC à la Andrieu et al. (2010), although a picture like Fig. 12.2 seems to convey the message that this particle MCMC approach is actually quite inefficient.

*“An important question (…) which of the numerous methods should I choose?”*

The book ends with an Epilogue (Chapter 13), suggesting the use of (Monte Carlo) sampling only after all other methods have failed, which implies assessing that those methods have indeed failed. Maybe the suggestion of first running what seems like the most appropriate method on synthetic data (rather than the real data) could be included; for one thing, it does not add much to the computing cost. All in all, and despite some criticisms voiced above, I find the book quite a handy and compact introduction to the field, albeit slightly terse for an undergraduate audience.

Filed under: Books, Statistics, Travel, University life Tagged: book review, CHANCE, EM algorithm, filtering, IMS Textbooks, Kalman filter, MAP estimators, particle filter, particle MCMC, plagiarism, Simo Särkkä, smoothing, The Monty Hall problem

### c’est reparti !

### reading classics (The End)

**T**oday was the final session of our Reading Classics Seminar for the academic year 2014-2015. I have not reported much on this seminar so far because it had starting problems, namely hardly any students present at the first classes, and therefore several re-starts until we reached a small group of interested students. And this is truly *The End* for this enjoyable experiment, as this is the final year for my TSI Master at Paris-Dauphine, which will become integrated within the new MASH Master next year.

As a last presentation for the entire series, my student picked John Skilling’s Nested Sampling, which was not in my list of “classics”, but he had worked on the paper in a summer project and was thus reasonably fluent with the topic. As he did a good enough job (!), here are his slides.

Some of the questions that came to me during the talk were on how to run nested sampling sequentially, both in the data and in the number of simulated points, and on incorporating more deterministic moves in order to remove some of the Monte Carlo variability. I was about to ask about (!) the Hamiltonian version of nested sampling, but then he mentioned his last summer internship on this very topic! I also realised during that talk that the formula (for positive random variables)

\[\mathbb{E}[X]=\int_0^\infty \{1-F(x)\}\,\text{d}x\]

does not require absolute continuity of the distribution F.
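Indeed, the identity E[X] = ∫₀^∞ {1 − F(x)} dx holds for any distribution on [0, ∞) with a finite mean, density or not. A quick Python check on a purely discrete law (a toy example of mine), where the survival function is a step function and the integral can be computed exactly:

```python
def survival_integral(atoms):
    """For X supported on the finite set of atoms {value: prob}, compute
    the integral of P(X > x) over [0, inf) exactly: the survival function is
    constant between consecutive atoms, so the integral is a sum of rectangles."""
    integral, prev, tail = 0.0, 0.0, 1.0
    for v in sorted(atoms):
        integral += tail * (v - prev)   # P(X > x) = tail for x in [prev, v)
        tail -= atoms[v]
        prev = v
    return integral

atoms = {0.0: 0.2, 1.0: 0.3, 2.5: 0.5}       # discrete law: F is not absolutely continuous
mean = sum(v * p for v, p in atoms.items())  # E[X] computed directly
print(mean, survival_integral(atoms))        # both equal 1.55
```

This step-function case is precisely where the absolute-continuity assumption would be violated, yet the identity goes through.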

Filed under: Books, Kids, Statistics, University life Tagged: advanced Monte Carlo methods, classics, efficient importance sampling, evidence, Hamiltonian Monte Carlo, Monte Carlo Statistical Methods, nested sampling, seminar, slides, Université Paris Dauphine

### Katedrála svatého Víta, Václava a Vojtěch

Filed under: Kids, pictures, Travel Tagged: Czech Republic, Gothic cathedral, Prague, Prague Castle, Saint Vitus cathedral

### the fundamental incompatibility of HMC and data subsampling

**L**ast week, Michael Betancourt, from Warwick, arXived a neat wee note on the fundamental difficulties in running HMC on a subsample of the original data. The core message is that using only a fraction of the data to run HMC, with the hope that it will preserve the stationary distribution, does not work. The only way to recover from the bias is to use a Metropolis-Hastings step based on the whole data, a step that both kills most of the computing gain and has very low acceptance probabilities. Even the strategy that subsamples for each step in a single trajectory fails: there cannot be a significant gain in time without a significant bias in the outcome. Too bad..! Now, there are ways of accelerating HMC, for instance by parallelising the computation of gradients, but, just as in any other approach (?), the information provided by the whole data is only available when looking at the whole data.

Filed under: Books, Statistics, University life Tagged: Bayesian computing, Hamiltonian Monte Carlo, leapfrog generator, limited information inference, Monte Carlo Statistical Methods, subsampling, University of Warwick

### barokní Praha

Filed under: Kids, pictures, Travel Tagged: baroque, church, Church of Saint Nicolas, Czech Republic, Kostel svatého Mikuláše, Prague

### another Sally Clark?

*“I don’t trust my own intuition when an apparent coincidence occurs; I have to sit down and do the calculations to check whether it’s the kind of thing I might expect to occur at some time and place.” D. Spiegelhalter*

**I** just read in The Guardian an article on the case of the nurse Benjamin Geen, whose 2006 conviction to 30 years in jail for the murder of two elderly patients relied on inappropriate statistical expertise. As with Sally Clark, the evidence was built around “unusual patterns” of deaths associated with a particular nurse, without taking into account the possible biases in building such patterns. The case against the 2006 expertise is based on reports by David Spiegelhalter, Norman Fenton, Stephen Senn and Sheila Bird, who constitute enough of a dream team to warrant reconsidering the conviction. As put forward by Prof Fenton, “at least one hospital in the country would be expected to see this many events over a four-year period, purely by chance.”

Filed under: Statistics, University life Tagged: Benjamin Geen, David Spiegelhalter, Norman Fenton, Sally Clark, Sheila Bird, statistical evidence, Stephen Senn

### back from Prague

### in a time lapse

### steps

Filed under: Mountains, pictures, Running, Travel Tagged: Czech Republic, Giant Mountains, Snow Crash, Špindlerův Mlýn

### a Nice talk

**T**oday, I give a talk on our testing paper in Nice, in a workshop run in connection with our Calibration ANR grant:

The slides are directly extracted from the paper, but it still took me quite a while to distil the paper into them, during the early hours of our Czech break this week.

One added perk of travelling to Nice is the flight there, as it parallels the entire French Alps, a terrific view in nice weather!

Filed under: Books, Statistics, Travel, University life Tagged: Alps, ANR, Bayesian testing, calibration, finite mixtures, France, improper priors, Nice, objective Bayes