Bayesian Bloggers

approximate approximate Bayesian computation [not a typo!]

Xian's Og - Sun, 2015-01-11 18:15

“Our approach in handling the model uncertainty has some resemblance to statistical ‘‘emulators’’ (Kennedy and O’Hagan, 2001), approximative methods used to express the model uncertainty when simulating data under a mechanistic model is computationally intensive. However, emulators are often motivated in the context of Gaussian processes, where the uncertainty in the model space can be reasonably well modeled by a normal distribution.”

Pierre Pudlo pointed out to me the paper AABC: Approximate approximate Bayesian computation for inference in population-genetic models by Buzbas and Rosenberg that just appeared in the first 2015 issue of Theoretical Population Biology. Despite the claim made above, including a confusion on the nature of Gaussian processes, I am rather reserved about the appeal of this AA rated ABC…

“When likelihood functions are computationally intractable, likelihood-based inference is a challenging problem that has received considerable attention in the literature (Robert and Casella, 2004).”

The ABC approach suggested therein is doubly approximate in that simulation from the sampling distribution is replaced with simulation from a substitute, cheaper model, built after a learning stage that uses the costly sampling distribution. While the approximation converges to the genuine ABC posterior as the sample size and the Monte Carlo sample size both grow to infinity, there is no correction for this approximation and I am puzzled by its construction. It seems (see p.34) that the cheaper model is built by a sort of weighted bootstrap: given a parameter simulated from the prior, weights based on its distance to a reference table are constructed and then used to create a pseudo-sample by weighted sampling from the original pseudo-samples, rather than using a continuous kernel centred on those original pseudo-samples, as would be the suggestion for a non-parametric regression. Each pseudo-sample is accepted only when a distance between the summary statistics is small enough. This bootstrap flavour is counter-intuitive in that it requires a large enough sample from the true sampling distribution to operate with some confidence… I also wonder what happens when the data is not iid. (I added the quote above as another source of puzzlement, since the book is about cases when the likelihood is manageable.)
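My reading of this weighted-bootstrap construction can be sketched as follows. This is a hypothetical toy version, not the authors' code: the model (a plain normal), the kernel, the bandwidth and the tolerance are all made up for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical reference table from the costly sampler: for each prior draw
# theta_ref[i], a stored pseudo-sample y_ref[i] of size m from the true model,
# here taken to be N(theta, 1) purely for illustration.
n_ref, m = 200, 50
theta_ref = rng.normal(0.0, 2.0, size=n_ref)                  # prior draws
y_ref = rng.normal(theta_ref[:, None], 1.0, size=(n_ref, m))  # costly simulations

def aabc_pseudo_sample(theta_star, h=0.5):
    """Cheap substitute for the model: weighted bootstrap of the stored
    pseudo-samples, with weights driven by the distance between theta_star
    and the entries of the reference table."""
    w = np.exp(-0.5 * ((theta_ref - theta_star) / h) ** 2)  # kernel weights
    w /= w.sum()
    rows = rng.choice(n_ref, size=m, p=w)   # pick donor pseudo-samples
    cols = rng.integers(0, m, size=m)       # resample within each donor
    return y_ref[rows, cols]

# Standard ABC accept/reject step, with the sample mean as summary statistic
y_obs = rng.normal(1.0, 1.0, size=m)
theta_star = rng.normal(0.0, 2.0)           # prior draw
z = aabc_pseudo_sample(theta_star)
accept = abs(z.mean() - y_obs.mean()) < 0.3
```

The continuous-kernel alternative mentioned above would amount to adding kernel noise around the resampled values instead of reusing them verbatim, as in non-parametric regression.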


Filed under: Books, Statistics, University life Tagged: ABC, bootstrap, Dirichlet prior, Gaussian processes, Monte Carlo methods, Theoretical Population Biology
Categories: Bayesian Bloggers

Bhattacharyya distance versus Kullback-Leibler divergence

Xian's Og - Fri, 2015-01-09 18:15

Another question I picked on Cross Validated during the Yule break is about the connection between the Bhattacharyya distance and the Kullback-Leibler divergence, i.e.,

$$D_B(p,q)=-\log\int\sqrt{p(x)\,q(x)}\,\text{d}x$$

and

$$D_{KL}(p\|q)=\int p(x)\,\log\{p(x)/q(x)\}\,\text{d}x\,.$$

Although this Bhattacharyya distance sounds close to the Hellinger distance,

$$H(p,q)=\left\{1-\int\sqrt{p(x)\,q(x)}\,\text{d}x\right\}^{1/2}$$

the ordering I got by a simple Jensen inequality is

$$D_B(p,q)\le\tfrac{1}{2}\,D_{KL}(p\|q)$$

and I wonder how useful this ordering could be…
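As a sanity check of this ordering, assuming the conventions D_B = -log ∫√(pq) and H² = 1 - ∫√(pq) (no extra ½ factors), here is a quick numerical integration for two Gaussian densities of my choosing:

```python
import numpy as np

# Two Gaussian densities on a fine regular grid (illustration only)
x = np.linspace(-15.0, 15.0, 200_001)
dx = x[1] - x[0]

def gauss(mu, sigma):
    return np.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * np.sqrt(2.0 * np.pi))

p, q = gauss(0.0, 1.0), gauss(1.0, 1.5)

bc = np.sqrt(p * q).sum() * dx             # Bhattacharyya coefficient
d_b = -np.log(bc)                          # Bhattacharyya distance
d_kl = (p * np.log(p / q)).sum() * dx      # Kullback-Leibler divergence KL(p||q)
hellinger = np.sqrt(1.0 - bc)              # Hellinger distance

# Jensen: -log E_p[sqrt(q/p)] <= E_p[-log sqrt(q/p)] = KL(p||q)/2
print(d_b <= 0.5 * d_kl)  # True
```

The Jensen step is the one in the comment: the negative log of an expectation is bounded by the expectation of the negative log.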


Filed under: Books, Kids, Statistics Tagged: Bhattacharyya distance, cross validated, Hellinger distance, Jensen inequality, Kullback-Leibler divergence
Categories: Bayesian Bloggers

ABC with emulators

Xian's Og - Thu, 2015-01-08 18:15

A paper on the comparison of emulation methods for Approximate Bayesian Computation was recently arXived by Jabot et al. The idea is to bypass costly simulations of pseudo-data by running cheaper simulation from a pseudo-model or emulator constructed via a preliminary run of the original and costly model. To borrow from the paper introduction, ABC-Emulation runs as follows:

  1. design a small number n of parameter values covering the parameter space;
  2. generate n corresponding realisations from the model and store the corresponding summary statistics;
  3. build an emulator (model) based on those n values;
  4. run ABC using the emulator in lieu of the original model.
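The four steps above can be sketched with a kernel-weighted local linear regression as the emulator (one of the two options compared in the paper); the toy model, bandwidth and tolerance below are my own assumptions, not the paper's:

```python
import numpy as np

rng = np.random.default_rng(1)

# Steps 1-2: a small design over the parameter space, one costly run per point;
# the "costly" model is a stand-in here, with summary s ~ N(theta, 0.2^2)
n_design = 100
theta_design = rng.uniform(-3.0, 3.0, size=n_design)
s_design = theta_design + rng.normal(0.0, 0.2, size=n_design)

# Step 3: emulator predicting the summary statistic given the parameter value,
# by kernel-weighted local linear regression
def emulate(theta, h=0.5):
    w = np.exp(-0.5 * ((theta_design - theta) / h) ** 2)   # kernel weights
    sw = np.sqrt(w)                                        # weighted least squares
    X = np.column_stack([np.ones(n_design), theta_design - theta])
    beta, *_ = np.linalg.lstsq(X * sw[:, None], s_design * sw, rcond=None)
    return beta[0]   # local fit evaluated at theta

# Step 4: plain ABC rejection with the emulator in lieu of the model
s_obs, eps = 1.0, 0.2
prior_draws = rng.uniform(-3.0, 3.0, size=2_000)
emulated = np.array([emulate(t) for t in prior_draws])
posterior = prior_draws[np.abs(emulated - s_obs) < eps]
```

This emulator is deterministic; a stochastic variant would add simulated noise around the local prediction, which is presumably the distinction the comparison below turns on.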

A first emulator proposed in the paper is to use local regression, as in Beaumont et al. (2002), except that it goes the reverse way: the regression model predicts a summary statistic given the parameter value. The second and last emulator relies on Gaussian processes, as in Richard Wilkinson's as well as Ted Meeds's and Max Welling's recent work [also quoted in the paper]. The comparison of the above emulators is based on an ecological community dynamics model. The results are that the stochastic version is superior to the deterministic one, but overall not very useful when implementing the Beaumont et al. (2002) correction. The paper however does not define what deterministic and stochastic mean here…

“We therefore recommend the use of local regressions instead of Gaussian processes.”

While I find the conclusions of the paper somewhat over-optimistic given the range of the experiment and the limitations of the emulator options (like non-parametric conditional density estimation), it seems to me that this is a direction to be pursued as we need to be able to simulate directly a vector of summary statistics instead of the entire data process, even when considering an approximation to the distribution of those summaries.


Filed under: Books, Statistics Tagged: ABC, ABC-Emulation, Approximate Bayesian computation, ecological models, emulation, Gaussian processes, Monte Carlo methods, summary statistics
Categories: Bayesian Bloggers

a day of mourning

Xian's Og - Wed, 2015-01-07 18:15

“Religion, a mediaeval form of unreason, when combined with modern weaponry becomes a real threat to our freedoms. This religious totalitarianism has caused a deadly mutation in the heart of Islam and we see the tragic consequences in Paris today. I stand with Charlie Hebdo, as we all must, to defend the art of satire, which has always been a force for liberty and against tyranny, dishonesty and stupidity. ‘Respect for religion’ has become a code phrase meaning ‘fear of religion.’ Religions, like all other ideas, deserve criticism, satire, and, yes, our fearless disrespect.” Salman Rushdie


Filed under: Books Tagged: Charlie Hebdo, Charpentier, Je suis Charlie, leçons de ténèbres, Salman Rushdie
Categories: Bayesian Bloggers

how many modes in a normal mixture?

Xian's Og - Tue, 2015-01-06 18:15

An interesting question I spotted on Cross Validated today: How to tell if a mixture of Gaussians will be multimodal? Indeed, there is no known analytical condition on the parameters of a fully specified k-component mixture for the modes to number k or less than k… Googling around, I immediately came upon this webpage by Miguel Carreira-Perpiñán, who studied the issue with Chris Williams when writing his PhD in Edinburgh. And upon this paper, which not only shows that

  1. unidimensional Gaussian mixtures with k components have at most k modes;
  2. unidimensional non-Gaussian mixtures with k components may have more than k modes;
  3. multidimensional mixtures with k components may have more than k modes.

but also provides ways of finding all the modes, ways which seem to reduce to using EM from a wide variety of starting points (an EM algorithm set in the sample space rather than in the parameter space, since all parameters are set!). Maybe starting EM from each mean would be sufficient. I still wonder if there are better ways, from letting the variances decrease down to zero until a local mode appears, to using some sort of simulated annealing…
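For the one-dimensional Gaussian case, the fixed-point (mean-shift) iteration in the sample space can be sketched as follows, started from each component mean as suggested above; the mixture parameters in the demo are arbitrary:

```python
import numpy as np

def mixture_modes(weights, means, sds, tol=1e-12, max_iter=10_000):
    """Modes of a 1-D Gaussian mixture with known parameters, found by the
    fixed-point iteration x <- sum_i r_i(x) mu_i, with r_i(x) proportional
    to w_i phi_i(x) / s_i^2, started from each component mean."""
    w, mu, s = (np.asarray(a, dtype=float) for a in (weights, means, sds))
    modes = []
    for x in mu:
        for _ in range(max_iter):
            dens = w * np.exp(-0.5 * ((x - mu) / s) ** 2) / s   # w_i phi_i(x)
            r = dens / s**2
            x_new = float(r @ mu / r.sum())
            done = abs(x_new - x) < tol
            x = x_new
            if done:
                break
        if not any(abs(x - m) < 1e-4 for m in modes):           # deduplicate
            modes.append(x)
    return sorted(modes)

print(mixture_modes([0.5, 0.5], [-2.0, 2.0], [1.0, 1.0]))  # two modes, near ±2
print(mixture_modes([0.5, 0.5], [-0.5, 0.5], [1.0, 1.0]))  # unimodal: one mode at 0
```

The second call illustrates the classical fact that two equal-weight unit-variance Gaussians merge into a single mode once the means are less than two standard deviations apart.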

Edit: Following comments, let me stress this is not a statistical issue, in that the parameters of the mixture are set and known and there are no observations from this mixture from which to estimate the number of modes. The mathematical problem is to determine how many local maxima there are for the function

$$f(x)=\sum_{i=1}^k \omega_i\,\varphi(x;\mu_i,\Sigma_i)\,.$$


Filed under: Books, Kids, Statistics, University life Tagged: Chris Williams, EM algorithm, Miguel Carrera-Perpinan, mixture estimation, modes of a mixture, Scotland, University of Edinburgh
Categories: Bayesian Bloggers

ABC by population annealing

Xian's Og - Mon, 2015-01-05 18:14

The paper “Bayesian Parameter Inference and Model Selection by Population Annealing in Systems Biology” by Yohei Murakami got published in PLoS One last August, but I only became aware of it when ResearchGate pointed it out to me [by mentioning that one of our ABC papers was quoted there].

“We are recommended to try a number of annealing schedules to check the influence of the schedules on the simulated data (…) As a whole, the simulations with the posterior parameter ensemble could, not only reproduce the data used for parameter inference, but also capture and predict the data which was not used for parameter inference.”

Population annealing is a notion introduced by Y. Iba, the very same Iba who introduced the notion of population Monte Carlo that we studied in subsequent papers. It reproduces the setting found in many particle filter papers: a sequence of (annealed or rather tempered) targets ranging from an easy (i.e., almost flat) target to the genuine target, and an update of a particle set by MCMC moves and reweighting. I actually have trouble perceiving the difference with other sequential Monte Carlo schemes such as those exposed in Del Moral, Doucet and Jasra (2006, Series B). And the same is true of the ABC extension covered in this paper, where the annealed intermediate targets correspond to larger tolerances. This sounds like a traditional ABC-SMC algorithm, without the adaptive scheme on the tolerance ε found e.g. in Del Moral et al., since the sequence is set in advance. [However, the discussion about the implementation includes the above quote, which suggests a vague form of cross-validated tolerance construction.] The approximation of the marginal likelihood also sounds standard, the marginal being approximated by the proportion of accepted pseudo-samples, or more exactly by the sum of the SMC weights at the end of the annealing simulation. This actually raises several questions: (a) this estimator is always between 0 and 1, while the marginal likelihood is not so restricted [but this is due to a missing 1/ε factor in the likelihood estimate that cancels from both numerator and denominator]; (b) seeing the kernel as a non-parametric estimate of the likelihood led me to wonder why different ε could not be used for different models, in that the pseudo-data used for each model under comparison differs. If we were in a genuine non-parametric setting the bandwidth would be derived from the pseudo-data.
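For concreteness, here is a bare-bones ABC-SMC sketch in this spirit, not the paper's algorithm: particles survive a fixed, decreasing tolerance schedule and the evidence is approximated by the product of acceptance proportions (the indicator-kernel version of the sum of SMC weights). The toy model and all tuning constants are my own assumptions.

```python
import numpy as np

rng = np.random.default_rng(2)

# Toy model: y ~ N(theta, 1) with n observations, prior theta ~ N(0, 10);
# the sample mean is a sufficient summary, with s | theta ~ N(theta, 1/n)
n = 20
y_obs = rng.normal(1.0, 1.0, size=n)
s_obs = y_obs.mean()

def abc_smc(eps_schedule, n_part=2_000):
    theta = rng.normal(0.0, np.sqrt(10.0), n_part)   # prior particles
    log_evidence = 0.0
    for eps in eps_schedule:                         # fixed, decreasing tolerances
        s = rng.normal(theta, 1.0 / np.sqrt(n))      # simulate summaries
        alive = np.abs(s - s_obs) < eps              # indicator ABC kernel
        if not alive.any():
            break
        log_evidence += np.log(alive.mean())         # acceptance proportion
        theta = rng.choice(theta[alive], n_part)     # resample survivors
        theta += rng.normal(0.0, 0.1, n_part)        # crude stand-in for an MCMC move
    return theta, log_evidence

theta, log_ev = abc_smc([2.0, 1.0, 0.5, 0.2])
```

The jitter step is only a crude stand-in: a proper move kernel, as in Del Moral et al., would leave the current tempered target invariant. And the missing 1/ε factor mentioned above matters as soon as models with different final tolerances are compared.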

“Thus, Bayesian model selection by population annealing is valid.”

The discussion about the use of ABC population annealing somewhat misses the point of using ABC, which is to approximate the genuine posterior distribution, to wit the above quote: that the ABC Bayes factors favour the correct model in the simulation does not tell anything about the degree of approximation wrt the original Bayes factor. [The issue of non-consistent Bayes factors does not apply here as there is no summary statistic applied to the few observations in the data.] Further, the magnitude of the variability of the values of this Bayes factor as ε varies, from 1.3 to 9.6, mostly indicates that the numerical value is difficult to trust. (I also fail to explain the huge jump in Monte Carlo variability from 0.09 to 1.17 in Table 1.) That this form of ABC-SMC improves upon the basic ABC rejection approach is clear. However it needs to build some self-control to avoid arbitrary calibration steps and reduce the instability of the final estimates.

“The weighting function is set to be large value when the observed data and the simulated data are ‘‘close’’, small value when they are ‘‘distant’’, and constant when they are ‘‘equal’’.”

The above quote is somewhat surprising as the estimated likelihood f(x^obs|x^obs,θ) is naturally constant when x^obs = x^sim… I also failed to understand how the model intervened in the indicator function used as a default ABC kernel,

$$\mathbb{I}\{d(x^{\text{obs}},x^{\text{sim}})\le\varepsilon\}\,.$$


Filed under: Statistics, University life Tagged: ABC, ABC model choice, ABC-SMC, evidence, marginal likelihood, sequential Monte Carlo, simulated annealing, tempering, tolerance
Categories: Bayesian Bloggers

my X-validated hats

Xian's Og - Mon, 2015-01-05 08:18
Categories: Bayesian Bloggers

O-Bayes15 [registration & call for papers]

Xian's Og - Sun, 2015-01-04 18:15

Both registration and call for papers have now been posted on the webpage of the 11th International Workshop on Objective Bayes Methodology, aka O-Bayes 15, that will take place in Valencia next June 1-5.  The spectrum of the conference is quite wide, as reflected by the range of speakers. In addition, this conference is dedicated to our friend Susie Bayarri, to celebrate her life and contributions to Bayesian Statistics. And in continuation of the morning jog in the memory of George Casella organised by Laura Ventura in Padova, there will be a morning jog for Susie. So register for the meeting and bring your running shoes!


Filed under: Kids, pictures, Statistics, Travel, University life Tagged: George Casella, O-Bayes 2015, Padova, registration, running, Spain, Susie Bayarri, València
Categories: Bayesian Bloggers

musical break

Xian's Og - Sat, 2015-01-03 18:15

During the Yule break, I listened mostly to two CDs, the 2013 If You Wait, by London Grammar, and The Shape of a Broken Heart, by Imany. Both were unexpected discoveries, brought to me by family members, and I enjoyed them tremendously!


Filed under: Books, Kids Tagged: album, Imany, London Grammar, songs, video, Yule
Categories: Bayesian Bloggers

the travelling salesman

Xian's Og - Fri, 2015-01-02 18:15

A few days ago, I was grading my last set of homeworks for the MCMC graduate course I teach to both Dauphine and ENSAE graduate students. A few students had chosen to write a travelling salesman simulated annealing code (Exercise 7.22 in Monte Carlo Statistical Methods) and one of them included this quote

“And when I saw that, I realized that selling was the greatest career a man could want. ‘Cause what could be more satisfying than to be able to go, at the age of eighty-four, into twenty or thirty different cities, and pick up a phone, and be remembered and loved and helped by so many different people?”
Arthur Miller, Death of a Salesman

which was a first!


Filed under: Statistics Tagged: Arthur Miller, Death of a Salesman, ENSAE, exercises, homework, MCMC, Monte Carlo Statistical Methods, travelling salesman Concorde, Université Paris Dauphine
Categories: Bayesian Bloggers

2014 in review

Xian's Og - Thu, 2015-01-01 18:15

The WordPress.com stats helper monkeys prepared a 2014 annual report for the ‘Og…

.. and among the collected statistics for 2014, what I found most amazing are the three accesses from Greenland and the one access from Afghanistan!

Click here to see the complete report. (Assuming you have nothing better to do on Boxing day…)


Filed under: Statistics Tagged: annual report, blogging, Boxing Day, Happy New Year, Wordpress
Categories: Bayesian Bloggers

the slow regard of silent things

Xian's Og - Wed, 2014-12-31 18:15

As mentioned previously, I first bought this book thinking it was the third and final volume in the Kingkiller’s Chronicles. Hence I was more than disappointed when Dan warned me that it was instead a side story about Auri, an important but still secondary character in the story. More than disappointed as I thought Patrick Rothfuss was following the frustrating path of other fantasy authors with unfinished series (like Robert Jordan and George R.R. Martin) in writing shorter novels set in their universe, and encyclopedias, instead of focussing on the real thing! However, when I started reading it, I was so startled by the novelty of the work, the beauty of the language, the alien features of the story or lack thereof, that I forgot about my grudge. I actually finished this short book very early a few mornings past Christmas, after a mild winter storm had awoken me for good. And I look forward to re-reading it soon.

“Better still, the slow regard of silent things had wafted off the moisture in the air.”

This is a brilliant piece of art, much more a poème en prose than a short story. There is no beginning and no end, no purpose and no rationale to most of Auri’s actions, and no direct connection with the Kingkiller’s Chronicles story other than the fact that it takes place in or rather below the University. And even less connection with the plot. So this book may come as a huge disappointment to most readers of the series, as exemplified by the numerous negative comments found on amazon.com and elsewhere. Especially those looking for clues about the incoming (?) volume. Or for explanations of past events… Despite all this, or because of it, I enjoyed the book immensely, in a way completely detached from the pleasure I took in reading Kingkiller’s Chronicles. There is genuine poetry in the repetition of words, in the many alliterations, in the saccade of unfinished sentences, in the symmetry of Auri’s world, in the making of soap and in the making of candles, in the naming and unaming of objects. Poetry and magic, even though it is not necessarily the magic found in the Kingkiller’s Chronicles. The Slow Regard of Silent Things is simply a unique book, an outlier in the fantasy literature, a finely tuned read that shows how much of a wordsmith Rothfuss can be, and a good enough reason to patiently wait for the third volume: “She could not rush and neither could she be delayed. Some things were simply too important.”


Filed under: Books, Kids Tagged: Auri, literature, Patrick Rothfuss, poème en prose, poetry, The Name of the Wind, The Slow Regard of Silent Things, The Wise Man's Fear
Categories: Bayesian Bloggers

foie gras fois trois

Xian's Og - Tue, 2014-12-30 18:14

As New Year’s Eve celebrations are getting quite near, newspapers once again focus on related issues, from the shortage of truffles, to the size of champagne bubbles, to the prohibition of foie gras. Today, I noticed a headline in Le Monde about a “huge increase in French people against force-fed geese and ducks: 3% more than last year are opposed to this practice”. Now, looking at the figures, it is based on a survey of 1,032 adults, out of which 47% were against. From a purely statistical perspective, this is not highly significant since

$$\sqrt{n}\,\frac{\hat p_{2014}-\hat p_{2013}}{\sqrt{\hat p_{2014}(1-\hat p_{2014})+\hat p_{2013}(1-\hat p_{2013})}}\approx 1.37$$

is compatible with the null hypothesis N(0,1) distribution.
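A quick numerical check of this back-of-the-envelope computation, assuming (my assumption) that last year's poll had the same sample size and was drawn independently:

```python
import math

n = 1032
p_now, p_last = 0.47, 0.44   # 47% against this year, "3% more than last year"

# two-sample z statistic for the difference of two proportions
se = math.sqrt(p_now * (1 - p_now) / n + p_last * (1 - p_last) / n)
z = (p_now - p_last) / se
print(round(z, 2))  # 1.37, well below the 1.96 threshold at the 5% level
```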


Filed under: Statistics, Wines Tagged: champagne, foie gras, goose liver, Le Monde, significance test, survey sampling, truffles
Categories: Bayesian Bloggers

top posts for 2014

Xian's Og - Mon, 2014-12-29 18:14

Here are the most popular entries for 2014:

  1. 17 equations that changed the World (#2), 995 views
  2. Le Monde puzzle [website], 992
  3. “simply start over and build something better”, 991
  4. accelerating MCMC via parallel predictive prefetching, 990
  5. Bayesian p-values, 960
  6. posterior predictive p-values, 849
  7. Bayesian Data Analysis [BDA3], 846
  8. Bayesian programming [book review], 834
  9. Feller’s shoes and Rasmus’ socks [well, Karl’s actually…], 804
  10. the cartoon introduction to statistics, 803
  11. Asymptotically Exact, Embarrassingly Parallel MCMC, 730
  12. Foundations of Statistical Algorithms [book review], 707
  13. a brief on naked statistics, 704
  14. In{s}a(ne)!!, 682
  15. the demise of the Bayes factor, 660
  16. Statistical modeling and computation [book review], 591
  17. bridging the gap between machine learning and statistics, 587
  18. new laptop with ubuntu 14.04, 574
  19. Bayesian Data Analysis [BDA3 – part #2], 570
  20. MCMC on zero measure sets, 570
  21. Solution manual to Bayesian Core on-line, 567
  22. Nonlinear Time Series just appeared, 555
  23. Sudoku via simulated annealing, 538
  24. Solution manual for Introducing Monte Carlo Methods with R, 535
  25. future of computational statistics, 531

What I appreciate from that list is that (a) book reviews [of stats books] get a large chunk (50%!) of the attention and (b) my favourite topics of Bayesian testing, parallel MCMC and MCMC on zero measure sets made it to the top list. Even the demise of the Bayes factor that was only posted two weeks ago!


Filed under: Books, R, Statistics, University life Tagged: book reviews, Le Monde, simulated annealing, Ubuntu 14.04
Categories: Bayesian Bloggers

partly virtual meetings

Xian's Og - Sun, 2014-12-28 18:14

A few weeks ago, I read in the NYT an article about the American Academy of Religion cancelling its 2021 annual meeting as a sabbatical year, for environmental reasons.

“We could choose to not meet at a huge annual meeting in which we take over a city. Every year, each participant going to the meeting uses a quantum of carbon that is more than considerable. Air travel, staying in hotels, all of this creates a way of living on the Earth that is carbon intensive. It could be otherwise.”

While I am not in the least interested in the conference or in the topics covered by this society, or yet in the benevolent religious activities suggested as a substitute, the notion of cancelling the behemoths that are our national and international academic meetings holds some appeal. I have posted several times on the topic, especially about JSM, and I have no clear and definitive answer to the question. Still, there lies a lack of efficiency on top of the environmental impact that we could and should try to address. As I was thinking of those issues in the past week, I made another of my numerous “carbon footprints” by attending NIPS across the Atlantic, for two workshops that ran in parallel with about twenty others. And hence could have taken place in twenty different places. Albeit without the same exciting feeling of constant intellectual simmering. And without the same mix of highly interactive scholars from all over the planet. (Although the ABC in Montréal workshop seemed predominantly European!) Since workshops are in my opinion the most profitable type of meeting, I would like to experiment with a large meeting made of such (focussed and intense) workshops, in a way that would let academics benefit without travelling long distances across the world. One idea would be to have local nodes where a large enough group of researchers could gather to attend video-conferences given from any of the other nodes, and to interact locally in terms of discussions and poster presentations. This should even increase the feedback on selected papers, as small groups would more readily engage in discussing and criticising papers than a huge conference room. If we could build a world-wide web (!) of such nodes, we could then dream of a non-stop conference, with no central node, no gigantic conference centre, no terrifying beach resort…


Filed under: Kids, pictures, Statistics, Travel, University life Tagged: ABC in Montréal, Benidorm, carbon impact, flight, Montréal, NIPS, online meeting, Statistics conference, travel support, world meeting
Categories: Bayesian Bloggers

the dark defiles

Xian's Og - Sat, 2014-12-27 18:14

The final and long-awaited volume of a series carries so much expectation that it more often than not ends up disappointing [me]. The Dark Defiles somewhat reluctantly falls within this category… This book is the third instalment of Richard K. Morgan’s fantasy series, A Land Fit for Heroes, of which I liked mostly the first volume, The Steel Remains. Considering that this first book came out in January 2009, about six years ago, this may explain the somewhat desultory tone of The Dark Defiles, as well as the overwhelming amount of info-dump needed to close the many open threads about the nature of the Land Fit for Heroes.

“They went. They dug. Found nothing and came back, mostly in the rain.”

[Warning: some spoilers in the following!] The most striking imbalance in the story is the rather mundane pursuits of the three major heroes, from finding an old sword to avenging fallen friends here and there, against the threat of an unravelling of the entire Universe and of the disappearance of the current cosmology.  In addition, the absolute separation maintained by Morgan between Archeth and Ringil kills some of the alchemy of the previous books and increases the tendency to boring inner monologues. The volume is much, much more borderline science-fiction than the previous ones, which obviously kills some of the magic, given that the highest powers that be sound like a sort of meta computer code that eventually gives Ringil the ultimate decision. As often, this mix between fantasy and science-fiction is not much to my taste, since it gives too much power to the foreign machines, the Helmsmen, which sound like they are driving the main human players for very long term goals. And which play too often deus ex machina to save the “heroes” from unsolvable situations. Overall a wee bit of a lengthy book, with a story coming to an unexpected end in the very final pages, leaving some threads unexplained and some feeling that style prevailed over story. But nonetheless a page turner in its second half.


Filed under: Books Tagged: A Land Fit for Heroes, book reviews, heroic fantasy, Richard K. Morgan, science fiction, The Dark Defiles, trilogy
Categories: Bayesian Bloggers