## Xi'an's Og

### Feller’s shoes and Rasmus’ socks [well, Karl's actually...]

**Y**esterday, Rasmus Bååth [of puppies' fame!] posted a very nice blog entry using ABC to derive the posterior distribution of the total number of socks in the laundry when only pulling out orphan socks and no pair at all in the first eleven draws. Maybe not the most pressing issue for Bayesian inference in the era of Big data, but still a challenge of sorts!

Rasmus set a prior on the total number m of socks, a negative Binomial Neg(15,1/3) distribution, and another prior on the proportion of socks that come in pairs, a Beta B(15,2) distribution, then simulated pseudo-data by picking eleven socks at random, and at last applied ABC (in Rubin’s 1984 sense) by waiting for the observed event, i.e. only orphans and no pair [of socks]. Brilliant!
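For concreteness, here is a minimal R sketch of such a rejection-ABC scheme (my own reading of the setup, not Rasmus' actual code; the rounding of the number of pairs and the variable names are my assumptions):

```r
## rejection ABC for the sock problem: simulate (total socks, proportion of
## pairs) from the priors quoted above, draw 11 socks, and keep the simulation
## only if the draw contains no pair
socks_abc <- function(N = 1e5, n_draw = 11) {
  post <- NULL
  for (t in 1:N) {
    m <- rnbinom(1, size = 15, prob = 1/3)   # prior on the total number of socks
    p <- rbeta(1, 15, 2)                     # prior on the proportion of paired socks
    g <- round(floor(m / 2) * p)             # number of pairs (my rounding convention)
    f <- m - 2 * g                           # number of orphan socks
    socks <- c(rep(seq_len(g), each = 2),    # paired socks share a label
               if (f > 0) g + seq_len(f))    # orphans get their own label
    if (length(socks) < n_draw) next
    draw <- sample(socks, n_draw)
    if (!any(duplicated(draw)))              # ABC step: only orphans, no pair
      post <- rbind(post, c(m = m, pairs = g))
  }
  post
}
## e.g. sim <- socks_abc(); hist(sim[, "m"])
```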

The overall simplicity of the problem set me wondering about an alternative solution based on the likelihood. It cannot be that hard, can it?! After a few computations that I rejected when checking them against experimental frequencies, I put the problem on hold until I was back home with access to my Feller volume 1, one of the few [math] books I keep at home, as I was convinced one of the exercises in Chapter II would cover this case. After checking, I found a partial solution, namely Exercise 26:

*A closet contains n pairs of shoes. If 2r shoes are chosen at random (with 2r<n), what is the probability that there will be (a) no complete pair, (b) exactly one complete pair, (c) exactly two complete pairs among them?*

This is not exactly a solution, but rather a problem; however, it leads to the value

$$\binom{n}{j}\binom{n-j}{2r-2j}2^{2r-2j}\Big/\binom{2n}{2r}$$

as the probability of obtaining j pairs among those 2r shoes. Which also works for an odd number t of shoes:

as I checked against my large simulations. So I solved Exercise 26 in Feller volume 1 (!), but not Rasmus’ problem, since there are those orphan socks on top of the pairs. If one draws 11 socks out of m socks made of f orphans and g pairs, with f+2g=m, the number k of socks from the orphan group is a hypergeometric H(11,m,f) rv and the probability of observing 11 orphan socks in total (either from the orphan or from the paired group) is thus the marginal over all possible values of k:

$$\sum_{k}\binom{f}{k}\binom{g}{11-k}2^{11-k}\Big/\binom{m}{11}$$

so it could be argued that we are facing a closed-form likelihood problem. Even though it presumably took me longer to arrive at this formula than it took Rasmus to run his exact ABC code!
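A quick numerical check (mine) of this closed-form probability against brute-force simulation, for given numbers f of orphans and g of pairs (helper names are my own):

```r
## exact no-pair probability versus a Monte Carlo estimate
no_pair_prob <- function(f, g, n_draw = 11) {
  m <- f + 2 * g
  k <- 0:min(f, n_draw)
  sum(choose(f, k) * choose(g, n_draw - k) * 2^(n_draw - k)) / choose(m, n_draw)
}
no_pair_sim <- function(f, g, n_draw = 11, N = 1e5) {
  socks <- c(rep(seq_len(g), each = 2), if (f > 0) g + seq_len(f))
  mean(replicate(N, !any(duplicated(sample(socks, n_draw)))))
}
## e.g. no_pair_prob(3, 18) versus no_pair_sim(3, 18)
```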

Filed under: Books, Kids, R, Statistics, University life Tagged: ABC, capture-recapture, combinatorics, subjective prior, William Feller

### BibTool on the air

**Y**esterday night, just before leaving for Coventry, I realised I had about 30 versions of my “mother of all .bib” bib file, spread over directories and with broken links to the original mother file… (I mean, I always create bib files in new directories via a hard link,

but they eventually and inexplicably end up with a life of their own!) So I decided a Spring clean-up was in order and installed BibTool on my Linux machine to gather all those versions into a new encompassing all-inclusive bib reference. I did not take advantage of the many possibilities of the program, written by Gerd Neugebauer, but it certainly solved my problem: once I realised I had to set the variates

```
check.double = on
check.double.delete = on
pass.comments = off
```

all I had to do was to call

```
bibtool -s -i ../*/*.bib -o mother.bib
bibtool -d -i mother.bib -o mother.bib
bibtool -s -i mother.bib -o mother.bib
```

to merge all bib files, then to get rid of the duplicated entries in mother.bib (the -d option commented out the duplicates and the second call with -s removed them), and to remove the duplicated definitions in the preamble of the file. This took me very little time in the RER train from Paris-Dauphine (where I taught this morning, having a hard time making the students envision the empirical cdf as an average of Dirac masses!) to Roissy airport, in contrast with my pedestrian replacement of all stray siblings of the mother bib with new proper hard links, one by one. I am sure there is a bash command that could have done it in one line, but I instead spent my flight to Birmingham switching all existing bib files, one by one…

Filed under: Books, Linux, Travel, University life Tagged: bash, BibTeX, BibTool, Birmingham, Charles de Gaulle, LaTeX, link, Linux, RER B, Roissy, University of Warwick

### delayed acceptance [alternative]

**I**n a comment on our *Accelerating Metropolis-Hastings algorithms: Delayed acceptance with prefetching* paper, Philip mentioned that he had experimented with an alternative splitting technique retaining the right stationary measure: the idea behind his alternative acceleration is again (a) to divide the target into bits and (b) to run the acceptance step by parts, towards a major reduction in computing time. The difference with our approach is to represent the overall acceptance probability

and, even more surprisingly than in our case, this representation remains associated with the right (posterior) target!!! Provided the ordering of the terms is random, with a symmetric distribution on the permutation. This property can be directly checked via the detailed balance condition.

In a toy example, I compared the acceptance rates (*acrat*) for our delayed solution (*letabin.R*), for this alternative (*letamin.R*), and for a non-delayed reference (*letabaz.R*), when considering more and more fractured decompositions of a Bernoulli likelihood.

A very interesting outcome, since the acceptance rate does not change with the number of terms in the decomposition for the alternative delayed acceptance method… Even though it logically takes longer than our solution. However, the drawback is that detailed balance implies picking the order at random, hence losing the gain of computing the cheap terms first. If reversibility could be bypassed, then this alternative would definitely get very appealing!
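For concreteness, here is a toy R sketch (mine) of the delayed-acceptance-by-parts mechanism on such a Bernoulli example, following the product-of-minima factorisation of our paper with a fixed block ordering, and thus not Philip’s randomised-order alternative:

```r
## delayed acceptance under a flat prior on (0,1): the MH ratio is factored
## over data blocks and the acceptance test is run block by block, stopping
## at the first rejection
set.seed(1)
x <- rbinom(200, 1, 0.3)                      # simulated Bernoulli data
blocks <- split(x, rep(1:10, each = 20))      # a "fractured" decomposition
logf <- function(p, xs) sum(dbinom(xs, 1, p, log = TRUE))
niter <- 1e4
p <- runif(1); chain <- numeric(niter); acc <- 0
for (t in 1:niter) {
  q <- p + runif(1, -0.1, 0.1)                # symmetric random-walk proposal
  if (q > 0 && q < 1) {
    ok <- TRUE
    for (b in blocks) {                       # sequential (delayed) tests
      rho <- exp(logf(q, b) - logf(p, b))
      if (runif(1) > min(1, rho)) { ok <- FALSE; break }
    }
    if (ok) { p <- q; acc <- acc + 1 }
  }
  chain[t] <- p
}
acc / niter                                   # empirical acceptance rate (acrat)
```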

Filed under: Books, Kids, Statistics, University life Tagged: acceleration of MCMC algorithms, delayed acceptance, detailed balance, MCMC, Monte Carlo Statistical Methods, reversibility, simulation

### control functionals for Monte Carlo integration

**T**his new arXival by Chris Oates, Mark Girolami, and Nicolas Chopin *(warning: they are all colleagues & friends of mine, at least until they read those comments…)* is a variation on control variates, but with a surprising twist, namely that the inclusion of a control variate functional may produce a *sub-root-n* (i.e., faster than √n) convergence rate in the resulting estimator. Surprising, as I did not know one could get to sub-root-n rates…! Now, I had forgotten that Anne Philippe and I used the score in an earlier paper of ours as a control variate for Riemann sum approximations, with faster convergence rates, but this is indeed a new twist, in particular because it produces an unbiased estimator.

The control variate writes

where π is the target density and φ is a free function to be optimised. (Under the constraint that πφ is integrable, the expectation of ψφ is indeed zero.) The “explanation” for the sub-root-n behaviour is that ψφ is chosen as an L2 regression. When looking at the sub-root-n convergence proof, the explanation is more of a Rao-Blackwellisation type, assuming a first-level convergent (or *presistent*) approximation to the integrand (of the above form ψφ) can be found. The optimal φ is the solution of a differential equation that needs estimating, and the paper concentrates on approximating strategies. This connects with Antonietta Mira’s zero-variance control variates, but in a non-parametric manner, adopting a Gaussian process as the prior on the unknown φ. And this is where the huge innovation in the paper resides, I think, i.e., in assuming a Gaussian process prior on the control functional *and* in managing to preserve unbiasedness. As in many of its implementations, modelling by Gaussian processes offers nice features, like ψφ being itself a Gaussian process. Except that it cannot be shown to lead to presistency on a theoretical basis, even though it appears to hold in the examples of the paper. Apart from this theoretical difficulty, the potential hardship with the method seems to lie in the implementation, as there are several parameters and functionals to be calibrated, hence calling for cross-validation, which may often be time-consuming. The gains are humongous, so the method should be adopted whenever the added cost of implementing it is reasonable, a cost whose evaluation is not clearly provided by the paper. In the toy Gaussian example where everything can be computed, I am surprised at the relatively poor performance of a Riemann sum approximation to the integral, wondering at the level of quadrature involved therein. The paper also interestingly connects with O’Hagan’s (1991) Bayes-Hermite [polynomials] quadrature and quasi-Monte Carlo [obviously!].
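The zero-mean property at the core of the construction is easy to illustrate in one dimension (a toy sketch of mine, not the Gaussian-process machinery of the paper): with π the standard normal density, ψ(x) = φ′(x) − xφ(x) integrates to zero against π for any smooth φ with πφ vanishing at infinity, and can thus serve as a control variate.

```r
## toy control-functional-style variance reduction for E[X^2] under N(0,1)
set.seed(2)
n <- 1e4
x <- rnorm(n)                        # sample from pi = N(0,1)
f <- function(x) x^2                 # integrand, true value E[f(X)] = 1
phi <- function(x) x                 # a hand-picked free function
psi <- function(x) 1 - x * phi(x)    # phi'(x) + phi(x) * d/dx log pi(x)
b <- -cov(f(x), psi(x)) / var(psi(x))          # optimal control-variate coefficient
c(vanilla = mean(f(x)), controlled = mean(f(x) + b * psi(x)))
c(var_vanilla = var(f(x)), var_controlled = var(f(x) + b * psi(x)))
```

In this toy case the corrected integrand is essentially constant, which is the ideal situation the Gaussian process prior on φ aims to approach automatically.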

Filed under: Books, Statistics, University life Tagged: control variate, convergence rate, Gaussian processes, Monte Carlo Statistical Methods, simulation, University of Warwick

### Domaine Ollier Taillefer [Faugères]

Filed under: Travel, Wines Tagged: carignan, Faugères, grenache, Languedoc, mourvèdre, red wine, Syrah

### Shravan Vasishth at Bayes in Paris this week

**T**aking advantage of his visit to Paris this month, Shravan Vasishth, from the University of Potsdam, Germany, will give a talk at 10.30am, next Friday, October 24, at ENSAE on:

**Using Bayesian Linear Mixed Models in Psycholinguistics: Some open issues**

*With the arrival of the probabilistic programming language Stan (and JAGS), it has become relatively easy to fit fairly complex Bayesian linear mixed models. Until now, the main tool that was available in R was lme4. I will talk about how we have fit these models in recently published work (Husain et al 2014, Hofmeister and Vasishth 2014). We are trying to develop a standard approach for fitting these models so that graduate students with minimal training in statistics can fit such models using Stan.*

*I will discuss some open issues that arose in the course of fitting linear mixed models. In particular, one issue is: should one assume a full variance-covariance matrix for random effects even when there is not enough data to estimate all parameters? In lme4, one often gets convergence failure or degenerate variance-covariance matrices in such cases and so one has to back off to a simpler model. But in Stan it is possible to assume vague priors on each parameter, and fit a full variance-covariance matrix for random effects. The advantage of doing this is that we faithfully express in the model how the data were generated—if there is not enough data to estimate the parameters, the posterior distribution will be dominated by the prior, and if there is enough data, we should get reasonable estimates for each parameter. Currently we fit full variance-covariance matrices, but we have been criticized for doing this. The criticism is that one should not try to fit such models when there is not enough data to estimate parameters. This position is very reasonable when using lme4; but in the Bayesian setting it does not seem to matter.*
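As a rough lme4 illustration of the back-off issue mentioned in the abstract (simulated data and hypothetical variable names, mine):

```r
## "maximal" random-effects model with a full variance-covariance matrix versus
## the simpler back-off with uncorrelated random effects
library(lme4)
set.seed(3)
dat <- expand.grid(subj = factor(1:20), item = 1:10, cond = c(-0.5, 0.5))
dat$rt <- 500 + 30 * dat$cond +
  rnorm(20, 0, 40)[dat$subj] +               # by-subject intercepts only
  rnorm(nrow(dat), 0, 60)                    # residual noise
m_full    <- lmer(rt ~ cond + (1 + cond | subj), data = dat)   # full var-cov matrix
m_reduced <- lmer(rt ~ cond + (1 + cond || subj), data = dat)  # correlation dropped
## with no true slope variance, m_full typically reports a degenerate (singular)
## variance-covariance matrix, which is the situation discussed in the abstract
```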

Filed under: Books, Statistics, University life Tagged: Bayesian linear mixed models., Bayesian modelling, JAGS, linear mixed models, lme4, prior domination, psycholinguistics, STAN, Universität Potsdam

### more geese…

### a week in Warwick

**T**his past week in Warwick has been quite enjoyable and profitable, from staying once again in a math house, to taking advantage of the new bike, to having several long discussions on prospective and exciting projects, to meeting with some of the new postdocs and visitors, to attending Tony O’Hagan’s talk on “wrong models”. And then having Simo Särkkä, who was visiting Warwick this week, discuss his paper with me. And Chris Oates doing the same with his recent arXival with Mark Girolami and Nicolas Chopin (soon to be commented upon, of course!). And managing to run in dry conditions despite the heavy rains (but in pitch dark, as sunrise is now quite late, with the help of a headlamp and the beauty of a countryside starry sky). I also evaluated several students’ projects, two of which led me to wonder when using RJMCMC is appropriate for comparing two models. In addition, I eloped one evening to visit old (1977!) friends in northern Birmingham, despite fairly dire London Midlands performances between Coventry and Birmingham New Street, the only redeeming feature being that the connecting train there was also late by one hour! (Not to mention the weirdest taxi-driver ever on my way back, trying to get my opinion on whether or not he should have an affair… which at least kept me awake for the whole trip!) Definitely looking forward to my next trip there at the end of November.

Filed under: Books, Kids, Running, Statistics, University life Tagged: Birmingham, control variate, Coventry, English train, goose, London Midlands, Mark Girolami, Nicolas Chopin, particle MCMC, simulation model, taxi-driver, Tony O'Hagan, University of Warwick

### art brut

### frankly, I did not read your papers in detail, but…

**A** very refreshing email from a PhD candidate from abroad:

*“Franchement j’ai pas lu encore vos papiers en détails, mais j’apprécie vos axes de recherche et j’aimerai bien en faire autant avec votre collaboration, bien sûr. Actuellement, je suis à la recherche d’un sujet de thèse et c’est pour cela que je vous écris. Je suis prêt à négocier sur tout point et de tout coté.”*

*[Frankly I have not yet read your papers in detail, but I appreciate your research areas and I would love to do the same with your help, of course. Currently, I am looking for a thesis topic and this is why I write to you. I am willing to negotiate on any point and any side.]*

Filed under: Kids, Statistics, University life Tagged: foreign students, PhDs, PhD topic

### insufficient statistics for ABC model choice

*[Here is a revised version of my comments on the paper by Julien Stoehr, Pierre Pudlo, and Lionel Cucala, now to appear [both paper and comments] in Statistics and Computing special MCMSki 4 issue.]*

**A**pproximate Bayesian computation techniques are the 2000’s successors of MCMC methods, handling new models where MCMC algorithms are at a loss, in the same way the latter were able in the 1990’s to cover models that regular Monte Carlo approaches could not reach. While they first sounded like “quick-and-dirty” solutions, only to be considered until more elaborate solutions could (not) be found, they have been progressively incorporated within the statistician’s toolbox as a novel form of non-parametric inference handling partly defined models. A statistically relevant feature of those ABC methods is that they require replacing the data with smaller-dimension summaries or statistics, because of the complexity of the former. In almost every case when calling ABC is the unique solution, those summaries are not sufficient and the method thus implies a loss of statistical information, at least at a formal level, since relying on the raw data is out of the question. This forced reduction of statistical information raises many relevant questions, from the choice of summary statistics to the consistency of the ensuing inference.

In this paper of the special MCMSki 4 issue of *Statistics and Computing*, Stoehr et al. attack the recurrent problem of selecting summary statistics for ABC in a hidden Markov random field, since there is no fixed-dimension sufficient statistic in that case. The paper provides a very broad overview of the issues and difficulties related to ABC model choice, which has been the focus of some advanced research only for a few years. Most interestingly, the authors define a novel, local, and somewhat Bayesian misclassification rate, an error that is conditional on the observed value and derived from the ABC reference table. It is the posterior predictive error rate

integrating over both the model index m and the corresponding random variable Y (and the hidden intermediary parameter) given the observation. Or rather given the transform of the observation by the summary statistic S. The authors even go further and define the error rate of a classification rule based on a first (collection of) statistic, conditional on a second (collection of) statistic (see Definition 1). A notion rather delicate to validate on a fully Bayesian basis. And they advocate the substitution of the unreliable (estimates of the) posterior probabilities by this local error rate, estimated by traditional non-parametric kernel methods. Methods that are calibrated by cross-validation. Given a reference summary statistic, this perspective leads (at least in theory) to selecting the optimal summary statistic as the one leading to the minimal local error rate. Besides its application to hidden Markov random fields, which is of interest *per se*, this paper thus opens a new vista on calibrating ABC methods and evaluating their true performances conditional on the actual data. (The advocated abandonment of the posterior probabilities could almost justify the denomination of a *paradigm shift*. This is also the approach advocated in our random forest paper.)
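As a schematic sketch (mine, not the authors’ implementation) of what such a local error rate can look like: from a reference table of simulated model indices and summary statistics, the misclassification probability of the MAP rule is estimated at the observed summary by kernel weighting.

```r
## local (posterior-predictive style) error rate at the observed summary s_obs:
## m = simulated model indices, s = matrix of simulated summaries (one row per
## simulation), h = kernel bandwidth
local_error_rate <- function(m, s, s_obs, h) {
  d <- sqrt(rowSums(sweep(as.matrix(s), 2, s_obs)^2))   # distances to s_obs
  w <- dnorm(d, sd = h)                                  # kernel weights
  p_hat <- tapply(w, m, sum) / sum(w)                    # local model probabilities
  1 - max(p_hat)                                         # error of the local MAP rule
}
```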

Filed under: Books, Kids, Statistics, University life Tagged: ABC, arXiv, Cross Validation, Gibbs random field, hidden Markov models, Markov random field, Monte Carlo Statistical Methods, paradigm shift, Pierre Pudlo, predictive loss, simulation, summary statistics

### a bootstrap likelihood approach to Bayesian computation

This paper by Weixuan Zhu, Juan Miguel Marín [from Carlos III in Madrid, not to be confused with Jean-Michel Marin, from Montpellier!], and Fabrizio Leisen proposes an alternative to our 2013 PNAS paper with Kerrie Mengersen and Pierre Pudlo on empirical likelihood ABC, or BCel. The alternative is based on Davison, Hinkley and Worton’s (1992) bootstrap likelihood, which relies on a double bootstrap to produce a non-parametric estimate of the distribution of a given estimator of the parameter θ. It includes a smooth curve-fitting algorithm step, for which not much description is available in the paper.
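For intuition, here is a toy R sketch (mine, not the authors’ code) of the double-bootstrap construction, with the sample mean as estimator of θ: first-level resamples provide candidate values of θ, second-level resamples provide a kernel estimate of the density of the estimator at the observed value, and a smooth curve fit through the pairs gives the log bootstrap likelihood.

```r
## Davison-Hinkley-Worton style bootstrap likelihood for a mean (toy version)
set.seed(4)
x <- rnorm(50, mean = 2)                 # observed sample
t_obs <- mean(x)                         # observed estimate of theta
B1 <- 500; B2 <- 200
theta <- logL <- numeric(B1)
for (i in 1:B1) {
  x1 <- sample(x, replace = TRUE)        # first-level bootstrap sample
  theta[i] <- mean(x1)
  t2 <- replicate(B2, mean(sample(x1, replace = TRUE)))            # second level
  logL[i] <- log(mean(dnorm(t_obs, mean = t2, sd = bw.nrd0(t2))))  # KDE at t_obs
}
fit <- loess(logL ~ theta)               # the smooth curve-fitting step
plot(theta, logL, xlab = expression(theta), ylab = "log bootstrap likelihood")
lines(sort(theta), predict(fit)[order(theta)], col = "red")
```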

*“…in contrast with the empirical likelihood method, the bootstrap likelihood doesn’t require any set of subjective constrains taking advantage from the bootstrap methodology. This makes the algorithm an automatic and reliable procedure where only a few parameters need to be specified.”*

The spirit is indeed quite similar to ours in that a non-parametric substitute plays the role of the actual likelihood, with no correction for the substitution. Both approaches are convergent, with similar or identical convergence speeds. While the empirical likelihood relies on a choice of parameter identifying constraints, the bootstrap version starts directly from the [subjectively] chosen estimator of θ. For it indeed needs to be *chosen*. And computed.

*“Another benefit of using the bootstrap likelihood (…) is that the construction of bootstrap likelihood could be done once and not at every iteration as the empirical likelihood. This leads to significant improvement in the computing time when different priors are compared.”*

This is an improvement that could apply to the empirical likelihood approach as well, once a large enough collection of likelihood values has been gathered. But only in small enough dimensions where smooth curve-fitting algorithms can operate. The same criticism applies to the derivation of a non-parametric density estimate for the distribution of the estimator of θ. Critically, the paper only processes examples with a few parameters.

In the comparisons between BCel and BCbl produced in the paper, the gain indeed goes to BCbl. Since this paper is mostly based on examples and illustrations, not unlike ours, I would like to see more details on the calibration of the non-parametric methods and of regular ABC, as well as on the computing time. And on the variability of both methods over more than a *single* Monte Carlo experiment.

I am however uncertain as to how the authors process the population genetics example. They refer to the composite likelihood used in our paper to set the moment equations. Since this is not the true likelihood, how do the authors select their parameter estimates in the double-bootstrap experiment? The inclusion of Crakel’s and Flegal’s (2013) bivariate Beta is somewhat superfluous, as this example sounds to me like an artificial setting.

In the case of the Ising model, maybe the pre-processing step in our paper with Matt Moores could be compared with the other algorithms. In terms of BCbl, how does the bootstrap operate on an Ising model, i.e., (a) how does one subsample pixels and (b) what are the validity guarantees?

A test that would be of interest is to start from a standard ABC solution and use this solution as the reference estimator of θ, then proceeding to apply BCbl for that estimator. Given that the reference table would have to be produced only once, this would not necessarily increase the computational cost by a large amount…

Filed under: Books, R, Statistics, University life Tagged: ABC, ABCel, bivariate Beta distribution, bootstrap, bootstrap likelihood, double bootstrap, empirical likelihood, Ising model, population genetics

### 6th French Econometrics Conference in Dauphine

On December 4-5, Université Paris-Dauphine will host the 6th French Econometric Conference, which celebrates Christian Gouriéroux and his contributions to econometrics. (Christian was my statistics professor during my graduate years at ENSAE and then Head of CREST when I joined this research unit, first as a PhD student and later as Head of the statistics group. And he has always been a tremendous support for me.)

Not only is the program quite impressive, with co-authors of Christian Gouriéroux and a few Nobel laureates (if not the latest, Jean Tirole, who taught economics at ENSAE when I was a student there), but registration is free. I will most definitely attend the talks, as I am in Paris-Dauphine at this time of year (the week before NIPS). In particular, looking forward to Gallant’s views on Bayesian statistics.

Filed under: Books, Kids, pictures, Statistics, University life Tagged: Bayesian econometrics, Christian Gouriéroux, CREST, econometrics, ENSAE, Jean Tirole, Nobel Prize, Université Paris Dauphine

### my ISBA tee-shirt designs

**H**ere are my tee-shirt design proposals for the official ISBA tee-shirt competition! (I used the facilities of CustomInk.com as I could not easily find free software around. Except for the last one, where I recycled my vistaprint mug design…)

While I do not have any expectation of seeing one of these as the winner (!), what is your favourite one?!

Filed under: Books, Kids, pictures, Statistics, University life Tagged: Bayesian statistics, competition, CustomInk, ISBA, poll, tee-shirt, Thomas Bayes' portrait, werewolf

### Le Monde puzzle [#882]

**A** terrific Le Monde mathematical puzzle:

*All integers between 1 and n² are written in an (n,n) matrix under the constraint that two consecutive integers are adjacent (i.e. 15 and 13 are two of the four neighbours of 14). What is the maximal value for the sum of the diagonal of this matrix?*

Indeed, when considering a simulation resolution (for small values of n), it constitutes an example of a self-avoiding random walk: when inserting the integers one by one at random, one produces a random walk over the (n,n) grid.

While the solution is trying to stick as much as possible to the diagonal vicinity for the descending sequence n², n²-1, &tc., leaving space away from the diagonal for the terminal values, as in this example for n=5,

```
25 22 21 14 13
24 23 20 15 12
01 02 19 16 11
04 03 18 17 10
05 06 07 08 09
```

simulating such a random walk is a bit challenging as the brute force solution does not work extremely well:

```r
n=5
n2=n^2

init=function(){
  board=invoard=matrix(0,n,n)
  set=rep(0,n2)
  start=val=n2 #sample(1:n2,1)
  neigh=board[start]=val
  invoard[val]=start
  set[val]=1
  return(list(board=board,invoard=invoard,
              set=set,neigh=neigh))
}

voisi=function(i){
  a=arrayInd(i,c(n,n))
  b=a[2];a=a[1]
  if ((a>1)&(a<n)){
    voizin=(a+c(-1,1))+(b-1)*n
  }else{
    if (a==1) voizin=a+1+(b-1)*n
    if (a==n) voizin=a-1+(b-1)*n}
  if ((b>1)&(b<=n)){
    voizin=c(voizin,a+(b+c(-2,0))*n)
  }else{
    if (b==1) voizin=c(voizin,a+b*n)
    if (b>n) voizin=c(voizin,a+(b-2)*n)}
  voizin=voizin[(voizin>0)&(voizin<=n2)]
  return(voizin)
}

a=init()
board=a$board
invoard=a$invoard
set=a$set
neigh=a$neigh

while (sum(set)<n2){
  if (length(neigh)==1) neighb=neigh+c(-1,1)
  if (length(neigh)==2) neighb=c(min(neigh)-1,
                                 max(neigh)+1)
  neighb=neighb[(neighb>0)&(neighb<=n2)]
  neighb=neighb[set[neighb]==0]
  for (i in 1:length(neighb)){
    j=1
    if (i==2) j=1+(length(neigh)>1)
    loc=voisi(invoard[neigh[j]])
    loc=loc[board[loc]==0]
    if (length(loc)==0) break #no solution
    if (length(loc)==1){
      board[loc]=neighb[i]
      invoard[neighb[i]]=loc
      set[neighb[i]]=1}
    if (length(loc)>1){ #2 or more solutions
      val=sample(loc,1)
      board[val]=neighb[i]
      invoard[neighb[i]]=val
      set[neighb[i]]=1}
  }
  if (min(set[neighb])==0){ #start afresco
    a=init()
    board=a$board
    invoard=a$invoard
    set=a$set
    neigh=a$neigh
  }else{
    if (length(neighb)==1) neigh=neighb
    if (length(neighb)>1){
      neigh=sort(neighb)
      neigh=neigh[(neigh>0)&(neigh<=n2)]
    }}
}
```

the reason being that the chain often has to restart afresh, more and more often as n grows… For n=5, I still recovered the optimal solution:

```
> while (sum(diag(board))<93)
+   source("lemonde882.R")
     [,1] [,2] [,3] [,4] [,5]
[1,]    9    8    7    6    1
[2,]   10   17   18    5    2
[3,]   11   16   19    4    3
[4,]   12   15   20   23   24
[5,]   13   14   21   22   25
```

but running the R code for n=7 towards finding the maximum (259?) takes quite a while and 50 proposals for n=8 took the whole night… If I run a simple log-log regression on the values obtained for n=2,…,7, the prediction for n=10 is 768. A non-stochastic prediction is 870.

As a last-ditch attempt to recover the sequence n²-2k along the diagonal for k=0,1,…, I modified my code to favour simulations close to the diagonal at the start, as

```r
if (length(loc)>1){ #two or more solutions
  val=sample(loc,1,
             prob=exp(-abs(loc%%n-1-loc%/%n)/sum(set)))
```

which produces higher diagonal values but also more rejections.

Filed under: Books, Kids, Statistics, University life Tagged: Le Monde, mathematical puzzle, self-avoiding random walk

### how far can we go with Minard’s map?!

Like many others, I discovered Minard’s map of the catastrophic 1812 Russian campaign of Napoleon in Tufte’s book. And I consider it a masterpiece for its elegant way of summarising so many levels of information about this doomed invasion of Russia. So when I spotted Menno-Jan Kraak’s Mapping Time, analysing the challenges of multidimensional cartography through this map and this Napoleonic campaign, I decided to take a look at it.

Apart from the trivia about Kraak‘s familial connection with the Russian campaign and the Berezina crossing, which killed one of his direct ancestors, his great-great-grandfather, along with a few dozen thousand others (even though this was not the most lethal part of the campaign), he brings different perspectives on the meaning of a map and the quantity of information one could or should display. This is not unlike other attempts at competing with Minard, including those listed on Michael Friendly’s page. Incl. the cleaner printing above. And the dumb pie-chart… A lot more can be done in 2013 than in 1869, indeed, including the use of animated videos, but I remain somewhat sceptical as to the whole purpose of the book. It is a beautiful object, with wide margins and nice colour reproductions, for sure, alas… I just do not see the added value in Kraak‘s work. I would even go as far as thinking this is an a-statistical approach, namely that, by trying to cram as much data as possible into the picture, he forgets the whole point of the drawing, which is, I think, to show the awful death rate of the Grande Armée along this absurd trip to and from Moscow and the impact of temperature (although the rise that led to the thaw of the Berezina and the ensuing disaster does not seem correlated with the big gap at the crossing of the river). If more covariates were available, two further dimensions could be added: the proportions of deaths due to battle, guerilla, exhaustion, and desertion, and the counterpart map of the Russian losses. In the end, when reading Mapping Time, I learned more about the history surrounding this ill-planned military campaign than about the proper display of data towards informative and unbiased graphs.

Filed under: Books, Linux, pictures, Statistics, Travel Tagged: Berezina, book review, Grande Armée, Kraak, map, Minard, Napoléon, Patriotic war, Russian campaign, Russian winter, Tufte

### poor graph of the day

### Wien graffitis

Filed under: Kids, pictures, Running, Travel Tagged: bagpipes, graffitis, Panza, Scotland, Skirl, Vienna

### impression, soleil couchant

**A**s the sunset the day before had been magnificent [thanks to the current air pollution!], I attempted to catch it from a nice spot and went to the top of the nearby hill. The particles were alas (!) not so numerous that evening and so I only got this standard view of the sun going down. Incidentally, I had read a few days earlier in the plane from Vienna that the time and date when Monet’s Impression, Soleil Levant was painted had been identified by a U.S. astronomer as 7h35 on the 13th of November 1872. In Le Havre, Normandy. An exhibit in the nearby Musée Marmottan retraces this quest…

Filed under: pictures, Running Tagged: Bagneux, Claude Monet, Le Havre, Marmottan, Paris, sunrise, sunset

### Combining Particle MCMC with Rao-Blackwellized Monte Carlo Data Association

**T**his recently arXived paper by Juho Kokkala and Simo Särkkä mixes a whole lot of interesting topics, from particle MCMC and Rao-Blackwellisation to particle filters, Kalman filters, and even bear population estimation. The starting setup is the class of state-space hidden process models where particle filters are of use. And where Andrieu, Doucet and Holenstein (2010) introduced their particle MCMC algorithms. Rao-Blackwellisation steps have been proposed in this setup in the original paper, as well as in the ensuing discussion, like recycling rejected parameters and associated particles. The beginning of the paper is a review of the literature in this area, in particular of the *Rao-Blackwellized Monte Carlo Data Association* algorithm developed by Särkkä et al. (2007), of which I was not previously aware. (I have alas not followed the filtering literature closely enough in the past years.) Targets evolve independently according to Gaussian dynamics.

In the description of the model (Section 3), I feel there are prerequisites on the model that I did not have (and did not check in Särkkä et al., 2007), like the meaning of targets and measurements: it seems the model assumes each measurement corresponds to a given target. More details or an example would have helped. The extension over the existing literature appears to be the (major) step of including unknown parameters. Due to my lack of expertise in the domain, I have no notion of the existence of similar proposals in the literature, but handling unknown parameters is definitely of direct relevance for the statistical analysis of such problems!

The simulation experiment based on an Ornstein-Uhlenbeck model is somewhat anticlimactic in that the posterior on the mean reversion rate is essentially the prior, conveniently centred at the true value, while the others remain quite wide. It may be that the experiment was too ambitious in selecting 30 simultaneous targets with only a total of 150 observations. Without highly informative priors, my beotian reaction is to doubt the feasibility of the inference. In the case of the Finnish bear study, the huge discrepancy between priors and posteriors, as well as the significant difference between the forestry expert estimations and the model predictions should be discussed, if not addressed, possibly via a simulation using the posteriors as priors. Or maybe using a hierarchical Bayes model to gather a time-wise coherence in the number of bear families. (I wonder if this technique would apply to the type of data gathered by Mohan Delampady on the West Ghats tigers…)

Overall, I am slightly intrigued by the practice of running MCMC chains in parallel and merging the outcomes with no further processing. This assumes a lot in terms of convergence and mixing on all the chains. However, convergence is never directly addressed in the paper.

Filed under: Books, Statistics, University life Tagged: Rao-Blackwellized Monte Carlo Data Association, bear, data association, Finland, MCMC, particle Gibbs sampler, pMCMC, prior-posterior discrepancy, Rao-Blackwellisation, simulation, target tracking, tiger, Western Ghats