Bayesian Bloggers

the Flatland paradox

Xian's Og - Tue, 2015-05-12 18:15

Pierre Druilhet arXived a note a few days ago about the Flatland paradox (due to Stone, 1976) and his arguments against the flat prior. The paradox in this highly artificial setting is as follows:  Consider a sequence θ of N independent draws from {a,b,1/a,1/b} such that

  1. N and θ are unknown;
  2. a draw followed by its inverse cancels out, i.e., both the draw and its inverse are removed from θ;
  3. the successor x of θ is observed, meaning an extra draw is made and the above rule applied.

Then the frequentist probability that x is longer than θ, given θ, is at least 3/4 ("at least" because θ could be empty), while the posterior probability that x is longer than θ, given x, is 1/4 under the flat prior over θ. The paradox is that 3/4 and 1/4 clash. Not so much of a paradox, though, since there is no joint probability distribution over (x,θ).
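A quick simulation check of the frequentist side (my own illustration, not part of Druilhet’s note), coding the symbols a, b, 1/a, 1/b as 1, 2, -1, -2 and reducing the sequence by cancelling adjacent inverses:

reduce=function(s){            # cancel a symbol followed by its inverse, repeatedly
  out=integer(0)
  for (d in s){
    n=length(out)
    if (n>0 && out[n]==-d) out=out[-n] else out=c(out,d)}
  out}
set.seed(1)
N=20; nsim=1e4; longer=valid=0
for (t in 1:nsim){
  draws=sample(c(-2,-1,1,2),N+1,replace=TRUE)
  theta=reduce(draws[1:N])     # theta after N draws
  x=reduce(draws)              # x = theta plus one extra draw, reduced
  if (length(theta)>0){        # condition on a non-empty theta
    valid=valid+1
    longer=longer+(length(x)>length(theta))}}
longer/valid                   # close to 3/4, the frequentist probability given theta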

The paradox was actually discussed at length on Larry Wasserman’s now defunct Normal Deviate blog, from which I borrowed Larry’s graphical representation of the four possible values of θ given the (green) endpoint of x. Larry uses the Flatland paradox hammer to drive another nail into the coffin he contemplates for improper priors. And all things Bayes. Pierre (like others before him) argues against the flat prior on θ and shows that a flat prior on the length of θ leads to recovering 3/4 as the posterior probability that x is longer than θ.

As I was reading the paper in the métro yesterday morning, I became less and less satisfied with the whole analysis of the problem in that I could not perceive θ as a parameter of the model. While this may sound a pedantic distinction, θ is a latent variable (or a random effect) associated with x in a model where the only unknown parameter is N, the total number of draws used to produce θ and x. The distributions of both θ and x are entirely determined by N. (In that sense, the Flatland paradox can be seen as a marginalisation paradox in that an improper prior on N cannot be interpreted as projecting a prior on θ.) Given N, the distribution of x of length l(x) is then 1/4^N times the number of ways of picking (N-l(x)) annihilation steps among N. Using a prior on N like 1/N, which is improper, then leads to favouring the shortest path as well. (After discussing the issue with Pierre Druilhet, I realised he had a similar perspective on the issue. Except that he puts a flat prior on the length l(x).) Looking a wee bit further for references, I also found that Bruce Hill had adopted the same perspective of a prior on N.


Filed under: Books, Kids, R, Statistics, University life Tagged: combinatorics, Flatland, improper priors, Larry Wasserman, marginalisation paradoxes, paradox, Pierre Druilhet, subjective versus objective Bayes, William Feller
Categories: Bayesian Bloggers

terrible graph of the day

Xian's Og - Tue, 2015-05-12 08:18

A truly terrible graph in Le Monde about overweight and obesity in the EU countries (and Switzerland). The circle presentation makes no logical sense. Countries are ordered by 2030 overweight percentages, which implies the order differs for men and women. (With a neat sexist differentiation between male and female figures.) The allocation of the (2010) grey bar to its country is unclear (left or right?). And there is no uncertainty associated with the 2030 predictions. No clear message comes out of the graph, not even the massive explosion in the obesity and overweight percentages across EU countries. Now, given that the data is available for women and men, ‘Og’s readers should feel free to send me alternative representations!


Filed under: Books, Kids, R, Statistics Tagged: bad graph, EU, Le Monde, obesity, OMS, overweight, prediction
Categories: Bayesian Bloggers

quantile functions: mileage may vary

Xian's Og - Mon, 2015-05-11 18:15

When experimenting with various quantile functions in R, I was shocked [ok this is a bit excessive, let us say surprised] by how widely the execution times would vary, to the point of blaming a completely different feature of R. Borrowing from Charlie Geyer’s webpage on the topic of probability distributions in R, here is a table for some standard distributions: I ran

u=runif(1e7)
system.time(x<-qcauchy(u))

choosing an arbitrary parameter whenever needed.

Distribution   Function   Time (s)
Cauchy         qcauchy     2.2
Chi-Square     qchisq     43.8
Exponential    qexp        0.95
F              qf         34.2
Gamma          qgamma     37.2
Logistic       qlogis      1.7
Log Normal     qlnorm      2.2
Normal         qnorm       1.4
Student t      qt         31.7
Uniform        qunif       0.86
Weibull        qweibull    2.9

Of course, it does not mean much in that the slow distributions are exactly the ones requiring an extra parameter (with the Weibull, parameterised yet fast, as the exception). Nonetheless, that a chi-square inversion takes 50 times longer than a uniform inversion remains puzzling as to why it is not coded more efficiently. In particular, I was wondering why the chi-square inversion was slower than the Gamma inversion. Rerunning both inversions showed that they take essentially the same time:

> u=runif(1e7)
> system.time(x<-qgamma(u,sha=1.5))
utilisateur     système      écoulé
     21.534       0.016      21.532
> system.time(x<-qchisq(u,df=3))
utilisateur     système      écoulé
     21.372       0.008      21.361

Which also shows how variable system.time can be.
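Since system.time itself fluctuates, a minimal way to smooth out the comparison (my own sketch, not part of the original timing) is to average the elapsed times over a few replications:

timeq=function(qfun,...,nrep=5){   # average elapsed time of a quantile function over nrep runs
  u=runif(1e6)
  mean(replicate(nrep,system.time(qfun(u,...))["elapsed"]))}
timeq(qgamma,shape=1.5)
timeq(qchisq,df=3)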


Filed under: Books, R, Statistics Tagged: Charlie Geyer, execution time, pseudo-random generator, R, random simulation, standard quantile functions, system.time
Categories: Bayesian Bloggers

arbitrary distributions with set correlation

Xian's Og - Sun, 2015-05-10 18:15

A question recently posted on X Validated by Antoni Parrelada: given two arbitrary cdfs F and G, how can we simulate a pair (X,Y) with marginals F and G, and with set correlation ρ? The answer posted by Antoni Parrelada was to reproduce the Gaussian copula solution: produce (X’,Y’) as a Gaussian bivariate vector with correlation ρ and then turn it into (X,Y)=(F⁻¹(Φ(X’)),G⁻¹(Φ(Y’))). Unfortunately, this does not work, because the correlation is not preserved by the double transform. The graph above is part of my answer for a χ² and a log-Normal cdf for F and G: while corr(X’,Y’)=ρ, corr(X,Y) drifts quite a lot from the diagonal! Actually, by playing long enough with my function

tacor=function(rho=0,nsim=1e4,fx=qnorm,fy=qnorm){
  x1=rnorm(nsim);x2=rnorm(nsim)
  coeur=rho
  rho2=sqrt(1-rho^2)
  for (t in 1:length(rho)){
    y=pnorm(cbind(x1,rho[t]*x1+rho2[t]*x2))
    coeur[t]=cor(fx(y[,1]),fy(y[,2]))}
  return(coeur)
}

I managed to get an almost flat correlation graph for the admittedly convoluted call

tacor(seq(-1,1,.01), fx=function(x) qchisq(x^59,df=.01), fy=function(x) qlogis(x^59))

Now, the most interesting question is how to produce correlated simulations. A pedestrian way is to start with a copula, e.g. the above Gaussian copula, and to twist the correlation coefficient ρ of the copula until the desired correlation is attained for the transformed pair. That is, to draw the above curve and invert it. (Note that, as clearly exhibited by the graph just above, not all desired correlations can be achieved for arbitrary cdfs F and G.) This is however very pedestrian and I wonder whether or not there is a generic and somewhat automated solution…
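A rough automation of that inversion (my own sketch, not a definitive answer): treat the curve produced by tacor as a function of the copula parameter and solve for the value reaching the target correlation, e.g. with uniroot on a Monte Carlo estimate. The root is only approximate and only exists when the target correlation is attainable for the chosen F and G:

set_rho=function(target,fx=qnorm,fy=qnorm,nsim=1e5){
  # find the copula parameter whose induced correlation matches the target
  uniroot(function(r) tacor(r,nsim=nsim,fx=fx,fy=fy)-target,
          interval=c(-.999,.999))$root}
# e.g., a chi-square / log-normal pair aiming at correlation 0.5
rho=set_rho(.5,fx=function(u) qchisq(u,df=3),fy=qlnorm)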


Filed under: Books, Kids, pictures, R, Statistics, University life Tagged: chi-square density, copula, correlation, cross validated, inverse cdf, logistic distribution, Monte Carlo Statistical Methods, quantile function, R, random number generation, simulation
Categories: Bayesian Bloggers

the buried giant [book review]

Xian's Og - Sat, 2015-05-09 18:15

Last year, I posted a review of Ishiguro’s “When we were orphans”, with the comment that, while I enjoyed the novel and appreciated its multiple layers, I missed a strong enough grasp on the characters… I brought back from New York Ishiguro’s latest novel, “The Buried Giant“, with high expectations, doubled by the location of the story in an Arthurian setting, at a time when Britons had not yet been subsumed into Anglo-Saxon culture or forced to migrate to little Britain (Brittany). I was looking forward to a re-creation of an Arthurian cycle, possibly with a post-modern twist. (Plus, the book as an object is quite nice, with black page edges.)

“I respect what I think he was trying to do, but for me it didn’t work. It couldn’t work. No writer can successfully use the ‘surface elements’ of a literary genre — far less its profound capacities — for a serious purpose, while despising it to the point of fearing identification with it. I found reading the book painful. It was like watching a man falling from a high wire while he shouts to the audience, “Are they going to say I’m a tight-rope walker?”” Ursula K. Le Guin, March 2, 2015.

Alas, thrice alas, after reading it within a fortnight, I am quite disappointed by the book. Which, like the giant, would have better remained buried… Ishiguro pursues his delving into the notion of memories and remembrances, with the twisted reality they convey. After the detective cum historical novel of “When we were orphans”, he moves to the allegory of the early medieval tale, where characters have to embark upon a quest and face supernatural dangers like pixies and ogres. But mostly suffer from a collective amnesia they cannot shake. The idea is quite clever and once again attractive, but the resulting story sounds too artificial and contrived to involve me in the fate of its characters. As an aside, the two central characters, Beatrix and Axl, hardly have Briton names. Beatrix is of Latin origin and means traveller, while Axl is of Scandinavian origin and means father of peace. Appropriate symbols for their roles in the allegory, obviously. But this also makes me wonder how deep the allegory is, that is, how many levels of references and stories are hidden behind the bland trek of A & B through a fantasy Britain.

A book review in The Guardian links this book with Tolkien’s Lord of the Rings. I fail to see the connection: Tolkien was immersed for his whole life in Norse sagas and Saxon tales, creating his own myth out of his studies without a thought for parody or allegory. Here, the whole universe is misty and vague, and characters act with no reason or rationale. The whole episode in the monastery and the subsequent tunnel exploration do not make sense in terms of the story, while I cannot fathom what they are supposed to stand for. The theme of the ferryman carrying couples to an island where they may rest, together or not, sounds too obvious to just mean this. What else does it stand for?! The encounters with the rag woman, first in the Roman ruins where she threatens to cut a rabbit’s neck, then in a boat where she acts as a decoy, are completely obscure as to what they are supposed to mean. Maybe this accumulation of senseless events is the whole point of the book, but such a degree of deconstruction does not make for a pleasant read. Eventually, I came to hope that the mists would rise again and carry away all past memories of “The Buried Giant“!


Filed under: Books, Kids, pictures, Travel Tagged: Britain, dragon, England, Gawain, Kazuo Ishiguro, King Arthur, Lord of the Rings, Roman Britain, Saxons, The Buried Giant, Tolkien
Categories: Bayesian Bloggers

bikes vs cars

Xian's Og - Fri, 2015-05-08 18:15

Trailer for a film by Frederik Gertten about the poor situation of cyclists in most cities. Don’t miss Rob Ford, infamous ex-mayor of Toronto, and his justification for closing bike lanes in the city, comparing cycling to swimming with sharks… and siding with the sharks.


Filed under: Kids, pictures, Running, Travel Tagged: Bikes vs Cars, Copenhagen, film, London, Los Angeles, Rob Ford, São Paulo, Toronto
Categories: Bayesian Bloggers

Le Monde puzzle [#910]

Xian's Og - Thu, 2015-05-07 18:15

A game-theoretic Le Monde mathematical puzzle:

A two-person game consists in choosing an integer N and having each player successively pick a number in {1,…,N}, under the constraint that a player cannot pick a number adjacent to one this player has already picked (the first player unable to pick loses). Is there a winning strategy for either player, valid for all values of N?

for which I simply coded a recursive optimal strategy function:

gain=function(mine,yours,none){
  fine=none
  if (length(mine)>0)
    fine=none[apply(abs(outer(mine,none,"-")),2,min)>1]
  if (length(fine)>0){
    rwrd=0
    for (i in 1:length(fine))
      rwrd=max(rwrd,1-gain(yours,c(mine,fine[i]),none[none!=fine[i]]))
    return(rwrd)}
  return(0)}

which returned a zero gain for the starting player, hence no winning strategy, for all values of N but N=1:

> gain(NULL,NULL,1)
[1] 1
> gain(NULL,NULL,1:2)
[1] 0
> gain(NULL,NULL,1:3)
[1] 0
> gain(NULL,NULL,1:4)
[1] 0

Meaning that the starting player is always the loser!


Filed under: Books, Kids, Statistics, University life Tagged: Le Monde, mathematical puzzle, recursive function
Categories: Bayesian Bloggers

Hamming Ball Sampler

Xian's Og - Wed, 2015-05-06 18:15

Michalis Titsias and Christopher Yau just arXived a paper entitled the Hamming Ball sampler, aimed at large and complex discrete latent variable models. The completion method is named after Richard Hamming, who is associated with error-correcting codes (reminding me of one of the Master courses I took on coding, 30 years ago…), because it uses the Hamming distance in a discrete version of the slice sampler. One of the reasons for this proposal is that conditioning upon the auxiliary slice variable allows for the derivation of normalisation constants otherwise unavailable. The method still needs some calibration in the choice of blocks that partition the auxiliary variable and in the size of the ball. One of the examples assessed in the paper is a variable selection problem with 1200 covariates, out of which only 2 are relevant, while another example deals with a factorial HMM, involving 10 hidden chains. Since the paper compares each example with the corresponding block Gibbs sampling solution, it means this Gibbs sampling version is not intractable. It would be interesting to see a case where the alternative is not available…
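For what it is worth, here is my reading of the basic move, as a minimal R sketch restricted to binary vectors and a Hamming radius of one (my own illustration, not the authors’ code): an auxiliary centre u is drawn uniformly within the ball around the current x, and the new x is then drawn within the ball around u with probability proportional to the (unnormalised) target, which is where the local normalising constant becomes available.

hamming_ball_move=function(x,logtarget){
  d=length(x)
  flip=function(z,i){z[i]=1-z[i];z}
  ball=function(z){                  # enumerate the radius-one Hamming ball around z
    cand=list(z)
    for (i in 1:d) cand[[i+1]]=flip(z,i)
    cand}
  u=ball(x)[[sample(d+1,1)]]         # uniform auxiliary draw in the ball around x
  cand=ball(u)                       # candidates within the ball around u
  lw=sapply(cand,logtarget)          # unnormalised log target over the ball
  cand[[sample(length(cand),1,prob=exp(lw-max(lw)))]]}
# e.g., for independent Bernoulli(0.8) components:
# x=rep(0,10)
# for (t in 1:100) x=hamming_ball_move(x,function(z) sum(z*log(.8)+(1-z)*log(.2)))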


Filed under: Books, Statistics, University life Tagged: auxiliary variable, error correcting codes, Hamming distance, intractable likelihood, MCMC, simulation
Categories: Bayesian Bloggers

corrected MCMC samplers for multivariate probit models

Xian's Og - Tue, 2015-05-05 18:15

“Moreover, IvD point out an error in Nobile’s derivation which can alter its stationary distribution. Ironically, as we shall see, the algorithms of IvD also contain an error.”

Xiyun Jiao and David A. van Dyk arXived a paper correcting an MCMC sampler and the R package MNP for the multivariate probit model, proposed by Imai and van Dyk in 2005. [Hence the abbreviation IvD in the above quote.] Earlier versions of the Gibbs sampler for the multivariate probit model were proposed by Rob McCulloch and Peter Rossi in 1994, with a Metropolis update added by Agostino Nobile, and an improved version was finally developed by Imai and van Dyk in 2005. As noted in the above quote, Jiao and van Dyk have discovered two mistakes in this latest version, jeopardizing the validity of the output.

The multivariate probit model considered here is a multinomial model where the occurrence of the k-th category is represented as the k-th component of a (multivariate) normal (correlated) vector being the largest of all components. The latent normal model being non-identifiable, since invariant under both translation and scale, identifying constraints are used in the literature. This means using a covariance matrix of the form Σ/trace(Σ), where Σ is an inverse Wishart random matrix. In their 2005 implementation, relying on marginal data augmentation (which essentially means simulating the non-identifiable part repeatedly at various steps of the data augmentation algorithm), Imai and van Dyk missed a translation term and a constraint on the simulated matrices, which led to simulations outside the rightful support, as illustrated by the above graph [snapshot from the arXived paper].
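As a trivial illustration of that rescaling (my own two-liner, assuming the riwish function from the MCMCpack package for the inverse Wishart draw):

library(MCMCpack)
Sigma=riwish(v=10,S=diag(3))      # an inverse-Wishart draw
Sigma_id=Sigma/sum(diag(Sigma))   # trace-normalised version, removing the scale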

Since the IvD method is used in many subsequent papers, it is quite important that these mistakes are signalled and corrected. [Another snapshot above shows how much both algorithms differ!] Without much thinking about this, I [thus idly] wonder why an identifying prior could not take the place of a hard identifying constraint, as it should solve the issue more nicely. In that it would create fewer constraints and more entropy (!) in exploring the augmented space, while theoretically providing a convergent approximation of the identifiable parts. I may (must!) however be missing an obvious obstacle to this implementation.


Filed under: Books, pictures, R, Statistics, University life Tagged: Bayesian modelling, Data augmentation, identifiability, Journal of Econometrics, MNP package, multivariate probit model, probit model, R, Wishart distribution
Categories: Bayesian Bloggers

take those hats off [from R]!

Xian's Og - Mon, 2015-05-04 18:15

This is presumably obvious to most if not all R programmers, but I became aware today of a hugely (?) delaying tactic in my R codes. I was working with Jean-Michel and Natesh [who are visiting at the moment] and, when coding an MCMC run, I was telling them that I usually preferred to code Nsim=10000 as Nsim=10^4 for readability reasons. Suddenly, I became worried that this representation involved a computation, as opposed to Nsim=1e4, and ran a little experiment:

> system.time(for (t in 1:10^8) x=10^3)
utilisateur     système      écoulé
     30.704       0.032      30.717
> system.time(for (t in 1:1e8) x=10^3)
utilisateur     système      écoulé
     30.338       0.040      30.359
> system.time(for (t in 1:10^8) x=1000)
utilisateur     système      écoulé
      6.548       0.084       6.631
> system.time(for (t in 1:1e8) x=1000)
utilisateur     système      écoulé
      6.088       0.032       6.115
> system.time(for (t in 1:10^8) x=1e3)
utilisateur     système      écoulé
      6.134       0.029       6.157
> system.time(for (t in 1:1e8) x=1e3)
utilisateur     système      écoulé
      6.627       0.032       6.654
> system.time(for (t in 1:10^8) x=exp(3*log(10)))
utilisateur     système      écoulé
     60.571       0.000      57.103

So using the usual scientific notation with powers is taking its toll, while the calculator notation with e is cost-free… Weird!

I understand that the R notation 10^6 is an abbreviation for a power function that can be equally applied to pi^pi, say, but I still feel aggrieved that a nice scientific notation like 10⁶ ends up as a computing trap! I thus asked the question on the Stack Overflow forum, getting the (predictable) answer that the R code 10^6 meant calling the R power function, while 1e6 was a constant. Since 10⁶ does not differ from π^π, there is no reason 10⁶ should be recognised by R as a million. Except that it makes my coding more coherent.

> system.time( for (t in 1:10^8) x=pi^pi)
utilisateur     système      écoulé
     44.518       0.000      43.179
> system.time( for (t in 1:10^8) x=10^6)
utilisateur     système      écoulé
     38.336       0.000      37.860

Another thing I discovered from this answer to my question is that negative integers also require a call to a function:

> system.time( for (t in 1:10^8) x=1)
utilisateur     système      écoulé
     10.561       0.801      11.062
> system.time( for (t in 1:10^8) x=-1)
utilisateur     système      écoulé
     22.711       0.860      23.098

This sounds even weirder.
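A minimal way to see what the parser does in each case, using base R’s quote() (my own aside, not from the Stack Overflow answer):

class(quote(1e6))    # "numeric": parsed as a constant
class(quote(10^6))   # "call": parsed as a call to the power function ^
class(quote(-1))     # "call": parsed as a call to the unary minus function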


Filed under: Books, Kids, R, Statistics, University life Tagged: exponent notation, exponentiation, functions in R, mantissa, power, R, scientific notation, system.time
Categories: Bayesian Bloggers

moonlight

Xian's Og - Mon, 2015-05-04 13:18


Filed under: pictures, Running Tagged: full moon, Sceaux
Categories: Bayesian Bloggers

ABC and cosmology

Xian's Og - Sun, 2015-05-03 18:15

Two papers appeared on arXiv in the past two days with the similar theme of applying ABC-PMC [one version of which we developed with Mark Beaumont, Jean-Marie Cornuet, and Jean-Michel Marin in 2009] to cosmological problems. (As a further coincidence, I had just started refereeing yet another paper on ABC-PMC in another astronomy problem!) The first paper, cosmoabc: Likelihood-free inference via Population Monte Carlo Approximate Bayesian Computation by Ishida et al. [“et al.” including Ewan Cameron], proposes a Python ABC-PMC sampler with applications to galaxy cluster catalogues. The paper is primarily a description of the cosmoabc package, including code snapshots. Earlier occurrences of ABC in cosmology are found for instance in this earlier workshop, as well as in Cameron and Pettitt’s earlier paper. The package offers a way to evaluate the impact of a specific distance, with a 2D-graph demonstrating that the minimum [if not the range] of the simulated distances increases as the parameters move away from the best parameter values.

“We emphasis [sic] that the choice of the distance function is a crucial step in the design of the ABC algorithm and the reader must check its properties carefully before any ABC implementation is attempted.” E.E.O. Ishida et al.

The second [by one day] paper Approximate Bayesian computation for forward modelling in cosmology by Akeret et al. also proposes a Python ABC-PMC sampler, abcpmc. With fairly similar explanations: maybe both samplers should be compared on a reference dataset. While I first thought the description of the algorithm was rather close to our version, including the choice of the empirical covariance matrix with the factor 2, it appears it is adapted from a tutorial in the Journal of Mathematical Psychology by Turner and van Zandt. One out of many tutorials and surveys on the ABC method, of which I was unaware, but which summarises the pre-2012 developments rather nicely. Except for missing Paul Fearnhead’s and Dennis Prangle’s semi-automatic Read Paper. In the abcpmc paper, the update of the covariance matrix is the one proposed by Sarah Filippi and co-authors, which includes an extra bias term for faraway particles.
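For readers unfamiliar with the scheme, here is a minimal sketch of one ABC-PMC iteration in the spirit of our 2009 version (my own toy illustration, for a normal mean with the sample mean as summary statistic; the prior, kernel, and tolerance below are arbitrary choices and not the cosmoabc or abcpmc implementations):

abc_pmc_step=function(theta,w,eps,yobs){
  N=length(theta)
  tau2=2*sum(w*(theta-sum(w*theta))^2)     # twice the weighted empirical variance
  newtheta=neww=numeric(N)
  for (i in 1:N){
    repeat{
      cand=rnorm(1,sample(theta,1,prob=w),sqrt(tau2))  # move a resampled particle
      x=rnorm(length(yobs),cand,1)                     # pseudo-data from the model
      if (abs(mean(x)-mean(yobs))<eps) break           # keep the particle if close enough
    }
    newtheta[i]=cand
    neww[i]=dnorm(cand,0,10)/sum(w*dnorm(cand,theta,sqrt(tau2)))  # prior over kernel mixture
  }
  list(theta=newtheta,w=neww/sum(neww))
}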

“For complex data, it can be difficult or computationally expensive to calculate the distance ρ(x; y) using all the information available in x and y.” Akeret et al.

In both papers, the role of the distance is stressed as being quite important. However, the cosmoabc paper uses an L1 distance [see (2) therein] in a toy example without normalising between mean and variance, while the abcpmc paper suggests using a Mahalanobis distance that turns the d-dimensional problem into a comparison of one-dimensional projections.
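As a reminder (my notation, not the papers’), the Mahalanobis distance between summary vectors sx and sy, for a covariance matrix S, is

mahal_dist=function(sx,sy,S) sqrt(t(sx-sy)%*%solve(S)%*%(sx-sy))
# equivalently, with the base R function: sqrt(mahalanobis(sx,sy,S))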


Filed under: Books, pictures, Statistics, University life Tagged: ABC, ABC-PMC, abcpmc, astronomy, astrostatistics, cosmoabc, cosmology, likelihood-free methods, Mahalanobis distance, Python, semi-automatic ABC
Categories: Bayesian Bloggers

the 39 steps

Xian's Og - Sat, 2015-05-02 18:15

I had never read this classic that inspired Hitchcock’s 39 steps (which I had not watched before either). The setting of the book is slightly different from the film: it takes place in England and Scotland a few weeks before the First World War. German spies are trying to kill a prominent Greek politician [no connection with the current Euro-crisis intended!] and to learn about cooperative plans between France and Britain. The book involves no woman character (contrary to the film, where it adds a comical if artificial level). As in Rogue Male, most of the story is about an unlikely if athletic hero getting in the way of those spies and being pursued by them through the countryside. Even though the hunt has some intense moments, it lacks the psychological depth of Rogue Male, while the central notion that those spies are so good that they can play other persons’ roles without being recognised is implausible in the extreme, a feature reminding me of the Blake & Mortimer cartoons, which may have been inspired by this type of book. Especially The Francis Blake Affair. (Trivia: John Buchan ended up Governor General of Canada.)


Filed under: Books, Kids, Mountains, Running, Travel Tagged: Alfred Hitchcock, Blake and Mortimer, England, first World War, Glencoe, Rogue Male, Scotland, The 39 steps
Categories: Bayesian Bloggers

the raven, the cormorant, and the heron

Xian's Og - Fri, 2015-05-01 18:15

This morning, on my first lap of the Grand Bassin in Parc de Sceaux, I spotted “the” heron standing at its usual place, on the artificial wetland island created at one end of the canal. When coming back to this spot during the second lap, I could hear the heron calling loudly and saw a raven repeatedly diving near it and a nearby cormorant, who also seemed unhappy with this attitude, judging from the flapping of its wings… After a few dozen of those dives, the raven landed at the other end of the island and that was the end of the canal drama! Unless a dead fish had landed there, I wonder why the raven was having a go at those two larger birds.


Filed under: Kids, pictures, Running Tagged: cormorant, crow, heron, morning light, morning run, Parc de Sceaux, raven
Categories: Bayesian Bloggers

Le Monde puzzle [#909]

Xian's Og - Thu, 2015-04-30 18:15

Another of those “drop-a-digit” Le Monde mathematical puzzle:

Find all integers n with 3 or 4 digits, no exterior zero digit, and a single interior zero digit, such that removing that zero digit produces a divisor of n.

As in puzzle #904, I made use of the digin R function:

digin=function(n){ as.numeric(strsplit(as.character(n),"")[[1]])}

and simply checked all integers up to 10⁶:

plura=divid=NULL
for (i in 101:10^6){
  dive=rev(digin(i))
  if ((min(dive[1],rev(dive)[1])>0)&
      (sum((dive[-c(1,length(dive))]==0))==1)){
    dive=dive[dive>0]
    dive=sum(dive*10^(0:(length(dive)-1)))
    if (i==((i%/%dive)*dive)){
      plura=c(plura,i)
      divid=c(divid,dive)}}}

which leads to the output

> plura
[1]   105   108   405  2025  6075 10125 30375 50625 70875
> plura/divid
[1] 7 6 9 9 9 9 9 9 9

leading to the conclusion there is no solution beyond 70875. (Allowing for more than a single zero within the inner digits sees many more solutions.)


Filed under: Books, Kids, R Tagged: integers, Le Monde, mathematical puzzle
Categories: Bayesian Bloggers

the most patronizing start to an answer I have ever received

Xian's Og - Wed, 2015-04-29 18:15

Another occurrence [out of many!] of a question on X validated where the originator (primitivus petitor) was trying to get an explanation without the proper background, on either Bayesian statistics or simulation. The introductory sentence to the question was about “trying to understand how the choice of priors affects a Bayesian model estimated using MCMC”, but the bulk of the question was in fact about failing to understand an R code for a random-walk Metropolis-Hastings algorithm for a simple regression model provided in an introductory blog by Florian Hartig. And even more precisely about confusing the R code dnorm(b, sd = 5, log = T) in the prior with rnorm(1,mean=b, sd = 5, log = T) in the proposal…
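For the record, the distinction the question missed fits in two lines (a sketch with an arbitrary current value b):

b=0
dnorm(b,sd=5,log=TRUE)   # evaluates the log prior density at b (returns a number)
rnorm(1,mean=b,sd=5)     # simulates a random-walk proposal centred at b (and takes no log argument)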

“You should definitely invest some time in learning the bases of Bayesian statistics and MCMC methods from textbooks or on-line courses.” X

So I started my answer with the above warning, which sums up my feelings about many of those X validated questions, namely that primitivi petitores lack the most basic background to consider such questions. Obviously, I should not have bothered with an answer, but it was late at night after a long day, a good meal at the pub in Kenilworth, and a broken toe still bothering me. So I got this reply from the primitivus petitor that it was a patronizing piece of advice and that he preferred to learn from R code rather than from textbooks and on-line courses, having “looked through a number of textbooks”. Good luck with this endeavour then!


Filed under: Books, Kids, R, Statistics, University life Tagged: Bayesian statistics, cross validated, dnorm, MCMC, Metropolis-Hastings algorithm, Monte Carlo Statistical Methods, R
Categories: Bayesian Bloggers

a war[like] week

Xian's Og - Tue, 2015-04-28 18:15

This week in Warwick was one of the busiest ones ever, as I had to juggle between two workshops, including one in Oxford, a departmental meeting, two paper revisions, two pre-vivas, and a seminar in Leeds. Not to mention a broken toe (!), a flat tire (!!), and a dinner at the X. Hardly any time for writing blog entries..! Fortunately, I managed to squeeze in time for working with Kerrie Mengersen, who was visiting Warwick this fortnight, finding new directions for the (A)BCel approach we developed a few years ago with Pierre Pudlo. The workshop in Oxford was quite informal, with talks from PhD students [which I fear I cannot discuss here as the papers are not online yet]. And one talk by François Caron about estimating sparse networks with not exactly exchangeable priors and completely random measures. And one talk by Kerrie Mengersen on a new and in-progress approach to handling Big Data that I found quite convincing (but again cannot discuss here). The probabilistic numerics workshop was discussed in yesterday’s post and I managed to discuss it a wee bit further with the organisers at The X restaurant in Kenilworth. (As a superfluous aside, and after a second sampling this year, I concluded that the Michelin star is somewhat undeserved, in that the dishes at The X are not particularly imaginative or tasty, the excellent sourdough bread being the best part of the meal!) I was expecting the train ride to Leeds to be highly bucolic, as it went through the sunny countryside of South Yorkshire, with newly born lambs running in the bright green fields surrounded by old stone walls…, but it instead went through endless villages with their rows of brick houses. Not that I have anything against brick houses, mind! Only, I had not realised how dense this part of England was, presumably going back all the way to the Industrial Revolution and the Manchester-Leeds-Birmingham triangle.

My seminar in Leeds was as exciting as the one in Amsterdam last week, with a large audience, and I got many (and only interesting) questions, from the issue of turning the output (i.e., the posterior on α) into a decision rule, to making a decision in the event of a non-conclusive posterior, to links with earlier frequentist resolutions, to whether or not we were able to solve the Lindley-Jeffreys paradox (we are not!, which makes a lot of sense), to the possibility of running a subjective or a sequential version. After the seminar I enjoyed a perfect Indian dinner at Aagrah, apparently a Yorkshire institution, with the right balance between too hot and too mild, i.e., enough spices to break a good sweat but not so many as to lose all sense of taste!


Filed under: Books, Kids, pictures, Running, Statistics, Travel, University life, Wines Tagged: Aagrah restaurants, ABCel, empirical likelihood, England, Kenilworth, Leeds, Oxford (Mississipi), The Cross, University of Oxford, University of Warwick, Yorkshire
Categories: Bayesian Bloggers