Xi'an's Og

an attempt at bloggin, nothing more...

the forever war [book review]

Sat, 2015-04-25 18:15

Another book I bought somewhat on a whim, although I cannot remember which one… The latest edition has a preface by John Scalzi, author of Old Man’s War and its sequels, where he acknowledges he would not have written this series had he previously read The Forever War. Which strikes me as ironic, as I found Scalzi’s novels way better. Deeper. And obviously not becoming obsolete so quickly! (As an aside, Scalzi is returning to the Old Man’s War universe with a new novel, The End of All Things.)

“…it’s easy to compute your chances of being able to fight it out for ten years. It comes to about two one-thousandths of one percent. Or, to put it another way, get an old-fashioned six-shooter and play Russian Roulette with four of the six chambers loaded. If you can do it ten times in a row without decorating the opposite wall, congratulations! You’re a civilian.”

This may be the main issue with The Forever War: the fact that it sounds so antiquated, which makes reading the novel like an exercise in Creative Writing 101, spotting how the author was so rooted in the 1970’s that he could not project far enough into the future to make his novel sustainable. The main obstacle to the suspension of disbelief required to proceed through the book is the low-tech configuration of Haldeman’s future. Even though intergalactic travel is possible via the traditional portals found in almost every sci-fi book, computers are blatantly missing from the picture, and so is artificial intelligence. (2001: A Space Odyssey was made in 1968, right?!) The economics of a forever-warring Earth are quite vague and unconvincing. There are no clever tactics in the war against the Taurans. Even the battle scenes are far from exciting, especially the parts where they fight with swords and arrows. And the treatment of sexuality has not aged well. So all that remains in favour of the story (and presumably made the success of the book) is the description of the ground soldier’s life, which could almost transcribe verbatim to another war and another era. End of the story. (Unsurprisingly, while being the first book picked for the SF Masterworks series, The Forever War did not make it into the 2011 series…)


Filed under: Books, Kids Tagged: Joe Haldeman, John Scalzi, science fiction, space opera, The End of All Things, Vietnam

bruggen in Amsterdam

Sat, 2015-04-25 08:18

ontological argument

Fri, 2015-04-24 18:15


Filed under: Books, Kids, pictures Tagged: atheism, ontological argument, xkcd

[non] Markov chains

Fri, 2015-04-24 08:18

scale acceleration

Thu, 2015-04-23 18:15

Kate Lee pointed me to a rather surprising inefficiency in matlab, exploited in Sylvia Frühwirth-Schnatter’s bayesf package: running a gamma simulation by rgamma(n,a,b) takes longer, and sometimes much longer, than rgamma(n,a,1)/b, the latter taking advantage of the scale nature of b. I wanted to check on my own whether or not R faced the same difficulty, so I ran an experiment [while stuck in a Thalys train at Brussels, between Amsterdam and Paris…], using different values for a [click on the graph] and a range of values of b. With no visible difference between the two implementations, at least when using system.time for checking.

a=seq(.1,4,le=25)
for (t in 1:25)
  a[t]=system.time(rgamma(10^7,.3,a[t]))[3]
a=a/system.time(rgamma(10^7,.3,1))[3]
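For a more direct one-off check of the original claim itself (the built-in rate argument versus rescaling a unit-rate draw), a minimal sketch would be as follows, with obviously machine-dependent timings and arbitrary choices of shape .3 and rate .3:

b=.3
system.time(rgamma(10^7,.3,b))    # gamma draws using the rate argument b
system.time(rgamma(10^7,.3,1)/b)  # unit-rate gamma draws rescaled to rate b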

Once home, I wondered about the relevance of the above comparison, since rgamma(10^7,.3,1) forces R to use 1 as a scale, which may differ from using rgamma(10^7,.3), where 1 is known to be the scale [does this sentence make sense?!]. So I reran an even bigger experiment as

a=seq(.1,4,le=25)
for (t in 1:25)
  a[t]=system.time(rgamma(10^8,.3,a[t]))[3]
a=a/system.time(rgamma(10^7,.3))[3]

and got the graph below, which is much more interesting because it shows that some values of a lead to a loss of efficiency of 50%. Indeed. (The most extreme cases correspond to a=0.3, 1.1, 5.8. No clear pattern emerging.)

Update

As pointed out by Martyn Plummer in his comment, the C function behind the R rgamma function and Gamma generator does take into account the scale nature of the second parameter, so the above time differences are not due to this function but rather to whatever my computer was running at the same time…! Apologies to anyone I scared with this void warning!


Filed under: pictures, R, Statistics, Travel, University life Tagged: bayesf, Brussels, Matlab, R, rgamma, rgamma.c, scale, scale parameter, system.time

I give an X

Thu, 2015-04-23 08:18

capacity exceeded…

Wed, 2015-04-22 18:15

A silly LaTeX error took me a few minutes too many to solve: I defined

\renewcommand\theta{\boldsymbol{\theta}}

which got me the error message

TeX capacity exceeded, sorry [grouping levels=255].

which I understood as the sign of a recursive definition. So I instead pre-defined the new θ as

\newcommand\btheta{\boldsymbol{\theta}}
\renewcommand\theta\btheta

which did not work either… After googling the issue, I found this online LaTeX Wikibook that provided me with the solution:

\let\btheta{\boldsymbol{\theta}}
\renewcommand\theta\btheta

which worked. Of course, a global change of \theta into \btheta would have been much much faster to execute….
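As a side note, another standard way around the recursion, given here as a minimal self-contained sketch rather than what I actually typed, is to save the original symbol under a fresh name with \let before redefining it (assuming amsmath is loaded for \boldsymbol; \thetasym is an arbitrary new name):

\documentclass{article}
\usepackage{amsmath}  % provides \boldsymbol
\let\thetasym\theta   % keep a copy of the original \theta under a fresh name
\renewcommand\theta{\boldsymbol{\thetasym}}  % \theta now typesets in bold
\begin{document}
$\theta$  % prints a bold theta
\end{document}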


Filed under: Books, University life Tagged: bold fonts, LaTeX, newcommand, renewcommand

ISBA 2016 [logo]

Tue, 2015-04-21 18:15

Things are starting to fall into place for the next ISBA World meeting, ISBA 2016, at the Forte Village Resort Convention Center, Sardinia, Italy, June 13-17, 2016. And not only the logo, inspired by the nuraghe below. I am sure the program will be terrific and will make this new occurrence of a “Valencia meeting” worth attending. Just like the previous occurrences, e.g. Cancún last summer and Kyoto in 2012.

However, and not for the first time, I wonder at the sustainability of such meetings when faced with ever increasing—or more accurately sky-rocketing!—registration fees… We have now reached €500 per participant for the early registration fee alone, excluding lodging, food, or transportation. If we bet on 500 participants, this means that simply renting the convention centre would cost €250,000 for the four or five days of the meeting. This sounds enormous, even accounting for the processing costs of the congress organiser. (By comparison, renting the convention centre for MCMSki in Chamonix for three days cost less than €20,000.) Given the likely high costs of staying at the resort, it is very unlikely I will be able to support my PhD students. As I know very well the difficulty of finding dedicated volunteers willing to offer a large fraction of their time towards the success of such behemoth meetings, this comment is by no means aimed at my friends from Cagliari, who kindly accepted to organise this meeting, but rather at the general state of academic meetings, whose costs make them out of reach for a large part of the scientific community.

This makes me wonder anew whether we should move to a novel conference model, given that the fantastic growth of the Bayesian community makes the ideal of gathering together in a single beach hotel for a week of discussions, talks, posters, and more discussions unattainable. If truly physical meetings are to perdure—and this notion is as debatable as the one about the survival of paper versions of journals—a new approach would be to find a few universities or sponsors able to provide one or several amphitheatres around the world and to connect all those places by teleconference. Reducing the audience size at each location would greatly ease the pressure to find a few huge and pricey convention centres, while dispersing the venues around the world would diminish travel costs as well. There could be more parallel sessions, and ways could be found to share virtual poster sessions, e.g. by having avatars present someone else’s poster. Time could be reserved for local discussions of presented papers, to be summarised later for the other locations. And so on… Obviously, something would be lost of the old camaraderie, sharing research questions and side stories, as well as gossip and wine, with friends from all over the world. And discovering new parts of the world. But the cost of meetings is already preventing some of those friends from showing up. I thus think it is time we reinvent the Valencia meetings for the next generation. And move to the Valenci-e-meetings.


Filed under: pictures, Statistics, Travel, University life, Wines Tagged: Bayesian Analysis, Cancún, Forte Village, ISBA, ISBA 2016, Italy, nuraghe, registration fees, Sardinia, Valencia meeting, world meeting

sakura [#2]

Tue, 2015-04-21 08:18

simulating correlated Binomials [another Bernoulli factory]

Mon, 2015-04-20 18:15

This early morning, just before going out for my daily run around The Parc, I checked X validated for new questions and came upon that one: namely, how to simulate X, a Bin(8,2/3) variate, and Y, a Bin(18,2/3) variate, such that corr(X,Y)=0.5. (No reason or motivation provided for this constraint.) And I thought of the following (presumably well-known) resolution, namely to break the two binomials into sums of 8 and 18 Bernoulli variates, respectively, and to make some of those Bernoulli variates common to both sums. If k variates are shared, cov(X,Y)=k·p(1−p) while var(X)var(Y)=8×18·p²(1−p)², hence corr(X,Y)=k/√(8×18)=k/12. (The probability of success does not matter.) For this specific set of values (8,18,0.5), since 8×18=12², the solution is 0.5×12=6 common variates. While running, I first thought this was a very artificial problem because of this occurrence of 8×18 being a perfect square, 12², and of corr(X,Y)×12 being an integer. A wee bit later I realised that all positive values of corr(X,Y) could be achieved by randomisation, i.e., by identifying a Bernoulli variate in X with a Bernoulli variate in Y with a certain probability ϖ. For negative correlations, one can use the (U,1−U) trick, namely to write the two Bernoulli variates as indicators of the events U≤p and 1−U≤p for a single uniform U, in order to minimise the probability they coincide.

I also checked this result with an R simulation

> z=rbinom(10^8,6,.66)
> y=z+rbinom(10^8,12,.66)
> x=z+rbinom(10^8,2,.66)
> cor(x,y)
[1] 0.5000539

Searching on Google immediately gave me a link to Stack Overflow with an earlier solution based on the same idea. And smarter R code.
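As an aside, here is a rough sketch of the randomisation version mentioned above, for an arbitrary positive target correlation ρ no larger than 2/3: pair each of the 8 Bernoulli components of X with one of the 18 components of Y and force each pair to coincide with probability ϖ=12ρ/8 (the values ρ=.35 and the sample size are arbitrary choices, purely for illustration):

rho=.35; p=2/3; w=12*rho/8      # pairing probability, requires rho <= 2/3
n=10^5
xb=matrix(rbinom(8*n,1,p),n)    # the 8 Bernoulli components of X
yb=matrix(rbinom(8*n,1,p),n)    # 8 independent counterparts for Y
shared=matrix(runif(8*n)<w,n)   # which pairs are forced to coincide
yb[shared]=xb[shared]
x=rowSums(xb)
y=rowSums(yb)+rbinom(n,10,p)    # the 10 remaining components of Y
cor(x,y)                        # should be close to rho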


Filed under: Books, Kids, pictures, R, Running, Statistics, University life Tagged: binomial distribution, cross validated, inverse cdf, Jacob Bernoulli, Parc de Sceaux, R, random simulation, stackoverflow

Mas del Perie

Mon, 2015-04-20 14:20


Filed under: pictures, Wines Tagged: Cahors, French wine, Malbec

Bayesian propaganda?

Sun, 2015-04-19 18:15

“The question is about frequentist approach. Bayesian is admissable [sic] only by wrong definition as it starts with the assumption that the prior is the correct pre-information. James-Stein beats OLS without assumptions. If there is an admissable [sic] frequentist estimator then it will correspond to a true objective prior.”

I had a wee bit of a (minor, very minor!) communication problem on X validated, about a question on the existence of admissible estimators of the linear regression coefficient in multiple dimensions, under squared error loss. When I first replied that all Bayes estimators with finite risk were de facto admissible, I got the above reply, which clearly misses the point, and as I had edited the OP’s question to include more tags, the edited version was reverted with a comment about Bayesian propaganda! This is rather funny, if not hilarious, as (a) Bayes estimators are indeed admissible in the classical or frequentist sense—I actually fail to see a definition of admissibility in the Bayesian sense—and (b) the complete class theorems of Wald, Stein, and others (like Jack Kiefer, Larry Brown, and Jim Berger) come from the frequentist quest for best estimator(s). To make my point clearer, I also reproduced in my answer Stein’s necessary and sufficient condition for admissibility from my book, but it did not help, as the theorem was “too complex for [the OP] to understand”, which shows in fine the point of reading textbooks!


Filed under: Books, Kids, pictures, Statistics, University life Tagged: Abraham Wald, admissibility, Bayesian Analysis, Bayesian decision theory, Charles Stein, James-Stein estimator, least squares, objective Bayes, shrinkage estimation, The Bayesian Choice

the luminaries [book review]

Fri, 2015-04-17 18:15

I bought this book by Eleanor Catton on my trip to Pittsburgh and Toronto in 2013 (thanks to Amazon associates’ gains!), mostly by chance (and also because it was the most recent Man Booker Prize winner). After a few sleepless nights last week (when I should not have been suffering from New York jet lag, given my sleeping pattern when abroad!), I went through this rather intellectual and somewhat contrived mystery. To keep with tradition (!), the cover puzzled me until I realised those were phases of the moon, in line with [spoiler!] the zodiacal pattern underlying the novel, a pattern I did not even try to follow as it sounded so artificial. And it presumably restricted the flow of the story by imposing further constraints on the characters’ interactions.

The novel has redeeming features, even though I am rather bemused at its getting a Man Booker Prize. (When compared with, say, The Remains of the Day…) For one thing, while a gold rush story of the 1860’s, it takes place on the South Island of New Zealand instead of the Klondike, around the Hokitika gold-field on the West Coast, with mentions of places that bring back memories of our summer (well, winter!) visit to Christchurch in 2006… The mix of cultures between English settlers, Maoris, and Chinese migrants is well-documented and informative, if rather heavy at times, bordering on the info-dump, and a central character like the Maori Te Rau Tauwhare sounds caricatural. The fact that the story takes place in Victorian times calls Dickens to mind, but I find very little connection in either style or structure, nor with Victorian contemporaries like Wilkie Collins, or Victorian pastiches like Charles Palliser‘s Quincunx… Nothing of the sanctimony, moral elevation, and subtle irony one could expect from a Victorian novel!

While a murder mystery, the plot is fairly upside down (or down under?!): the (spoiler!) assumed victim is missing for most of the novel, the (spoiler!) extracted gold is not apparently stolen but rather lacks owner(s), and the most moral character of the story ends up being the local prostitute. The central notion of the twelve men in a council each shedding a new light on the disappearance of Emery Staines is a neat if not that innovative literary trick, but twelve is a large number, which means following many threads, some of them dead-ends, to gather an appearance of a view of the whole story. As in Rashomon, one finishes the story with deep misgivings as to who did what, after so many incomplete and biased accounts. Unlike Rashomon, it alas takes forever to reach this point!


Filed under: Books, Kids, Mountains, Travel Tagged: Charles Dickens, Christchurch, Dunedin, gold rush, Man Booker Prize, New Zealand, South Island, The Quincunx, Wilkie Collins

vertical likelihood Monte Carlo integration

Thu, 2015-04-16 18:15

A few months ago, Nick Polson and James Scott arXived a paper on one of my favourite problems, namely the approximation of normalising constants. (It went way under my radar, as I only became aware of it quite recently, and it then remained in my travel bag for an extra few weeks…) The method for approximating the constant Z draws from an analogy with the energy level sampling methods found in physics, like the Wang-Landau algorithm. The authors rely on a one-dimensional slice sampling representation of the posterior distribution and [main innovation in the paper] add a weight function on the auxiliary uniform. The choice of the weight function links the approach with the dreaded harmonic mean estimator (!), but also with power posteriors and bridge sampling. The paper recommends a specific weighting function, based on a “score-function heuristic” I do not get. Further, the optimal weight depends on intractable cumulative functions, as in nested sampling. It would be fantastic if one could draw directly from the prior distribution of the likelihood function—rather than draw an x [from the prior or from something better, as suggested in our 2009 Biometrika paper] and transform it into L(x)—but as in all existing alternatives this alas is not the case. (Which is why I find the recommendations in the paper for practical implementation rather impractical: were the prior cdf of L(X) available, direct simulation of L(X) would be feasible. Maybe not the optimal choice though.)
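For reference, the vertical representation of the evidence on which both this weighting approach and nested sampling rest is the standard identity (my notation, not the paper's)

Z = \int L(x)\,\pi(x)\,\mathrm{d}x = \int_0^\infty \mathbb{P}_\pi\{L(X)>\lambda\}\,\mathrm{d}\lambda

obtained by writing L(x) as the integral over λ>0 of the indicator of λ<L(x) and exchanging the two integrals.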

“What is the distribution of the likelihood ordinates calculated via nested sampling? The answer is surprising: it is essentially the same as the distribution of likelihood ordinates by recommended weight function from Section 4.”

The approach is thus very much related to nested sampling, at least in spirit. As the authors later demonstrate, nested sampling is another case of weighting. Both versions require simulations under truncated likelihood values, albeit with a possibility of going down [in likelihood values] with the current version. Actually, more weighting could prove [more] efficient, as both the original nested sampling and the vertical sampling simulate from the prior under the likelihood constraint. Getting away from the prior should help. (I am quite curious to see how the method is received and applied.)


Filed under: Books, pictures, Running, Statistics, Travel, University life Tagged: Chicago Booth School of Business, importance sampling, Monte Carlo integration, Monte Carlo Statistical Methods, nested sampling, normalising constant, slice sampling, Wang-Landau algorithm

ah ces enseignants..!

Thu, 2015-04-16 13:18


Filed under: Kids, pictures, Travel

abc [with brains]

Thu, 2015-04-16 08:18


Filed under: Statistics

reis naar Amsterdam

Wed, 2015-04-15 18:15

On Monday, I went to Amsterdam to give a seminar at the University of Amsterdam, in the department of psychology, and to visit Eric-Jan Wagenmakers and his group there. And I had a fantastic time! I talked about our mixture proposal for Bayesian testing and model choice without getting hostile or adverse reactions from the audience, quite the opposite, as we later discussed this new notion for several hours in the café across the street. I also had the opportunity to meet with Peter Grünwald [who authored a book on the minimum description length principle], who pointed out a minor inconsistency of the common parameter approach, namely that the Jeffreys prior on the first model does not have to coincide with the Jeffreys prior on the second model. (The Jeffreys prior for the mixture itself being unavailable.) He also wondered about a more conservative property of the approach, compared with the Bayes factor, in the sense that the non-null parameter could get closer to the null parameter while still being identifiable.

Among the many persons I met in the department, Maarten Marsman talked to me about his thesis research, Plausible values in statistical inference, which involves handling the Ising model [a non-sparse Ising model with O(p²) parameters] via an auxiliary representation due to Marc Kac, getting rid of the normalising (partition) constant along the way. (Warning, some approximations involved!) He also showed me a simple probit example of the Gibbs sampler getting stuck as the sample size n grows, simply because the uniform conditional distribution on the parameter concentrates faster (in 1/n) than the posterior (in 1/√n). This does not come as a complete surprise, as data augmentation operates in an n-dimensional space and hence requires more time to get around. As a side remark [still worth printing!], Maarten dedicated his thesis “To my favourite random variables, Siem en Fem, and to my normalizing constant, Esther”, from which I hope you can spot the influence of at least two of my book dedications! As I left Amsterdam on Tuesday, I had time for an enjoyable dinner with E-J’s group, an equally enjoyable early morning run [with perfect skies for sunrise pictures!], and more discussions in the department, including a presentation of the new (delicious?!) Bayesian software developed there, JASP, which aims at non-specialists [i.e., researchers unable to code in R, BUGS, or, God forbid!, STAN], and about the consequences of mixture testing in some psychological experiments. Once again, a fantastic time discussing Bayesian statistics and their applications, with a group of dedicated and enthusiastic Bayesians!


Filed under: Books, Kids, pictures, Running, Statistics, Travel, University life, Wines Tagged: Amsterdam, Bayesian statistics, BUGS, canals, Holland, Ising model, JASP, Marc Kac, minimal description length principle, normalising constant, psychology, R, STAN, UvA