Xi'an's Og

an attempt at bloggin, nothing more...

Le Monde puzzle [#905]

Tue, 2015-03-31 18:15

A recursive programming Le Monde mathematical puzzle:

Given n tokens with 10 ≤ n ≤ 25, Alice and Bob play the following game: the first player draws an integer 1 ≤ m ≤ 6 at random. This player can then take 1 ≤ r ≤ min(2m, n) tokens. The next player is then free to take 1 ≤ s ≤ min(2r, n−r) tokens. The player taking the last tokens is the winner. There is a winning strategy for Alice if she starts with m=3 and for Bob if he starts with m=2. Deduce the value of n.

Although I first wrote a brute-force version of the following code, a moderate amount of thinking leads to the conclusion that the player facing n remaining tokens after an adversary move of m tokens such that 2m ≥ n always wins by taking all n remaining tokens:

optim=function(n,m){
  outcome=(n<2*m+1)
  if (n>2*m){
    for (i in 1:(2*m))
      outcome=max(outcome,1-optim(n-i,i))
  }
  return(outcome)
}
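The recursion can be checked directly against the final answer (the function is reproduced here so that the snippet runs on its own):

```r
# winning-position recursion, as above: optim(n,m) returns 1 when the player
# facing n tokens after a move of m tokens can force taking the last token
optim=function(n,m){
  outcome=(n<2*m+1)   # with 2m >= n, take all n remaining tokens and win
  if (n>2*m){
    for (i in 1:(2*m))
      outcome=max(outcome,1-optim(n-i,i))
  }
  return(outcome)
}
optim(18,3)  # 1: the first player wins when drawing m=3
optim(18,2)  # 0: the first player loses when drawing m=2
```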

which leads to the output

> subs=rep(0,16)
> for (n in 10:25) subs[n-9]=optim(n,3)
> for (n in 10:25) if (subs[n-9]==1) subs[n-9]=1-optim(n,2)
> subs
 [1] 0 0 0 0 0 0 0 0 1 0 0 0 0 0 0 0
> (10:25)[subs==1]
[1] 18

Ergo, the number of tokens is 18!

Filed under: Books, Kids, R, Statistics, University life Tagged: Le Monde, mathematical puzzle, R, recursive function
Categories: Bayesian Bloggers

MCMskv, Lenzerheide, Jan. 5-7, 2016

Mon, 2015-03-30 18:15

Following the highly successful [authorised opinion!, from objective sources] MCMski IV in Chamonix last year, the BayesComp section of ISBA has decided in favour of a two-year period, which means the great item of news that next year we will meet again for MCMski V [or MCMskv for short], this time on the snowy slopes of the Swiss town of Lenzerheide, south of Zürich. The committees are headed by the indefatigable Antonietta Mira and Mark Girolami. The plenary speakers have already been contacted and Steve Scott (Google), Steve Fienberg (CMU), David Dunson (Duke), Krys Latuszynski (Warwick), and Tony Lelièvre (Mines, Paris) have agreed to talk. Similarly, the nine invited sessions have been selected and will include Hamiltonian Monte Carlo, Algorithms for Intractable Problems (ABC included!), Theory of (Ultra)High-Dimensional Bayesian Computation, Bayesian NonParametrics, Bayesian Econometrics, Quasi Monte Carlo, Statistics of Deep Learning, Uncertainty Quantification in Mathematical Models, and Biostatistics. There will be afternoon tutorials, including a practical session from the Stan team (a call for further tutorials is open), poster sessions, and a conference dinner at which we will be entertained by the unstoppable Imposteriors. The Richard Tweedie ski race is back as well, with a pair of Blossom skis for the winner!

As in Chamonix, there will be parallel sessions and hence the scientific committee has issued a call for proposals to organise contributed sessions, tutorials, and poster presentations on timely and exciting areas of current interest to Bayesian computation. All proposals should be sent to Mark Girolami directly by May the 4th (be with him!).
Filed under: Kids, Mountains, pictures, R, Statistics, Travel, University life Tagged: ABC, BayesComp, Bayesian computation, Blossom skis, Chamonix, Glenlivet, Hamiltonian Monte Carlo, intractable likelihood, ISBA, MCMSki, MCMskv, Monte Carlo Statistical Methods, Richard Tweedie, ski town, STAN, Switzerland, Zurich
Categories: Bayesian Bloggers

intuition beyond a Beta property

Sun, 2015-03-29 18:15

A self-study question on X validated exposed an interesting property of the Beta distribution:

If X is B(n,m) and Y is B(n+½,m), independently of X, then √(XY) is B(2n,2m)

While this can presumably be established by a mere change of variables, I could not carry the derivation to the end and used instead the moments E[(XY)^{s/2}], since they naturally lead to ratios of Beta functions B(a,b) and to nice cancellations thanks to the ½ in some Gamma functions [and this was the solution proposed on X validated]. However, I wonder about a more fundamental derivation of the property that would stem from a statistical reasoning… Trying with the ratio of Gamma random variables did not work. And the connection with order statistics does not apply because of the ½. Any idea?
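As a quick numerical sanity check (my addition, not part of the original question), the property is easy to verify by simulation in R: the first moment of √(XY) should match the B(2n,2m) mean, n/(n+m):

```r
set.seed(123)
# check: X ~ Be(n,m), Y ~ Be(n+1/2,m) independent  =>  sqrt(XY) ~ Be(2n,2m)
n=3; m=2; N=1e5
x=rbeta(N,n,m)
y=rbeta(N,n+.5,m)
z=sqrt(x*y)
mean(z)                    # close to n/(n+m)=.6, the Be(2n,2m) mean
ks.test(z,pbeta,2*n,2*m)   # Kolmogorov-Smirnov test against Be(6,4)
```

If the property is exact, the Kolmogorov-Smirnov distance should stay at Monte Carlo noise level for any (n,m).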

Filed under: Books, Kids, R, Statistics, University life Tagged: beta distribution, cross validated, moment generating function, Stack Exchange
Categories: Bayesian Bloggers

off to New York

Sat, 2015-03-28 19:15

I am off to New York City for two days, giving a seminar at Columbia tomorrow and visiting Andrew Gelman there. My talk will be about testing as mixture estimation, with slides similar to the Nice ones below if slightly upgraded and augmented during the flight to JFK. Looking at the past seminar speakers, I noticed we were three speakers from Paris in the last fortnight, with Ismael Castillo and Paul Doukhan (in the Applied Probability seminar) preceding me. Is there a significant bias there?!

Filed under: Books, pictures, Statistics, Travel, University life Tagged: Andrew Gelman, Bayesian hypothesis testing, Columbia University, finite mixtures, New York city, Nice, Paris, slides, SMILE seminar
Categories: Bayesian Bloggers

a most curious case of misaddressed mail

Fri, 2015-03-27 19:15

Today, I got two FedEx envelopes in the mail, both apparently from the same origin, namely the UF Statistics department reimbursing my travel expenses. However, once both envelopes were opened, I discovered that, while one indeed contained my reimbursement cheque, the other contained several huge cheques addressed to… a famous Nova Scotia fiddler, Natalie MacMaster, for concerts she recently gave in the South East US, with no possible connection to either me or the stats department! So I have no idea how those cheques came to me (before I returned them to their rightful recipient in Nova Scotia!). Complete mystery! The only possible link is that I just found out that Natalie MacMaster and her band played in Gainesville two weeks ago. Hence a potential scenario: at the local FedEx sorting centre, the envelope intended for Natalie MacMaster lost its label and someone took the second label from my then nearby envelope to avoid dealing with the issue… In any case, this gave me the opportunity to listen to pretty enticing Scottish music!

Filed under: Books, Travel, University life Tagged: Cape Breton, FedEx, fiddle, Gainesville, Irish music, Natalie MacMaster, Nova Scotia, Scotland, University of Florida
Categories: Bayesian Bloggers

likelihood-free model choice

Fri, 2015-03-27 09:18

Jean-Michel Marin, Pierre Pudlo and I just arXived a short review on ABC model choice, a first version of a chapter for the forthcoming Handbook of Approximate Bayesian Computation edited by Scott Sisson, Yanan Fan, and Mark Beaumont. Except for a new analysis of a Human evolution scenario, this survey mostly argues for the proposal made in our recent paper on the use of random forests and about the lack of reliable approximations to posterior probabilities. (A paper that was rejected by PNAS and is about to be resubmitted, hopefully with a more positive outcome.) The conclusion of the survey is that

The presumably most pessimistic conclusion of this study is that the connections between (i) the true posterior probability of a model, (ii) the ABC version of this probability, and (iii) the random forest version of the above, are at best very loose. This leaves open queries for acceptable approximations of (i), since the posterior predictive error is instead an error assessment for the ABC RF model choice procedure. While a Bayesian quantity that can be computed at little extra cost, it does not necessarily compete with the posterior probability of a model.

reflecting my hope that we can eventually come up with a proper approximation to the “true” posterior probability…

Filed under: Books, pictures, Statistics, University life, Wines Tagged: ABC, ABC model choice, Handbook of Approximate Bayesian computation, likelihood-free methods, Montpellier, PNAS, random forests, survey
Categories: Bayesian Bloggers

importance weighting without importance weights [ABC for bandits?!]

Thu, 2015-03-26 19:15

I did not read very far into the recent arXival by Neu and Bartók, but I got the impression that it was a version of ABC for bandit problems where the probabilities behind the bandit arms are not available but can be generated. Indeed, the stopping rule found in “Recurrence weighting for multi-armed bandits” is the generation of an arm equal to the learner’s draw (p.5). Since there is no tolerance there, the method is exact (“unbiased”). As no reference is made to the ABC literature, this may after all be a mere analogy…

Filed under: Books, Statistics, University life Tagged: ABC, machine learning, multi-armed bandits, tolerance, Zurich
Categories: Bayesian Bloggers

the maths of Jeffreys-Lindley paradox

Wed, 2015-03-25 19:15

Cristiano Villa and Stephen Walker arXived last Friday a paper entitled On the mathematics of the Jeffreys-Lindley paradox. Following the philosophical papers of last year by Ari Spanos, Jan Sprenger, Guillaume Rochefort-Maranda, and myself, this provides a more statistical view on the paradox. Or “paradox”… Even though I strongly disagree with the conclusion, namely that a finite (prior) variance σ² should be used in the Gaussian prior, falling back on classical Type I and Type II errors. So, in that sense, the authors avoid the Jeffreys-Lindley paradox altogether!

The argument against considering a limiting value for the posterior probability is that it converges to 0, 1, or an intermediate value. In the first two cases it is useless; the intermediate case is achieved when the prior probabilities of the null and alternative hypotheses depend on the variance σ². While I do not want to argue in favour of my 1993 solution

since it is ill-defined in measure theoretic terms, I do not buy the coherence argument that, since this prior probability converges to zero when σ² goes to infinity, the posterior probability should also go to zero. In the limit, probabilistic reasoning fails since the prior under the alternative is a measure not a probability distribution… We should thus abstain from over-interpreting improper priors. (A sin sometimes committed by Jeffreys himself in his book!)
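For concreteness (my addition), the textbook single-observation Gaussian version of this limiting behaviour: testing H₀: θ=0 against θ ~ N(0,σ²) for x ~ N(θ,1), the marginal under the alternative is N(0,1+σ²), so the Bayes factor in favour of the null is

```latex
B_{01}(x)=\frac{\varphi(x;0,1)}{\varphi(x;0,1+\sigma^2)}
         =\sqrt{1+\sigma^2}\,
          \exp\!\left\{-\frac{\sigma^2 x^2}{2(1+\sigma^2)}\right\}
         \;\longrightarrow\;\infty
         \quad\text{as }\sigma^2\to\infty,
```

which diverges for every fixed x, driving the posterior probability of the null to 1 whatever the data: the useless limiting case mentioned above.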

Filed under: Books, Kids, Statistics Tagged: Bayesian tests of hypotheses, Capitaine Haddock, Dennis Lindley, Harold Jeffreys, improper priors, Jeffreys-Lindley paradox, model posterior probabilities, Tintin
Categories: Bayesian Bloggers

Le Monde puzzle [#904.5]

Wed, 2015-03-25 09:18

About this #904 arithmetic Le Monde mathematical puzzle:

Find all plural integers, namely positive integers such that (a) none of their digits is zero and (b) removing their leftmost digit produces a plural integer that divides them (with the convention that one-digit integers are all plural).

A slight modification of the R code allows for a faster exploration, based on the fact that solutions are obtained by adding one extra leading digit to solutions with one less digit:

Reusing the digin() function (from the #904 post below) that turns an integer into its digits:

pluri=plura=NULL
#solutions with two digits
for (i in 11:99){
  dive=rev(digin(i)[-1])
  if (min(dive)>0){
    dive=sum(dive*10^(0:(length(dive)-1)))
    if (i==((i%/%dive)*dive)) pluri=c(pluri,i)}}
for (n in 2:6){ #number of digits
  plura=c(plura,pluri)
  pluro=NULL
  for (j in pluri){
    for (k in (1:9)*10^n){
      x=k+j
      if (x==(x%/%j)*j) pluro=c(pluro,x)}}
  pluri=pluro}

which leads to the same output

> sort(plura)
 [1]    11    12    15    21    22    24    25    31    32    33    35    36
[13]    41    42    44    45    48    51    52    55    61    62    63    64
[25]    65    66    71    72    75    77    81    82    84    85    88    91
[37]    92    93    95    96    99   125   225   312   315   325   375   425
[49]   525   612   615   624   625   675   725   735   825   832   912   915
[61]   925   936   945   975  1125  2125  3125  3375  4125  5125  5625  6125
[73]  6375  7125  8125  9125  9225  9375 53125 91125 95625
Filed under: Books, Kids, R, Statistics, University life Tagged: arithmetics, Le Monde, mathematical puzzle, strsplit()
Categories: Bayesian Bloggers

Le Monde puzzle [#904]

Tue, 2015-03-24 19:15

An arithmetic Le Monde mathematical puzzle:

Find all plural integers, namely positive integers such that (a) none of their digits is zero and (b) removing their leftmost digit produces a plural integer that divides them (with the convention that one-digit integers are all plural).

An easy arithmetic puzzle, with no real need for an R code since it is straightforward to deduce the solutions. Still, to keep up with tradition, here it is!

First, I found this function on Stack Overflow to turn an integer into its digits:

digin=function(n){ as.numeric(strsplit(as.character(n),"")[[1]])}

then I simply checked all integers up to 10⁶:

plura=NULL
for (i in 11:10^6){
  dive=rev(digin(i)[-1])
  if (min(dive)>0){
    dive=sum(dive*10^(0:(length(dive)-1)))
    if (i==((i%/%dive)*dive)) plura=c(plura,i)}}

eliminating solutions whose divisors are not themselves solutions:

sol=lowa=plura[plura<100]
for (i in 3:6){
  sli=plura[(plura>10^(i-1))&(plura<10^i)]
  ace=sli-10^(i-1)*(sli%/%10^(i-1))
  lowa=sli[apply(outer(ace,lowa,FUN="=="),1,max)==1]
  lowa=sort(unique(lowa))
  sol=c(sol,lowa)}

which leads to the output

> sol
 [1]    11    12    15    21    22    24    25    31    32    33    35    36
[13]    41    42    44    45    48    51    52    55    61    62    63    64
[25]    65    66    71    72    75    77    81    82    84    85    88    91
[37]    92    93    95    96    99   125   225   312   315   325   375   425
[49]   525   612   615   624   625   675   725   735   825   832   912   915
[61]   925   936   945   975  1125  2125  3125  3375  4125  5125  5625  6125
[73]  6375  7125  8125  9125  9225  9375 53125 91125 95625

leading to the conclusion that there is no solution beyond 95625.
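As an independent sanity check (my own addition, not in the original post), a direct recursive transcription of the definition confirms the largest solution:

```r
# standalone recursive check that n is "plural": no zero digit, and the
# suffix obtained by dropping the leftmost digit divides n and is plural
is_plural = function(n) {
  s = as.character(n)
  if (grepl("0", s)) return(FALSE)   # condition (a): no zero digit
  if (nchar(s) == 1) return(TRUE)    # one-digit integers are all plural
  suffix = as.numeric(substr(s, 2, nchar(s)))
  (n %% suffix == 0) && is_plural(suffix)
}
is_plural(95625)  # TRUE: 95625 = 17 x 5625, 5625 = 9 x 625, 625 = 25 x 25, 25 = 5 x 5
is_plural(13)     # FALSE: 3 does not divide 13
```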

Filed under: Books, Kids, Statistics, University life Tagged: Le Monde, mathematical puzzle, strsplit()
Categories: Bayesian Bloggers

light and widely applicable MCMC: approximate Bayesian inference for large datasets

Mon, 2015-03-23 19:15

Florian Maire (whose thesis was discussed in this post), Nial Friel, and Pierre Alquier (all in Dublin at some point) have arXived today a paper with the above title, aimed at quickly analysing large datasets. As reviewed in the early pages of the paper, this proposal follows a growing number of techniques advanced in the past years, like pseudo-marginals, Russian roulette, unbiased likelihood estimators, Firefly Monte Carlo, adaptive subsampling, sub-likelihoods, the telescoping debiased likelihood version, and even our very own delayed acceptance algorithm. (Which is incorrectly described as restricted to iid data, by the way!)

The lightweight approach is based on an ABC idea of working through a summary statistic that plays the role of a pseudo-sufficient statistic. The main theoretical result in the paper is indeed that, when subsampling in an exponential family, subsamples preserving the sufficient statistics (modulo a rescaling) are optimal in terms of distance to the true posterior. Subsamples are thus weighted in terms of the (transformed) difference between the full-data statistic and the subsample statistic, assuming they are both normalised to be comparable. I am quite (positively) intrigued by this idea in that it allows one to somewhat compare inference based on two different samples. The weights of the subsets are then used in a pseudo-posterior that treats the subset as an auxiliary variable (and the weight as a substitute to the “missing” likelihood). This may sound a wee bit convoluted (!) but the algorithm description is not yet complete: simulating jointly from this pseudo-target is impossible because of the huge number of possible subsets. The authors thus suggest running an MCMC scheme targeting this joint distribution, with a proposed move on the set of subsets and a proposed move on the parameter set conditional on whether or not the proposed subset has been accepted.
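To fix ideas, here is a toy sketch of my reading of such a scheme, for a Gaussian mean where the sufficient statistic is the sample average. The Gaussian weight form, the independent subset proposal, and all tuning values (ε, subset size, proposal scale) are my own illustrative choices, not the authors':

```r
set.seed(1)
# toy subsample-weighted MCMC sketch: N(theta,1) data, flat prior on theta,
# subsamples weighted by the distance between their mean and the full-data mean
n = 1e4; x = rnorm(n, 2)           # full dataset, true mean = 2
s_full = mean(x)                   # full-data (rescaled) sufficient statistic
m = 100                            # subsample size
eps = 0.05                         # tolerance scaling the subset weight
wght = function(sub) exp(-(mean(x[sub]) - s_full)^2 / eps^2)
niter = 1e3
theta = rep(0, niter); sub = sample(n, m)
for (t in 2:niter) {
  # move on the subset: independent proposal, accepted with the weight ratio
  sub_p = sample(n, m)
  if (runif(1) < wght(sub_p) / wght(sub)) sub = sub_p
  # Metropolis move on theta, using the current subsample likelihood
  prop = theta[t-1] + rnorm(1, sd = .2)
  logr = sum(dnorm(x[sub], prop, log = TRUE)) -
         sum(dnorm(x[sub], theta[t-1], log = TRUE))
  theta[t] = if (log(runif(1)) < logr) prop else theta[t-1]
}
mean(theta[500:niter])  # hovers near the true mean, with subsample-size noise
```

Even in this caricature, the balance question raised below is visible: the subset and parameter moves are alternated rather than jointly justified.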

From an ABC perspective, the difficulty in calibrating the tolerance ε seems more acute than usual, as the size of the subset comes as an additional computing parameter. Bootstrapping options seem impossible to implement in a large-size setting.

An MCMC issue with this proposal is that designing the move across the subset space is both paramount for its convergence properties and lacking in geometric intuition. Indeed, two subsets with similar summary statistics may be very far apart… Funnily enough, in the representation of the joint Markov chain, the parameter subchain is secondary, if crucial to avoid intractable normalising constants. It is also unclear to me, from reading the paper maybe too quickly, whether or not the separate moves when switching and when not switching subsets retain the proper balance condition for the pseudo-joint to still be the stationary distribution. The stationarity for the subset Markov chain is straightforward by design, but it is not so for the parameter. In case of a switched subset, simulating from the true full conditional given the subset would work, but not simulating via a fixed number L of MCMC steps.

The lightweight technology therein shows its muscles on a handwritten digit recognition example where it beats regular MCMC by a factor of 10 to 20, using only 100 datapoints instead of the 10⁴ original datapoints. While very nice and realistic, this example may be misleading in that 100 digit realisations may be enough to find a tolerable approximation to the true MAP. I was also intrigued by the processing of the probit example, until I realised the authors had integrated the covariate out and inferred about the mean of that covariate, which means it is not a genuine probit model.

Filed under: Books, Statistics, University life, Wines Tagged: ABC, big data, character recognition, delayed acceptance, Dublin, Ireland, Markov chains, MCMC algorithm, reversible jump MCMC, Russian roulette, subsampling

Categories: Bayesian Bloggers

ABC for copula estimation

Sun, 2015-03-22 19:15

Clara Grazian and Brunero Liseo (di Roma) have just arXived a note on a method merging copulas, ABC, and empirical likelihood. The approach is rather hybrid and thus not completely Bayesian, but this must be seen as a consequence of an ill-posed problem. Indeed, as in many econometric models, the model there is not fully defined: the marginals of iid observations are represented as being from well-known parametric families (and are thus well-estimated by Bayesian tools), while the joint distribution remains uncertain and hence so does the associated copula. The approach in the paper is to proceed stepwise, i.e., to estimate each marginal correctly enough to transform the data by an estimated cdf, and only then to estimate the copula or some aspect of it based on this transformed data, like Spearman’s ρ, for which an empirical likelihood is computed and aggregated to a prior to make a BCel weight. (If this sounds unclear, each BCel evaluation is based on a random draw from the posterior samples, which transfers some uncertainty in the parameter evaluation into the copula domain. Thanks to Brunero and Clara for clarifying this point for me!)
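A toy rendering of the two-step idea (my own sketch, not the authors' code), with an exponential and a normal margin tied by a Gaussian copula:

```r
set.seed(42)
# Gaussian copula with correlation .6, exponential and normal margins
n = 500; r = .6
z = matrix(rnorm(2 * n), n) %*% chol(matrix(c(1, r, r, 1), 2))
x = qexp(pnorm(z[, 1]), rate = 2)   # margin 1: Exp(2)
y = 1 + .5 * z[, 2]                 # margin 2: N(1, .5)
# step 1: parametric estimation of each margin separately
rate_hat = 1 / mean(x)
m_hat = mean(y); s_hat = sd(y)
# step 2: transform through the estimated cdfs, then summarise the dependence
u = pexp(x, rate_hat); v = pnorm(y, m_hat, s_hat)
cor(u, v, method = "spearman")
# for a Gaussian copula, the population value is (6/pi)*asin(r/2), about .58
```

(Note that a rank statistic like Spearman’s ρ is unchanged by the monotone cdf transforms, so for this particular functional the estimated margins are immaterial; other moment equations would feel the arbitrariness discussed below.)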

At this stage of the note, there are two illustrations revolving around Spearman’s ρ. One on simulated data, with better performances than a nonparametric frequentist solution. And another one on a Garch (1,1) model for two financial time-series.

I am quite glad to see an application of our BCel approach in another domain, although I feel a tiny bit uncertain about the degree of arbitrariness in the approach, from the estimated cdf transforms of the marginals to the choice of the moment equations identifying the parameter of interest like Spearman’s ρ. Especially if one uses a parametric copula whose moments are equally well-known. While I see the practical gain in analysing each component separately, the object created by the estimated cdf transforms may have a very different correlation structure from the true cdf transforms. Maybe there exist consistency conditions on the estimated cdfs… Maybe other notions of orthogonality or independence could be brought into the picture to validate further the two-step solution…

Filed under: Books, Kids, pictures, Statistics, Travel, University life Tagged: ABC, copula, empirical likelihood, GARCH model, Italia, La Sapienza, Roma, Spearman's ρ
Categories: Bayesian Bloggers

a marathon a day for… a year?!

Sun, 2015-03-22 09:18

“I think a lot of people do not push themselves enough.” Rob Young

I found this Guardian article about Rob Young and his goal of running the equivalent of 400 marathons in 365 days. Meaning there are days he runs the equivalent of three marathons. Hard to believe, isn’t it?! But his terrible childhood is as hard to believe. And how cool is running with a kilt, hey?! If you want to support his donation for disadvantaged children, go to his marathon man site. Keep running, Rob!

Filed under: Kids, Running, Travel Tagged: England, kilt, marathon, Rob Young, Scotland, The Guardian
Categories: Bayesian Bloggers

Dom Juan’s opening

Sat, 2015-03-21 19:15

The opening lines of the play Dom Juan by Molière, a play with highly subversive undertones about free will and religion. And this ode to tobacco that may get it banned in Australia, if the recent deprogramming of Bizet’s Carmen is setting a trend! [Personal note to Andrew: neither Molière’s nor my research is or was supported by a tobacco company! Although I am not 100% sure about Molière…]

“Quoi que puisse dire Aristote et toute la philosophie, il n’est rien d’égal au tabac: c’est la passion des honnêtes gens, et qui vit sans tabac n’est pas digne de vivre. Non seulement il réjouit et purge les cerveaux humains, mais encore il instruit les âmes à la vertu, et l’on apprend avec lui à devenir honnête homme.”

Dom Juan, Molière, 1665

[Whatever Aristotle and all of philosophy may say, there is nothing equal to tobacco: it is the passion of upright people, and whoever lives without tobacco does not deserve to live. Not only does it rejoice and purge human brains, but it also instructs souls in virtue, and with it one learns to become a gentleman.]

Filed under: Books, Kids Tagged: 17th Century theatre, Aristotle, Australia, Bizet, Carmen, Dom Juan, French literature, Molière, opera, tobacco
Categories: Bayesian Bloggers

a mad afternoon!

Sat, 2015-03-21 16:58

An insanely exciting final day and end to the 2015 Six Nations tournament! In the first game of the afternoon, Wales beat Italy in Rome by a sound 20-61!, turning them into likely champions. But then, right after, Ireland won against Scotland 10-40 in mythical Murrayfield, a feat that made them winners unless England beat France in Twickenham by at least 26 points. Which did not happen, in a completely demented rugby game, a game for the anthology where England dominated but France was much more inspired (if as messy as usual) than in the past games and fought fair and well, managing to lose 35-55 and hence block an English victory in the Six Nations. Which can be considered a victory of sorts…! Absolutely brilliant ending.

Filed under: pictures, Running, Travel Tagged: England, France, Ireland, Italy, Murrayfield, rugby, Six Nations 2015, Six Nations tournament, Twickenham, Wales
Categories: Bayesian Bloggers

more gray matters

Sat, 2015-03-21 09:18

Filed under: pictures Tagged: clouds, eclipse, INSEE, Malakoff
Categories: Bayesian Bloggers

Gray matters [not much, truly]

Fri, 2015-03-20 19:15

Through the blog of Andrew Jaffe, Leaves on the Lines, I became aware of John Gray‘s tribune in The Guardian, “What scares the new atheists“. Gray’s central points against “campaigning” or “evangelical” atheists are that their claim to scientific backup is baseless, that they mostly express a fear about the diminishing influence of the liberal West, and that they cannot produce an alternative form of morality. The title already put me off and the beginning of the tribune just got worse, as it goes on and on about the eugenics tendencies of some 1930s atheists and on how they influenced Nazi ideology. It is never a good sign in a debate when the speaker strives to link the opposite side with National Socialist ideas and deeds. Even less so in a supposedly philosophical tribune! (To add insult to injury, Gray also brings Karl Marx into the picture, with a similar blame for ethnocentrism…)

“What today’s freethinkers want is freedom from doubt.”

Besides this fairly unpleasant use of demeaning rhetoric, I am bemused by the arguments in the tribune, especially considering they come from an academic philosopher. At their core, Gray’s arguments meet earlier ones, namely that atheism has all the characteristics of a religion, in particular when preaching or “proselytising”. Except that it cannot define its own brand of morality. And that western atheism is deeply dependent on Judeo-Christian values. The last point does not hold much water, as the Greek origins of philosophy can attest, so calling in Nietzsche to the rescue is not exactly necessary. But the remainder of Gray’s discourse is not particularly coherent. If arguments for atheism borrow from the scientific discourse, it is because no rational argument or scientific experiment can contribute to support the existence of a deity. That pro-active atheists argue more visibly against religions is a reaction against the rise and demands of those religions. Similarly, that liberalism (an apparently oversold and illusory philosophy) and atheism seem much more related now than they were in the past can be linked with the growing number of tyrannical regimes based upon religion. Lastly, the morality argument (which is rather convincingly turned upside down by Dawkins) does not sell that well. Societies have run under evolving sets of rules, all called morality, that can be seen as a constituent of human evolution: there is no reason to buy that those rules were and will all be acceptable solely on religious grounds. Morality and immorality are only such in the eye of the beholder (and the guy next door).

Filed under: Books, University life Tagged: atheism, free will, Islington, John Gray, Karl Marx, liberalism, London, Richard Dawkins, The Guardian, United Kingdom
Categories: Bayesian Bloggers

Nero d’Avolla

Fri, 2015-03-20 15:20

Filed under: Wines Tagged: Italian wine, Sicily
Categories: Bayesian Bloggers