Comments on MadMath: Lindley's Paradox

Delta (2013-07-21):
Everything you said here makes sense to me. Not being trained in Bayesian statistics, I had to throw up my hands at that example, which seemed simply ludicrous to me. Thanks for writing up your observations and helping convince me I'm not crazy; I appreciate it!

Cacadril (2013-07-20):
I am rather surprised that this is being discussed among statisticians.

As I see it, the Bayesian case is a natural consequence of extremely flawed prior probabilities. Bayesian logic is a tool, and like any other tool, you should understand what it does.

By setting a nonzero prior probability on a point value, the probability density there becomes infinite. Compare that to the finite density in an infinitesimal interval I = [0.5, 0.5 + epsilon], infinitesimally close to the point value theta = 0.5.

It is like saying: "In this murder case, I already have strong evidence that Mr. X could be the perpetrator.
There are millions of other potential perpetrators, and their total probability is not insignificant. But now that we have this new evidence about who was close to the site at the time, the few who were near add up to next to nothing, and the rest of the potential suspects were far from the site, so their probability, weighted by their distance, remains very low even when summed over the millions of individuals."

This is sound logic, but only if the premises are true. Do you really have prior evidence that implicates Mr. X millions of times more strongly than his nearby neighbors?

Even if the single-point null hypothesis is replaced with a very narrow range, the prior probability density becomes huge inside that range, while the density just outside becomes comparatively abysmally small. Does that reflect the prior knowledge about the question being studied? If it did, it would have to affect your reasoning, and Bayesian logic takes that into account. But in the example, you just don't have any such knowledge. Garbage in, garbage out.

Delta (2012-04-26):
^ Very informative, thanks for posting that!
Glad to know I'm not totally alone in my intuition that this seems like a bungled example/prior.

thouis (2012-04-24):
There's been some recent discussion of this:
http://www.science20.com/quantum_diaries_survivor/jeffreyslindley_paradox-87184
http://andrewgelman.com/2012/02/untangling-the-jeffreys-lindley-paradox/

Also, I edited the Wikipedia article to remove "weakly", since that's obviously not the case, and to add a more rational comparison of the Bayesian and frequentist approaches, in which they both give the same conclusion.

And no, speaking as a Bayesian, this is not how one would usually choose a prior (at least, not without a great deal of previous experience/evidence).

Delta (2011-04-19):
That makes a little more sense, but I'm having trouble wrapping my head around the article's general-description statements: "a prior distribution that favors H0 weakly" (either example seems to favor it strongly) and "It is a result of the prior having a sharp feature at H0" (whereas your example seems to have a non-sharp feature).

Thanks for addressing this -- do you have a link or citation to a better presentation/example?

Unknown (2011-04-19):
It's just a contrived example to illustrate the paradox.
The paradox still works if you choose a prior that's a narrow Gaussian at 0.5 on top of a much broader distribution (flat, or anything wide).

The point is that the paradox most often rears its head when the prior is broad but has a tall, narrow region in addition, and a flat prior with a delta function at 0.5 is just the simplest such example in many respects.
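The setup discussed above (a point-mass prior at theta = 0.5 plus a broad prior on the alternative) can be checked numerically. The sketch below uses the classic birth-rate figures from the standard textbook/Wikipedia presentation of the paradox (98,451 births, 49,581 boys); those specific numbers are an assumption taken from that well-known example, not from this thread. With prior mass 1/2 on the point null H0: theta = 0.5 and the other 1/2 spread uniformly over (0, 1), the marginal likelihood under H1 integrates to 1/(n+1), so the Bayes factor has a closed form.

```python
# Numerical sketch of Lindley's paradox: a two-sided test rejects H0
# at the 5% level, while the Bayesian posterior strongly favors H0.
# Figures are the standard textbook example, assumed here for illustration.
from math import lgamma, log, sqrt, erf, exp

n, x = 98451, 49581                    # births observed, boys observed

# Frequentist side: two-sided test of H0: theta = 0.5 (normal approximation)
z = (x - n / 2) / sqrt(n / 4)
p_value = 2 * (1 - 0.5 * (1 + erf(z / sqrt(2))))

# Bayesian side: P(x|H0) = C(n,x) * 0.5^n, and for a uniform prior on theta,
# P(x|H1) = integral of C(n,x) * theta^x * (1-theta)^(n-x) dtheta = 1/(n+1).
log_choose = lgamma(n + 1) - lgamma(x + 1) - lgamma(n - x + 1)
log_m0 = log_choose + n * log(0.5)     # log P(x | H0)
log_m1 = -log(n + 1)                   # log P(x | H1)
bf01 = exp(log_m0 - log_m1)            # Bayes factor in favor of H0
posterior_h0 = bf01 / (1 + bf01)       # posterior P(H0), equal prior odds

print(f"p-value      = {p_value:.4f}")        # below 0.05: reject H0
print(f"P(H0 | data) = {posterior_h0:.3f}")   # yet H0 is strongly favored
```

Running this gives a p-value around 0.02 but a posterior probability for H0 around 0.95, which is exactly the conflict the thread is arguing about: the result hinges on the lump of prior mass placed on the single point theta = 0.5.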