Lindley's Paradox

The Wikipedia description of Lindley's Paradox presents an example in which the Frequentist approach and the Bayesian approach give opposite hypothesis-testing results on the same data.

The example is one of testing a certain town for the ratio of boy-to-girl births. The thing that violently strikes me here is the choice of the Bayesian prior: P(theta = 0.5) = 0.5, i.e., the advance assumption that there's a 50% chance the proportion is exactly 0.5 (with the other 50% of the probability spread uniformly over all points from 0 to 1).

I mean: What? Why would I conceivably assume that? If I broadly picture real numbers as being continuous, then my instinct would be to assume that it's almost impossible for any given number to be exactly the parameter value, i.e., I'd assume P(theta = 0.5) = 0. Even if I didn't reason that way, I otherwise have copious evidence that human births aren't really 50/50; slightly more boys are born than girls -- so if anything I'd choose that as the most likely prior value.

Is that really how Bayesians are supposed to choose their prior? (It seems atrocious!) Or is this just a fantastically mangled example on Wikipedia?
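To see the disagreement concretely, here's a Python sketch of both calculations under that spike-and-slab prior. The birth counts are the approximate figures I recall from the Wikipedia example (about 49,581 boys out of 98,451 births), so treat the specific numbers as illustrative assumptions rather than a citation.

```python
import math

# Illustrative counts, approximately those in the Wikipedia example
# (treat as assumptions, not a citation):
n = 98451   # total births
x = 49581   # boys

# --- Frequentist: two-sided test of theta = 0.5, normal approximation ---
z = (x - n / 2) / math.sqrt(n / 4)
p_value = math.erfc(z / math.sqrt(2))  # two-sided p-value = 2 * (1 - Phi(z))

# --- Bayesian: prior P(theta = 0.5) = 0.5, remainder uniform on [0, 1] ---
# Log of the Binomial(n, 0.5) pmf at x, via lgamma to avoid underflow.
log_pmf = (math.lgamma(n + 1) - math.lgamma(x + 1) - math.lgamma(n - x + 1)
           + n * math.log(0.5))
like_h0 = math.exp(log_pmf)
# Marginal likelihood under the uniform alternative:
# integral of C(n,x) * theta^x * (1-theta)^(n-x) d(theta) = 1 / (n + 1).
like_h1 = 1.0 / (n + 1)
# The equal 0.5 prior weights cancel in the posterior odds:
posterior_h0 = like_h0 / (like_h0 + like_h1)

print(f"two-sided p-value:       {p_value:.4f}")      # below 0.05: reject
print(f"posterior P(theta=0.5):  {posterior_h0:.4f}")  # yet H0 looks likely
```

With these numbers the test rejects theta = 0.5 at the 5% level, while the Bayesian posterior for that same hypothesis comes out around 95% -- the opposite conclusions the article describes. Notice how much work that lump of prior mass at exactly 0.5 is doing.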

The Amazing Lottery

Stats observation of the day -- After every lottery you can say, "That number had only 1 chance in 175 million of coming up!". But there's a 100% chance you can say that, every time it's run. It's only interesting or significant if you can predict the result in advance. Otherwise you have a fallacy called "data dredging": http://en.wikipedia.org/wiki/Data_dredging
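A quick simulation of the point, using a toy 1-in-10,000 lottery (instead of 1-in-175-million, so it runs fast; the space size, ticket, and seed are my own choices): a ticket predicted in advance almost never wins, but the post-hoc claim "that exact number had only a 1-in-10,000 chance!" is available after every single draw.

```python
import random

random.seed(42)   # arbitrary seed so the run is reproducible
SPACE = 10_000    # toy lottery: each draw has 1 chance in 10,000
TRIALS = 1_000

my_ticket = 1234        # prediction made in advance
prediction_hits = 0     # times the advance prediction came true
posthoc_claims = 0      # times the "amazing" post-hoc claim was available

for _ in range(TRIALS):
    drawn = random.randrange(SPACE)
    if drawn == my_ticket:
        prediction_hits += 1
    # After the fact, the drawn number *always* had probability 1/SPACE:
    posthoc_claims += 1

print(f"advance prediction correct: {prediction_hits} / {TRIALS}")
print(f"post-hoc 'amazing' claims:  {posthoc_claims} / {TRIALS}")
```

The post-hoc count is 100% by construction; the advance-prediction count hovers near the expected one-in-ten-thousand rate. That gap is the whole fallacy.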

See also:
Other probabilistic fallacies.