Sampling Error Abbreviation?

Is there no commonly-used abbreviation for "sampling error"?

Can't use "SE" because that's the abbreviation for "standard error", which is a totally different thing (namely, the standard deviation of all possible sampling errors).

Damn it.


First-Day Statistics

Here's a demonstration that I was deliriously happy to cook up for the first day of my current statistics class. I think it worked extremely well when I first used it (actually as the second thing we do, immediately after looking at a professional research journal for its statistical notation).

Sampling a Deck of Cards: Let's act as a scientific researcher, and say that somehow we've encountered a standard deck of cards for the first time, and know practically nothing about it. We'd like to get a general idea of the contents of the deck, and for starters we'll estimate the average value (mean) of all the cards. Unfortunately, our research budget doesn't give us time to inspect the whole deck; we only have time to look at a random sample of just 4 cards.

Now, as an aside, let's cheat a bit and think about the structure of a deck of cards (not that our researcher would know any of this). For our purposes we'll let A=1, numbers 2-10 count face value, J=11, Q=12, K=13. We know that this population has size N=52; if you think about it you can derive that the actual mean is μ=7; and I'll just come out and tell the class that I already calculated the standard deviation as σ=3.74. (Again, our researcher probably wouldn't know any of this in advance.)
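For the curious, those deck parameters are easy to double-check with a few lines of Python (a quick sketch for verification; not part of the classroom demonstration):

```python
import statistics

# A deck is 4 copies of each rank value: A=1 through K=13.
deck = [rank for rank in range(1, 14) for _ in range(4)]

mu = statistics.mean(deck)       # population mean (should be 7)
sigma = statistics.pstdev(deck)  # population standard deviation

print(mu)
print(round(sigma, 2))           # 3.74
```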

So, granted that we wouldn't really know what μ is, what we're about to do is take a random sample and construct a standard 95% confidence interval for the most likely values it could be. In our case we'll be taking a sample of size n=4, calculating the average (sample mean, here denoted x'), and constructing our confidence interval. As a further aside, I'll point out that a 95% confidence level corresponds to what we call a z-score, which we'll approximate as z=2.


At this point I shuffle the deck, draw the top 4 cards, and look at them.

We take the values of the four cards and average them (for example, the last time I did this I got cards ranked 7, 3, 5, and 4; sample mean x' = 19/4 = 4.75). Then I explain that constructing a confidence interval usually involves taking our sample statistic and adding/subtracting some margin of error, thus: μ ≈ x'±E (again, x' is the "sample mean"; E is the "margin of error"). Then we turn to the formula card for the course and look up, near the end of the course, the fact that for us E = z*σ/√n. We substitute that into our formula and obtain μ ≈ x'±z*σ/√n.

So at this point we know the value of everything on the right side of the estimation, and substitute it all in and simplify (the sample mean x', z=2, σ=3.74, and n=4, all above). The arithmetic here is pretty simple, in this example:

μ ≈ x' ± z*σ/√n
= 4.75 ± 2*3.74/√4
= 4.75 ± 2*3.74/2
= 4.75 ± 3.74
= 1.01 to 8.49
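The same arithmetic can be sketched in a few lines of Python (using the example draw above):

```python
from math import sqrt

sigma = 3.74             # population standard deviation (given above)
z = 2                    # approximate z-score for 95% confidence
sample = [7, 3, 5, 4]    # the example draw above

n = len(sample)
x_bar = sum(sample) / n       # sample mean: 4.75
E = z * sigma / sqrt(n)       # margin of error: 3.74

print(round(x_bar - E, 2), "to", round(x_bar + E, 2))   # 1.01 to 8.49
```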

So, there's our confidence interval in this case (95% CI: 1.01 to 8.49). Our researcher's interpretation of that: "There is a 95% chance that the mean value of the entire deck of cards is somewhere between 1.01 and 8.49". That's a pretty good, concentrated estimation for μ on the part of our researcher. And in this case we can step back and ask the question: Is the population mean value actually captured in this interval? Yes (based on our previous cheat), we do in fact know that μ=7, so our researcher has successfully captured where μ is with a sample of only 4 cards out of an entire deck.

That usually goes over quite well in my introductory statistics class.

Backstage -- The Ways In Which I Am Lying: Look, I'm always happy to dramatically simplify a concept if it gets the idea across (in this case, the overall process of inferential statistics, the ultimate goal of my course, as treated in the very first hour of class). Let's be upfront about what I've done here.

The primary thing that I'm abusing is that this formula for the margin of error, and hence the confidence interval, is usually only valid if the sampling distribution follows a normal curve. There are two ways to obtain that: either (a) the original population is normally distributed, or (b) the sample size is large, triggering the Central Limit Theorem to turn our sampling distribution normal anyway.

Neither of those conditions applies here. The deck of cards has a uniform distribution, not a normal one (4 cards in each of the ranks A to K). And obviously our sample size n=4, necessary to make the demonstration digestible in the available time, is not remotely "large enough" for the CLT. But granted that the deck has a uniform distribution, that does help it become "normal-like" a bit faster than some wack-ass massively skewed population would, so the example is still going to work out for us most of the time (see more below).

At the same time, ironically enough, I also have too large a sample size, as a proportion of the overall population, for the usual margin-of-error formula. Here I'm sampling 4/52 = 7.69% of the population, and if that's more than around 5%, technically we're supposed to use a more complicated formula that corrects for it. Or we could legitimately avoid that if we were sampling with replacement, but we're not doing that either (re-shuffling the deck after each single card draw is a real drag).
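For reference, that more complicated formula is the finite population correction, which multiplies the margin of error by √((N−n)/(N−1)). Here's a sketch of its effect in our case (the correction formula is the standard textbook one, not something used in the demo itself):

```python
from math import sqrt

N, n = 52, 4         # population and sample sizes
sigma, z = 3.74, 2   # given above

E_plain = z * sigma / sqrt(n)   # uncorrected margin of error
fpc = sqrt((N - n) / (N - 1))   # finite population correction factor
E_fpc = E_plain * fpc           # corrected margin of error

print(round(E_plain, 2), round(E_fpc, 2))   # 3.74 3.63
```

So the correction would shrink the margin of error only slightly here; ignoring it makes our interval a bit wider (more conservative) than technically necessary.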

However, even without those technical guarantees, everything does in fact work out for us in this particular example anyway. I wrote a computer program to exhaustively evaluate all the possible samples of size 4 from a deck of cards, and the result is this: what I'm calling a 95% confidence interval above will actually catch our population mean over 95.7% of the time. So if anything, the "cheat" here is that the interval has a better chance of catching μ than we're really admitting.
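Here's a sketch of that exhaustive check, reconstructed in Python (my reconstruction of the program described, under the same setup as above):

```python
from itertools import combinations
from math import sqrt

deck = [rank for rank in range(1, 14) for _ in range(4)]  # A=1 .. K=13, 4 each
mu, sigma, z, n = 7, 3.74, 2, 4
E = z * sigma / sqrt(n)               # margin of error: 3.74

catches = good = total = 0
for hand in combinations(deck, n):    # all C(52,4) = 270,725 possible samples
    x_bar = sum(hand) / n
    total += 1
    if x_bar - E <= mu <= x_bar + E:  # interval catches mu...
        catches += 1
        if x_bar - E >= 0:            # ...and has no negative endpoint
            good += 1

print(round(catches / total, 3))      # catch rate, just over 95%
print(round(good / total, 3))         # "good result" rate (see below)
```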

Some other things may be obvious: for one, we're assuming we know the population standard deviation σ in advance, but that's a pretty standard instructional warm-up before dealing with the more realistic case of unknown σ. And of course I've approximated the z-score for a 95% CI as z=2, when more accurately it's z=1.960 -- but you'll notice above that using z=2 magically cancels with the factor √n = √4 = 2 in the denominator of our formula, thus nicely abbreviating the number-crunching.

The other thing that might happen when you run this demonstration is that you generate an interval with a negative endpoint (even while catching μ inside), which would be ugly and might warrant some grief from certain students (e.g., if x'=3.5, then the interval is -0.24 to 7.24). Nonetheless, the numerical examination shows that there's a 94.8% chance of getting what I'd call a "good result" for the presentation -- both catching μ and avoiding any negative endpoint.

At first I considered a sample size of n=3, which would shorten the card-drawing part of the demonstration; this still results in (again by exhaustive enumeration) a 95.4% chance of catching μ in the resulting interval. Alternatively, you might consider n=5, which makes any negatives in the interval far less likely. In both those cases you lose the cancellation with the z-score, so there would be more calculator number-crunching involved if you did it that way.

Finally, I know that someone could technically dispute my interpretation of what a confidence interval means above as being incompatible with the frequentist interpretation of probability. But I've decided to emphasize this version in my classes, because it's at least comprehensible to both me and my students. I figure you can call me a Bayesian and we'll call it a day.


More Topology Explaining

Follow-up to yesterday's post on the standard crappy method of explaining topology:

I was at a presentation about a year ago, where someone tried to explain basic topology concepts to non-mathematicians. Here's how they went about it: "Consider a cube of cheese and a donut," they said. "They are different shapes. If you draw a small circle on the surface of the cube of cheese, it can be shrunk down to a point. If you draw a circle on the surface of the donut the right way, it cannot be shrunk down to a point. Strange but true."

I almost fell out of my chair when I heard that explanation.

There's a whole slew of things wrong with that explanation: (1) Why a "cube" of cheese? That's only going to serve to confuse people into thinking that the geometric "cube" shape is somehow important to the description, when it's not. Again, the only important thing is that one has a hole and the other doesn't. Use some kind of curved shape to avoid tricking people into thinking that the square-ness has anything to do with what you're explaining. (2) Why "drawing a circle"? Yes, as mathematicians we know that's one way of visualizing the important Poincaré conjecture, but here we have to look at it from the perspective of the non-expert listener. Drawings of things don't shrink and expand, so that only promotes further confusion. Use something from daily life that naturally expands and contracts for your analogy. (3) How the heck would anyone accomplish "drawing on a cube of cheese" in the first place?

Here's how I would explain this.

"Consider an orange and a donut. In topology, the only important difference in their shapes is that one has a hole and the other doesn't. Here's how a mathematician would demonstrate that: With the orange, if you wrap a rubber band around it, you can always flick the rubber band aside so it falls off. With the donut, there's a way to connect a rubber band through the hole-in-the-middle part so there's no way to just flick it off. (You'd have to cut & glue the rubber band back together, but then it would always hang onto the donut.) Doing this mathematically is one way to detect exactly which shapes have holes in them."


Explaining Topology

(Revised from a prior commentary):

You know, every time someone gives an elementary description of Topology (a branch of modern mathematics), there's a very standard explanation of it, and I think it's a very, very bad one. They always say something complicated like this (from http://www.sciencenews.org/articles/20071222/bob11.asp ):

Topology studies shapes. Specifically, it studies shapes' properties that are not affected by stretching, moving, twisting, or pulling—anything that doesn't break up the object or fuse some of its parts. The proverbial example is that, to a topologist, a coffee mug is the same as a doughnut. In your imagination, you can squash the mug into a doughnut shape, and it will retain the property of having a hole, namely its handle. A sphere is different. You can stretch a sphere into a stick and bend the stick so its ends touch. But turning that open ring into a doughnut will involve fusing the ends, and that's forbidden.

Huh? What the hell does that mean? You start off saying it's about shapes, then start talking in the negative by saying it's not about a bunch of particular properties of shapes. Then there are two pretty poor examples (asking people to imagine stretching things where bulky parts become very thin pieces; it's unclear what corresponds to what). I've taken a full year of graduate Topology, and sometimes I still have trouble understanding that description. Worst of all is this: that's not what's really important about what Topology studies. No one is ever really interested in stretching anything in a topology course.

Here's what I say in the classes I teach: Topology is the study of connections. That's the real story; it's very simple. Yes, coffee cups and donuts are similar topologically, because they're both connected bodies with one hole through each of them. But topology is really useful for things like the following -- A road engineer categorizes intersections by how many streets meet there. A miniature figure modeller plans how complicated an item they can sculpt, knowing the resulting mold has to stay connected around their figure. A stencil-maker has to make stencils one way for letters that have holes in them, and another way for those that don't (e.g., cut out an "A", "B", or "D" normally from paper and those middle holes get disconnected and fall out; that's not a problem for letters like "C", "E", or "F", which keep the surrounding paper connected.) A subway-rider looks for the easiest route to an evening out on the town, knowing they're restricted to specific connecting trains at specific stations. A traveling salesman wants to plan the fastest, cheapest sales trip between a dozen cities, using available commercial connecting flights; or, my food delivery service wants to do the same thing with intersecting city streets.

These are all Topological problems, dealing with how things are connected (which might be solid shapes, but is even more likely to be cords, knots, network circuits, or car/plane/train paths). I suspect I know why most explainers use the big-complicated-useless explanation, instead of the short-simple-and-effective one -- when categorizing different shapes, mathematicians do utilize functions called "homeomorphisms", which somebody at some point thought were best visualized as "stretching" operations. But, seriously, nobody who's nontechnical is going to care about that technique (any more than, say, people care about how completing-the-square is used to develop the quadratic formula).

The point of all that technical work in Topology is, again, pretty simple: How is this shape connected? And hence: Where can I go today with this shape? That should be the focus of our first introductions to Topology, I think, not the damn "stretching" analogy, which is practically a cancer on our attempts to explain the subject.


MadMath Manifesto

"Look at it this way. When I read a math paper it's no different than a musician reading a score. In each case the pleasure comes from the play of patterns, the harmonics and contrasts... The essential thing about mathematics is that it gives esthetic pleasure without coming through the senses." (Rudy Rucker, A New Golden Age)

"'I find herein a wonderful beauty,' he told Pandelume. 'This is no science, this is art, where equations fall away to elements like resolving chords, and where always prevails a symmetry either explicit or multiplex, but always of a crystalline serenity.'" (Jack Vance, The Dying Earth)

The preceding passages are both from works of fiction. That being said, they may in fact truly represent how the majority of mathematicians experience their work. For example, Rudy Rucker is himself a retired professor of mathematics and computers (as well as a science fiction author). My own instructor of advanced statistics would end every proof with the heartfelt words, "And that's the beauty."

I've heard that kind of sentiment a lot. But I never experienced mathematics that way. I now have a graduate degree in mathematics and statistics, and currently teach full-time as a lecturer of college mathematics, and these kinds of declarations still mystify me. Math has never felt "beautiful" or "poetic". I would never in a million years think to describe math as "pleasurable" or "serene".

Math drives me mad.

My experience of mathematics is this: Math is a battle. It may be necessary, it may be demanding, it may even be heroic. But the existential reality is that if you're doing math, you've got a problem. You very literally have a problem, something that is bringing your useful work to a halt, a problem that needs solving. And personally, I don't like problems; I am not fond of them; I wish they were not there. I want them to be gone, eradicated, and out of my way. I don't like puzzles; I want solutions. And once you have a solution, then you're not doing math anymore. So the process of mathematics is an experience in infuriation.

So, again: Math is a battle. It is a battle that feels like it must be fought. It can feel like a violent addiction; hours and days and nights disappearing into a mental blackness, unable to track the time or bodily needs. Becoming aware again at the very edge of exhaustion, hunger, filth, and collapse.

At worst, math can feel like a horrible life-or-death struggle, clawing messily in the midst of muddy, bloody, poisonous trenches. At best, it may feel like an elegant martial-arts move, managing to use the enemy's weight against itself, to its destruction.

I love seeing a powerful new mathematical theorem. But not because it "gives esthetic pleasure"; I have yet to see that. Rather, because a powerful theorem is the mathematical equivalent to "Nuke it from orbit – It's the only way to be sure". A compelling philosophy.

On the day that you really need math it will be a high-explosive, demolishing the barrier between you and where you want to go. Is there a pleasure in that? Perhaps, but not from the "play of patterns, the harmonics and contrasts". Rather, it's because blowing up things is cool. Like at a monster-truck rally, crushing cars is cool. Math may not be beautiful or fun for us, but it is powerful, and that's what we need from it.

Of course, I also don't know how to read a music score, so I'm similarly mystified if that's the operating analogy for most mathematicians. Perhaps I'm missing something essential, but I have to stay true to my own experience. If math is going to be useful or worthwhile then it must literally rock you in some way, relieve an unbearable tension, and change your perception of what is possible.

And so, the battle continues.