2014-10-27

Bloom's Taxonomy and Math Education

In the last year or so I've been attending seminars at our college's Center for Teaching and Learning. So far these have included sessions on how to publish in scholarship of teaching and learning (SOTL) journals, and a few reading groups (Susan Ambrose's "How Learning Works", and Ken Bain's "What the Best College Teachers Do"). Frequently I'm the only STEM instructor at the table, the rest of the room being instructors from English, philosophy, political science, history, women's studies, social science, etc.

One thing that keeps coming up in these books and discussions is a reference to Bloom's Taxonomy of Learning, a six-step hierarchy of tasks in cognitive development. Each step comes with a description, examples, and "key verbs". Here is a summary similar to what I've been seeing. Now, I'm perennially skeptical of these kinds of "N Distinct Types of P!" categorizations, as they've always struck me as at least somewhat flawed and intellectually dishonest in a real, messy world. But for argument's sake, let's say that we engage with the people who find this useful and temporarily accept the defined categories as given.

In every instance that I've seen, the discussion seems to turn on the following critique: "We are failing our students by perpetually being stuck in the lower stages of simple Knowledge and Comprehension recall (levels 1-2), and need to find ways to lift our teaching into the higher strata of Application, Analysis, etc. (levels 3-4 and above)". To a math instructor this sounds almost entirely vapid, because we never have time to test on levels 1-2; we take those levels entirely for granted without further commentary. In short, if Bloom's Taxonomy holds any weight at all, then I claim the following:

Math is hard because by its nature it's taught at TOO HIGH a level compared to other classes.

For example: I've never seen a math instructor testing students on simple knowledge recall of defined terms or articulated procedures. Which in a certain light is funny, because our defined terms have been hammered out over years and centuries, and it's important that they be entirely unambiguous and essential. I frequently tell my students, "All of your answers are back in the definitions". Richard Lipton has written something similar to this more than once (link one, two).

But in math education we basically don't have any friggin' time to spend drilling or testing on these definitions-of-terms. We say it, we write it, and we assume that you remember it for all time afterward. This may be somewhat exacerbated by the mathematician's and computer scientist's habit of knowing to commit key terms to memory, our own recall having been trained in exactly that way. I know in my own teaching I was at one time very frustrated with my students not picking up on this obvious requirement, and I've evolved into constantly peppering them with side-questions, day after day, on the proper names for different elements, to get these terms machine-gunned into their heads. They're not initially primed for instantaneous recall in the way that we take for granted. At any rate: the time spent testing on these issues is effectively zero; it doesn't exist in the system. (Personally, I have inserted some questions on definitions into my early quizzes, but I simply can't find time or space to do it thereafter.)

So after the brief presentation of those colossally important defined terms, we take simple Recall and Comprehension (levels 1-2) for granted, and immediately launch into using them logically in the form of theorems, proofs, and exercises -- that is, Application and Analysis (levels 3-4). Note the following "key verbs", specific to the mathematical project, in Bloom's categorization: "computes, operates, solves" appear under Application (level 3), while things like "calculates, diagrams" are placed under Analysis (level 4). These of course are the mainstays of our expected skills, our test questions, and our time spent in the math class.

And then of course we get to "word problems", or what we really call "applications" in the context of a math class. Frequently some outside critic expects that these kinds of exercises will make the work easier for students by making it more concrete, perhaps "real-world oriented". But the truth is that this increases the difficulty for students who are already grappling with higher-level skills than they're accustomed to in other classes, and are now being called upon to scale even higher. These kinds of problems require: (1) high-quality English parsing skills, (2) ability to translate from the language of English to that of Math, (3) selection and application of the proper mathematical (level-3 and 4) procedures to solve the problem, and then (4) reverse translation from Math back to an English interpretation. (See what I did there? It's George Polya's How-To-Solve-It.) In other words, we might say: "Yo dawg, I heard you like applications? Well I made applications of your applications." Word problems boost the student effectively up to the Synthesis and Evaluation modes of thought (levels 5-6).
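To make those four steps concrete, here's a miniature made-up example (problem and numbers entirely my own, just for illustration): "A phone plan costs $30 per month plus $0.25 for each extra minute; one month's bill came to $80 -- how many extra minutes were used?" Parsing the English and translating it into Math produces an equation, and the level-3/4 machinery then solves it:

    $$30 + 0.25m = 80 \;\Rightarrow\; 0.25m = 50 \;\Rightarrow\; m = 200$$

And finally the reverse translation back into English: 200 extra minutes were used. Every one of those four steps is a separate chance to stumble, quite apart from the algebra in the middle.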

So perhaps this serves as the start of an explanation for why the math class looks like a Brobdingnagian monster to so many students: if most of their other classes are perpetually operating at levels 1 and 2 (as per the complaints of so many writers in the humanities and education departments), then the math class that immediately takes defined terms and logical reasoning and does stuff with them at levels 3 and 4 does look like a foreign country (to say nothing of word problems a few hours after that). And perhaps this can serve as a bridge between disciplines: if the humanities are wrestling with being stuck at level 1, then they need to keep in mind that the STEM struggle is not the same; inherently, the work demands reasoning at the highest levels, and we don't have time for anything else. Or perhaps it argues for finding some way to work in more emphasis on those simple vocabulary recall and comprehension issues which are so critically important that we don't even bother talking about them?


2014-10-20

Is Statway a Cargo Cult?

We all know that Algebra is the limiting factor for the millions of students attending community colleges throughout the U.S. That is: colleges could double (or triple, or quadruple) their graduation numbers overnight if only the 8th-grade algebra requirement were removed. This creates a lot of institutional pressure these days to do exactly that.

A common line of thought is: Get rid of the algebra requirement and pursue a primer on statistics instead. You can sort of see why someone might negotiate in this way: offer something apparently attractive (statistics, which many say is needed to understand the modern world) in place of the thing they're asking you to give up. For example, the Carnegie "Statway" program now at numerous colleges promises exactly that (the lede being "Statway triples student success in half the time"; link).

But as an instructor of statistics at a community college, I use algebra all the time to derive, explain, and confirm various formulas and procedures. Without that, I think the intention (in fact I've heard this argued explicitly) is to have people dump data into the SPSS program, click a button, and then send the results upstream or downstream to some other stakeholder without knowing how to verify or double-check them. Basically it advocates a faith-based approach to mathematical/statistical software tools.

This is a nontrivial, in fact really tough, philosophical angel with which to wrestle nowadays. We're long past the point where cheap calculating devices became ingrained throughout many elementary and high schools; convenient, to be sure, but as a result at the college level we see a great many students who have no intuition for times tables, and are utterly unable to estimate, sanity-check, or spot egregious errors. (E.g.: I had a college student who hand-computed 56×9 = 54 and was totally baffled at my saying that couldn't possibly be the answer -- 56×9 must land just under 56×10 = 560 -- even after re-doing the same computation a second time.)

To a far greater degree, as I say in my classes, statistics is truly a 20th-century, space-age branch of math; it's a fairly tall edifice built on centuries of results in notation, algebra, probability, calculus, etc. Even in the best situation, in my own general sophomore-level class, and as deeply committed as I am to rigorously demonstrating as much as possible, I'm forced to hand-wave a number of concepts from calculus classes which my students have not taken, and will never take (notably regarding integrals, density curves, and the area under any probability density curve being 1; to say nothing of a proof of the Central Limit Theorem). So if we accept that statistics are fundamental to understanding how the modern world is built and runs, and there is some amount of corner-shaving in presenting it to students who have never taken calculus, then perhaps it's okay to go whole-hog and just give them a technological tool that does the entire job for them? Without knowing where it comes from, and being told to just trust it? I can see (and have heard) arguments in both directions.
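For the record, the chief facts that get hand-waved are, in calculus terms, that a probability density curve f(x) satisfies

    $$\int_{-\infty}^{\infty} f(x)\,dx = 1, \qquad P(a \le X \le b) = \int_a^b f(x)\,dx$$

that is, the total area under the curve is 1, and probabilities are read off as areas -- statements I can only gesture at for students who have never seen an integral.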

Here's an example of the kind of results you might get, from a website that caught my attention the other day: Spurious Correlations. The site displays a rather large number of graphs of paired data series which are meant to be obviously, comically unrelated, even though they have high correlation values. Here's an example:


Something seemed fishy about this when I first looked at it. It's true that if you dump the numbers in the table into Excel or SPSS or whatever, a correlation value of 0.870127 pops out. But here's the rub: those date-based charts used throughout the site are totally not how you visualize correlation, nor related in any way to what the linear correlation coefficient (r) means. What it does mean is that if you take those data pairs and plot them as an (x, y) scatterplot, you can find a straight line that gets pretty close to most of the points. That is entirely lost in the graph as presented; the numbers aren't even paired up as points in the chart, and the date values are entirely ignored in the correlation calculation. I'm a bit unclear on whether the creator of the website knows this, or is just applying some packaged tool -- but surely it will be opaque and rather misleading to most readers of the site. At any rate, it eliminates the ability to visually double-check some crazy error of the 56×9 = 54 ilk.
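As a minimal sketch of that point (the numbers here are invented for illustration, not the site's actual data), the entire computation of r runs off the paired values alone -- no dates in sight:

    # Pearson's linear correlation coefficient, computed by hand.
    # Note the inputs: just the (x, y) pairs -- the dates never appear.
    def pearson_r(xs, ys):
        n = len(xs)
        mx, my = sum(xs) / n, sum(ys) / n
        sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
        sxx = sum((x - mx) ** 2 for x in xs)
        syy = sum((y - my) ** 2 for y in ys)
        return sxy / (sxx * syy) ** 0.5

    xs = [29.8, 30.1, 30.5, 31.3, 31.9, 32.2]   # series one, year by year
    ys = [327, 456, 509, 497, 596, 573]         # series two, same years
    print(pearson_r(xs, ys))                    # about 0.88 for these numbers

The only honest way to judge that output is to scatterplot the (x, y) pairs and look for a line; the dual-axis date chart tells you nothing about it.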

As a further point, there are some graphs on the site labelled as showing "inverse correlation", which I thought to be a correlation between x and 1/y -- but in truth what they mean is the more common [linear] "negative correlation", which is a whole different thing. Or at least I would presume it is; I'd never heard of "inverse correlation" as synonymous, and about the only place I can find it online is Investopedia (so maybe the finance community has its own somewhat-sloppy term for it; link).
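A quick numerical check of that terminology point, reusing the pearson_r sketch above (again with invented numbers): "negative correlation" just means r < 0 for the (x, y) pairs, which is not at all the same quantity as the correlation of x with 1/y.

    xs = [1, 2, 3, 4, 5]
    ys = [10, 8, 5, 4, 1]                       # y falls as x rises
    print(pearson_r(xs, ys))                    # about -0.99: negative correlation
    print(pearson_r(xs, [1 / y for y in ys]))   # about +0.81: a different thing entirely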

I guess someone might call this nit-picking, but my intuition is that this kind of sloppiness is a sign of somebody who can't actually distinguish between true and false interpretations of statistical results. Is this ultimately the kind of product we get if we wipe out all the algebra-based derivations from our statistics instruction, and treat it as a non-reasoning vocational exercise?

Let me be clear in saying that at this time I have not actually read the Carnegie Statway curriculum, so I can't say whether it has some clever way of avoiding these pitfalls. Perhaps I should do that to be sure. But as the years pass in my current career, and I get more opportunities to personally experience all the connections throughout our programs, I find myself becoming more and more of a booster and champion of the basic algebra requirement for all -- as perhaps the very finest tool in our kit for promoting clear-headedness, transparency, honesty, and truth in regard to what it means to be an educated, detail-oriented, and scientifically-literate person.


2014-10-13

How Do You Know It's a Proportion?

I've written in the past of the mystery of when you'd want to use a proportion for an application problem, and what the benefits are for doing so (link). Once again, last week, one of my basic algebra students asked the question:
"How do you know it's a proportion?"
And once again I was unable to answer her. I've searched through several textbooks, and scoured the Web, and I still can't find even an attempt at a direct explanation of how you know a problem is proportional. (Examples, sure; nothing but examples.) I've asked other professors, and no one could even take a stab at it. Perhaps the student was looking at problems such as the following:
A can of lemonade comes with a measuring scoop and directions for mixing are 6 scoops of mix for every 12 cups of water. How much water is needed to make the entire can of lemonade if there are 40 scoops of mix?

On an architect's blueprint, 1 inch corresponds to 4 feet. Find the area of an actual room if the blueprint dimensions are 6 inches by 5 inches.

The ratio of the weight of an object on Earth to the weight of the same object on Pluto is 100 to 3. If a buffalo weighs 3568 pounds on Earth, find the buffalo's weight on Pluto.

Three out of 10 adults in a certain city buy their drugs at large drug stores. If this city has 138,000 adults, how many of these adults would you expect to buy their drugs at large drug stores?

The gasoline/oil ratio for a certain snowmobile is 50 to 1. If 1 gallon equals 128 fluid ounces, how many fluid ounces of oil should be mixed with 20 gallons of gasoline?

Concisely stated, what is the commonality here? What is a well-defined explanation for how we know that these are all proportional problems?
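(The mechanics, note, are uniform once you've already decided a proportion applies; e.g., for the lemonade problem, set two equal ratios and cross-multiply:

    $$\frac{6 \text{ scoops}}{12 \text{ cups}} = \frac{40 \text{ scoops}}{x \text{ cups}} \;\Rightarrow\; 6x = 480 \;\Rightarrow\; x = 80 \text{ cups}$$

So the mystery isn't the solving; it's the criterion by which a student is supposed to recognize, in advance, that equal ratios are the right model.)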


2014-10-01

On Comparing Decimals Like 0.999...

Today's college algebra class will be the first time I've set aside space to actually discuss the 1 = 0.999... issue. I mentioned this previously here on the blog; it became so contentious that it's actually the only post for which I've been forced to shut off comments. (Actually, it attracted a stalker who'd post some aggressive nonsense every few days.)

Anyway, brushing up on some points for later today made me see a very obvious fact that I'd overlooked before, namely: students' customary procedure for comparing decimals fails spectacularly in this case. For example, here it is as expressed at the first hit from a web search, a site called AAAMath:
Therefore, when decimals are compared start with tenths place and then hundredths place, etc. If one decimal has a higher number in the tenths place then it is larger than a decimal with fewer tenths. If the tenths are equal compare the hundredths, then the thousandths etc. until one decimal is larger or there are no more places to compare. If each decimal place value is the same then the decimals are equal.
So if students apply the "simple" decimal comparison technique ("if one decimal has a higher number in the X place"), even at just the ones place, then this algorithm reports back that 1.000 is greater than 0.999... It overlooks the fact that the lower places can actually "add up" to an extra unit in a higher place. And thus all sorts of confused mayhem immediately follow. 

So the simple decimal comparison algorithm is actually wrong! To fix it, you'd have to add this clause: unless either decimal ends with an infinitely repeating string of 9's. In that case, the best thing to do is to first "reduce" it to the terminating form of the decimal (this being the only case in which one number has multiple decimal representations), and only then apply the simple grade-school algorithm.
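Here's a minimal sketch of that repaired algorithm in code (the representation is my own invention for illustration: a number is a whole part plus a string of fractional digits, with a flag meaning the trailing 9's repeat forever):

    # Step 1: "reduce" a decimal ending in repeating 9's to its
    # terminating form.  Step 2: compare digit-by-digit as usual.

    def normalize(whole, frac, repeating_nines=False):
        # If the trailing 9's repeat forever, add one unit in the last
        # non-9 place; the carry may spill into the whole part.
        if not repeating_nines:
            return whole, frac
        head = frac.rstrip('9')
        if head == '':                    # e.g. 0.999... -> 1
            return whole + 1, ''
        return whole, head[:-1] + str(int(head[-1]) + 1)

    def compare(a, b):
        # Grade-school comparison of normalized (whole, frac) pairs;
        # returns -1, 0, or 1 (nonnegative numbers only).
        (wa, fa), (wb, fb) = a, b
        if wa != wb:
            return -1 if wa < wb else 1
        n = max(len(fa), len(fb))         # pad so 0.5 vs 0.50 compare equal
        fa, fb = fa.ljust(n, '0'), fb.ljust(n, '0')
        return (fa > fb) - (fa < fb)

    print(compare(normalize(1, ''), normalize(0, '9', True)))      # 0: 1 = 0.999...
    print(compare(normalize(0, '5'), normalize(0, '4999', True)))  # 0: 0.5 = 0.4999...

Without the normalize step, the ones-place rule would declare 1 greater than 0.999... and get the wrong answer.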