2015-01-26

On Old Books

A few weekends ago I set up a new bookcase and got to re-organize, taking a bunch of books out of boxed storage and back on display in my room. One thing I came across was a very old copy of "Introductory College Algebra", 2nd Edition, by Rietz and Crathorne, copyright 1923/1933. This is something I obtained from my great-aunt, who was the head of the math department at an academy in Maine (at a time when that was very rare), and who died a few decades ago now. I actually started reading it front-to-back this week for the first time, which seemed apropos because I'm currently teaching a winter-term course in college algebra.

The main takeaway is that I'm really surprised how little has changed, how similar the work and presentation are to what we do today. That gives me a lot of confidence, actually; I'm glad to be in a discipline with "deep roots" that is stable and consistent. The presentation may be a bit more concise -- which is kind of funny, because everyone I know who's engaged in writing an in-house custom algebra text says their goal is to write something "short, just what they need, with the extraneous parts cut out". Well, you don't get much more concise than a real math text. (Most of the theorems and presentations are all of 4 lines long at most.) One novelty I really like here is that instead of separate worked-out examples within the text, the protocol is to simply begin a block of exercises with the first few including fully worked-out solutions (which I think would clarify to the student what work we're expecting them to do; and as always you've got answers to the odd-numbered questions at the back for them to check).

Sure, a couple pieces of terminology are just a bit different. Graphs of functions are generally called "loci". What I've always seen as a "greatest common factor (GCF)" is herein called a "highest common factor (HCF)". And probably the single biggest difference is the claim that a statement like "x = x+1" or "0 = 1" does not count as an equation whatsoever (whereas I'd call it an equation with no solutions, i.e., an equation of the inconsistent variety).

But here's my point. Granted how relatively little has changed in this near century-old math textbook as compared to the class I teach each night right now; and granted the tremendous struggle we have these days to make good textbooks accessible and affordable to our students -- might we consider actually using out-of-copyright math textbooks as a resource? We could totally scan a brief, high-quality, public-domain text such as this and distribute it for free to anyone who wanted it. Do you think that would ever be workable?


2014-12-29

Academically Adrift

Going through an old copy of Thought & Action magazine today (Fall 2011), at the back I come across a review of the book Academically Adrift: Limited Learning on College Campuses, by Richard Arum and Josipa Roksa. The main thrust of the book seems to be its use of the Collegiate Learning Assessment (CLA), a test of critical thinking, reading, and writing given at the end of the sophomore year to several thousand students at 24 different colleges. The upshot seems to be that in many cases, there is little difference in ability between when students first arrive on campus and two years afterward. I found the following paragraph of the review to be worth highlighting:
The ensuing chapters then detail the key findings related to changes in CLA scores, implicating students’ entering characteristics and campus experiences.  Students with stronger academic preparation and students who attended more selective institutions showed greater gains in critical thinking; initial disparities between white students and African American students were exacerbated.  Those who participated in fraternities and sororities showed fewer gains relative to their peers, as did those who were majoring in business, education, or social work.  Moreover, Arum and Roksa argue that understandings of student employment need to be nuanced, as working on-campus is beneficial only up to 10 hours per week.  They also question the trend toward collaborative learning, noting that more time studying alone is positively associated with gains in critical thinking, while time studying with peers is negatively associated with such gains.  Perhaps most strikingly, the authors concede that social integration might be related to retention but argue that its affects on learning are far less clear, and may be negative.
 http://www.nea.org/home/50459.htm

In calling out certain majors, I am reminded of the footnote in Burton R. Clark's famous paper "The 'Cooling-Out' Function in Higher Education" from 1960 (The American Journal of Sociology, May 1960, footnote 8): 
One study has noted that on many campuses the business school serves "as a dumping ground for students who cannot make the grade in engineering or some branch of the liberal arts," this being a consequence of lower promotion standards than are found in most other branches of the university (Frank C. Pierson, The Education of American Businessmen [New York: McGraw-Hill Book Co., 1959], p. 63). Pierson also summarizes data on intelligence of students by field of study which indicate that education, business, and social science rank near the bottom in quality of students (ibid., pp. 65-72).
http://www.jstor.org/discover/10.2307/2773649?sid=21105522739863&uid=2&uid=3739832&uid=3739256&uid=4

Isn't it interesting that we've effectively handed over control of our culture, our most powerful institutions, and education of the young, to the least proficient among us? And that this seems to be a stable pattern for over a half-century?


2014-10-27

Bloom's Taxonomy and Math Education

In the last year or so I've been attending seminars at our college's Center for Teaching and Learning. So far these have been on how to publish in scholarship of teaching and learning (SOTL) journals, and a few reading groups (Susan Ambrose's "How Learning Works", and Ken Bain's "What the Best College Teachers Do"). Frequently I'm the only STEM instructor at the table, with the rest of the room being instructors from English, philosophy, political science, history, women's studies, social science, etc.

One thing that keeps coming up in these books and discussions is a reference to Bloom's Taxonomy of Learning, a six-step hierarchy of tasks in cognitive development. Each step comes with a description, examples, and "key verbs". Here is a summary similar to what I've been seeing. Now, I'm perennially skeptical of these kinds of "N Distinct Types of P!" categorizations, as they've always struck me as at least somewhat flawed and intellectually dishonest in a real, messy world. But for argument's sake, let's say that we engage with the people who find this useful and temporarily accept the defined categories as given.

In every instance that I've seen, the discussion seems to turn on the following critique: "We are failing our students by perpetually being stuck in the lower stages of simple Knowledge and Comprehension recall (levels 1-2), and need to find ways to lift our teaching into the higher strata of Application, Analysis, etc. (levels 3-4 and above)". To a math instructor this sounds almost entirely vapid, because we never have time to test on levels 1-2 and entirely take those levels for granted without further commentary. In short, if Bloom's Taxonomy holds any weight at all, then I claim the following:

Math is hard because by its nature it's taught at TOO HIGH a level compared to other classes.

For example: I've never seen a math instructor testing students on simple knowledge recall of defined terms or articulated procedures. Which in a certain light is funny, because our defined terms have been hammered out over years and centuries, and it's important that they be entirely unambiguous and essential. I frequently tell my students, "All of your answers are back in the definitions". Richard Lipton has written something similar to this more than once (link one, two).

But in math education we basically don't have any friggin' time to spend drilling or testing on these definitions-of-terms. We say it, we write it, we just assume that you remember it for all time afterward. This may be somewhat exacerbated by the mathematician's and computer scientist's ingrained habit of committing key terms to memory, our own memories having been trained that way. I know in my own teaching I was at one time very frustrated with my students not picking up on this obvious requirement, and I've evolved and trained myself to constantly pepper them with side-questions on the proper names for different elements, day after day, to get these terms machine-gunned into their heads. They're not initially primed for instantaneous recall in the ways that we take for granted. At any rate: the time spent on testing for these issues is effectively zero; it doesn't exist in the system. (Personally, I have actually inserted some early questions on my quizzes on definitions, but I simply can't find time or space to do it thereafter.)

So after the brief presentation of those colossally important defined terms, we take for granted simple Recall and Comprehension (levels 1-2), and immediately launch into using them logically in the form of theorems, proofs, and exercises -- that is, Application and Analysis (levels 3-4). Note the following "key verbs", specific to the math project, in Bloom's categorization: "computes, operates, solves" are among Application (level 3); things like "calculates, diagrams" are put among Analysis (level 4). These of course are the mainstays of our expected skills, questions on tests, and time spent in the math class.

And then of course we get to "word problems", or what we really call "applications" in the context of a math class. Frequently some outside critic expects that these kinds of exercises will make the work easier for students by making it more concrete, perhaps "real-world oriented". But the truth is that this increases the difficulty for students who are already grappling with higher-level skills than they're accustomed to in other classes, and are now being called upon to scale even higher. These kinds of problems require: (1) high-quality English parsing skills, (2) ability to translate from the language of English to that of Math, (3) selection and application of the proper mathematical (level-3 and 4) procedures to solve the problem, and then (4) reverse translation from Math back to an English interpretation. (See what I did there? It's George Polya's How-To-Solve-It.) In other words, we might say: "Yo dawg, I heard you like applications? Well I made applications of your applications." Word problems boost the student effectively up to the Synthesis and Evaluation modes of thought (levels 5-6).
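To make those four steps concrete, here's a toy worked example of my own (a classic two-number puzzle, not drawn from any particular textbook), with the steps marked as comments:

```python
# A toy worked example of the four-step word-problem protocol, on the
# problem: "Two numbers sum to 30 and differ by 4; find them."

# Step 1: Parse the English -- the unknowns are two numbers, say x and y.
# Step 2: Translate English to Math:  x + y = 30  and  x - y = 4.
# Step 3: Apply the level-3/4 procedure: eliminate y by adding the two
#         equations, giving 2x = 34.
x = (30 + 4) / 2
y = 30 - x
# Step 4: Translate back from Math to English:
print(f"The two numbers are {x:g} and {y:g}.")   # 17 and 13
```

Notice that steps 1, 2, and 4 are pure translation work, and only step 3 is the procedure that a bare "solve the system" exercise would have tested.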

So perhaps this serves as the start of an explanation as to why the math class looks like a Brobdingnagian monster to so many students: if most of their other classes perpetually operate at levels 1 and 2 (as per the complaints of so many writers in the humanities and education departments), then the math class that immediately uses defined terms and logical reasoning to do stuff at levels 3 and 4 does look like a foreign country (to say nothing of word problems a few hours after that). And perhaps this can serve as a bridge between disciplines; if the humanities are wrestling with being stuck in level 1, then they need to keep in mind that the STEM struggle is not the same -- the work inherently demands reasoning at the highest levels, and we don't have time for anything else. Or perhaps this argues for finding some way to work in more emphasis on those simple vocabulary recall and comprehension issues, which are so critically important that we don't even bother talking about them?


2014-10-20

Is Statway a Cargo Cult?

We all know that Algebra is the limiting factor for the millions of students attending community colleges throughout the U.S. That is: Colleges could double (or triple, or quadruple) their graduation numbers overnight if the 8th-grade algebra requirement were only removed. This makes for lots of institutional pressure these days to do so.

A common line of thought is: Get rid of the algebra requirement and pursue a primer on statistics instead. You can sort of see why someone might negotiate in this way: offer something apparently attractive (statistics, which many say is needed to understand the modern world) in place of the thing they're asking you to give up. For example, the Carnegie "Statway" program now at numerous colleges promises exactly that (the lede being "Statway triples student success in half the time"; link).

But as an instructor of statistics at a community college, I use algebra all the time to derive, explain, and confirm various formulas and procedures. Without that, I think the intention (in fact I've heard this argued explicitly) is to get people to dump data into the SPSS program, click a button, and then send those results upstream or downstream to some other stakeholder without knowing how to verify or double-check them. Basically it advocates a faith-based approach to mathematical/statistical software tools.

This is a nontrivial, in fact really tough, philosophical angle with which to wrestle nowadays. We're long past the point where cheap calculating devices became ingrained throughout many elementary and high schools; convenient to be sure, but as a result at the college level we see a great many students who have no intuition for times tables, and are utterly unable to estimate, sanity-check, or spot egregious errors. (E.g., I had a college student who hand-computed 56×9 = 54 and was totally baffled at my saying that couldn't possibly be the answer, even after re-doing the same computation a second time.)
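For the record, the magnitude check I wish students would run on something like 56×9 = 54 is a one-liner; here's my own sketch of it (replace 9 by the nearby round number 10 and compare):

```python
# A sketch of the mental estimation check that catches an error like
# 56 x 9 = 54: round one factor to a nearby power of ten and compare.
exact = 56 * 9                  # 504, the correct product
estimate = 56 * 10              # 560, the one-step mental estimate
# The claimed answer 54 fails even this crudest check:
print(abs(54 - estimate) / estimate)      # about 0.90 -- off by ~90%
print(abs(exact - estimate) / estimate)   # 0.1 -- within 10%
```

No algorithm is needed here at all, just the ingrained reflex of asking "should this answer be near 560, or near 50?"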

To a far greater degree, as I say in my classes, statistics is a truly 20th-century, space-age branch of math; it's a fairly tall edifice built on centuries of results in notation, algebra, probability, calculus, etc. Even in the best situation in my own general sophomore-level class, and as deeply committed as I am to rigorously demonstrating as much as possible, I'm forced to hand-wave a number of concepts from calculus classes which my students have not taken, and will never take (notably regarding integrals, density curves, and the area under any probability density curve being 1; to say nothing of a proof of the Central Limit Theorem). So if we accept that statistics is fundamental to understanding how the modern world is built and runs, and there is some amount of corner-shaving in presenting it to students who have never taken calculus, then perhaps it's okay to go whole-hog and just give them a technological tool that does the entire job for them? Without knowing where it comes from, and being told to just trust it? I can see (and have heard) arguments in both directions.
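As an aside, that hand-waved fact about density curves is at least easy to check numerically without any calculus vocabulary; here's a quick sketch of my own, approximating the integral of the standard normal density with a plain Riemann sum:

```python
# A numerical check that the area under a probability density curve is 1,
# for the standard normal density, using a simple Riemann sum in place of
# the integral the students haven't seen.
import math

def normal_pdf(x):
    """Standard normal density: e^(-x^2/2) / sqrt(2*pi)."""
    return math.exp(-x * x / 2) / math.sqrt(2 * math.pi)

# Sum up thin rectangles over [-8, 8] (the tails beyond are negligible):
n = 16000
dx = 16 / n
area = sum(normal_pdf(-8 + i * dx) * dx for i in range(n))
print(round(area, 6))   # 1.0
```

Of course this demonstrates, rather than proves, the fact -- but it's the kind of sanity check that's unavailable once the whole pipeline is a single button-click.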

Here's an example of the kind of results you might get, from a website that caught my attention the other day: Spurious Correlations. The site puts on display a rather large number of graphs of data meant to be obviously, comically unrelated, even though the series have high correlation. Here's an example:


Something seemed fishy about this after I first looked at it. It's true that if you dump the numbers in the table into Excel or SPSS or whatever, a correlation value of 0.870127 pops out. But here's the rub: those date-based charts used throughout the site are totally not how you visualize correlation, nor related in any way to what the linear correlation coefficient (r) means. What it does mean is that if you take those data pairs and plot them as an (x, y) scatterplot, you can find a straight line that gets pretty close to most of the points. That is entirely lost in the graph as presented; the numbers aren't even paired up as points in the chart, and the date values are entirely ignored in the correlation calculation. I'm a bit unclear whether the creator of the website knows this, or is just applying some packaged tool -- but surely it will be opaque and rather misleading to most readers of the site. At any rate, it eliminates the ability to visually double-check some crazy error of the 56×9 = 54 ilk.
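To be explicit about what r does measure, here's a sketch with made-up numbers (not the site's actual data): the two yearly series get paired up as (x, y) points, and r reports how closely those points hug a straight line. The dates never appear in the formula at all.

```python
# Pearson's r computed from scratch on two made-up yearly series. Note
# that only the paired values enter the calculation -- the years do not.
import statistics

x = [29.8, 30.1, 30.5, 31.2, 31.9]   # series 1, five consecutive years
y = [327, 456, 509, 497, 596]        # series 2, same years

mx, my = statistics.mean(x), statistics.mean(y)
sxy = sum((a - mx) * (b - my) for a, b in zip(x, y))    # co-deviation
sxx = sum((a - mx) ** 2 for a in x)                     # x spread
syy = sum((b - my) ** 2 for b in y)                     # y spread
r = sxy / (sxx * syy) ** 0.5
print(round(r, 2))   # 0.88 for this made-up data: a strong linear fit
```

Shuffle the y-list into a different order and r changes completely, even though the two "time series" charts the site draws would look identical point-for-point.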

As a further point, there are some graphs on the site labelled as showing "inverse correlation", which I thought to be a correlation between x and 1/y -- but in truth what they mean is the more common [linear] "negative correlation", which is a whole different thing. Or at least I would presume it is; I'd never heard of "inverse correlation" as synonymous, and about the only place I can find it online is Investopedia (so maybe the finance community has its own somewhat-sloppy term for it; link).
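For what it's worth, a quick made-up example confirms that correlation between x and 1/y is genuinely a different quantity from negative correlation between x and y:

```python
# A check (my own toy data) that "correlation with the reciprocal of y"
# is not the same thing as "negative correlation between x and y".
import statistics

def pearson_r(xs, ys):
    """Pearson linear correlation coefficient, computed from scratch."""
    mx, my = statistics.mean(xs), statistics.mean(ys)
    sxy = sum((a - mx) * (b - my) for a, b in zip(xs, ys))
    sxx = sum((a - mx) ** 2 for a in xs)
    syy = sum((b - my) ** 2 for b in ys)
    return sxy / (sxx * syy) ** 0.5

x = [1, 2, 3, 4, 5]
y = [10, 8, 5, 4, 1]    # y falls as x rises: negative correlation

print(round(pearson_r(x, y), 2))                   # -0.99: strongly negative
print(round(pearson_r(x, [1 / b for b in y]), 2))  # 0.81: positive, and weaker
```

So whatever "inverse correlation" is supposed to mean, the two readings give very different numbers on the same data.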

I guess someone might call this nit-picking, but my intuition is that it's a sign of somebody who can't actually distinguish between true and false interpretations of statistical results. Is this ultimately the kind of product we get if we wipe out all the algebra-based derivations from our statistics instruction, and treat it as a non-reasoning vocational exercise?

Let me be clear in saying that at this time I have not actually read the Carnegie Statway curriculum, so I can't say if it has some clever way of avoiding these pitfalls or not. Perhaps I should do that to be sure. But as years pass in my current career, and I get more opportunities to personally experience all the connections throughout our programs, I find myself becoming more and more of a booster and champion of the basic algebra class requirement for all, as perhaps the very finest tool in our kit for promoting clear-headedness, transparency, honesty, and truth in regards to what it means to be an educated, detail-oriented, and scientifically-literate person.


2014-10-13

How Do You Know It's a Proportion?

I've written in the past of the mystery of when you'd want to use a proportion for an application problem, and what the benefits are for doing so (link). Once again, last week, one of my basic algebra students asked the question:
"How do you know it's a proportion?"
And once again I was unable to answer her. I've searched all through several textbooks, and scoured the Web, and I still can't find even an attempt at a direct explanation of how you know a problem is proportional. (Examples, sure, nothing but examples.) I've asked other professors and no one could even take a stab at it. Perhaps the student was looking at a problem such as the following:
A can of lemonade comes with a measuring scoop and directions for mixing are 6 scoops of mix for every 12 cups of water. How much water is needed to make the entire can of lemonade if there are 40 scoops of mix?

On an architect's blueprint, 1 inch corresponds to 4 feet. Find the area of an actual room if the blueprint dimensions are 6 inches by 5 inches.

The ratio of the weight of an object on Earth to the weight of the same object on Pluto is 100 to 3. If a buffalo weighs 3568 pounds on Earth, find the buffalo's weight on Pluto.

Three out of 10 adults in a certain city buy their drugs at large drug stores. If this city has 138,000 adults, how many of these adults would you expect to buy their drugs at large drug stores?

The gasoline/oil ratio for a certain snowmobile is 50 to 1. If 1 gallon equals 128 fluid ounces, how many fluid ounces of oil should be mixed with 20 gallons of gasoline?
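For the record, here's how the first one gets worked mechanically; each of these (modulo extra steps like the blueprint's area computation) comes down to the same setup of equating two ratios and cross-multiplying. A sketch of that standard procedure, in Python:

```python
# The lemonade problem via the usual proportion setup: write two equal
# ratios, 6/12 = 40/w, and cross-multiply to get 6w = 12 * 40.
scoops_known, cups_known = 6, 12   # "6 scoops of mix for every 12 cups"
scoops_total = 40
w = cups_known * scoops_total / scoops_known
print(w)   # 80.0 cups of water
```

But that's only the *how*; it doesn't answer the student's question of how you recognize in advance that this setup is legitimate.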

Concisely stated, what is the commonality here? What is a well-defined explanation for how we know that these are all proportional problems?


2014-10-01

On Comparing Decimals Like 0.999...

Today in my college algebra class will be the first time that I've provided space to actually discuss the 1 = 0.999... issue. Previously I mentioned this here on the blog. This became so contentious that it's actually the only post for which I've been forced to shut off comments. (Actually it attracted a stalker who'd post some aggressive nonsense every few days.)

Anyway, brushing up on some points for later today made me see a very obvious fact that I'd overlooked before, and that is: students' customary procedure for comparing decimals fails spectacularly in this case. For example, here it is expressed at the first hit from a web search at a site called AAAMath:
Therefore, when decimals are compared start with tenths place and then hundredths place, etc. If one decimal has a higher number in the tenths place then it is larger than a decimal with fewer tenths. If the tenths are equal compare the hundredths, then the thousandths etc. until one decimal is larger or there are no more places to compare. If each decimal place value is the same then the decimals are equal.
So if students apply the "simple" decimal comparison technique ("if one decimal has a higher number in the X place"), even at just the ones place, then this algorithm reports back that 1.000 is greater than 0.999... It overlooks the fact that the lower places can actually "add up" to an extra unit in a higher place. And thus all sorts of confused mayhem immediately follow. 

So the simple decimal comparison algorithm is actually wrong! To fix it, you'd have to add this clause: unless either decimal ends with an infinitely repeating string of 9's. In that case the best thing to do would be to initially "reduce" it back to the terminating form of the decimal (this being the only case where one number has multiple representations in decimal), and only then apply the simple grade-school algorithm.
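Here's a sketch of what that repaired algorithm could look like in code (my own toy notation, with an infinite tail of 9's written as a trailing "9..." in the string; negatives not handled):

```python
# A sketch of the repaired comparison: any decimal ending in infinitely
# repeating 9's is first "reduced" to its terminating twin, and only then
# compared in the ordinary place-by-place way (which Decimal does for us).
from decimal import Decimal

def normalize(s):
    """Return '0.999...' as 1 and '2.4999...' as 2.5; pass others through."""
    if not s.endswith("9..."):
        return Decimal(s)
    body = s[:-3].rstrip("9")          # '0.999...' -> '0.', '2.4999...' -> '2.4'
    if body.endswith("."):
        return Decimal(body[:-1]) + 1  # every fractional digit was a 9
    # otherwise bump the last kept digit up by one unit of its place value
    places = len(body.split(".")[1])
    return Decimal(body) + Decimal(1).scaleb(-places)

print(normalize("0.999...") == Decimal("1"))     # True: they are equal
print(normalize("2.4999...") < Decimal("2.51"))  # True: 2.5 < 2.51
```

This is, of course, exactly the "multiple representations" caveat made operational: the repeating-9 form is the only case that needs special handling before the grade-school comparison is trustworthy.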


2014-08-25

Introducing Automatic-Algebra.org

Here in New York, it's back-to-school starting next week. Of course, if you're in the teaching profession (or really just know about it), you've probably been doing academic work and preparing for the fall semester throughout the summer and year-round. I'm in the very fortunate position that I'll have a new permanent position at CUNY this fall, so here's how I've spent my August:

I've developed a new website for practicing basic numerical skills that are prerequisite for algebra and other classes like statistics, calculus, and computer programming: Automatic-Algebra.org. I've written a few times in the past about the need for certain skills to be automatic -- skills that have been taught but not mastered by most students who arrive in a remedial community-college algebra course, and that therefore cause continual disruption and frustration when we're trying to deal with actual algebra concepts (links one, two). Like, for the algebra course itself: times tables, negative numbers, and order-of-operations. Or for a basic statistics course: operations on decimals like rounding, comparing, and converting to percent.

So what you get at the new site is a brief, 5-question quiz on each of these skills. Here's how I designed them:
  • Timed, so that students get a very clear portrayal of what the expectation is for mastery of each of these skills (15, 30, or 60 seconds per quiz). For example: sequential adding and counting on fingers for multiplications will not suffice.
  • Multiple-choice, so the site is usable on a variety of devices, including touch-screen mobile devices. For example: you can stand on a bus and drill yourself on a smartphone by just tapping with your thumb.
  • Randomized, so once you take a quiz you can click "Try Again" and get a new one, and drill yourself multiple times in just a few minutes each day.
  • Javascript, so the quiz runs entirely on your own device once you download the page the first time. The site doesn't require any login, accounts, server submissions, recording of attempts, or any transmitted or collected information whatsoever after you initially view the page.
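For the curious, here's a minimal sketch of how one of the randomized multiple-choice quizzes could be generated -- in Python rather than the site's actual Javascript, with function names and distractor choices that are my own invention, not the site's code:

```python
# A sketch of generating a randomized 5-question times-table quiz with
# four multiple-choice options per question.
import random

def make_question():
    """One random times-table question with 4 shuffled-in choices."""
    a, b = random.randint(2, 12), random.randint(2, 12)
    answer = a * b
    # Distractors: plausible near-miss products (a set, so duplicates vanish)
    choices = {answer, answer + a, answer - b, answer + 10}
    while len(choices) < 4:                      # top up if any collided
        choices.add(answer + random.randint(-12, 12))
    return f"{a} x {b} = ?", sorted(choices), answer

quiz = [make_question() for _ in range(5)]
for prompt, options, answer in quiz:
    print(prompt, options)   # the answer is kept aside for grading
```

Hitting "Try Again" just re-runs the generator, which is what makes daily repeated drilling cheap.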

This is something that I've wanted for a few years now; no existing website really implemented and consolidated these drills the way I wanted as a reference for students. Thanks to the new full-time track, I feel I'm in a position to warrant developing it myself and leveraging it for numerous classes of my own in the future.

Feel free to check it out, and offer any comments and observations. If you feel it might help your incoming students this fall semester, give them the link and maybe it will lift a whole lot of our boats all at the same time. Do you think that will be of assistance?