2012-06-28

Homework on the Board

Where I teach in New York, I know that other instructors commonly do the following -- prior to the start of a class session, require a few (2 or 3) students to write the results of a homework problem on the chalkboard. From what I can tell, these problems are not assessed or discussed in any way (the instructor just starts the class with some other lecture topic). My impression is that students are checked off for meeting this requirement a few times throughout the semester (but it's not attendance; that's done separately). This is not something I ever encountered in my own schooling, nor have I ever seen it suggested as a technique in any report or study.

Can someone explain to me, or link for me, what the rationale of this practice is?


2012-06-25

Look Closely at This Rectangle

And how it's labelled.


Shaking my fist at you, CVS, dummies!!

(Big thanks to BostonQuad for pointing this out.)

2012-06-21

A Very Personal War

A good friend pointed me to the start of a recent New York Times article:
In the early years of the 20th century, the great British mathematician Godfrey Harold Hardy used to take out a peculiar form of travel insurance before boarding a boat to cross the North Sea. If the weather looked threatening he would send a postcard on which he announced the solution of the Riemann hypothesis. Hardy, wrote his biographer, Constance Reid, was convinced "that God -- with whom he waged a very personal war -- would not let Hardy die with such glory."
A MadMath salute to G.H. Hardy!

2012-06-18

Tragic Examples


Can Our Real-World Statistical Examples Be More Optimistic?

At one point in the recent math conference, one of the better speakers was analyzing the famous graph of Napoleon's march into Russia (which some argue is the greatest statistical graph ever). And then this speaker ruminated:
"It's remarkable that some of our best examples of quantitative reasoning are based on tragedies... I'm looking for more optimistic examples."

The MadMath response would be: You won't find them. (None that are so intense and urgent, at any rate.) As I say near the start of my stats courses: many of our examples will be dealing with violent acts, or deaths from disease, or drug abuse, or other unpleasant circumstances. You wouldn't bother with this kind of math unless you could save someone's life with it. Math isn't a pleasant serenade; it's a battle of necessity.


2012-06-14

Remedial Classes in the News


Remedial Classes in the News; Possible End-Games


The AP article from this week, "Experts: Remedial college classes need fixing", is worth reading. The statistics are consistent with what I've seen in lots of other places (including my own university's research publications). Let me focus on one line:
Legislation passed earlier this month in Kansas prohibits four-year universities from using state funds to provide remedial courses.
Probably the most heart-breaking part of teaching remedial college math is the very many students who tell me that they've completed every course they need for a degree, except for one remedial (non-credit) algebra course, which they may take and re-take without success. (You might ask, "Isn't passing remedial math required before taking, say, a science course?" -- the answer is yes, but someone is incentivized to keep giving waivers in that regard.) How to avoid this trap?

In the past, I thought it was a financial-aid issue; funding is given (as I understand it) as long as a full-time course load is taken, which means students are required to take credit-bearing courses at the same time as they attempt remediation. Supposedly NY will soon start enforcing a rule to not pay if remediation is not completed after the first year.

But either way you go on that issue, students wind up committed (sunk cost) to a program that appears "mostly done" except for the math requirement. And they'll wind up in the same cycle of re-taking remedial math, now at their own out-of-pocket expense, with that being close to the last class they have to take. Even if you go the Kansas route and don't pay for remediation, students can pay for everything else first and wind up in the same sunk-cost situation (paying for repeated remediation near the end -- exacerbating debt, which is the other whipping-boy of the linked article).

I'd like to suggest that clear, up-front communications (perhaps mandatory reporting requirements) on passing and graduation rates would do the trick. But then you've got the Dunning-Kruger Effect (the weakest students overestimate their abilities/chances), and frankly the very weakest, in remedial arithmetic, are there precisely because they don't understand percentages (et al.) well enough to parse information like that.

So it seems like only two ultimate solutions remain: (a) Bar students with certain deficiency levels from college -- i.e., end "open admissions", or (b) Void these requirements and allow people to get associate's degrees without ever mastering basic algebra (at least). My guess is that in some form or other, the latter is nigh-inevitable.


2012-06-11

Reading, Writing, and Video-Watching


Personal Milestones in Learning & Teaching Math (via Programming and Reading); Struggle With Students; Superfluousness of Video Lectures

I think there were two key developmental leaps in my own learning of math:

(1) Writing Computer Programs -- When I learned to program in BASIC on a TRS-80 color computer circa 6th-7th grade, working mostly on my own at home from a book that came with the computer, this increased my precision enormously. When writing code, the fact that the computer complained with a "syntax error" if any single character was malformed caused me to pay attention to the details, and likewise attend to the fact that every single symbol has important, condensed meaning in math/computers. You can have the top-level concepts basically down, but if your details aren't also correct, then your work will collapse into nonsense. As a free and added bonus, it simultaneously got me dealing with logical issues like if/then, not, or, and (I think the book just said "the meaning of these should be obvious"), as well as variables. In junior high school I was writing programs to spit out the results of numerous math homework assignments automatically, so the application was obvious and actually time-saving.

(2) Reading Math Textbooks -- When I was a senior in high school, I started taking a calculus course via experimental technology; namely, a course at the state college that was being delivered by closed-circuit video feed to the high school. In theory, we had microphones on our desks with which we could ask the college instructors questions and participate. In practice, it didn't work well at all: (a) we felt disconnected and intruding when we asked a question (which would hit the instructor by surprise -- he'd appear startled, and have to ask where it was coming from), (b) the one TV in our classroom made it hard to see the instructor and what he was writing, (c) the lights in our room were turned off for visibility, as I recall, and (d) I don't think the instructor was terribly good in the first place. It was the first math class I took where I was routinely falling asleep at my desk. In desperation, the only way for me to learn calculus was, on the weekends, to go down to the basement alone and read it from the book. And the transformative realization was that it was fundamentally readable -- all the information was right there, if I just read it slowly and carefully enough. So while this wasn't how I always approached math classes from then on, it was always my backup plan: you could learn all of math just from a book if you really wanted to.


Questions I ruminate on: Are these special skills? Can everyone pick up these abilities, or is there evidence that is not the case? Why is it so overwhelmingly difficult to get students to read a math book? Does anyone teach or assess reading a math book? Does anyone teach or assess reading anything in careful detail? Would math instructors everywhere be out of a job if everyone realized that the information is just sitting in a book that you could read on your own time? (What's that Good Will Hunting quote -- "... an education you could have got for $1.50 in late charges at the public library"?) Part of me very much wants good, open-source textbooks to conserve student money and resources -- but is text already dead?

Over the years, I've tried to share these developmental leaps with my students. When I was a graduate student, I tried incorporating some BASIC programming into the algebra course I was teaching (actually, it was included in the book at that time, I think). About 10 years ago I was using MathXL online homework software. A few years ago I was assigning carefully-written algebra homework in the standard book format, and grading every symbol/character carefully (which met with enormous resistance and hostility).

Again, it seems like the primary struggle we have, with any technique, is the attempt to get students to actually put in the study/reading/exercise time required for math (whether in-class, out-of-class, online, etc.), and I've seen all of these initiatives at some point fail when students simply gave up on them. Research seems to show that the more learning students do, the less they like it, because of the greater amount of work they're doing for it. Perhaps we just have to admit that the primary factor is simply how well students are situated in life to actually spend time studying. (Perhaps.)

The point I'm trying to get to is this -- There's a lot of scuttlebutt these days about Khan Academy, other online teaching initiatives, and the "inverted classroom", where a video lecture is watched before class (we hope) and exercises are worked and coached in class. But I honestly don't see any intuitive advantage to video lectures over having a textbook (like we've had for hundreds of years) and expecting that it be read prior to class. All the same information is there -- just in a far more efficient format (the textbook). In fact, math notation is fundamentally symbolic manipulation created for the written page -- to me, video seems largely beside the point. If we just taught students to read properly (and truly, my math classes always seem to transmogrify into language-arts classes), wouldn't it be hundreds of times more efficient to just give them a (possibly digital) book? All the reputed advantages of self-pacing, being able to pause and rewind -- are those not inherently possessed by books as well, and more elegantly? Is the idea just that "video" is all the rage and kids are trained to respond better to it? I just flat-out don't see any advantage to video over books -- video lectures go too slowly and make me impatient and irritable -- am I crazy?
 
Amusingly, one of the recent CUNY research studies on video lectures had the same problem with their videos as I did in my old calculus class: what the video-instructor was writing on the board behind them was apparently difficult to see, and students complained that they then had to turn to the textbook to understand what was going on (according to the speaker at the recent conference). This was approximately 25 years after my identical experience with closed-circuit video. Written language is still the uber-tool of humankind -- and mathematical writing the most intensely condensed and powerful -- and I'm not seeing any way to avoid embracing it and teaching it as such.


2012-06-07

Paucity of P-Values


How Effect Size is Immensely More Important than P-Value; Small Effect Sizes for Educational Techniques Explainable by the Pygmalion Effect; Reporting Bias; Positive Outlier in Hostos Study

One of the very nice things about the recent conference papers was getting a chance to dig into statistical research in a field that I'm actually knowledgeable about. Of course, I've taught college statistics for approximately 7 years at this point. In the last year I started assigning a research project to report and interpret a medical article of the student's choice from JAMA -- which is a fair bit of work for me, since I'm not previously familiar with the medical issues or terminology involved (and as an accidental side-effect of this assignment, I've gotten a rapid-immersion pickup of lots more medical information than I ever expected). So here's an opportunity to see how our statistics apply to actual math-education issues.

In my statistics course, P-value statements are among the "crown jewels" of the course (assessment of the reasonableness of a hypothesis such as "do average test scores increase with this technique?"), and frequently the last thing we do in the course. It takes the whole semester as prep-work, and then about a week of lectures on the subject. It's an interesting and clever piece of math which can frequently establish whether results in the population are, on average, improved. For example: A JAMA medical journal article might say, "Infection was... increased with number of sexual partners (P < .001 for trend) and cigarettes smoked per day (P < .001 for trend)." [Gillison, 2012, "Prevalence of Oral HPV Infection in the United States, 2009-2010"] As our textbook would say, this indicates extremely strong evidence that the claim was true for the population in general (lower P-values being better; sort of a probability of being wrong, in some sense).
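For concreteness, here's a minimal sketch in Python of how such a P-value statement gets produced -- a one-sided two-sample t-test on exam scores. All the numbers here are made up for illustration; they're not taken from any study mentioned in this post:

    # One-sided two-sample t-test: did average test scores increase with
    # the new technique? (Made-up exam scores; illustration only.)
    from scipy import stats

    control   = [62, 71, 55, 68, 74, 60, 66, 59, 72, 65]
    treatment = [70, 75, 61, 77, 69, 73, 80, 64, 71, 76]

    # H0: the means are equal; Ha: the treatment mean is greater.
    t_stat, p_value = stats.ttest_ind(treatment, control, alternative='greater')
    print(f"t = {t_stat:.3f}, P = {p_value:.4f}")
    # A small P-value (say, under 0.01) is strong evidence of *some* increase.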

The somewhat thunderbolt-realization I got from the math-education articles I've been reading is that suddenly, I kind of don't give a crap about the P-values. What I really care about is the effect size: how much did scores go up (if at all)? We want to make a cost-benefit analysis on completely overhauling our classes; is it worthwhile? Granted that some increase exists -- is it useful, or negligibly small?

A textbook will usually discuss this briefly ("Statistical significance is not the same as practical significance"), but until now I didn't realize how immensely critical that distinction was. Several of the papers I'm looking at take some delight in spending several paragraphs explaining what a P-value is, and how it can establish overwhelming likelihood for (some, possibly negligible?) increase in average test scores. Others get sloppy about saying "these findings are very significant!" without specifying statistical significance -- which is to say, possibly not actually significant at all. P-values for some change are somewhat interesting, and in the JAMA article I think they're worth the 3-or-so words expended on them ("P < .001 for trend"), but not any more than that. Most of our math-instruction papers gloss over this without highlighting effect size; on that point, Wikipedia says this:
Reporting effect sizes is considered good practice when presenting empirical research findings in many fields. The reporting of effect sizes facilitates the interpretation of the substantive, as opposed to the statistical, significance of a research result. Effect sizes are particularly prominent in social and medical research. (Wikipedia, "Effect Size")
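To see why this matters, here's a toy sketch (hypothetical numbers again): with a huge enough sample, a negligible half-point bump in average exam scores yields an astronomically small P-value, while the effect size correctly flags the improvement as trivial.

    # "Statistically significant" is cheap with a big enough sample.
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(0)
    n = 100_000                                   # hypothetical huge sample
    control   = rng.normal(70.0, 10, n)           # mean exam score 70
    treatment = rng.normal(70.5, 10, n)           # mean 70.5: up half a point

    t, p = stats.ttest_ind(treatment, control, alternative='greater')
    d = (treatment.mean() - control.mean()) / 10  # Cohen's d ~ 0.05: trivial
    print(f"P = {p:.2e}, Cohen's d = {d:.3f}")
    # P is astronomically small, yet the effect is practically negligible.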

This perhaps brings to mind a joke:
"There are many kinds of intelligence: practical, emotional. And then there’s actual intelligence, which is what I’m talking about." (Jack Donaghy, 30 Rock)

So the truth is, most of the recent conference papers are demonstrating fairly small effect sizes from any of the various techniques tried; for those that showed any significance at all, it's something like a 2, 5, or 7% increase in final exam scores or overall passing rates (as one said, "about two-thirds of a letter grade"). Is that worth the effort of entirely overhauling an educational program? It seems to me like effect sizes of this amount are quite likely to be accounted for by one or more of the following well-known testing phenomena:
  • Novelty Effect -- Subjects tend to perform better for a limited time after any kind of change in the environment.
  • Hawthorne Effect -- Subjects in an experiment may increase performance because they know they're being studied.
  • Pygmalion Effect -- If a teacher simply expects students to perform better, then students do in fact perform better.

Let's think about that last one a bit more. It seems pretty well-known that if researchers are invested in the outcome of their research in an educational setting, results tend to track their expectations/incentives (perhaps they will put more energy than usual into the new technique, etc. -- something that won't scale to other instructors in general). In medicine, that's analogous to the reason why you want double-blind trials. Robert Rosenthal presented findings (in a meta-analysis of hundreds of experiments) that interpersonal expectancy by the teacher has a mean effect size on learning and ability of r = 0.26 -- that is, r^2 = 0.07, such that it alone explains 7% of the variation seen in outcomes (Rosenthal, Dec. 1994, "Interpersonal Expectations: A 30-Year Perspective", in Current Directions in Psychological Science).

So now let's momentarily limit ourselves to considering the studies from this conference that did not have the primary investigators specially teaching the experimental groups (my hypothesis being: those studies will show reduced effect sizes as regards overall passing rates). There are 3 of the 10 studies where this can be established:
  • BMCC (Algebra), which had randomized instructor assignments -- control pass rate 32.8% vs. treatment pass rate 36.4% (sample effect +3.6%; P = 0.3013, not statistically significant).
  • LaGuardia (Algebra), which only made extra outside tutors available (no change to in-class teaching) -- control pass rate 56.6% vs. treatment pass rate 58.9% (sample effect +2.3%; P = 0.471, not statistically significant).
  • Brooklyn College (Precalculus), which scrupulously avoided having investigators teach. They report fail/withdraw rates and a somewhat questionable statistical procedure (it assumes that the control group counts as the population, generating P = 0.0869), which I'll re-do myself here. Control pass (non-F/W) rate was 67.46% vs. treatment pass rate 77.55% (sample effect +10.09%; running a two-proportions z-test [per Weiss Sec. 12.3] gives me P = 0.0749 -- only moderately-strong evidence, not strong or very strong, for the trend according to my book). A little more info on this one: a two-proportion z-interval calculation for the improvement in population passing rate gives (95% C.I.: -2.42% to +22.60%), which is to say, the demonstrated effect size is less than the margin of error.
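For the curious, here's a sketch in Python of the standard two-proportions procedure used above (the textbook method, per Weiss Sec. 12.3). The raw pass counts in the demo call are my own back-solved guesses that happen to match the quoted rates -- the report's actual class sizes may well differ, so treat the printed output as illustrative only:

    # Two-proportions z-test (pooled) and z-interval (unpooled), as in a
    # standard intro-stats textbook. Hypothetical counts; illustration only.
    from math import sqrt
    from scipy.stats import norm

    def two_prop_z(x1, n1, x2, n2, conf=0.95):
        """One-sided pooled z-test (Ha: p2 > p1) and unpooled CI for p2 - p1."""
        p1, p2 = x1 / n1, x2 / n2
        pooled = (x1 + x2) / (n1 + n2)            # pooled proportion under H0
        z = (p2 - p1) / sqrt(pooled * (1 - pooled) * (1/n1 + 1/n2))
        p_value = norm.sf(z)                      # one-sided P(Z > z)
        se = sqrt(p1*(1-p1)/n1 + p2*(1-p2)/n2)    # unpooled SE for the interval
        zc = norm.ppf(1 - (1 - conf) / 2)
        return p_value, (p2 - p1 - zc*se, p2 - p1 + zc*se)

    # Counts back-solved to match the quoted 67.46% and 77.55% rates (guesses!):
    p, ci = two_prop_z(x1=85, n1=126, x2=38, n2=49)
    print(f"P = {p:.4f}, 95% CI for improvement: ({ci[0]:+.2%}, {ci[1]:+.2%})")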



The other interesting lesson here is that, uniformly, there are always "reasons why more research is called for" (or some-such). If a technique did not show improvement, then there's a paragraph explaining extenuating or confounding reasons for that which could be fixed in a future round of research (but there's never a parallel explanation for why a significant result was accidental or one-time-only). Out of the 10 research papers I'm looking at, no one ever said, "The results of this study show no evidence for this technique improving scores, and therefore we recommend not pursuing it in the future." (I guess that can be called "reporting bias".) Likewise, the introduction from the university goes on about the many "positive effects" of the various studies, but again, I'm not seeing effect sizes that are tremendously useful.

There is one outlier in all this. The study from Hostos claims a near-doubling of remedial class passing rates (from 24% to 43%) when online software is used for homework assignments (specifically, MathXL). It's a short report, and a bit unclear on the study setup, and on whether these results are just for Arithmetic or also Algebra classes. (Note that the following report from City College, on using Maple TA software for Precalculus classes, showed no statistical difference in performance.) I trail-blazed using MathXL myself at another college about 10 years ago, but didn't find as much improvement as I expected at the time (plus lots of technical complaints, can't-get-online excuses, inaccessibility to vision-impaired students, etc.). I'd like to see clearer information on exactly how this was achieved.

(Consider how this relates to recent troubling revelations that most published medical results cannot be reproduced.)


2012-06-04

Math Attitudes


A Contrarian's Take on the Issues of Math Anxiety, Confidence, Motivation, and "Fun"

This discussion strikes pretty close to the heart of this blog. From the recent math conference, here are some comments from speakers, reports, and power-point slides on the issue of math attitudes:
"Attitude is key; we must convince students that math can be fun and easy."

"The important thing [about this technique] is that your students are happy."

"There is no other way to make students mathematicians than to make them happy."

"This technique did not demonstrate an increase in test scores, but it did increase student confidence."

"In particular, [this teaching approach] stresses the principal importance of elimination of mathematics anxiety -- the main barrier to success in mathematics."

Okay; so as always, I'll be the contrarian and come down on the other side of this issue. While others talk about math being beautiful or elegant or fun or (classically) musical, my gut-sense of it is more of a horrible brutal war that you commit to out of dire necessity. Personally, I could find math important, or insightful, or transformative, or useful, or even an addiction, but rarely has it ever been "fun" to me. See the inaugural post for this blog -- The MadMath Manifesto. I don't see any evidence that confidence is a signal of ability (in fact, some of my most confident students are the most helpless -- see also Socrates), nor do I think that "fun" is either a requirement or even something very informative to talk about (which reiterates stuff I've said vehemently on my gaming blog). Intriguingly, neither does at least one specific research article from this conference:
On the first day of class in Spring 2011, all Precalculus students were given a customized 20 minute Diagnostic Test that check basic Pre-algebra concepts and a Motivation Test that measured student motivation as a prediction of success... However, as Figure 4 shows, there is no correlation between the Diagnostic Test and the Motivation Test (n = 159, r = -0.07). This counter-intuitive outcome indicates that student motivation is not a factor influencing the outcome of the Diagnostic Test. The math scores are low whether or not the students are motivated... it is reasonable to conclude that students are motivated, but genuinely do not know or cannot recall the prerequisites at the beginning of the semester. (Kingan, Clement, and Hu, 2012; "The Gap Project: Closing Gaps in Gateway Mathematics Courses")

Now, I don't find this result "counter-intuitive" in any way (again, in some sense it's the whole point of this blog since day one). If anything, I think the perceptual problem for our remedial students is not that they are insufficiently confident and upbeat; I worry that they are not sufficiently aware of what grave peril they're actually in (like, a 10% chance of getting a degree within the next 3 years -- not something that is communicated to them in an honest fashion), and of the amount of time and effort they need to devote to their math studies if this is truly a priority for them.


Likewise, the Dunning-Kruger Effect seems critically important -- the observation that tremendously unskilled people will radically overrate their ability at a given task. Note in the analysis above the value r = -0.07; if anything, in this sample high motivation is somewhat correlated with reduced math ability (not that the effect is very strong or significant). So to me, remedial students' "fear" of math is largely an honest and accurate assessment of their very weak skills. If we have an alternative teaching method that raises confidence but leaves math ability unchanged, then is that not actively harming them -- changing their self-assessment from accurate to inaccurate (exacerbating Dunning-Kruger)?
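(For the mechanically-minded, here's a tiny Python sketch of the Pearson correlation computation behind a figure like that r = -0.07 -- with made-up scores, not the study's actual data:)

    # Pearson's r between a diagnostic test and a motivation survey.
    # Made-up scores; illustration of the computation only.
    import numpy as np

    diagnostic = np.array([55, 40, 72, 38, 65, 50, 45, 80, 35, 60])
    motivation = np.array([ 7,  8,  6,  9,  8,  5,  9,  7,  6,  8])  # 0-10 scale

    r = np.corrcoef(diagnostic, motivation)[0, 1]
    print(f"r = {r:.2f}")   # weakly negative here: motivation doesn't track skill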

I suspect that "math anxiety" is a convenient whipping-boy that frequently lets students, teachers, and administrators avoid talking about actual, concrete math and quantitative-reasoning deficiencies (which is frequently sad and hard). Consider another article, by Clark: "Antagonism between Achievement and Enjoyment in ATI Studies", which found that, when given the chance, both weak and strong learners preferred the learning style that produced the least amount of learning for them (reversed for each type) -- i.e., the more enjoyable, the less work, the more "fun".

Finally, consider findings by Robert Rosenthal (Harvard researcher, famous for investigations on the Pygmalion Effect): 
"The most surprising finding in our research," says Rosenthal, "has to do with what we called the 'psychological hazards' of unexpected intellectual growth." When so-called "lower track" students in the control group at Oak School (students who were not expected to shine) began to show marked improvement and growth, their teacher evaluations on such things as "personal adjustment," "happiness," "affectionate" declined. ("Pygmalion in the Classroom")

Rosenthal sees this as society (teachers) punishing those who break its expectations, but I think an equally defensible interpretation is that actual learning growth is related to struggle and challenge and resistance, and inversely related to happiness.

Perhaps the real over-arching struggle in all of our different teaching techniques is the fight to get students to actually spend the required amount of time studying and practicing their math -- whether we try to do that in-class, out-of-class, with online software, outsourced video lectures, extra math tutors/coaches, or whatever. Ultimately (as I say on the first day in all my classes) it's patience and focus that are both the requirement and the end-goal of remedial math classes -- and, much like the Samurai-mindset, excitability-fun-confidence are likely to be either uncorrelated or negative indicators for those traits.