2014-12-29

Academically Adrift

Going through an old copy of Thought & Action magazine today (Fall 2011), at the back I come across a review of the book Academically Adrift: Limited Learning on College Campuses, by Richard Arum and Josipa Roksa. The main thrust of the book is its use of the Collegiate Learning Assessment (CLA), a test of critical thinking, reading, and writing given at the end of the sophomore year to several thousand students at 24 different colleges. The upshot seems to be that in many cases, there is little difference in ability between when students first arrive on campus and two years afterward. I found the following paragraph of the review to be worth highlighting:
The ensuing chapters then detail the key findings related to changes in CLA scores, implicating students’ entering characteristics and campus experiences.  Students with stronger academic preparation and students who attended more selective institutions showed greater gains in critical thinking; initial disparities between white students and African American students were exacerbated.  Those who participated in fraternities and sororities showed fewer gains relative to their peers, as did those who were majoring in business, education, or social work.  Moreover, Arum and Roksa argue that understandings of student employment need to be nuanced, as working on-campus is beneficial only up to 10 hours per week.  They also question the trend toward collaborative learning, noting that more time studying alone is positively associated with gains in critical thinking, while time studying with peers is negatively associated with such gains.  Perhaps most strikingly, the authors concede that social integration might be related to retention but argue that its affects on learning are far less clear, and may be negative.
 http://www.nea.org/home/50459.htm

In calling out certain majors, I am reminded of the footnote in Burton R. Clark's famous paper "The 'Cooling-Out' Function in Higher Education" from 1960 (The American Journal of Sociology, May 1960, footnote 8): 
One study has noted that on many campuses the business school serves "as a dumping ground for students who cannot make the grade in engineering or some branch of the liberal arts," this being a consequence of lower promotion standards than are found in most other branches of the university (Frank C. Pierson, The Education of American Businessmen [New York: McGraw-Hill Book Co., 1959], p. 63). Pierson also summarizes data on intelligence of students by field of study which indicate that education, business, and social science rank near the bottom in quality of students (ibid., pp. 65-72).
http://www.jstor.org/discover/10.2307/2773649?sid=21105522739863&uid=2&uid=3739832&uid=3739256&uid=4

Isn't it interesting that we've effectively handed over control of our culture, our most powerful institutions, and education of the young, to the least proficient among us? And that this seems to be a stable pattern for over a half-century?


2014-10-27

Bloom's Taxonomy and Math Education

In the last year or so I've been attending seminars at our college's Center for Teaching and Learning. So far these have been on how to publish in scholarship of teaching and learning (SOTL) journals, and a few reading groups (Susan Ambrose's "How Learning Works", and Ken Bain's "What the Best College Teachers Do"). Frequently I'm the only STEM instructor at the table, with the rest of the room being instructors from English, philosophy, political science, history, women's studies, social science, etc.

One thing that keeps coming up in these books and discussions is a reference to Bloom's Taxonomy of Learning, a six-step hierarchy of tasks in cognitive development. Each step comes with a description, examples, and "key verbs". Here is a summary similar to what I've been seeing. Now, I'm perennially skeptical of these kinds of "N Distinct Types of P!" categorizations, as they've always struck me as at least somewhat flawed and intellectually dishonest in a real, messy world. But for argument's sake, let's say that we engage with the people who find this useful and temporarily accept the defined categories as given.

In every instance that I've seen, the discussion seems to turn on the following critique: "We are failing our students by perpetually being stuck in the lower stages of simple Knowledge and Comprehension recall (levels 1-2), and need to find ways to lift our teaching into the higher strata of Application, Analysis, etc. (levels 3-4 and above)". To a math instructor this sounds almost entirely vapid, because we never have time to test on levels 1-2, and entirely take those levels for granted without further commentary. In short, if Bloom's Taxonomy holds any weight at all, then I claim the following:

Math is hard because by its nature it's taught at TOO HIGH a level compared to other classes.

For example: I've never seen a math instructor testing students on simple knowledge recall of defined terms or articulated procedures. Which in a certain light is funny, because our defined terms have been hammered out over years and centuries, and it's important that they be entirely unambiguous and essential. I frequently tell my students, "All of your answers are back in the definitions". Richard Lipton has written something similar to this more than once (link one, two).

But in math education we basically don't have any friggin' time to spend drilling or testing on these definitions-of-terms. We say it, we write it, we just assume that you remember it for all time afterward. This may be somewhat exacerbated by the mathematician's and computer scientist's habit of knowing to commit key terms to memory, our own memories having been trained that way. I know in my own teaching I was at one time very frustrated with my students not picking up on this obvious requirement, and I've evolved and trained myself to constantly pepper them with side-questions, day after day, on what the proper name is for different elements, to get these terms machine-gunned into their heads. They're not initially primed for instantaneous recall in the ways that we take for granted. At any rate: the time spent on testing for these issues is effectively zero; it doesn't exist in the system. (Personally, I have actually inserted some early questions on my quizzes on definitions, but I simply can't find time or space to do it thereafter.)

So after the brief presentation of those colossally important defined terms, we take for granted simple Recall and Comprehension (levels 1-2), and immediately launch into using them logically in the form of theorems, proofs, and exercises -- that is, Application and Analysis (levels 3-4). Note the following "key verbs", specific to the math project, in Bloom's categorization: "computes, operates, solves" are among Application (level 3), while things like "calculates, diagrams" are put among Analysis (level 4). These of course are the mainstays of our expected skills, questions on tests, and time spent in the math class.

And then of course we get to "word problems", or what we really call "applications" in the context of a math class. Frequently some outside critic expects that these kinds of exercises will make the work easier for students by making it more concrete, perhaps "real-world oriented". But the truth is that this increases the difficulty for students who are already grappling with higher-level skills than they're accustomed to in other classes, and are now being called upon to scale even higher. These kinds of problems require: (1) high-quality English parsing skills, (2) the ability to translate from the language of English to that of Math, (3) selection and application of the proper mathematical (level 3-4) procedures to solve the problem, and then (4) reverse translation from Math back to an English interpretation. (See what I did there? It's George Polya's How to Solve It.) In other words, we might say: "Yo dawg, I heard you like applications? Well I made applications of your applications." Word problems boost the student effectively up to the Synthesis and Evaluation modes of thought (levels 5-6).

So perhaps this serves as the start of an explanation as to why the math class looks like a Brobdingnagian monster to so many students: if most of their other classes are perpetually operating at levels 1 and 2 (as per the complaints of so many writers in the humanities and education departments), then the math class that is immediately using defined terms and logical reasoning to do work at levels 3 and 4 does look like a foreign country (to say nothing of word problems a few hours after that). And perhaps this can serve as a bridge between disciplines; if the humanities are wrestling with being stuck in level 1, then they need to keep in mind that the STEM struggle is not the same: the work inherently demands reasoning at the highest levels, and we don't have time for anything else. Or perhaps this argues for finding some way to work in more emphasis on those simple vocabulary recall and comprehension issues which are so critically important that we don't even bother talking about them?


2014-10-20

Is Statway a Cargo Cult?

We all know that Algebra is the limiting factor for the millions of students attending community colleges throughout the U.S. That is: Colleges could double (or triple, or quadruple) their graduation numbers overnight if the 8th-grade algebra requirement were only removed. This makes for lots of institutional pressure these days to do so.

A common line of thought is: Get rid of the algebra requirement and pursue a primer on statistics instead. You can sort of see why someone might negotiate in this way: offer something apparently attractive (statistics, which many say is needed to understand the modern world) in place of the thing they're asking you to give up. For example, the Carnegie "Statway" program now at numerous colleges promises exactly that (the lede being "Statway triples student success in half the time"; link).

But as an instructor of statistics at a community college, I use algebra all the time to derive, explain, and confirm various formulas and procedures. Without that, I think the intention (in fact I've heard this argued explicitly) is to get people to dump data into the SPSS program, click a button, and then send those results upstream or downstream to some other stakeholder without knowing how to verify or double-check them. Basically it advocates a faith-based approach to mathematical/statistical software tools.

This is a nontrivial, in fact really tough, philosophical angel with which to wrestle nowadays. We're long past the point where cheap calculating devices became ingrained throughout many elementary and high schools; convenient, to be sure, but as a result at the college level we see a great many students who have no intuition for times tables, and are utterly unable to estimate, sanity-check, or spot egregious errors (e.g., I had a college student who hand-computed 56×9 = 54 and was totally baffled at my saying that couldn't possibly be the answer, even after re-doing the same computation a second time).

To a far greater degree, as I say in my classes, statistics is a truly 20th-century, space-age branch of math; it's a fairly tall edifice built on centuries of results in notation, algebra, probability, calculus, etc. Even in the best situation, in my own general sophomore-level class, and as deeply committed as I am to rigorously demonstrating as much as possible, I'm forced to hand-wave a number of concepts from calculus classes which my students have not, and will never, take (notably regarding integrals, density curves, and the total area under any probability density curve being 1; to say nothing of a proof of the Central Limit Theorem). So if we accept that statistics are fundamental to understanding how the modern world is built and runs, and there is some amount of corner-shaving in presenting it to students who have never taken calculus, then perhaps it's okay to go whole-hog and just give them a technological tool that does the entire job for them? Without knowing where it comes from, and being told to just trust it? I can see (and have heard) arguments in both directions.

Here's an example of the kind of results you might get, from a website that caught my attention the other day: Spurious Correlations. The site puts on display a rather large number of graphs of data that are meant to be obviously, comically unrelated, even though they have high correlation. Here's an example:


Something seemed fishy about this when I first looked at it. It's true that if you dump the numbers in the table into Excel or SPSS or whatever, a correlation value of 0.870127 pops out. But here's the rub: those date-based charts used throughout the site are totally not how you visualize correlation, nor related in any way to what the linear correlation coefficient (r) means. What r does mean is that if you take those data pairs and plot them as an (x, y) scatterplot, you can find a straight line that gets pretty close to most of the points. That is entirely lost in the graph as presented; the numbers aren't even paired up as points in the chart, and the date values are entirely ignored in the correlation calculation. I'm a bit unclear on whether the creator of the website knows this, or is just applying some packaged tool -- but surely it will be opaque and rather misleading to most readers of the site. At any rate, it eliminates the ability to visually double-check some crazy error of the 56×9 = 54 ilk.
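
To make the claim concrete, here's a minimal sketch (my own Python, with made-up numbers standing in for the site's data) of what r actually measures; note that the dates never enter the calculation at all:

    import math

    # Two hypothetical yearly series (the years themselves are ignored below).
    x = [5.3, 5.5, 6.0, 6.1, 6.4, 6.9, 7.2]
    y = [102, 98, 112, 119, 121, 130, 135]

    n = len(x)
    mean_x, mean_y = sum(x) / n, sum(y) / n
    cov = sum((a - mean_x) * (b - mean_y) for a, b in zip(x, y))
    sd_x = math.sqrt(sum((a - mean_x) ** 2 for a in x))
    sd_y = math.sqrt(sum((b - mean_y) ** 2 for b in y))
    r = cov / (sd_x * sd_y)   # Pearson's linear correlation coefficient
    print(round(r, 3))

A high r here says only that the (x, y) pairs hug a straight line on a scatterplot -- which is exactly the picture the site never shows.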

As a further point, there are some graphs on the site labelled as showing "inverse correlation", which I took to mean a correlation between x and 1/y -- but in truth what they mean is the more common [linear] "negative correlation", which is a whole different thing. Or at least I would presume it is; I'd never heard "inverse correlation" used as a synonym, and about the only place I can find it online is Investopedia (so maybe the finance community has its own somewhat-sloppy term for it; link).

I guess someone might call this nit-picking, but I have the intuition that that complaint is the sign of somebody who can't actually distinguish between true and false interpretations of statistical results. Is this ultimately the kind of product we get if we wipe out all the algebra-based derivations from our statistics instruction, and treat it as a non-reasoning vocational exercise?

Let me be clear in saying that at this time I have not actually read the Carnegie Statway curriculum, so I can't say if it has some clever way of avoiding these pitfalls or not. Perhaps I should do that to be sure. But as years pass in my current career, and I get more opportunities to personally experience all the connections throughout our programs, I find myself becoming more and more of a booster and champion of the basic algebra class requirement for all, as perhaps the very finest tool in our kit for promoting clear-headedness, transparency, honesty, and truth in regards to what it means to be an educated, detail-oriented, and scientifically-literate person.


2014-10-13

How Do You Know It's a Proportion?

I've written in the past of the mystery of when you'd want to use a proportion for an application problem, and what the benefits are for doing so (link). Once again, last week, one of my basic algebra students asked the question:
"How do you know it's a proportion?"
And once again I was unable to answer her. I've searched all through several textbooks, and scoured the Web, and I still can't find even an attempt at a direct explanation of how you know a problem is proportional. (Examples, sure, nothing but examples.) I've asked other professors and no one could even take a stab at it. Perhaps the student was looking at any problem such as the following:
A can of lemonade comes with a measuring scoop and directions for mixing are 6 scoops of mix for every 12 cups of water. How much water is needed to make the entire can of lemonade if there are 40 scoops of mix?

On an architect's blueprint, 1 inch corresponds to 4 feet. Find the area of an actual room if the blueprint dimensions are 6 inches by 5 inches.

The ratio of the weight of an object on Earth to the weight of the same object on Pluto is 100 to 3. If a buffalo weighs 3568 pounds on Earth, find the buffalo's weight on Pluto.

Three out of 10 adults in a certain city buy their drugs at large drug stores. If this city has 138,000 adults, how many of these adults would you expect to buy their drugs at large drug stores?

The gasoline/oil ratio for a certain snowmobile is 50 to 1. If 1 gallon equals 128 fluid ounces, how many fluid ounces of oil should be mixed with 20 gallons of gasoline?

Concisely stated, what is the commonality here? What is a well-defined explanation for how we know that these are all proportional problems?


2014-10-01

On Comparing Decimals Like 0.999...

Today in my college algebra class will be the first time that I've provided space to actually discuss the 1 = 0.999... issue. Previously I mentioned this here on the blog. This became so contentious that it's actually the only post for which I've been forced to shut off comments. (Actually it attracted a stalker who'd post some aggressive nonsense every few days.)

Anyway, brushing up on some points for later today, I noticed a very obvious fact that I'd overlooked before, and that is: students' customary procedure for comparing decimals fails spectacularly in this case. For example, here it is as expressed in the first hit from a web search, at a site called AAAMath:
Therefore, when decimals are compared start with tenths place and then hundredths place, etc. If one decimal has a higher number in the tenths place then it is larger than a decimal with fewer tenths. If the tenths are equal compare the hundredths, then the thousandths etc. until one decimal is larger or there are no more places to compare. If each decimal place value is the same then the decimals are equal.
So if students apply the "simple" decimal comparison technique ("if one decimal has a higher number in the X place"), even at just the ones place, then this algorithm reports back that 1.000 is greater than 0.999... It overlooks the fact that the lower places can actually "add up" to an extra unit in a higher place. And thus all sorts of confused mayhem immediately follow. 

So the simple decimal comparison algorithm is actually wrong! To fix it, you'd have to add this clause: unless either decimal ends with an infinitely repeating string of 9's. In that case, the best thing to do would be to first "reduce" it back to the terminating form of the decimal (this being the only case where one number has multiple representations in decimal), and only then apply the simple grade-school algorithm.
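
Here's a sketch of that repaired algorithm in code (mine; it assumes a repeating tail of 9s is written with a trailing "..."): normalize away any repeating-9 tail first, and only then compare.

    from decimal import Decimal

    def normalize(s):
        """Rewrite a decimal ending in infinitely repeating 9s (written
        '9...') as its terminating twin: '0.999...' -> 1, '2.4999...' -> 2.5."""
        if not s.endswith("9..."):
            return Decimal(s)
        head = s[:-3].rstrip("9")         # drop the repeating block of 9s
        if head.endswith("."):            # e.g. '0.999...' -> head '0'
            return Decimal(head[:-1]) + 1
        places = len(head.split(".")[1])  # e.g. '2.4999...' -> head '2.4'
        return Decimal(head) + Decimal(1).scaleb(-places)

    def compare(a, b):
        """Return -1, 0, or +1 as a <, =, > b; correct even for 0.999... vs 1."""
        da, db = normalize(a), normalize(b)
        return (da > db) - (da < db)

    print(compare("0.999...", "1"))       # 0: they are equal
    print(compare("0.2999...", "0.31"))   # -1: 0.3 < 0.31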


2014-08-25

Introducing Automatic-Algebra.org

Here in New York, it's back-to-school starting next week. Of course, if you're in the teaching profession (or really just know about it), you've probably been doing academic work and preparing for the fall semester throughout the summer and year-round. I'm in the very fortunate position that I'll have a new permanent position at CUNY this fall, so here's how I've spent my August:

I've developed a new website for practicing basic numerical skills that are prerequisite for algebra and other classes like statistics, calculus, and computer programming: Automatic-Algebra.org. I've written a few times in the past about the need for certain skills to be automatic -- skills that have been taught but not mastered by most students who arrive in a remedial community-college algebra course, and that therefore cause continual disruption and frustration when we're trying to deal with actual algebra concepts (links one, two). Like, for the algebra course itself: times tables, negative numbers, and order-of-operations. Or for a basic statistics course: operations on decimals like rounding, comparing, and converting to percent.

So what you get at the new site is a brief, 5-question quiz on each of these skills. Here's how I designed them (with a toy sketch of the idea after this list):
  • Timed, so that students get a very clear portrayal of what the expectation is for mastery of each of these skills (15, 30, or 60 seconds per quiz). For example: sequential adding and counting on fingers for multiplications will not suffice.
  • Multiple-choice, so the site is usable on a variety of devices, including touch-screen mobile devices. For example: you can stand on a bus and drill yourself on a smartphone by just tapping with your thumb.
  • Randomized, so once you take a quiz you can click "Try Again" and get a new one, and drill yourself multiple times in just a few minutes each day.
  • Javascript, so the quiz runs entirely on your own device once you download the page the first time. The site doesn't require any login, accounts, server submissions, recording of attempts, or any transmitted or collected information whatsoever after you initially view the page.
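
For what it's worth, here's a toy sketch of those design ideas in Python (the site itself is plain Javascript; this is just the flavor of it -- randomized items, multiple-choice options, a hard time limit):

    import random
    import time

    def make_question():
        """One randomized times-table item with four multiple-choice options."""
        a, b = random.randint(2, 12), random.randint(2, 12)
        answer = a * b
        options = {answer, answer + a, answer - b, answer + b}   # near misses
        while len(options) < 4:                # pad if distractors collided
            options.add(answer + random.randint(2, 12))
        return a, b, answer, random.sample(sorted(options), 4)

    def run_quiz(num_questions=5, time_limit=30):
        start, score = time.time(), 0
        for _ in range(num_questions):
            a, b, answer, options = make_question()
            reply = input(f"{a} x {b} = ?  choices {options} > ")
            score += (reply.strip() == str(answer))
        elapsed = time.time() - start
        passed = (score == num_questions) and (elapsed <= time_limit)
        print(f"{score}/{num_questions} in {elapsed:.0f}s:",
              "mastered!" if passed else "try again")

    run_quiz()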

This is something that I've wanted for a few years now, and no existing website really implemented and consolidated it the way I wanted as a reference for students. Thanks to the new full-time track, I feel I'm in a position that warrants developing it myself and leveraging it for numerous classes of my own in the future.

Feel free to check it out, and offer any comments and observations. If you feel it might help your incoming students this fall semester, give them the link and maybe it will lift a whole lot of our boats all at the same time. Do you think that will be of assistance?



2014-08-11

When Are Parentheses Required for Substitution?

In my remedial algebra classes, on introducing the substitution of numerical values for variables, I've always said that it's safest to perform this substitution inside parentheses, especially for negative numbers. Of course, we all intuit times when that's not strictly necessary. So in this lecture I usually get one of the brighter students asking, "Exactly when is it necessary?". I've found this to be a surprisingly difficult question to answer. After a rather embarrassingly long consideration, here's what I've come up with.

Parentheses are basically required in the following two situations:
  1. Separating juxtaposed signs and numbers, and
  2. Collecting expressions with one operation under a higher-order operation that is not also a grouping symbol.

For situation #1, I'm assuming that we're not ever inserting new operational symbols like × or · in cases of juxtaposed multiplication -- just the substituted expression and possibly parentheses. Parentheses are probably only needed for factors after the first one (i.e., after the coefficient).

For situation #2, we're mostly talking about multiplication and exponents, with some lower-order operation in the expression being substituted. Contrast with fraction bars (for division) and radicals, which have grouping built into the symbol, and thus no general requirement for new parentheses.

Here are a few examples of each. For the following, let x = 1, y = –2, z = ab, and w = a + b. Examples of separating juxtaposed signs and numbers: for instance, 5y → 5(–2), which without parentheses would read as the subtraction 5 – 2; and x – y → 1 – (–2), which would otherwise show the juxtaposed signs 1 – –2.

Examples of collecting expressions with one operation under a higher-order operation: for instance, y² → (–2)², where the negation must be collected under the exponent; and z² → (ab)², where the multiplication inside z must be collected before squaring.

We can explain the first example immediately above in that the negative sign acts the same as multiplying by –1, and therefore must be collected under the exponentiation operation. However, this does get slightly complicated by the use of the minus sign for both unary negation (i.e., multiplying by –1) and binary subtraction, which have different placements in the order of operations. For example, the following may be taken as a slightly ambiguous case:

xy → (1)(–2)

Here, in substituting any numerical values at all for x and y, parentheses will definitely be necessary. However, this particular instance doesn't have juxtaposed numerals -- the real reason may be taken to be that without the parentheses, this would read as subtraction (lower order than the initial juxtaposed multiplication).
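
Here's a toy demonstration of those failure modes in bare numbers (my own, evaluating 5y and x − y at x = 1, y = −2):

    x, y = 1, -2

    # "5y" substituted without parentheses reads as a subtraction:
    as_subtraction = 5 - 2        # what "5-2" says: 3
    as_product     = 5 * (-2)     # what "5(-2)" means: -10

    # "x - y" without parentheses leaves juxtaposed signs ("1 - -2");
    # the parenthesized substitution is unambiguous:
    difference = 1 - (-2)         # 3

    print(as_subtraction, as_product, difference)   # 3 -10 3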

A few notes on specific cases of substitution:
  • If substituting one variable for another, then parentheses are never needed (the order of operations is clearly identical before and after).
  • If substituting a whole number, then only the situation of juxtaposed numbers after the coefficient can apply. Obviously a whole number has no written sign, and includes no operations to interfere with higher-order interactions.
  • If substituting a negative number, then any of the situations are possible. It does have an attached sign, may need separation from a preceding factor (as above), and operates similarly to a multiplication (thus needing collection under an exponent).

Note that Wikipedia articles do show use of juxtaposed signs, e.g., 7 + –5 = 2, and discuss possibly superscripting the unary negation in elementary contexts and in the computer language APL (links one, two), something that I've also seen on some calculators, in which cases parentheses would not be necessary. However, that's not something I've ever seen in textbooks (either college-level or otherwise), so I take it as nonstandard and not qualifying as well-written algebra.

What do you think? Have I missed any important cases or examples?


2014-07-21

Automatic Negatives

In my developmental (remedial) courses, I have been thinking more lately about where and how to communicate which skills need to be automatic -- that is, instantaneous and always correct. As I wrote earlier (link), these are skills which we take for granted in higher level courses, but they frequently get lost in the underbrush of all the other math topics, and students expect that struggling with them for several minutes is acceptable and normal behavior. A list of these remedial automatic skills that students often lack includes:
  • Times tables
  • Signed operations (add, subtract, multiply, divide)
  • Rounding whole numbers & decimals
  • Comparing decimals
  • Converting between decimal & percent
  • Etc.
These are the kinds of things that seem "obvious" to properly-educated people, but if no one ever communicated them to a student in that way, they are opaque. For example, in each of the last few semesters in my sophomore statistics class, I've had "A" students, otherwise doing very well, who were completely mystified at how we were converting decimal probabilities to percents on the fly. Notice that these are all one-step, immediate mental tasks: I wanted to include order-of-operations knowledge here (it's so critically important), but the truth is that those are inherently multi-step problems, so they don't really belong on the list above.

But let me focus more on the issue of negative numbers (signed operations). I find these to be the greatest stumbling block in students getting through the bottleneck remedial algebra course -- I can pull up any test, including finals, and see that usually at least half of the errors are simply signed-number mistakes. A student can know everything in the algebra course, but if they routinely trip over negatives even after I've begged them to practice it for a whole semester, then they have practically no chance of passing the final.

In June, I had the opportunity to teach an immersive one-week workshop for students who narrowly missed passing our department's prealgebra final (basic arithmetic with different types of numbers: integers, fractions, decimals, percent). This was a great experience; the students were hard-working and highly appreciative, and it gave me a chance to further focus on this issue. I was trying to do frequent one-minute speed drills on things like negative operations, and some students were having what seemed like an inordinately difficult time with them -- particularly the subtractions. So that night I sat down at the bus stop and tried to think through carefully what we, as proficient math people, actually do in practice.

Here's the thing: Not all negative operations are single-step. In particular, consider subtracting a negative number, written inside parentheses. I find that a lot of students are taught this bumfungled "keep change change" methodology: they will transform expressions as follows:
  • 3−(−9) = 3+(+9)
  • 5+(−8) = 5−(+8)
  • 1−(+7) = 1+(−7)
  • 4+(+6) = 4−(−6)
Now, all of these are true statements. But only some of them are helpful. It's not that the students' prior instructors were lying to them, except perhaps in regard to when this fact is useful in simplifying an expression with signs (namely, only the 1st and 3rd cases above). Let's look at how a math professional would actually do it. These are two-step problems: we follow the order of operations and get rid of the parentheses in what is effectively a multiplying step, then combine like terms in an addition step.
  • 3−(−9) = 3+9 = 12
  • 5+(−8) = 5−8 = −3
Etc. So the lesson is that if an instructor shows students how to mangle signs in & around parentheses, they are really missing the point; when simplifying (evaluating), we will remove the parentheses entirely in the multiply step, and then always perform add/subtracts without any parentheses in the picture.

So in the current discussion, this tells us what we should be drilling students on for "automaticity" in terms of negative-number operations: namely, combining terms with no parentheses has to be the automatic one-step skill. If you want to explain this as effectively adding terms, that's fine; but don't fail to clearly communicate that in practice this is expected to be instantaneous and immediate, in one mental step.

  • 2−6 ← OK for automatic drill
  • −7+1 ← OK for automatic drill
  • 6−7+2 ← OK for automatic drill
  • 5−(−2) ← Not OK for automatic drill (2-step problem)
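
In code terms, a speed-drill generator should emit only the parenthesis-free combinations -- a quick sketch of what I mean (my code, not anything we used in the workshop):

    import random

    def one_step_item():
        """A signed-number combination that is legitimately one mental step:
        terms with no parentheses, like 2-6, -7+1, or 6-7+2."""
        a, b, c = (random.randint(1, 9) for _ in range(3))
        form = random.choice(["{0}-{1}", "-{0}+{1}", "{0}-{1}+{2}"])
        return form.format(a, b, c)

    # Items like 5-(-2) are deliberately excluded: clearing the parentheses
    # is a separate (multiplying) step, making them two-step problems.
    for _ in range(5):
        item = one_step_item()
        print(item, "=", eval(item))   # eval is safe here: we built the string
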
I feel like this was an important lesson I got to learn from this summer's immersion workshop. A speed drill can include automatically multiplying or dividing integers, or combining terms with no parentheses -- but add/subtracts with parentheses don't belong in the same category, because you really do need to apply two separate simplifying steps for them. And perhaps most important of all: clearly communicate that the one plan that always succeeds is the standard order-of-operations, not a score of different random manipulations to memorize for different situations.


2014-07-07

Multiple Choice Expectations

A while back I considered the chance to pass a standard multiple-choice final exam (link), granted a certain basis of actual knowledge of the material. Today let's look at it from the other perspective, i.e., what the expected score is for different levels of actual knowledge:



As you can see, if we pass a student with a 60% score on such a multiple-choice test, then the most likely bet (point estimate) for their true level of knowledge is around 11 or 12 of the questions, that is, less than half of the actual content for the course.
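
To spell out that point estimate (same guessing model as in the April post below: 25 questions, 4 choices, N questions actually known):

    E[score] = N + (25 − N)·(1/4)

Setting E[score] = 15 (the 60% pass mark) gives 15 = N + (25 − N)/4, so 3N = 35 and N ≈ 11.7.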


2014-06-28

Happy Tau Day!

Happy Tau Day! Celebrating τ = 2π = 6.28..., which simplifies all the math formulas that include 2π to make one full turn around a circle. At our house, we'll be having tacos, and tequila, and tarts. Check out the Tau Day manifesto, especially the excellent video by Vi Hart, linked at the bottom of the page -- http://tauday.com/

(Coincidentally, it's also the Perfect Day, because 6 and 28 are the only "perfect numbers" that appear on a calendar --  http://en.wikipedia.org/wiki/Perfect_number.)


Edit: A full course of full circles for Tau Day!

2014-06-04

Reformatted Writing

A short observation that I sometimes use to my advantage: It helps me greatly if I write something in one format, and then look at it in some reformatted view, before finally distributing it publicly. This tends to dredge up a number of subtle errors or weaknesses which I otherwise can't see. In some sense it gives me "fresh eyes" to really read the draft from a new perspective (as opposed to one's memory blocking reception of what's actually on the page). In this sense we might say that "what you see is what you get" (WYSIWYG) is actually a hindrance instead of a help. A few cases:
  • Blogger. Editing blog posts here, one uses a rich-text editor that is different from the final HTML markup on the site. One clicks "preview" to actually see it reformatted as it will appear on the Blogger site. Something that happens occasionally is that I might have a duplicated "the the" (or something) across a line break, such that I don't initially see it; when the paragraphs and lines get moved around in Preview, I can catch this much more easily. More generally, I can pick up on weaknesses in sentence structure and places I can clarify much more easily in the second view perspective.

  • Lecture Extracts. For a couple of years now I've been providing one-page review guides to students in all of my math classes. I accomplish this by copy-and-pasting all of the special defined terms and procedures in my lecture notes to the review sheet. When I did this, the surprising side-benefit was that I discovered a lot of those definitions had varying formats, tenses, or parts-of-speech, that looked sort of ridiculous when lined up next to each other -- and then I could go back and fix them throughout my class lecture notes. (Arguably you could say this is bad practice due to data duplication; anytime I make changes in my lectures I have to edit the review sheet in parallel. But it turns out that this is not a burdensome task.)

  • Java Code. Just recently I've committed to formatting all my personal coding projects in the javadoc format (special comments that can be parsed out automatically to external documentation). This required a change in style that I thought would be irritating, but was much less painful than I expected, and more than compensated for by the benefits. Again, if I write my code once and then generate the documentation and look at that, I'm finding there's a whole lot of places I can improve on variable names, method names, comments on use, etc. Looking at it from the perspective of a true "outsider", with only the cues someone would start with to theoretically get into my code, gives my end product much greater depth and robustness.
So in summary: Write once, view in a totally different format, and then edit. Results are improved to a surprising degree.


2014-05-05

Basic Logic Errors

I constantly wish that students were taught rudimentary logic at an early age (links: one, two, three). Just musing about that today, here are three common stumbling blocks I see in different classes due to not being able to read logical statements properly:
  1. "If" Statement. In a basic algebra class, we have the rule "If the base is negative, then even powers are positive, but odd powers are negative". Immediately after that, I'll always have some students incorrectly evaluate something like: −5² = 25 (or worse, 2³ = −8) . Note that the base of the exponent is not negative, but some students overlook the check required for the "if" qualifier.

  2. "Or" Statement. In an elementary statistics class, we have the rule "To estimate a population mean, we must have either a normal population or a large sample size." Then when I ask the class "Do we need a normal population?", the entire class will always incorrectly respond with "Yes!" the first time. Of course that's not true; they're overlooking that only one case of the "or" needs to be satisfied -- most commonly by a large sample size. It takes several sessions of quizzing them on that before they are sensitive to the question being asked.

  3. "And" Statement. In practically any class, we might have the policy, "To pass this class you need at least a 60% weighted average, and a 60% score on the final exam." This constantly causes confusion and aggravation. Testy "So, the final exam doesn't count?", or "So, only the final exam counts?" are questions that I routinely have to address. Obviously, students are unclear on the fact that each of two requirements must be satisfied for an "and" statement like that.

2014-04-15

Multiple Choice Chances

Let's say you have a final-exam assessment that is a multiple-choice test, with 25 questions, each of which has 4 options, and requires a 60% score (15 questions correct) to pass. As one example, consider the uniform CUNY Elementary Algebra Final Exam (link).

How robust is this as an assessment of mastery in the discipline? As a simple model, let's say that any student definitely knows how to answer N types of questions, but is randomly guessing (uniform distribution over 4 options) for the other questions. Obviously this abstracts out the possibility that some students know parts of certain questions and can eliminate certain choices or guess based on the overall "shape" of the question, but it's a reasonable first-degree model. Then the chance to pass the exam for different levels of knowledge is as follows:


Obviously, if a student really "knows" how to answer 15 or more questions, then they will certainly pass this test (omitted from the table). But even if they only know half of the material in the course, then they will probably pass the test (12 questions known: 67% likely to pass). Of students who only ever know about one-third of the basic algebra content, but retake the class 3 times, about half can be expected to pass based on the strength of random guessing on the rest of the test (9 questions known: 19% likely to pass; over 3 attempts chance to pass is 1-(1-0.19)^3 = 1-0.53 = 0.47).
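
For anyone who wants to reproduce those figures, the binomial calculation is quick to sketch (my code; the numbers quoted above fall right out):

    from math import comb

    def chance_to_pass(known, total=25, needed=15, p=0.25):
        """P(pass) when `known` answers are certain and the remaining
        total - known are independent guesses succeeding with chance p."""
        m = total - known                      # questions being guessed
        need = max(0, needed - known)          # guesses that must succeed
        return sum(comb(m, k) * p**k * (1 - p)**(m - k)
                   for k in range(need, m + 1))

    print(round(chance_to_pass(12), 2))               # 0.67
    print(round(chance_to_pass(9), 2))                # 0.19
    print(round(1 - (1 - chance_to_pass(9))**3, 2))   # 0.47 over 3 attempts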


2014-04-03

Meta-Research Innovation Centre

Interesting article about Dr. John Ioannidis at Stanford founding the "Meta-Research Innovation Centre" to monitor and combat weak and flawed statistical methods in science research papers, especially in medicine. Good luck to him!



2014-03-27

Gears of War

When I was a kid, one of my favorite pastimes was the Avalon Hill wargame Bismarck, about fighting ships in World War II (see it reviewed on my gaming site here and here). In junior high school, at some point my English teacher asked me what I wanted to do as a career, and was completely appalled when I said "I want to join the Navy and control the main guns on a battleship". (I think I'd share her dismay if someone told me something like that today.)

Anyway, over at Ars Technica, a wonderful article has been written by Sean Gallagher (former Navy officer and IT editor) on exactly how the fire control systems on those ships did their jobs -- solving 20-variable calculus problems in real time (accounting for moving, pitching, rolling, recoiling, Coriolis-spinning projectiles on both ends) with shafts and gears, with accuracy that is hard to beat even today with digital computers and GPS-driven rocketry. There are lots of insightful videos about the components and gears used to do input, sums, multiplies and divides, and the spinning disks that can do complicated functions like trigonometry and more.

To me, this stuff is completely like crack. Check it out:


(Also: Further commentary and links at recently-established news site SoylentNews.)


2014-03-20

FiveThirtyEight

Nate Silver (the statistician who famously predicted the vote in all 50 states in the last presidential election) recently expanded his FiveThirtyEight blog into a full-blown "data journalism" site. His first post was a manifesto on data, science, statistics, politics, journalism, and honest storytelling in general. I agree with almost all of his observations here. The guy really knows his stuff and has a fiery passion for his particular mission. Great stuff.



2014-03-10

Faulty Factoring

Here's something I think I see a few times in any college algebra class: a really weird way of accomplishing quadratic factoring. (More generally, this might go in a larger file of "things students swear are taught by other instructors which are semi-insane" -- including whacked-out order-of-operations, keep-change-change for negatives, the idea that −4² means (−4)(−4), etc.)

Anyway, let's say we want to factor what I call a "hard quadratic", i.e., Ax²+Bx+C, in integers, with A≠1 (hence "hard"). I prefer the method of grouping: i.e., find factors of AC that sum to B, use those factors to split the term Bx, and then factor the four terms by grouping. Pretty straightforward.

But here's what a few students will insist on doing every semester: (1) Find factors of AC that sum to B; call these factors u & v (so that step is the same); (2) Write the expression (Ax+u)(Ax+v); (3) Look for a GCF of A in one of those binomials and strike it out.

Here's an example: Factor 5x²+7x−6.
Step (1): Note AC = −30 = (10)(−3), factors which sum to B = 7.
Step (2): Write (5x+10)(5x−3)
Step (3): Divide the first binomial by 5, producing (x+2)(5x−3).

So while this procedure does produce the right answer, what irks me tremendously is that the expression written in step (2) is not actually equal to either the original expression or the final answer. (Compounding this issue, students will nonetheless usually write equals signs by rote on either side of it.) A riff on this procedure is to write something like the following on sequential lines, if you can follow it:

5x² + 7x − 6 → x² + 7x − 30 → (5x + 10/5)(5x − 3/5) → (x + 2)(5x − 3)

Again, the primary grief I have over this is that none of these expressions is equal to any of the others, and the students using this procedure are always oblivious to that fact. Second issue: they're likely to trip up over a non-elementary problem where the factor A does not appear in either of the binomials, e.g.: 4x²+4x+1 = (2x+1)(2x+1). Third issue: if there's a GCF in the quadratic itself and you overlook it, the standard grouping technique will still work (even if it's not the easiest way to do it), whereas I suspect users of this technique will be prone to incorrectly striking out any GCFs they discover at the end of the process.

Now, technically you could modify this and turn it into a correct procedure this way: note that for the quadratic Ax²+Bx+C, values u & v satisfy uv = AC and u+v = B if and only if Ax²+Bx+C = (1/A)(Ax+u)(Ax+v). (Proof: (1/A)(Ax+u)(Ax+v) = (1/A)(A²x² + (u+v)Ax + uv) = Ax² + (u+v)x + uv/A, and equate coefficients.) So you could find u & v as usual, then write this latter expression, and simplify. The 1/A does always cancel out, but I've never seen a student actually write that factor in the second step.
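
A quick symbolic check of the complaint, using sympy on the example above (my code):

    from sympy import symbols, expand, Rational

    x = symbols("x")
    original = 5*x**2 + 7*x - 6
    step2    = (5*x + 10) * (5*x - 3)      # the step-(2) expression
    answer   = (x + 2) * (5*x - 3)

    print(expand(step2))                   # 25x^2 + 35x - 30: not the original!
    print(expand(step2 - 5*original))      # 0: it's off by exactly the factor A = 5
    print(expand(answer - original))       # 0: only the final line is equal
    print(expand(Rational(1, 5)*step2 - original))   # 0: the corrected 1/A form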

So what I always do if I see this on a test in my college algebra class is take half credit off for the problem and note that the intermediary expression is "false", i.e., not equal to what comes before or after. This then becomes an opportunity to discuss with the student why that's improperly written math -- which went well in my most recent semester, but I can easily see it becoming more combative in a remedial algebra class.

Have you seen this (common) faulty factoring procedure in your classes? What do you do as a correction for it, if anything?


2014-03-05

Presenting at Johns Hopkins

Here's one of these topics that merges my great interests in teaching & gaming, so I have no choice but to cross-post about it here and on my gaming blog.

Last week I had the opportunity to visit Johns Hopkins University, at the invitation of Peter Fröhlich to speak to his Video Game Design Project class in the computer science department there (run jointly with art students from the nearby MICA). A great talk and chance to meet with his students and network a bit with Peter, Jason from MICA, as well as one of my idols from old-school role-playing game publishing.

Bounce on over to my gaming blog for the details!

2014-02-24

Research in College Algebra Basic Skills

Here's something that I'm finding frustrating: for all the mountain of ink spilled on the issue of remedial math in colleges (including the enormous numbers taking such courses, the fact that they're the critical determinant of whether people get a college degree or not, the dim prospects of existing placement tests, etc., etc., etc.), when I search for papers where someone has tried to correlate specific math skills of incoming students to success in college remedial algebra -- I come up totally empty.

Weirdly, I can find studies that correlate specific diagnostic test questions in basic math skills to success in other classes. Here's one relating specific math skills to success in college statistics classes. Here's another. Here's a study relating basic math skills to success in economics classes.

But predicting success in basic algebra classes? I'm coming up totally empty. I'm truly bewildered at this -- part of me can't possibly believe that no one has published results like that, but part of me is stewing over returning to this futile search day after day.

Does anyone know of such research linking specific skill questions to success in college remedial algebra? Or any college algebra classes?


2014-02-17

Precipitation Probability

This winter module I've had a batch of students in my introductory statistics course who are so aggressively intelligent that they've spotted every single place where I had any gray area or ambiguity in my lectures. In places I do this knowingly to simplify the subject, and I prepare backup answers in case anyone asks -- this semester is the first time that every single one of those backups got used, and then some. This will definitely benefit my classes in the future, and in fact I learned a few things myself along the way. For example:

At the end of the probability concepts section, the major thing I want students to be able to do is interpret probability statements (which for some is the most difficult part of the course, never having encountered probability concepts before). I give a quiz question on the classic weather-forecast precipitation probability: "Interpret this probability statement: 'There is a 40% chance of rain today in the New York area.'" Personally, I always took this to mean that there was a 40% chance of any rain at all in New York today (a 40% chance of a drop of measurable rain somewhere in New York; i.e., over many days like today, 40% of such days will get a drop of rain or more in New York).

But one of my students not only started researching this on her own, she actually called the New York weather service to ask a meteorologist how this was computed. She still didn't get the interpretation quite right (one of the few questions she missed all semester), but the discussion was enlightening for both of us.

The truth is that the weather forecast statement is in regards to rain at any random location in New York, not actually the rain for New York as a whole. I suppose that is really a more useful statement, after all. The publicized percentages are computed by multiplying the expected coverage area percent by the probability of rain occurring in that area (so if it's 40% likely that 100% of the area gets rain, you report the same result as an 80% chance that 50% of the ground gets rain). Therefore: What's being reported is the chance that any arbitrary point in New York gets measurable rain; i.e., 40% indicates that for any random point in New York, if we observe many days with conditions like today, 40% of such days get a measurable drop of rain on that point-location.
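
In symbols (my shorthand): reported percentage = P(rain somewhere in the area) × (expected fraction of the area covered) -- so 0.40 × 1.00 and 0.80 × 0.50 both get published as the same 40% figure.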

Links to more information:
  • Comments from the National Weather Service, reposted at the University of Texas at Austin website: here.
  • Video from a meteorologist in Boulder, including a citation of the 2005 Gigerenzer et al. paper in Risk Analysis, which surveyed people for their understanding of these statements (and where I got my quiz question in the first place): here.

2014-02-03

Teacher Guilt and Grading Workload

Is this a closely-kept secret? I think that all of the college math instructors I know really intimately (I'm counting about four, including myself) have at some point admitted to an overwhelming stack of grading assignments that they've procrastinated on, and a painful amount of guilt that they've experienced over that apparent failure. One instructor told me in passing once that he basically had a nervous breakdown over Thanksgiving break, over not being able to accomplish all the grading he needed to. Which was freakishly familiar, because the exact same thing basically happened to me, several years ago.

I think this advice (like much of what I write here) goes out to new instructors, just starting their career, on the off chance they internet-search for this one day. So here we go: Your students absolutely deserve prompt feedback on their work, ASAP. But you also deserve a reasonable quality of life, not absolutely drowning in work and falling behind all the time. If you're not getting your grading done promptly, then you've got to be sensitive to that and change your assignments such that they're gradable in reasonable time.

Note this goes regardless of whatever pedagogical fashion is currently happening, or whatever suggestions you've received for proper assignments to give. It must be doable in the time that you have, full stop. The top priority basically always has to be honesty about time management; if you can't do it, then you can't do it, and you need to admit that and change it.

Here's how it started happening for me: When I was a graduate student and given a few sections of college algebra to teach for the first time, my adviser (who was a really great guy) recommended what he did for homework -- have students keep a notebook journal of homework problems, turn in the notebook every week or two, and "check it off". Well, this turned into me lugging a giant box of notebooks home from school in the second week, and it just sitting at home. Probably I attempted to "check it off" a few times and was so appalled and bewildered at the entirely undecipherable, jumbled scribblings in the first several books (possibly with answers from the back of the book transcribed at the end) that I just didn't see how I could possibly read, decipher, assess, and adjudicate this giant mass of outrageous nonsense (as I now interpret it). My mentor said it was "easy" for him, but it seemed utterly impossible for me.

Moreover, what counted as acceptable work? If a student came and challenged me on not being "checked off", how could I defend that, or explicate what line needed to be crossed? All absolutely reasonable concerns. Well, then I was committing to assessing each problem in some fashion individually, scoring up a particular ratio of acceptable problems for a pass/fail check, providing rubrics, peering closely at every line of written transformations, etc. Or at least in theory I was: it was impossible, and every day I'd slump into class and mumble a shameful excuse about why the notebooks weren't coming back, and probably get around to it about twice a semester. A task that put dread over me literally all semester long.

And you know -- this tradition more or less lasted for some years after I started teaching professionally for work. The process evolved by my first cutting down the number of problems to a fairly specific list that I expected to assess problem-by-problem. I went and ordered custom rubber pass/fail stamps to try and expedite the system; but an ever-present problem was even being able to find specific problems on some students' jumbled-up paper. Then I reduced it to about 3 specific problems I assigned each week on a one-page worksheet I designed, with dedicated space to put each problem, a completely worked-out example to show the correct format, and exactly 10 lines of work that I would assess line-by-line (passing would need 6/10 lines exactly correct). But even this requirement overwhelmed students in the basic algebra class, and there was constant combativeness around the assessment. Some could never learn to use an equals sign on each line. Most classes would find one person who understood the assignments and all be copying their page when I walked into the classroom.

There was some weekend a few years ago where it felt like I went basically berserk over the issue -- I just couldn't deal with it any more. I think this would be when I first switched from part-time to full-time, so my courseload doubled. Even the reduced one-page assignments were not manageable, I still had a sense of dread all the time, it didn't seem to help my students at all, and mostly all I got for my feedback was grief.

End of the story -- those assignments simply had to freaking stop. The most honest truth that I finally realized was this: my remedial students need a monumental amount of work and practice to overcome their deficiencies, and I don't have anywhere near the amount of time in my life to assess all of that vast amount of work that they have to do. The responsibility has to be put on them -- even if the majority of remedial math students are going to fail at the challenge.

The new protocol is this: There is a list of homework assignments that they're expected to do, all with answers to check at the back of the book, and if something doesn't work right or they have questions, then they can ask in class. I don't collect or grade this homework; there is too much for me to deal with. I'm usually needling just a bit at the start of every class; if no one asks about any exercises then there's some uncomfortable silence that I let settle. But usually I get one or two students who are asking questions, and then my time spent responding is actually helping someone who does want it, as well as the rest of the class, and also setting an example for proper study skills. As semesters pass, it seems like I get somewhat better traction and momentum with this, with more students actually participating. (I guess I write this tonight after multiple students in my statistics class asked about a bungled textbook problem that's been on my syllabus for 3 years now -- tonight was the first time anyone brought it up, slightly embarrassing for me, but otherwise beneficial to my future classes.) Also, I use our Blackboard system to deliver 5 multiple-choice quiz questions to the students every week -- entirely automated grading, so it pops up in my digital gradebook without any effort on my part, complete with comprehensive statistics on what the hardest parts were -- keeping the students to some required attention every week, without spending any more of my home or class time on the process.

As another example, this semester I switched my college algebra tests from multiple-choice to open-response (grading on quality of writing/justifications as well as raw answers). Right before I did this, I had another instructor warn me not to make it too burdensome on myself (a reasonable concern!). But I didn't just add work for myself: I cut the size of the tests to a level that would be easy for me to grade. Instead of 20-question multiple-choice tests (like most instructors here use), I give 10-question open-response tests. Namely, the hardest 10 questions in the block -- no rinky-dink warm-up problems (like trivial linear equations or simply adding polynomials). But the scoring system is simple, for each question: a 1st point for the correct answer, and a 2nd point for well-written justification (or maybe 1 point for a single error, 0 points on the second error). Each point is 1/20 = 5% of the test, absolutely laser-fast to score and add up. (There's no fiddling with granularities less than 5%; I don't have time for that.) It takes me about an hour, maybe two, to grade and give feedback on all the problems for all the students in a section on one test cycle.

So here's where I am today: When I give a test I cannot wait to get started grading it. I'm almost over-eager to see how my students are doing, and curious to see what's working well and what we can brush up and improve in the future. I know that the grading will go quickly and be productive, and I will be getting data about how the class is progressing very soon. Separately, I'm almost addicted to checking in on the online quiz progressions, following the statistics of which problems are hardest.

The workload has flipped from dreadful to highly desirable, and I look forward to getting student work whenever I can now. I think I've had some test problems that were hard to grade (I can't think of what they were right now), but then I pull them out of rotation and replace them with something more reasonable to assess. I basically don't have arguments about grading anymore; the points are specified in advance on a practice test, and it's all very easy to see where everything is coming from.

Last semester, I actually had one student express surprise and near-disbelief at how quickly I got graded tests back to students; namely, the very next day without fail. She asked me how I could do that when all of her other professors took at least a week to do the same thing. My answer was something like I was really curious about how my students were doing and couldn't wait to find out. (Truth be told, my tests are usually graded a few hours after I give them, and results are online usually around 2-3am after my night classes.)

So I offer that as a success story of someone who's gone from crushing, years-long, ever-present guilt and dread over grading, to a point where I can hardly start grading soon enough to satisfy myself. The key is to budget time first, to be honest about what you can do, and to cut and design assignments to a level where you can grade them with a sense of joy.

Have we all taken this trip through the valley of dread? Have you?


Related question on Stack Exchange: Academia, regarding my ongoing inability to understand how anyone makes the "check for completion" protocol work: "What are the minimum criteria when checking homework for completion only?"

2014-01-27

Excellent Exercises − Completing the Square

This is the first of an occasional series that I'd like to post about intelligent exercise design for use in a math class (whether as part of a presentation, homework, or test). My primary point is this: if someone thinks that because they can solve problems, they can walk into a classroom and start making up random problems to work on -- disaster is sure to strike. There are so many possible pitfalls and complications in problems, and such limited time in class to build specific skills, that you really have to know absolutely every detail of how your exercises will work before you get in the classroom. Not expecting to do that is basically BS'ing the discipline.

So in this series I'd like to show my work process and objectives for specific sets of exercises that I've designed for my in-class presentations. Are my final products "excellent"? Maybe yes or no, but certainly that's the end-goal. The critical observation is that a great deal of attention needs to be paid, and the precise details of every exercise have to be investigated before using them in class. And that some subject areas are surprisingly hard to design non-degenerate problems for.

For this first post, I'll revisit my College Algebra class from last week, where I lectured on the method of "completing the square" (finding a quantity c to add to x^2+bx so that it factors as a binomial square, i.e., x^2+bx+c = (x+m)^2 -- which of course is solved by adding c=(b/2)^2). As per my usual rhythm, I had four exercises prepared: two for me and two for the students. Each pair had one that would be worked entirely with integers, and a second that required work with fractions. The first three went as expected, but the fourth one (worked on by the students) turned out to be much harder, such that only 3 students in the class were able to complete it (which sucks, because it failed to give the rest of the class confidence in the procedure). Why was that, and how can I fix it next time?
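For reference, here is the identity in play written out in LaTeX, with a quick integer instance of the kind used in the all-integer exercises:

    \begin{align*}
    x^2 + bx + \left(\tfrac{b}{2}\right)^2 &= \left(x + \tfrac{b}{2}\right)^2 \\
    \text{e.g., } b = 6: \quad x^2 + 6x + 9 &= (x + 3)^2
    \end{align*}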

First thing I did at home was turn to our textbook and work out every problem in the book to see the scope of how they all worked. Here I'm looking at Michael Sullivan's College Algebra, 8th Edition (Section 1.2):


What we find here is that all of the problems in the book share a few key features. First, after completing the square, when the square root is applied to both sides of the equation, the right-side numerator never requires reduction (it's either a perfect square or prime). Second, and perhaps more important, the denominator is in every case a perfect square -- so its square root is trivial, and we never need to reduce or rationalize denominators. Third, with one exception, in the last step the denominators of the fractions being added are always alike and need no adjustment (the exception is #43, where we adjust 1/2 = 2/4; note that even on the complete-the-square step itself, I had a few students flat-out not understand how to combine the fractions). That simplifies things quite a bit.

Now let's look at my fraction-inclusive exercises from class:


As you can see, item (b) (the one I did on the board) works out the same way, featuring a right-side fraction with a prime numerator and a perfect-square denominator. But item (d) (the one the students worked on) doesn't. The denominator of 18, after the square root, needs to be reduced and then rationalized, which causes another multiplication of radicals on top; and then, to finish, we need to create common denominators to combine the fractions. That extends the formerly 8-step problem to about 14 steps, depending on how you count.
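To see the blow-up concretely, here is a full run of item (d), assuming (from the coefficients mentioned below) that it was 6x^2 + 4x = 9 -- the exact statement is my reconstruction:

    \begin{align*}
    6x^2 + 4x &= 9 \\
    x^2 + \tfrac{2}{3}x &= \tfrac{3}{2} \\
    x^2 + \tfrac{2}{3}x + \tfrac{1}{9} &= \tfrac{27}{18} + \tfrac{2}{18} \\
    \left(x + \tfrac{1}{3}\right)^2 &= \tfrac{29}{18} \\
    x + \tfrac{1}{3} &= \pm\frac{\sqrt{29}}{3\sqrt{2}} = \pm\frac{\sqrt{58}}{6} \\
    x &= \frac{-2 \pm \sqrt{58}}{6}
    \end{align*}

Note the extra moves: reducing and rationalizing √18 = 3√2, the resulting multiplication of radicals on top, and the unlike denominators to reconcile at the end.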

You can see on the side of that work paper that I'm trying to figure out what parameters cause those problems to work out differently. One observation: if the leading coefficient shares a nontrivial common factor with either of the others, then some fraction will reduce and produce unlike denominators in the last step. And that in turn results in a non-perfect-square denominator on the right after you complete the square (because of adding fractions with initially different denominators). So my primary problem in item (d) was using the coefficients 6, 4, and 9, where the leading 6 shares a factor with each of the others (2 with the 4, and 3 with the 9).
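That parameter analysis is mechanical enough to automate. Here is a minimal Python sketch which, assuming exercises in the form ax^2 + bx = c, completes the square exactly and flags the two failure modes (the function names are mine, not from any existing package):

    from fractions import Fraction
    from math import isqrt

    def is_perfect_square(n):
        return n >= 0 and isqrt(n) ** 2 == n

    def is_squarefree(n):
        # True if sqrt(n) does not reduce, i.e. no d^2 > 1 divides n.
        n = abs(n)
        d = 2
        while d * d <= n:
            if n % (d * d) == 0:
                return False
            d += 1
        return True

    def check_exercise(a, b, c):
        """For a*x^2 + b*x = c, the right side after completing the
        square is c/a + (b/(2a))^2, computed here in lowest terms.
        (Fraction auto-reduces, mirroring the common-factor problem.)"""
        rhs = Fraction(c, a) + Fraction(b, 2 * a) ** 2
        return {
            "rhs": rhs,
            "denom_is_perfect_square": is_perfect_square(rhs.denominator),
            "numer_radical_clean": (is_perfect_square(rhs.numerator)
                                    or is_squarefree(rhs.numerator)),
        }

    print(check_exercise(6, 4, 9))
    # rhs = 29/18: denominator not a perfect square, as in item (d)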

Finally, here's me trying to find a reasonable replacement exercise, which is harder than it first sounds (trying, of course, to avoid all the combinations previously used in the book or classroom):

It took me 4 tries before I was satisfied. The first attempt had a common factor among the coefficients (and thus a denominator radical needing reduction and rationalizing), before I figured that part out. The second attempt fixed that, but accidentally had a reducible numerator radical, unlike all the problems before it (√44 = √(4·11) = 2√11). The third worked out okay, but I was unhappy with the abnormally large numerator radical √149, which is a little hard to confirm as irreducible (the "100" and "49" kind of deceptively suggest that it reduces). So on the fourth attempt I cut the coefficients down some more, so the final radical is √129, which I'm more comfortable with.
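With the checker above, that four-tries hunt could even be brute-forced. A hypothetical filter over small coefficients might look like this (the bound of 130 on the radicand is just my nod to the √149 complaint; the ranges are arbitrary):

    # Reuses check_exercise() from the earlier sketch.
    for a in range(2, 8):
        for b in range(1, 12):
            for c in range(1, 12):
                r = check_exercise(a, b, c)
                if (r["denom_is_perfect_square"]
                        and r["numer_radical_clean"]
                        and r["rhs"].denominator > 1   # genuine fraction work
                        and r["rhs"].numerator <= 130):
                    print(f"{a}x^2 + {b}x = {c}   rhs = {r['rhs']}")

One would still eyeball the candidates (e.g., 2x^2 + 3x = 4 passes, giving the clean right side 41/16), but the degenerate combinations are filtered out automatically.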

Now we could ask: shouldn't students be able to deal with those reducible and rationalizable denominators when they pop up? In theory, of course, yes; but in this context I think it distracts from the primary subject of how completing the square works. More specifically, the primary (but not sole) reason we want completing the square is for use in the proof of the quadratic formula -- and coincidentally, neither of those complications appears if you work the proof out in detail (the numerator radical is irreducible, the denominator is a perfect square, and like denominators appear automatically). So as a first-time scaffolding exercise, these are really the parts we need. If students encounter more complications in book homework on their own time, that's great, too (although as we've seen in the Sullivan textbook, that doesn't happen).
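To make that claim checkable, here is the proof in question written out: the denominator 4a^2 is automatically a perfect square, the radicand b^2 − 4ac is generic (no reduction forced), and the final fractions already share the denominator 2a:

    \begin{align*}
    ax^2 + bx + c &= 0 \\
    x^2 + \tfrac{b}{a}x &= -\tfrac{c}{a} \\
    x^2 + \tfrac{b}{a}x + \left(\tfrac{b}{2a}\right)^2 &= \tfrac{b^2}{4a^2} - \tfrac{4ac}{4a^2} \\
    \left(x + \tfrac{b}{2a}\right)^2 &= \frac{b^2 - 4ac}{4a^2} \\
    x + \tfrac{b}{2a} &= \pm\frac{\sqrt{b^2 - 4ac}}{2a} \\
    x &= \frac{-b \pm \sqrt{b^2 - 4ac}}{2a}
    \end{align*}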

In summary: completing-the-square exercises can get extremely bogged down in radical and fraction work if you're not careful about how they're structured in the first place, and the thread of the presentation gets lost when that happens. More generally: it may be necessary to work out every exercise in a textbook, as well as all your in-class presentations, beforehand in order to scope out expectations and challenge level. Hopefully there will be more examples of this at a later date.


2014-01-20

Show Work vs. Justify Answers

My current testing protocol is that all of my remedial math classes have multiple-choice tests, but all of my college credit-bearing classes have open-response tests (i.e., not multiple-choice). This is a minor change this year, as previously I felt completely constrained by the various department-level final exams in our system, which are multiple-choice for most everything up through calculus (so as to make it easy for the department staff to score them). 

Anyway, for the in-class tests that I personally give, I recently grappled a bit with exactly what direction I should give in this regard. Of course many instructors use the phrase "Show your work", so much so that students frequently anticipate that as the direction. But does that address a real issue? Some people's work process is just undeniably crappy: scattered, jumbled, incoherent. While that might indeed be their work process, does it really do them or anyone else any good?

What I've recently settled on is this direction: "Justify your answers with well-written math." This gets more to the heart of the matter: that one is using mathematical language to explain to another person why something is so. There's a specific syntax and grammar to this (just like French or Russian or anything); any arbitrary "this is the way I do things" doesn't cut it, because we need a shared language to be understood. And it prepares students to read a math book on their own. And to help other students in need, and be helped by them. And it allows the instructor to give useful feedback, by identifying a specific logical gap. And probably some other stuff that I'm overlooking right now.

So at the level of College Algebra and above, as of this semester I've started grading half the credit on this basis (for full credit, students need both the correct answer and properly-written math statements showing the small-scale steps). Later, in Trigonometry, they can deal with more formal identity proofs, etc., but I think this frames the expectations for students properly at an early point.

Do you agree that this is a much better directive than "show your work"? Can you think of a better phrasing for the requirement?