2023-12-02

Nate Bargatze on Remedial Classes

Comedian Nate Bargatze talks about his experience taking remedial classes at a community college in Tennessee:

2023-08-21

The Iron Law of Stack Exchange

Stack Exchange logo
The Iron Law of Stack Exchange (Stack Overflow):

They hate hard questions from new users.

Fundamentally, voters and respondents on Stack Exchange like scoring points with answers that are obvious, easy to write, and don't take excessive amounts of thought. So there is some irritation at questions that are fundamentally hard and resist such easy answers. That annoyance is exacerbated when the asker is a new (relatively low-ranked) user on any site; the first instinct of established members is often to look for some way in which the question can be rejected as poorly formed.

Symptoms of this reaction include:

  • Closing or down-voting a question on poorly-justified grounds.
  • Editing the question to change it to an easier one.
  • Accusing the question of being an "X-Y problem", that is, the asker is confused and really meant to ask a different, easier question.
  • Complaining about interactions that are actually common across the Stack Exchange network, but which a new user might not know are commonplace, and so may be cowed by the complaint.

As one personal example: the Stack Overflow coding site is not my top network destination, but over the years I have asked a number of questions there. As a CS faculty member and past professional developer, by the time I need to reach out externally for help, I've exhausted a rather deep search for answers, and my questions are likely to be fairly hard to crack. This almost always results in exasperation and negative votes from users of the site.

In my last question, I asked about a feature of a certain piece of software which seemed like it should have a rather obvious behavior (based on how relative pathnames work in the OS), but which I couldn't get to work right. Several comments suggested the obvious behavior -- the very thing I was pointing out had failed. The only actual answer came from the developer of the software, who again asserted how it should work, and whom, after some back-and-forth, I ultimately convinced that there was a bug in their code, which they agreed to fix in the next version.

Despite this rewarding result, no one else could successfully answer this question, and it was (as usual) downvoted into negative territory. Immediately thereafter my Stack Overflow account was actually locked out from asking further questions because of the history of negative votes it garnered.

Given the downward trend in traffic to Stack Exchange over the last year or so (even predating the earthquake of generative AI in that time), this seems like a potentially difficult problem for the site. Over time, most of the low-hanging fruit will already have been answered, leaving only the hard problems still to be dealt with -- and these are precisely the questions that are met with the most hostility and are most likely to be ejected by the site's most dedicated users.

2023-02-02

ChatGPT Roundup

Cartoon bot chatting

Have we arguably stepped into the singularity? Since its release last November, OpenAI's ChatGPT language-model system has upended almost everything in sight and, in particular, sent educators everywhere scrambling to deal with the ramifications. This chatbot can seemingly craft custom essays, reports, scientific papers, newspaper articles, programming code, and solutions to many (although not all) mathematical problems. Immediately, for free, and in ways almost no human can detect.

Here's a roundup of news stories that I may update in the future:

Image courtesy Craiyon. :-)

2022-09-04

NY Regents: Trivial to Pass

Multiple choice with all C-answers

Ed Knight is a teacher in New York state. Writing at Medium, he points out the disturbing fact that the vaunted "New York Regents" exams required to graduate from high schools in the state have become completely trivial to pass. For example: In the Algebra Regents, you can ignore all of the (already simple) open-response questions, and just mindlessly mark "C" for all the multiple-choice questions, and you'll be given a passing grade.

Shame on NYSED and the Regents.

Really, the root of this problem is the insane scaling procedure that the NY Regents has used for years to fake up the test scores. Below is the most recent test's table for converting a "Raw Score" to a reported "Scale Score". The scale score runs 0-100, making recipients think it's a percentage, but it's not. For example: if you score a raw 27 out of the possible 86 points (that's 31% correct), this gets converted to a reported Scale Score of 65 -- i.e., a Performance Level of 3 out of 5, which is considered passing.

Think about that: for years, the NY Regents has considered a score of about 30% as passing for a basic (very simple!) algebra test. And yes, this was exacerbated during the pandemic years (still ongoing), when the policy was adjusted to accept even lower scores than that -- now as low as 20% (i.e., a Raw 17, reported as a Scale Score of 50).

Scoring for Regents Exam in Algebra I: June 2022
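To see just how little raw performance those passing thresholds represent, here's a minimal sketch in Python, using only the two data points quoted above from the June 2022 conversion chart (the full Raw-to-Scale table is published by NYSED with each exam):

    # Minimal sketch: the Raw Score -> Scale Score conversion discussed above,
    # using only the two data points quoted from the June 2022 Algebra I chart.

    MAX_RAW = 86  # total raw points available on the June 2022 Algebra I Regents

    # (raw score, reported scale score) pairs quoted above
    quoted_conversions = [
        (27, 65),  # Scale 65 = Performance Level 3, the standard passing mark
        (17, 50),  # lowest score accepted under the pandemic-era policy
    ]

    for raw, scale in quoted_conversions:
        pct_correct = 100 * raw / MAX_RAW
        print(f"Raw {raw}/{MAX_RAW} = {pct_correct:.0f}% correct -> reported Scale Score {scale}")

    # Output:
    # Raw 27/86 = 31% correct -> reported Scale Score 65
    # Raw 17/86 = 20% correct -> reported Scale Score 50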

Read more at Medium: Guessing C For Every Answer Is Now Enough To Pass The New York State Algebra Exam

2022-08-08

Curriculum in California

Cover of California Math 6th-grade textbook
A nicely comprehensive article late last year outlines the plans for the next overhaul of the school mathematics curriculum in California -- delaying any algebra until high school, cutting and compressing later classes to fit the reduced time, disposing of gifted & talented and accelerated programs, de-tracking, etc. As usual, the motivation is the hope of higher pass rates from the easier courses, claims of better addressing inequities among minority students, promises that students will be better positioned for well-paying STEM college programs and careers, etc. Among the main fronts of battle are academic math professors vs. math-education faculty, who are generally on opposing sides of the issue.

One thing that really stuck out to me was the case of one student, who is held up as having been the only Black student throughout her advanced math courses in school, and who is currently studying as a math major at UC Berkeley. Here's how her story is presented:

Mariah Rose, a third-year applied math major at UC Berkeley, said she didn’t have another Black classmate in any of her math classes until this semester.

“There’s one other Black student in my class right now, and that’s just crazy to me,” said Rose. “The number of Black and Brown people in math is so low.”

Rose, who is half Black and half Latino, said this is nothing new. She said she was the only Black female student in her advanced math classes during high school. And her successes in math make her an outlier in California’s public school system where Black and Latino students score lower on standardized tests...

Rose, the UC Berkeley math major, said she has mixed feelings. She agrees with the framework’s recommendation to delay more advanced math classes and avoid labeling students based on their math abilities at younger ages. But she isn’t sure if she would be where she is if she hadn’t been accelerated into a higher-level math class in 6th grade. 

“It was a game changer,” she said. “I don’t know if I would’ve pursued math if I hadn’t advanced so early.”

Read the full article at The San Francisco Standard.

2022-08-01

Proofs and Applications

"Burden of Proof" on laptop

This is a quote that lives rent-free in my head, and comes up a lot in discussions I participate in.

From Stein/Barcellos, Calculus and Analytic Geometry, 5E, "To the Instructor", p. xxii (1992):

At the Tulane conference on "Lean and Lively Calculus" in 1986 we heard the engineers say, "Teach the concepts. We'll take care of the applications." Steve Whitaker, in the engineering department at Davis, advised us, "Emphasize proofs, because the ideas that go into the proofs are often the ideas that go into the applications." Oddly, mathematicians suggest that we emphasize applications, and the applied people suggest that we emphasize concepts. We have tried to strike a reasonable balance that gives the instructor flexibility to move in either direction.

2022-07-13

Willingham on Automaticity

Some solid thoughts from Daniel Willingham on the need for automaticity in basic mathematics skills (re: automatic-algebra.org) from his article "Is It True That Some People Just Can't Do Math?" (American Educator, Winter 2009-2010):

In its recent report, the National Mathematics Advisory Panel argued that learning mathematics requires three types of knowledge: factual, procedural, and conceptual. Let’s take a close look at each.

Factual knowledge refers to having ready in memory the answers to a relatively small set of problems of addition, subtraction, multiplication, and division. The answers must be well learned so that when a simple arithmetic problem is encountered (e.g., 2 + 2), the answer is not calculated but simply retrieved from memory. Moreover, retrieval must be automatic (i.e., rapid and virtually attention free). This automatic retrieval of basic math facts is critical to solving complex problems because complex problems have simpler problems embedded in them. For example, long division problems have simpler subtraction problems embedded in them. Students who automatically retrieve the answers to the simple subtraction problems keep their working memory (i.e., the mental “space” in which thought occurs) free to focus on the bigger long division problem. The less working memory a student must devote to the subtraction subproblems, the more likely that student is to solve the long division problem.

This interpretation of the importance of memorizing math facts is supported by several sources of evidence. First, it is clear that before they are learned to automaticity, calculating simple arithmetic facts does indeed require working memory. With enough practice, however, the answers can be pulled from memory (rather than calculated), thereby incurring virtually no cost to working memory. Second, students who do not have math facts committed to memory must instead calculate the answers, and calculation is more subject to error than memory retrieval. Third, knowledge of math facts is associated with better performance on more complex math tasks. Fourth, when children have difficulty learning arithmetic, it is often due, in part, to difficulty in learning or retrieving basic math facts. One would expect that interventions to improve automatic recall of math facts would also improve proficiency in more complex mathematics. Evidence on this point is positive but limited, perhaps because automatizing factual knowledge poses a more persistent problem than difficulties related to learning mathematics procedures.

Get the whole article by Willingham (including citations for all the claims above) at the AFT website.
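As a concrete illustration of the "embedded subproblems" point (my own sketch, not from Willingham's article): the code below runs the standard long-division algorithm and counts the trial multiplications and subtractions buried inside it -- each one a basic-fact step that either comes free from memory or consumes working-memory capacity.

    # Minimal sketch (my own illustration, not from the article): standard
    # long division, instrumented to count the basic multiplication and
    # subtraction steps embedded in a single larger problem.

    def long_division(dividend: int, divisor: int):
        quotient_digits = []
        fact_steps = 0
        remainder = 0
        for digit in str(dividend):
            remainder = remainder * 10 + int(digit)   # bring down the next digit
            q = 0
            while (q + 1) * divisor <= remainder:     # trial multiplications
                q += 1
                fact_steps += 1
            remainder -= q * divisor                  # the subtraction step
            fact_steps += 1
            quotient_digits.append(q)
        quotient = int("".join(map(str, quotient_digits)))
        return quotient, remainder, fact_steps

    q, r, steps = long_division(7956, 23)
    print(f"7956 / 23 = {q} remainder {r}, via {steps} embedded fact steps")
    # -> 7956 / 23 = 345 remainder 21, via 16 embedded fact steps

Every one of those embedded steps that has to be calculated, rather than retrieved, is working memory not available for tracking the larger problem.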