2009-07-29
Game Theory Lectures
However, I keep being reminded that this is a course being given in Yale's Economics department. And I've long held a few key critiques of the foundations of standard economic theory that, I feel, make the entire enterprise miserably inaccurate. What I didn't expect was for these Game Theory lectures to shine a high-intensity spotlight directly on those shortcomings, in practically every single session.
Critique #1 is that economics deals only with money, and wipes out our capacity to deal with other values. Critique #2, probably more important, is that economics fatally depends on a “rational actor” assumption for all involved, which is simply not true. Let's consider them in order:
Critique #1: Economic theory is all about money, and the widespread use of the theory destroys our other values like family, community, craftsmanship, healthy living, emotional satisfaction, and good samaritanhood. As one wise man said, “They don't take these things down at the bank,” and therefore, they get obliterated when economic theory is put in play.
Now, Professor Polak makes a good, painstaking show in Session #1 of trying to fend off this criticism. “You need to know what you want,” he says, and runs an extended example wherein, if one player really were interested in the well-being of his partner in a game, that could be accounted for by assessing the value of those feelings, adding/subtracting to the payoff matrix appropriately, and then running the same Game Theory analysis on the new matrix, finally arriving at a different result. (Of course, along the way he also snidely refers to this caring player as an “indignant angel”.) See? Game Theory can handle all kinds of different values, not just money.
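To be fair, the mechanics of that defense are easy enough to demonstrate. Here's a minimal sketch in Python, with entirely hypothetical payoffs (not the lecture's actual numbers): adjust the matrix for a player who values his partner's outcome, and the best reply flips.

```python
# Hypothetical 2x2 game. Payoff dicts map row choice -> column choice ->
# (row payoff, column payoff). All numbers are invented for illustration.

def best_response(payoffs, col_choice):
    """Row player's best reply to a fixed column-player choice."""
    return max(payoffs, key=lambda r: payoffs[r][col_choice][0])

# Purely monetary payoffs: "selfish" is the better reply to everything.
money = {
    "selfish": {"selfish": (1, 1), "kind": (3, 0)},
    "kind":    {"selfish": (0, 3), "kind": (2, 2)},
}

# The "indignant angel": the row player also values the partner's payoff,
# here at 0.6 weight, so we rebuild the matrix and re-run the same analysis.
CARE = 0.6
caring = {
    r: {c: (m[0] + CARE * m[1], m[1]) for c, m in cols.items()}
    for r, cols in money.items()
}
```

With the purely monetary matrix, `best_response` picks "selfish" against everything; after the caring adjustment, "kind" becomes the best reply instead: same machinery, different values, different result.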
But let's look later in the same lecture, where he has the students play the “two-thirds the average game” (more on that later). He holds up a $5 bill and says that the winner will receive it as a prize. Now: does he do the analysis of “what people want” (payoffs), which he just said was so keenly important? No, he does not. Only 15 minutes after this front-line defense, the game goes forward without comment, on the obvious assumption that the only value for anyone in the game is the money. Maybe some people want the $5; maybe some want to corrupt the results for their snotty know-it-all classmates. But no: immediately following the defense that “all values can be handled”, we see it played out right before our eyes that as soon as money enters the picture, in practice, we dispense with all other values and speak of nothing except the cash.
Critique #2: Economic theory presumes “rational players”, where all the people involved knowingly work to their own best interests all the time. Frankly, that's just downright absurd. People are routinely (1) uneducated or uninformed about what's best for themselves, (2) barred from receiving key information by more powerful institutions or interests, (3) obviously non-rational in instances of emotional stress, drug use, mental failures, and modern Christmas purchasing behavior, and (4) proven by cognitive brain science to be unable to correctly gauge simple probabilities and risk-versus-reward.
Now, consider lecture #1, where Professor Polak introduces game payoff matrices, and the idea of avoiding dominated strategies (that is, a strategy where some other available choice always works out better). With exceeding care, he transcribes each “Lesson” along the way onto the board, including this one: “Lesson #1: Do not play a strictly dominated strategy”. Okay, that's a reasonable recommendation.
But about 10 minutes later, he pulls a devious sleight-of-hand. Analyzing another game, he asks what strategy we should play. “Ah,” he says, “notice that for our opponent strategy A is dominated, so you know they won't play that; they must instead play strategy B, and thus we can respond with strategy C.” Well, no: that reasoning about our opponent (the 2nd logical step here) is completely spurious; it only makes sense if our opponent is actually following our Lesson #1. But have they taken a Game Theory class? Do they know about “dominated strategy” theory? Do they actually follow received lessons? None of those things are necessarily (or even likely, I'd argue) true.
In other words, he assumes that all players are equally well-informed and “rational”, which isn't supportable. And this assumption is kept hidden. It would be one thing if Professor Polak came out and said, “For the rest of our lectures, let's also assume that our opponents are following the same lessons we are,” but no, he quite scrupulously avoids calling attention to the key logical gap.
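To make the buried premise concrete, here's a sketch with hypothetical payoff numbers; the second step of the code is exactly the unstated assumption that the opponent both knows about and obeys Lesson #1.

```python
def strictly_dominated(payoffs, s):
    """True if some other strategy does strictly better than s in every case."""
    return any(
        all(payoffs[t][c] > payoffs[s][c] for c in payoffs[s])
        for t in payoffs if t != s
    )

# Opponent's own payoffs (hypothetical): strategy B strictly dominates A.
opponent = {"A": {"C": 0, "D": 0}, "B": {"C": 1, "D": 2}}
# Our own payoffs against each of their choices (also hypothetical).
us = {"C": {"A": 0, "B": 5}, "D": {"A": 4, "B": 1}}

# The lecture's chain of reasoning, spelled out:
if strictly_dominated(opponent, "A"):       # step 1: A is dominated for them
    their_play = "B"                        # step 2: assume they know it AND comply
    our_play = max(us, key=lambda s: us[s][their_play])   # step 3: our best reply
```

Note that if the opponent plays A anyway (as real people do), our "optimal" C pays 0 while D would have paid 4: the whole conclusion hangs on step 2.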
And he does it again, even more outrageously, in Session #2, when analyzing the class's play of the “two-thirds the average game” (a group of people all guess a number from 1-100; take the average; the winner is whoever guessed closest to 2/3 of that average). He has a spreadsheet of everyone's guesses in front of him. Speaking of guesses above 67 (2/3 of 100), he says, "These strategies are dominated – we know, from the very first lesson of the class last time, that no one should choose these strategies." Except that, as he points out mere seconds later, several people did play them! (4 people in the class had guesses over 67; this occurs 46 minutes into lecture #2.) Nonetheless, he continues: "We've eliminated the possibility that anyone in the room is going to choose a strategy bigger than 67...". But how can you possibly contend that you've “eliminated the possibility” when you have hard data literally in your hand showing that's simply not true? Answer: it's the “rational player” requirement of all economic theory, which demonstrably collapses into sand once the logical gap is recognized and/or refuted. This infected logic continues throughout the class; in sessions #3 and #4 he repeats the same goose-step in regard to "best response" (1:10 into lecture #4: "Player 1 has no incentive to play anything different... therefore he will not play anything different."), and so on and so forth.
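For the curious, the "rational" chain of logic in the two-thirds game, and how it fares against actual human play, can be sketched in a few lines of Python (the guesses below are made up, not the class's spreadsheet):

```python
def rational_ceilings(top=100, rounds=12):
    """Ceilings left after each round of 'no one plays a dominated guess':
    if everyone guesses at most c, any guess above 2/3 * c is dominated."""
    out = [top]
    for _ in range(rounds):
        out.append(out[-1] * 2 / 3)
    return out   # iterated elimination drives the 'rational' guess toward 1

def winner(guesses):
    """Whoever is closest to two-thirds of the average wins."""
    target = 2 / 3 * (sum(guesses) / len(guesses))
    return min(guesses, key=lambda g: abs(g - target))

# Made-up classroom guesses, including a couple of "impossible" ones over 67.
guesses = [90, 80, 67, 50, 33, 25, 14, 9]
```

Iterated elimination says everyone should converge on 1; but with the hypothetical guesses above (average 46, target about 30.7), the winning guess is 33, and the "irrational" high guesses dragged the target up for everyone. The theory's answer only wins in a room full of theorists.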
2009-07-27
Essay on Time Management
http://www.paulgraham.com/makersschedule.html
In brief -- Managers work in hour-long blocks through the day; great for meeting people and having a friendly chat. Makers, however (writers, artists, programmers, craftsmen), work in half-day blocks at minimum. Interfacing the two -- e.g., a manager calling an hour-long meeting at some random open slot in their schedule -- causes the makers to completely lose the in-depth concentration their tasks require. Call this "thrashing" or "interrupts" or "exceptions", if you like. It blows away a half or a full day of productive work whenever it happens.
Great observation, and it rings extremely true in my own experience. One of the reasons I'm so happy to be outside the corporate environment these days.
2009-06-05
Jury Selection
I was confused and mystified by this for a while. My friend Collin and I put our heads together, and I think we finally stumbled into an explanation.
The point is this: Everyone wants to avoid a hung jury (that is, a mistrial, forcing the court & lawyers to try the case all over again another time). The way a jury really works behind the scenes in a criminal trial is that you start with some yes-votes and some no-votes, and over the course of a day or so one side simply batters down the resistance of the other (often through insults and intimidation, as witnessed by another friend), until there is finally a unanimous vote. And who could possibly interrupt this process? You guessed it, the rare personality type who is willing to reject the mob mentality and stand out, disagreeing with everyone else in a crowded, public courtroom.
It seemed odd to me that when we disagreed with the rest of the pool like this, both the prosecution & defense got all jumpy with us about it. You would think (from an expected-value analysis) that if you asked a defense attorney the question, "Which would you rather have as a result of a trial: a conviction or a mistrial?", the answer would be "a mistrial" (since there's at least some probability that your client is found innocent in the next trial). But now I'm guessing that this fails to take into account the opportunity-cost to the attorney in their time; possibly they would actually, ultimately prefer the conviction, and be able to move to other more promising cases, rather than re-try a case which apparently is not a good cause in the first place. (This is similar to the well-known disconnect in incentives between a house seller and the broker working on a commission.) They're not making this loudly known, but I now suspect that avoiding a hung jury may be priority #1 for all the lawyers and judges in selecting a jury, even beyond winning the actual case. Therefore, the able-to-disagree-alone-with-a-room-full-of-people personalities have got to go.
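A toy expected-value comparison makes the point; every number below is invented purely for illustration:

```python
# All values are hypothetical, in arbitrary units of attorney utility.
p_acquit_on_retrial = 0.25   # some chance the client walks the second time
value_of_acquittal  = 100    # the attorney's "win"
cost_of_retrial     = 40     # weeks of unpaid work, better cases not taken

ev_mistrial   = p_acquit_on_retrial * value_of_acquittal - cost_of_retrial
ev_conviction = 0.0          # a loss, but the attorney is free to move on
```

With a big enough opportunity cost, the mistrial's expected value goes negative and the conviction "wins": which would explain why nobody on either side wants the lone holdout in the jury box.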
For those of you who want to get out of jury duty, I therefore offer a simple, completely foolproof and hassle-free procedure. There's absolutely nothing difficult about it, and it requires no creativity. Simply pick something, anything, in the questions and disagree with everyone else, and you will be immediately released. If you're honest, in fact, it's practically impossible not to do this.
2009-05-24
Grading On a Curve Sucks
Back in Fall 2006, Thought & Action magazine published an article by Richard W. Francis (Professor Emeritus in Kinesiology, California State University, Fresno), asserting that grading on a curve is the only way to properly compute grades (titled, in propagandist fashion, "Common Errors in Calculating Final Grades"). Here's my letter to the editor from that time:
-------------------------------------------------
Dear Editor,
Richard W. Francis proposes a system for standardizing class grading (Thought and Action, Fall 2006, "Common Errors in Calculating Final Grades"). The system takes as its priority the relative class ranking of students, even though I've never seen that utilized for any purpose in any class I've been involved with.
Mr. Francis responds to the criticism that his system effectively grades on a curve. His response is that instructors can "use good judgment and the option to draw the cutoff point for each grade level, as they deem appropriate". In other words, after the numbers are crunched at the end of the term, the grade awarded is based on a final, subjective decision by the instructor. Moreover, there is no way to tell students clearly at the start of the term what is required of them to achieve an "A", or any other grade, in the course.
The example presented in the article of a problem in test weighting seems unpersuasive. We are presented with a midterm (100 points, student performance drops off by 10 points each), and a final exam (200 points, student performance drops off by 5 points each). It is presented as an "error" that the class ranking matches the midterm results. But since the relative difference in the midterm is so large (10% difference each step) and the final so small (2.5% difference each step; even scaled double-weight that's only 5% per step) this seems to me like a fair end result.
Take student A in the example, who receives an "A" on the midterm and a "C+" on the final (by the most common letter grade system). In the "erroneous" weighting he receives a final grade of "B", while in the standardized system he has the T-score for a "D+". Clearly the former is the more legitimate reflection of his overall performance.
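The contrast is easy to reproduce. Below is a sketch with hypothetical scores patterned on the article's setup (wide midterm spread, narrow final spread, final ranking reversed); the points-based total puts the midterm winner first, while the standardized T-score total puts him last:

```python
import statistics

# Hypothetical scores, not the article's exact data.
midterm = [100, 90, 80, 70, 60]      # out of 100, drops 10 points per rank
final   = [180, 185, 190, 195, 200]  # out of 200, drops 5 points per rank

# Points-based totals: the final counts double simply via its 200-point scale.
totals = [(m + f) / 300 * 100 for m, f in zip(midterm, final)]

def t_score(x, xs):
    """Standardized score of the kind the article advocates: 50 + 10z."""
    return 50 + 10 * (x - statistics.mean(xs)) / statistics.stdev(xs)

# T-score totals, weighting the final double to preserve its intended weight.
tscores = [t_score(m, midterm) + 2 * t_score(f, final)
           for m, f in zip(midterm, final)]
```

Because standardization erases the fact that the midterm spread was four times wider per rank, the two methods produce exactly opposite class rankings from the same raw scores.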
As an aside, I have a close relation who was denied an "A" grade in professional school due to an instructor grading on the curve. He still complains bitterly about the effect of this one grade on his schooling, now 40 years after the fact. Any subjective or curve-based system for awarding student grades at the end of a term damages the public esteem for our profession.
Daniel R. Collins
Adjunct Lecturer
Kingsborough Community College
2009-05-23
Winning Solitaire?
Okay, I admit it: Sometimes I play Microsoft Solitaire (i.e., "Klondike" Solitaire: draw 3, with 3 re-deals, Vegas scoring). Of course, it's the most widely-played computer game of all time. Occasionally I go on these benders and play it quite a bit for a few days.
Most games are lost, but I can usually eke out a win in about 20-30 minutes of playing. However, just today I probably lost 30+ games in a row over maybe 2 hours. Still no win so far today. I have to be careful, because I get in the habit of hitting "deal" instantly after a loss (my "hit", if you will), and after an extended time my hand starts to go numb and I start making terrible mistakes because my eyesight gets all wonky. (Is it fun? No, I feel a vague sense of irritation the whole time I'm playing, until I actually win and can finally close the application. Hopefully.)
So this brings up the question: What percentage of games should you be able to win? Obviously I don't know, but my intuition says maybe 20% at most. I'm also entertaining the idea of building a robot solver, improving its play, and seeing what fraction of games it can win. Apparently this is an outstanding research problem; in 2005 Professor Yan at MIT wrote that this is in fact “one of the embarrassments of applied mathematics”.
The other thing is that all of the work done on the problem apparently uses some astoundingly variant definitions of the game. First, the "solvers" that I see are all based on the variant game of "Thoughtful Solitaire", apparently preferred by mathematicians because it gives the player full information (i.e., the known location of all cards), and therefore license to spend hours considering the next few moves (gads, save me from frickin' mathematicians like that! Deal with real-world incomplete information, for god's sake!).
Secondly, they present the results from this "Thoughtful Solitaire" (full information, recall; claiming an 82% to 91% success rate) as the percentage of regular Solitaire games that are "solvable". But this meaning of "solvable" is only a hypothetical solution rate for an all-knowing player; that is, there are many moves during a regular game of Solitaire that lead to dead-ends, which the non-omniscient player can avoid only by sheer luck. If they're careful, the researchers correctly call this an "upper bound on the solution rate of regular Solitaire" (and my intuition tells me it's a very distant bound); if they're really, really sloppy, they use the phrases "odds of winning" and "percent solvable" interchangeably (when they're not remotely the same thing).
So currently we're completely in the dark about what the success rate of the best (non-omniscient) player would be in regular Solitaire. I'll still conjecture that it's got to be under 50%.
Edit: Circa 2012 I wrote a lightweight Solitaire-solving program in Java. Success of course varies greatly by the rule parameters selected: for my preferred draw-3, pass-3 game it wins about 7.6% of the games (based on N = 100,000 games played; margin of error 0.3% at 95% confidence). My own manual play on the MS Windows 7 solitaire wins over 8% (N = 3365), so it seems clear that there's still room for improvement. See code repository on GitHub for full details.
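For anyone wanting to reproduce that kind of margin-of-error figure, the standard normal-approximation formula looks like this (the run below uses made-up numbers, not my actual results):

```python
import math

def win_rate_ci(wins, games, z=1.96):
    """Normal-approximation confidence interval for a simulated win rate:
    z = 1.96 gives roughly 95% confidence."""
    p = wins / games
    margin = z * math.sqrt(p * (1 - p) / games)
    return p, margin

# Hypothetical run: 760 wins in 10,000 deals.
p, margin = win_rate_ci(760, 10_000)
```

For those hypothetical numbers the estimate is 7.6% give or take about half a percentage point; more games shrink the margin as the square root of N.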
2009-05-15
Expected Values
I've found that probability is enormously alien to a surprising number of students. (Just last week I had students in a basic math class fairly howling at the thought that they might be expected to be familiar with standard dice or a deck of cards). Therefore, I find that I actually have to motivate these discussions with an actual physical game, of the most basic simplicity. If I did cover expected values, here's the rudimentary demonstration I'd use:
The Game: Roll one die.
Player A wins $10 if die rolls {1}.
Player B wins $1 if die rolls {2, 3, 4, 5, 6}.
Calculate probabilities (P(A) = 1/6, P(B) = 5/6).
Let a student pick A or B to play, roll die 12 times (say), keep tally of money won on board (use I's & X's). Likely player A wins more money.
Expected Value: The “average” amount you win on each roll.
E = X*P (X = prize if you win; P = probability to win)
Calculate expected values.
Ex.: Poker situation.
If you bet $4K, then you have a 20% chance to win $30K. Bet or fold? (A: You should bet. E = $30K * 20% = $6K, which exceeds the $4K cost. If you do this 5 times, you pay $20K and expect to win once for $30K: a $10K profit.)
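For completeness, the whole demonstration, both the die game and the poker example, could be coded up along these lines:

```python
import random

def play_round(choice, rng):
    """One roll of the classroom game: A wins $10 on a 1, B wins $1 on 2-6."""
    roll = rng.randint(1, 6)
    if choice == "A":
        return 10 if roll == 1 else 0
    return 1 if roll != 1 else 0

# Expected value per roll: E = prize * probability of winning.
ev_a = 10 * (1 / 6)    # about $1.67 -- A is the better long-run pick
ev_b = 1 * (5 / 6)     # about $0.83

# The poker example: a $4K bet with a 20% shot at $30K.
ev_bet = 30_000 * 0.20   # $6K gross return, versus the $4K cost -> bet

# Tally 12 rolls for a student who picked A (the classroom demo).
rng = random.Random(1)
winnings_a = sum(play_round("A", rng) for _ in range(12))
```

Over only 12 rolls the tally is noisy, which is exactly the teaching point: player A wins rarely but the big prize makes the average roll worth twice player B's.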
2009-05-11
Speaker for the Dead
Of course, I loved Ender's Game. This second book is possibly even more emotionally moving in places (and Card seems to have said he considers it the more "important" book to him), but there are a number of notable structural flaws that I'm not able to shake off.
First, it's very much working to set up further sequels; a whole number of major plot threads are left hanging, and you can start detecting that about halfway through the book. (Furthermore, I see now that both this and Ender's Game were revised from their original format so as to set up sequels, which takes away from the narrative thrust at the end of each.)
Second, there's a central core mystery that the whole book is built around, and in places people have to be unrealistically tight-lipped with their closest friends so as to prolong the mystery (I got really super-sick of this move from watching Lost).
Third, the central theme seems like a rehash of Ender's Game. You can very much feel Card wrestling with the rationale for the plot of Ender's Game; you can almost hear him musing, "why would an alien race feel that killing is socially acceptable or necessary, anyway?" (a central premise of the first book), and then constructing this second book so as to have an actually satisfying answer. There are also some obvious clues the aliens should have picked up when they kill humans (namely, the visually obvious results of the "planting", as witnessed at the end of the book) that would have told them to stop doing such a thing, but apparently they miss them entirely.
But fourth is something that bothers me about lots of science fiction. Although the story spans many years by way of relativistic time dilation (over 3 thousand years, actually), technology never changes during that time. Ender can set off on a 22-year space flight, and when he lands, apparently all the exact same technology is in use for communications, video, computer keyboards, record-keeping, spaceflight landing, government, publishing literature, etc.
In fact, I've never seen any science-fiction literature that manages to deal with Moore's Law (the observation that computing power doubles every 2 years or so). It would be one thing if they conjectured that "Moore's Law ended on date such-and-such because of so-and-so...", but it's always a logical gap that's completely overlooked. Ender is honored to be given an apartment with a holoscreen with "4 times" the resolution of normal screens... but I'm thinking, in 22 years time, the resolution of every screen should be 1,000 times the ones he left behind on his space-flight. At that rate, I wouldn't bother walking into the next room for one with only "4 times" the resolution.
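The arithmetic behind that thousand-fold figure, assuming a clean two-year doubling period:

```python
def resolution_factor(years, doubling_period_years=2):
    """Factor of improvement if capability doubles every fixed period."""
    return 2 ** (years / doubling_period_years)
```

Over Ender's 22-year flight that's 2^11 = 2,048 times, which is where the "1,000 times" ballpark comes from; a measly "4 times" screen would be eleven doublings behind.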
Maybe that's a subject that is simply impossible to treat properly in a work of centuries of science fiction, but the repeated logical gap (in the face of our own monthly dealings with new technologies) is something that's bothering me more and more. Maybe the Singularity will come and solve this problem for us once and for all.