# Re-Re-Revisiting the SAT

First, I got worked up about the test. Then I got a score and ranted about it on this blog. (I’m still uncertainly hoping that didn’t come off as arrogant. Let me add, I did not get a perfect PSAT.) Then a friend pitched to me the idea that I write an article about it for my school newspaper, which I did. It was far too long. As if that weren’t enough, I then decided to examine whether the SAT was an accurate prediction of “academic ability and success” for my English research paper. Now I’ve come full circle to this blog, where I’m going to try to synthesize and conclude everything, free of the shackles of the research paper format, to allow me to move on with my life. This post contains bits lifted from all three essays and lots of new stuff; I’ve been editing it for so long that I feel like I have it memorized. Its word count is around that of the newspaper article plus the research paper, i.e. far far far too long.

But whatever, nobody reads this blog anyway and I have to get this out of my system. When I said I wanted to “move on with my life”, I really meant my winter homework. Oops!

Disclaimer: I am not an admissions officer. I have not yet even been accepted to a prestigious university (despite rumors to the contrary…), for whatever definition of “prestigious”, unlike some of the bloggers I’m referencing. So some of this is pure speculation. On the other hand, some of it is researched and referenced, and I think the pure speculation still makes sense. That’s why I’m posting it.

Okay, here we go…

Let’s start with the question of accurate prediction. The SAT is a useful predictor, but not as useful as one might assume. Intuitively, it ought to be more accurate than other metrics because it’s a standardized test, whereas GPAs and other awards vary with the habits of individual teachers and regions and are hard to compare objectively. But as a study from the College Board itself (PDF) found:

> the correlation of HSGPA [high-school GPA] and FYGPA [first-year GPA in college] is 0.36 (Adj. r = 0.54), which is slightly higher than the multiple correlation of the SAT (critical reading, math, and writing combined) with FYGPA (r = 0.35, Adj. r = 0.53).

Of course, that doesn’t mean the SAT is worthless, because combining the SAT score and high school GPA results in a more accurate metric than either one alone. But by “more accurate” I refer to a marginal improvement of 0.08 correlation. Why are the correlation and the improvement so small? It seems strange because most students I know are going to need the skills that the SAT tests — basic reading, writing, and math skills — to get through life. Many people won’t; the SAT would be a terrible predictor of the success of, say, a painter or a professional athlete. But the SAT was never supposed to measure success in life in general, just “academic success”.
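To see why adding a second, similarly predictive variable buys so little, here’s a toy simulation. Everything in it is made up by me for illustration — the shared-ability model and the noise level are my assumptions, not the study’s data: a hidden “ability” drives both predictors and the outcome, with the noise tuned so each single predictor correlates near 0.35 with first-year GPA, roughly matching the figures quoted above.

```python
import numpy as np

# Toy model (synthetic, not real College Board data): a hidden "ability"
# drives high-school GPA, SAT score, and first-year college GPA, each with
# heavy independent noise chosen so each single-predictor correlation
# lands near 0.35.
rng = np.random.default_rng(0)
n = 100_000
noise = 1.36  # tuned so corr(predictor, outcome) = 1 / (1 + noise**2) ~ 0.35
ability = rng.normal(size=n)
hsgpa = ability + noise * rng.normal(size=n)
sat = ability + noise * rng.normal(size=n)
fygpa = ability + noise * rng.normal(size=n)

r_gpa = np.corrcoef(hsgpa, fygpa)[0, 1]
r_sat = np.corrcoef(sat, fygpa)[0, 1]

# Multiple correlation: fit fygpa on both predictors by least squares,
# then correlate the fitted values with the actual outcome.
X = np.column_stack([np.ones(n), hsgpa, sat])
beta, *_ = np.linalg.lstsq(X, fygpa, rcond=None)
r_both = np.corrcoef(X @ beta, fygpa)[0, 1]

print(f"GPA alone: {r_gpa:.2f}, SAT alone: {r_sat:.2f}, both: {r_both:.2f}")
```

Because the two predictors are both noisy views of the same underlying quantity, they share most of their information, and the combined correlation climbs only from about 0.35 to the low 0.4s — the same kind of small bump the study reports.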

> The primary purpose of the SAT is to measure a student’s potential for academic success in college. (Kobrin et al. 1)

That’s still a regrettable term because it implies a dichotomy between “academics” and “other stuff” that makes no sense, but that’s an argument for another time. Okay, reading, writing, and math: I think it makes sense for most colleges to require basic proficiency in those skills. The problem is, who decides what “basic proficiency” means here? And more importantly, how can those skills be tested? If one tries to measure and compare them objectively — if one tries to set up a standard — then one is forced to select only a few facets of a few of these skills to test, and omit most.

It is impossible to test the entire range of reading, writing, and math skills. The SAT does not test whether you can read instruction manuals, write formal letters, or make a single rigorous step in a mathematical proof. (You may not agree that these are “basic” skills. But, again, who decides?) If it tested everything, it would take a month each time to administer and even longer to grade, and nobody would take it. So, standard-setters must pick a small subset of these skills to test, and furthermore, they have to test the skills in a way that is easy to grade. Hence, a test full of multiple-choice questions and grid-ins that a computer can score, plus one essay that’s written fast and graded faster.

But once such a standard is in place, all of the students realize that they only need that subset of the skills to do well on the test. Students only focus on the things they know will be on the test — grammar rules, for example. But not just any grammar rules. They learn to notice and categorically reject “mistakes” like fragments and comma splices — never mind that fragments have a perfectly legitimate rhetorical use, as you can see from the “sentence” before this one; never mind that Jonathan Swift, Benjamin Franklin, and even E. B. White (of “Strunk & White” fame) have all joined independent clauses with commas in their writing. If the standard says it’s wrong, it’s wrong. Fill in the bubble that says so.

The writing part of the SAT says it expects you to answer based on conventions of “standard written English”, which could arguably justify its rigid rules of never allowing fragments and comma splices. But that only strengthens my point, because informal writing is an important skill too. There are times when you want to write elaborately, but there are also times when you want to avoid it. For example, somebody told me the metaphors in my previous SAT post were too confusing. Metaphors are probably a bonus on the SAT, but it doesn’t matter how much I can show off my literary depth on this blog. It matters whether I can get the message across. That is, frankly, not what the essay teaches people to do at all; but more on that later.

The worst issue is that students never need to give an in-depth explanation of anything they learned. Because of its stringent time limit, the essay portion rewards quick, reckless writing much more than deep thought. The passages of the critical reading section can be interpreted in different ways, depending on the readers and their experiences; that’s why literature is so beautiful. And contrary to what math textbooks may suggest, explanation, with words instead of equations, is central to mathematics. (But this is old news from Lockhart’s territory: “Mathematics is the art of explanation” (5). Go read his lament (PDF) and everything next to it on my links page if you’re interested.) No multiple-choice question can assess whether you can lucidly provide reasons for everything you think, which is arguably the most important lesson math has for a layperson. This is, of course, an inevitable result of requiring that the test be administered and graded efficiently on such a massive scale. But the inescapable effect is that the SAT completely fails to assess deep thinking, and instead only rewards simple answers to simple questions. Can anybody dispute that higher learning demands more of the former than the latter?

Having written all this, I have to admit that even though the SAT’s predictiveness is lower than GPA’s, it’s still pretty high for a test that spans one morning. So there’s an objection I considered at this point: maybe the SAT only tests parts of one’s ability, but those parts might still be representative of the rest. One would expect a typical good speed-essay writer to also be better at other forms of writing, right?

Yes, of course, the writing section is still correlated with writing ability, so the SAT might be a faintly defensible predictor. But it’s important to note that the SAT is not just a predictor. It does not only enable admissions officers to make more informed decisions about students, while having zero side effects on the students themselves. The SAT is a huge force in the lives of us students, the concern of every college-bound junior and above; it shapes our study habits, our motivation, and our goals for learning.

To come back to the essay, encouraging students to practice writing 25-minute essays in order to improve their college-bound skills is like encouraging people to play Grand Theft Auto in order to improve their driving skills. (I’m not saying GTA is bad; I would take twenty-five minutes of playing it over twenty-five minutes of essay writing any day of the week.)

This sounds like a stretch, but bear with me. First, the speed-essay format gets rid of a lot of concerns — factual accuracy, for example. You don’t need to know when any of the wars you’re referencing actually took place, or who actually wrote the books you’re citing. Just write. The more, the better. This is an old objection, too, first noted by Dr. Perelman (MIT’s writing director) just after the SAT essay was introduced: “If you just graded them based on length without ever reading them, you’d be right over 90 percent of the time.” Driving games do that too! Just drive. You don’t need to move your hands around to operate the gearshift or turn signal or anything. Your car engine isn’t going to break down mysteriously, your turn signal won’t start flickering randomly, you’re not going to get lost in a desert and be forced to drink windshield wiper fluid until you meet a magical talking snake. Everything is isolated, idealized, predictable, and not one bit similar to the real deal.

Also, the easiest way to just have fun in GTA (especially if you only get 25 minutes to play and cannot save your game to continue later) is to ignore the plot, hijack the first car you see, and drive like crazy. So it is with the essay — ignore the heartfelt stance you’d develop if you had time, take the first remotely arguable side you think of, and write like crazy.

What happens if you try to apply the skills in the real world? Taking GTA to the real world will certainly get you arrested, sooner or later. No more driving for you. As for writing essays recklessly… I don’t know what the penalty is for being totally factually incorrect, but it doesn’t bode well for your grade. You may also offend people. Either way, you lose authority and the ability to write convincingly. You can still write, but if nobody else cares, you might as well be locked up in the literary world (metaphorically speaking).

…okay, that’s pushing it a bit far. But the basic point is there: writing, like driving, simply should not be done quickly and recklessly. I wrote this post in much, much more than 25 minutes. If I confined myself to such a time limit and applied all the strategies I had learnt for the SAT, I’d have to take one stance (“Standardized tests are the spawn of Satan himself and nobody should take them”) and defend it to the death. My writing would be snarky, hyperbolic, and completely unconvincing, because I don’t actually believe that stance! So why are we encouraging these frantic dashes?

A related question I asked in my paper was whether the SAT is coachable. Can dedicated companies help people game the test and get artificially high scores that don’t reflect their academic ability? There is apparently a Federal Trade Commission study that showed that test prep companies were indeed effectively coaching students, cited in The SAT: Aptitude or Demographics? and The Case Against Standardized Tests. Unfortunately, I could not find the original study online. Instead, I found a report by the ETS “The Effectiveness of Coaching for the SAT: Review and Reanalysis of Research from the Fifties To the FTC” (PDF), which spans 135 pages, critiquing it. (Somebody must have had fun coming up with that alliterative title.)

This was a common theme in the research — a majority of the studies freely available on the Internet were by the College Board. One of the most detailed defenses of standardized testing, “High-Stakes Testing in Higher Education and Employment” (PDF), is by researchers at the University of Minnesota and seems to have a lot of statistical authority; it’s definitely worth looking at. But its first author, Sackett, has also done College-Board-commissioned research before.

The economic motives are simple: the College Board profits if it can convince people that its tests are good; other researchers don’t profit if they convince people that those tests are bad, unless some sort of paywall is involved. Oh well. It’s tempting to dismiss these as corporate shill activity, but that’s an unproductive form of debate. I can only say that I don’t have the statistical knowledge or the time to judge. So I’ll leave it to readers to decide how cynical they want to be.

But. All these studies miss one point: what about students “self-coaching”, like me cramming 25-minute essays in the few weeks before my SAT while constantly griping about how useless it was? What if everybody’s scores are inflated because they know what will and will not be on the SAT?

What if the SAT is accurate only because it’s a self-fulfilling prophecy?

Here’s a thought experiment. Imagine if one day, the College Board decided to change its writing section to require writing a limerick about a particular topic. Students would start training — listing topical rhymes, finding words with the right syllable count and accents, picking words with precise connotations, learning about how to find a humorous punch line in twenty-five minutes. Before you could blink an eye, there’d be thick books from Barron’s and Kaplan and the Princeton Review and everybody else on How to Master the SAT Limerick.

And guess what? None of that is completely useless for college. It would still extend your vocabulary and literary horizons. The content of the SAT might still be a somewhat reasonable predictor — limerick-writing ability would still be correlated with vocabulary size and how much one reads. But it would also measure, with undiminished power, how much effort you’re willing to put into studying for the SAT to get into college, or how academically ambitious you are.

Even if the SAT tested how quickly students could recite the alphabet backwards, it would still be a somewhat accurate measure of students’ motivation if they knew that colleges took it seriously!

If this is the strongest explanation, we students are in a lose-lose situation. If we try to avoid pointless, possibly harmful, cramming for the SAT, we become indistinguishable from the large number of students who don’t study for the SAT because they’re not academically motivated.

I have no way to test this hypothesis, of course, and I don’t see how it’s possible — I can’t imagine any way to replicate the academic motivations in a controlled setting with a statistically significant population and a blatantly non-academic test. It seems even harder to conduct than a clinical trial.

But we still have life beyond the test to discuss, so let’s move on.

Readers should remind themselves that when we take the SAT, we have a specific target in mind, namely college admissions. It’s not some law of nature that’s set in stone; nobody is pointing a gun at our foreheads and forcing us to take it. And the SAT gets a lot of hype, but it is not the single factor that colleges look at. It’s not even a really important one, I dare say — it’s just the easiest to quantify and filter by. That’s all.

What does this mean? Dear fellow students: if anybody, especially yourself, tells you “You’ll never get into college C with SAT score X and grade-point average Y”, they are wrong. At best, they can tell you that the average student with SAT score X and grade-point average Y will not get into college C. But the average student has one testicle and one ovary. You are not the average student with your scores. If your SAT score is 1900 but you have stellar grades and passionate service records, you don’t have the same chance as a slacker who got a 1900 through some lucky guessing. Conversely, increasing your SAT score from 1900 to 2200 will not magically turn you from the average 1900 student into the average 2200 student in other regards.

The psychological hurdle here is simply that the SAT score, being a number, is easy to compare. Since we don’t know how colleges will look at or compare our achievements and essays, and we can’t compare our passion to other students’ passions in a statistical manner, we latch on to the SAT. It’s an integer. There exists a total order on the integers: antisymmetric, transitive, and total. And even though being able to apply a mathematical model is nice, there are some situations where it would be egregiously wrong to rely on it too much, and this is one of those situations.
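To make that contrast concrete, here’s a tiny sketch. The names, dimensions, and 0-to-10 scales are entirely hypothetical — the point is just that integers always compare, while multi-dimensional profiles often don’t:

```python
# Integers are totally ordered, so any two SAT scores compare cleanly;
# exactly one of <, ==, > always holds.
a_score, b_score = 1900, 2200
print(a_score < b_score)

# Hypothetical student profiles over (grades, service, essays),
# each dimension on a made-up 0-10 scale:
alice = (9, 8, 4)
bob = (6, 7, 9)

def dominates(p, q):
    """True if p is at least as good as q on every dimension."""
    return all(x >= y for x, y in zip(p, q))

# Neither profile dominates the other: this is only a partial order,
# with incomparable elements, so there is no canonical way to rank
# Alice against Bob.
print(dominates(alice, bob), dominates(bob, alice))  # -> False False
```

Dominance is about the weakest reasonable comparison you could define here, and even it leaves most pairs of students incomparable; any scheme that does rank everyone has to collapse those dimensions into one number, which is exactly the move this paragraph is complaining about.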

Recently a classmate showed me the scattergrams on our school’s Naviance system: you can search up a college and get a graph of the GPAs and SAT scores of applicants from our school, as well as whether they were accepted, deferred, or rejected. It also lists the average GPA and SAT for accepted students. Notably, the Caltech scattergram graphs the average SAT score of the one student from our school who has gotten accepted since the program has started collecting data, a 2400.

It’s easy to look at simple data like this and happily tell yourself “I have a good SAT score for my dream college”, or (more likely) to observe that you probably need improvement to get in; it’s much harder to know whether the admissions officers will sense your passion strongly enough. We all like concrete progress, things that can be measured in numbers that count upwards. Cookie Clicker, anyone? But the graph cannot show the thousands of other dimensions on which students perform, most of which could never be expressed numerically. No graph or study can.

But enough with the negative talk. I’m going to keep addressing people in high school: what should you be doing (in my humble, unverified opinion)? You should be doing what you love, what you are passionate about. If you don’t know what you’re passionate about, you should be looking for it by trying new things. It’s really that simple.

What a bunch of clichés, right? But after all, what else is there in life to do? So let me unpack those clichés in the context of this post a little, assuming that you probably wouldn’t accept completely throwing away all concerns of college while pursuing some fantastic idea about “passion”.

First, learn. There’s an amazing number of free resources on the Internet you can learn from. Take something at edX, Coursera, or Khan Academy; use Codecademy or Duolingo; watch CrashCourse all day long; just read Wikipedia or whatever, I don’t care. It’s probably going to be difficult to completely absorb an extra course of material alongside schoolwork, and that’s okay! In fact, that’s exactly the benefit of learning from the Internet. You get to try small bits of different things at your own pace. And if you find something you’re really interested in, you’ll naturally be willing to put in the effort.

Of course, there are a lot of other things you might want to learn that aren’t transmitted so well over the Internet. Maybe you want to learn to cook or dance or play an instrument (although I’m sure there are tutorials for those activities available online, and I don’t claim to know that you’ll definitely learn them better from a local teacher). If you’re in a college-bound mindset, you may feel pressured to avoid them because they are not “academic”, even if you really like doing them. Ignore that feeling; you will accomplish orders of magnitude more if you do something you want to rather than something you hate. And it’s also important to remember that you don’t have to go to college either. If you discover a deep passion that you could devote your entire life to, and it isn’t something you think you could study in college, maybe you should pursue it elsewhere. I have to admit I don’t know much about this option, but it is an option.

There are some more caveats to this somewhat quixotic plan. First, of course, there’s dealing with the mandatory schoolwork that results in your GPA. I’m going to suggest that you take it seriously, but not necessarily dedicate yourself completely — as Paul Graham phrased it, treat it as a “day job”. Of course, if you’re actually taking a course you’re interested in with a good teacher, there’s no reason not to dedicate yourself completely. Also, if you don’t think you are motivated enough to learn anything well outside of school, maybe it’s best for you to take advantage of the pressure that school gives you in order to learn. But to paraphrase Mark Twain (via d684n again), don’t let schoolwork interfere with your learning.

Also, do not do lots of random things for the sole purpose of putting them on your application. If you’re interested in lots of random things, that might be defensible; but I strongly doubt that most people are like that, and even if you are one, being a jack-of-all-trades doesn’t actually set you apart, unless you can invent something special in the combination of the random things you do. Doing too many things just detracts from the strength of each component and wastes your time.

So there you have it. Those are the alternatives. Unfortunately, I still can’t get around the major concession I concluded my last post with. Maybe studying for the SAT really is the most efficient way for you to improve your college chances, and maybe which college you go to really is important enough to justify doing so.

I can’t tell you whether the SAT plays a huge role in your future. I think the test is distorted… but it doesn’t really matter what I think about it.

But I can steal another line from Lockhart here:

> SALVIATI: If I object to a pendulum being too far to one side, it doesn’t mean I want it to be all the way on the other side.

The decision — how much time to spend on studying for a test, how much time to spend actually living — is still yours. I don’t expect you to give up SAT-cramming entirely; I didn’t. But please, at least consider the other options and what they can bring you. I really want to see what people might come up with if they did things differently.

Further links, some random, some not so random:

I guess I shall now go to sleep and do homework all day long tomorrow later today…

(note: the commenting setup here is experimental and I may not check my comments often; if you want to tell me something instead of the world, email me!)