Absurdities seem to come in clumps. While I was on the train reading Twitter earlier this week, I saw this. You may not see the final sentence, so I’ve cut it out for you here:
As absurdities go, it’s really hard to top that, but later that day, moments after I got into the office, I stumbled upon this document from ACT. It’s a summary of their top five reasons (full report) on Why Test-optional Policies Do NOT Benefit Institutions or Students. (Emphasis via capitalization is theirs. They don’t want you to think they’ve jumped on the bandwagon.)
There are several “WTF” moments in this document (my personal favorite being statisticians–who really should know better–making guesses about what some of Bill Hiss’s data might possibly say if only they could analyze it), but let’s start with the most beautiful. See if you can spot it here, as ACT attempts to debunk the claim of many researchers who don’t have to sell test services to make a living:
In the callout, ACT is picking on kids who score 10 or less on the ACT, using them as an example of why test-optional is a bad policy. In the world of multiple-choice tests, there is a threshold score for guessing: the score you’d expect to earn if you didn’t read any of the questions and simply filled in the bubbles. Some test prep experts I know estimate this score to be about 12 on the ACT.
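That estimate is easy to sanity-check: with no guessing penalty, the expected number of correct answers on each section is just the question count divided by the number of answer choices. Here’s a minimal sketch, assuming the standard ACT section structure; the raw-to-scale conversion varies by test form, so the scaled composite of “about 12” can’t be derived exactly from this, only ballparked:

```python
# Back-of-the-envelope: expected raw score from blind guessing on the ACT.
# Assumes the standard section structure and no guessing penalty; raw-to-scale
# conversion varies by form, so this only ballparks the "about 12" composite.

sections = {
    "English": (75, 4),   # (questions, answer choices)
    "Math":    (60, 5),   # math items have five choices
    "Reading": (40, 4),
    "Science": (40, 4),
}

for name, (n_questions, n_choices) in sections.items():
    expected_correct = n_questions / n_choices
    print(f"{name:8}: expect {expected_correct:.1f} of {n_questions} correct "
          f"({expected_correct / n_questions:.0%})")
```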
But let’s get back to the students who perform worse than random guessing and score 10 or less on the test. In case you were wondering, here is a chart of all ACT tests taken from 2002 to 2013, with the percentage who scored a 10 or less shown in the blue bar.
Can’t see it? Look harder.
It’s that little sliver of blue on top of the orange: In 2013, it was about 0.4% of all testers, or roughly four of every 1,000 students, or fewer than 8,000 students total. That number is about twice as high as in 2002, before many more students who probably weren’t thinking of college were forced to take the test anyway. Over 5,000 of those 8,000 are under-represented students of color, and the largest single group is very-low-income students. Just over 10% of those who listed a class rank were in the top quarter of their class, so the number of 4.0 students is an extraordinarily small sample, unless these kids all went to the same five colleges. They didn’t.
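For scale, the arithmetic behind that “fewer than 8,000” is straightforward. Here’s a quick sketch; the ~1.8 million tester count is my assumption (roughly the size of the 2013 ACT cohort), since the post itself only gives the percentage:

```python
# Rough arithmetic behind "fewer than 8,000 students total". The ~1.8 million
# figure is an assumption (approximately the 2013 ACT cohort size); the post
# itself only gives the 0.4% share.
testers_2013 = 1_800_000
share_scoring_10_or_less = 0.004                  # 0.4%, four of every 1,000
print(testers_2013 * share_scoring_10_or_less)    # -> 7200.0, under 8,000
```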
However, even these students–who’ve probably done everything their high school has asked of them, but who are poor, statistically likely to be from an under-resourced high school, with parents who almost certainly did not go to college, and who score very low on this test–have only (emphasis mine) a 30% chance of a B or better, and presumably an even higher chance of a C or better. And remember, these kids are so far outside the range of “college-ready” (a term ACT loves to use with school district administrators as they try to make taxpayers happy and sell more tests) that ACT suggests they shouldn’t be in college at all.
Of course, there doesn’t seem to be any control for other important factors that contribute to student success–like, oh, income, or parental attainment, or taxpayer support for your school, or the need to work in college, or whether you commute. (If I’m wrong, I’ll be the first to correct it publicly, but I’ve asked several people who know way more about statistics than I do, and after looking at this, they suggest I’m right.) If you don’t know that kids who score low tend to be poorer, tend to be from under-resourced high schools, and tend to have parents who are not college educated; and further, if you fail to understand that each of those factors in and of itself predicts college enrollment and attainment, well, as the kids say, I can’t even. You might want to read this. And if you don’t know that colleges like test scores because they predict wealth, you should read this.
Additionally, take a deeper look at the first chart. The real lesson, it would appear, is that if you’re a 2.0 student, you have a very low chance of getting a B-average in college, even if you score in the 99th percentile on the ACT. College admissions officers know this already. In fact, that’s the very basis for test-optional admissions: The realization that HS GPA is a way better predictor of success.
After we went test-optional, two representatives from ACT came to talk to us. I asked a simple question: Do you acknowledge that four years of high school more closely resembles four years of college than a three-hour test resembles four years of college? The answer they gave, of course, was yes. Testing agencies know this.
If you have that 35 and a 2.5 in high school, your chances are still only 50% (about the same as a student with a 20 ACT and a 3.3 GPA). And regardless of your test score–EVEN A 10–your chances go way up with your grades. (Again, before you look at other factors).
No one–not the most ardent critic of tests–has ever suggested that tests don’t predict something by themselves, but as the ACT report acknowledges, about 75% of students have scores commensurate with their HS performance, so in the majority of cases, ACT adds virtually nothing to understanding; it simply echoes the high school record. And it’s clear–even from the ACT data–that students with lower scores and high grade point averages have a solid chance of doing well in college. Well, maybe not for the 0.4%, but still.
If this doesn’t make sense, consider the words of one test-prep expert who called this report “cherry-picking B.S.” except he used a longer word for “B.S.”
Standardized multiple-choice tests (whose creator, Frederick Kelly, called them a measure of lower-order thinking skills) measure a specific type of skill, and all things being equal, choosing a “right answer” from four given is a skill you’d rather have than not have; every skill you bring to college (including many the ACT or SAT can’t measure) probably contributes in some way to success. Bill Sedlacek and others have shown this: In short, people with more skills tend to be more successful. Duh, as the kids say.
But we could devise lots of tests to measure creativity, leadership, drive, determination, the ability to overcome obstacles, a sense of humor, and even a realistic sense of self, all of which would likely add to our ability to predict college success. The question is: Is it worth it? And how much time would be dedicated to teaching students how to do well on these tests? It’s Campbell’s Law, all over again: the more any quantitative measure is used for high-stakes decisions, the more it distorts and corrupts the very process it’s meant to monitor.
The same ACT that criticizes the Hiss and Frank study for looking at the whole sample presumes to speak for “Institutions” using national data. There is no recognition that every university has a different purpose and mission, or even that some universities might have different results: The title of the report leaves little wiggle room.
And, of course, we do our own research, and we look at the things that predict academic success. For us, a 2.5 GPA and 48 credits earned in the first year is the critical outcome: Hit that, and you’re pretty much guaranteed to graduate. For us, ACT and SAT uniquely explain about 2% of the variance in that type of freshman performance, far less than GPA, and only about as much as our attempt to measure non-cognitive variables (which we developed without spending millions of dollars on research, and which are–unlike tests–almost perfectly gender, race, and income neutral). Other institutions, like the University of Georgia and the Cal System, have uncovered similar patterns.
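For readers wondering what “uniquely explain about 2% of the variance” means in practice: it’s the increase in R² when test scores are added to a regression that already includes high school GPA. Here’s a minimal sketch of that calculation; the data, variable names, and effect sizes are all synthetic and illustrative, not our actual model:

```python
# Minimal sketch: "unique variance explained" as incremental R-squared.
# All data below is synthetic; it illustrates the method, not any real model.
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(42)
n = 5_000
hs_gpa = rng.normal(3.0, 0.5, n)                      # high school GPA
act = 21 + 4 * (hs_gpa - 3.0) + rng.normal(0, 3, n)   # test score, correlated with GPA
college_gpa = 0.8 * hs_gpa + 0.01 * act + rng.normal(0, 0.5, n)

def r_squared(X, y):
    return LinearRegression().fit(X, y).score(X, y)

r2_gpa_only = r_squared(hs_gpa.reshape(-1, 1), college_gpa)
r2_gpa_plus_test = r_squared(np.column_stack([hs_gpa, act]), college_gpa)

# The test's unique contribution is the difference between the two R^2 values.
print(f"R^2, HS GPA alone:       {r2_gpa_only:.3f}")
print(f"R^2, GPA + test score:   {r2_gpa_plus_test:.3f}")
print(f"unique variance (delta): {r2_gpa_plus_test - r2_gpa_only:.3f}")
```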
Weigh the benefits of this 2% bump against the cost of standardized testing, against the ways the tests are misused (comparing one school district to another, or basing teacher pay on scores), and against the time spent prepping for the tests themselves that could be used for other things (teaching math, for instance). Then consider the ways in which testing perpetuates class, racial, and income divisions, especially when colleges use it to make decisions. Then ask if it’s all worth it.
Really thoughtful, well-supported thesis. One of the best I’ve read in support of colleges going test-optional. I would like to see the results of a study comparing student success (college GPA, graduation rate) for students who do not submit scores with those who do, compared by HS GPA. It would be interesting to see if Jon’s 2% effect holds up across institutions.
Great piece, Jon. I don’t know why, after all the years of analysis (including Hiss’s, which I helped introduce at NACAC many years ago), there’s still such debate about using test scores. The only thing I can figure is that a number looks like science.
Bates did one years ago based on 20 years of data (I introduced it and Bill Hiss at NACAC in Milwaukee); I don’t know about any others. It would be interesting to do a meta-analysis if there are similar studies from other institutions.
You wrote that a 2.5 GPA and 48 hours completed in the first year predict graduation at your school. Did you mean 48 hours in two years?