Explaining Test-optional with (almost) no statistics

I’ve been enjoying the holiday break, one of the nicest parts about working at a university.  It’s normally a time to sit back and take stock of the year while looking forward to getting back to work soon after the first of January.

As I checked my Twitter feed this morning, I noticed NACAC had posted a link to an opinion article in US News and World Report, written by Kathryn Juric of the College Board.  I know the author of an article seldom writes the headline or the teaser that links to it from the homepage, but this one grabbed me:  Colleges Must Keep the SAT Requirement.

OK, I think. I’m uniquely capable of responding, based on two things: First, I’m a member of the Midwest Regional Council of the College Board, and I also help plan the Midwest Regional Forum.  I like and respect the people I come in contact with there. Second, I work at a test-optional university.

The article sounds a bit defensive, at least to this English major who was fairly good at reading subtext amid context.  I can handle that, and I understand it.  As Upton Sinclair famously wrote,  “It is difficult to get a man to understand something, when his salary depends upon his not understanding it.”  If you feel your livelihood is under attack, you might take offense.  I would.  I am.

But the attempt to lump the whole test-optional approach into one collective movement that “scapegoats” all standardized exams is at best immature reasoning, and at worst sophistry, resting on the logical fallacies of begging the question and composition.  There certainly are people who are militantly anti-test.  And there are people who point out that performance varies by gender, income, and ethnicity, and who have concerns about access to college hinging on such an exam.  Those people hardly account for the whole test-optional movement, although their points are good ones that should not be dismissed.

For me, the move to test-optional was really two-fold: First, there was and is the research conducted by people outside the testing industry.  That research is, for the most part, conclusive and nearly unanimous: Standardized tests don’t tell us much we don’t already know, at least not at the 85% of universities in this country that are brave enough to acknowledge they’ll never run admissions like Princeton does.

Sidebar: A little bit of statistics without formal statistics talk: In our internal studies, standardized test scores uniquely explain very little of a student’s performance at DePaul.  (This is important: Other colleges may have different results, something the pro-testing people never seem to want to admit, which, I think, makes their argument much weaker.)  Standardized tests may appear to explain performance because they tend to co-vary with grades in college-prep classes, which are the most important predictor in every study I’ve ever seen, by a substantial measure.  (In other words, most students with high grades in a given school have higher test scores, and vice versa.  So test scores simply repeat and amplify the high school grade signal.)  What was most surprising to me was that this was true across all schools, with the possible exception of the very lowest schools on the socio-economic and college-bound scales.  We know poor kids, and kids from families without college-educated parents, are far less likely to go to college.  I suspect the most overlooked factor here is the very set of things colleges think those students need to be successful in the first place.  As I said in my post “The Myth of Need-blind Admissions,”  here:

It’s true that these institutions do a great job of funding poor students they admit.  The problem appears to be that they don’t admit many of them in the first place.  This is the myth of need-blind admissions: All these institutions (I think) claim to be need blind, but when they make admissions decisions, they only pay attention to the income part of low-income, not the residual effects.  If you use SAT or ACT; if you favor students who have lots of AP courses; if you effectively reward expensive test-prep programs; or even if you prize activities that can only be mastered if you have lots of time because you don’t have to work, you’re overlooking a lot of things that come with being poor, or even middle-class.  Need blind admissions is a nice, noble-sounding term. It’s not so pretty in reality.

The research and statistics part is important, of course, as we don’t do much in higher education without them.
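The incremental-validity point in the sidebar can be sketched without any real data.  The toy simulation below uses entirely made-up numbers (not DePaul’s actual results, and not any real study) to show the mechanism: if a test score mostly tracks high school grades, it looks predictive on its own, yet adds almost nothing once grades are already in the model.

```python
# Toy illustration with synthetic data: a test score that merely
# co-varies with high school GPA appears predictive alone, but its
# *unique* contribution on top of GPA is close to zero.
import numpy as np

rng = np.random.default_rng(0)
n = 5000

hs_gpa = rng.normal(3.0, 0.5, n)                     # high school GPA
test = 0.8 * hs_gpa + rng.normal(0, 0.4, n)          # test score tracks GPA
college_gpa = 0.9 * hs_gpa + rng.normal(0, 0.5, n)   # outcome driven by GPA only

def r_squared(X, y):
    """R^2 of an ordinary least-squares fit with an intercept."""
    X = np.column_stack([np.ones(len(y)), X])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta
    return 1 - resid.var() / y.var()

r2_test_alone = r_squared(test.reshape(-1, 1), college_gpa)
r2_gpa_alone = r_squared(hs_gpa.reshape(-1, 1), college_gpa)
r2_both = r_squared(np.column_stack([hs_gpa, test]), college_gpa)

print(f"test alone:          R^2 = {r2_test_alone:.3f}")
print(f"HS GPA alone:        R^2 = {r2_gpa_alone:.3f}")
print(f"unique test share:   {r2_both - r2_gpa_alone:.3f}")
```

The coefficients here are arbitrary; the point is only the shape of the result, which is the “repeat and amplify the signal” pattern described above.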

But for many, the test-optional movement is based on a different approach and philosophy.  So, without statistics, a few observations:

  • There is no doubt that standardized tests measure some type of intelligence.  The ability to quickly choose the correct answer from the four given is a skill, and it’s a fairly important skill: Separating the wheat from the chaff is often very important in logic, for instance, or even mathematics.  (The creator of the “bubble test,” Frederick Kelly, however, called this “lower-order thinking.”)
  • And as I’ve written before, selective colleges really like this: When you have thousands of well-qualified applicants, you can select those who have both proven academic success and that special skill that comes with doing well on standardized tests. It adds a small additional measure of precision, they think, to an inherently imprecise decision.  It’s unlikely any university in the Ivy League is going test-optional.  Despite their lofty reputations, they have too much riding on the SAT arms race.  It’s good for them to cite standardized test scores that are off the charts. If you want to look up your favorite, you can do so here, with 2011 IPEDS data.
  • However, college work is really not much like a standardized test.  And neither is life.  It’s rare that someone comes to you with a problem, tells you the answer along with three wrong ones, and then requires you to pick the correct one out of the bunch.  Usually, you find, the possible answers are numerous, no single answer is perfect, and often the problem presented can’t even be put in the form of a proper short question.
  • Nor does a standardized test tell us whether you’re going to be capable of sticking with a subject for a semester, adept at taking an assignment and spending hours researching and writing a paper, or likely to contribute to class discussions.  But you know what does?  Your record of doing just that for four years in high school.
  • As a parent, I’m concerned by the extent to which standardized testing has taken over much of what is done in schools today.  Make no mistake:  People in charge of schools are held accountable for the outcomes on these tests, and the result is multiple-choice tests in almost every class, including English literature and history. Taking a standardized or multiple-choice test is a skill that can be honed over time, and if performance on the test is the measure, kids are going to be tested this way.
  • I’m also concerned by the way standardized testing has come to dominate the junior year.  I attended a college program with my son, and three-quarters of it was about testing: where, when, how often, which test prep, etc.  I’ve known parents who made kids give up activities they love to prepare for college entrance exams.  Exams that, in the end, may not tell us much about anything important. And I think what is lost in the race is time to develop thinking, writing, problem solving, the aimless exploration that leads to discovery, creativity, and other important skills teenagers should be developing.

Tests may help us distinguish between dogs and poodles. We can posit, of course, that

  • All poodles are dogs (or) All good testers are smart
  • Not all dogs are poodles (or) Not all smart people are good testers
  • You can be a dog who is not a poodle (or) You can be smart without being a good tester
  • And we must admit that all non-dogs cannot be poodles (or) Let’s admit that if you’re not very smart, you won’t score well on standardized tests.  That’s not the point.
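The poodle/dog syllogism above is just a proper-subset relationship, and it can even be checked mechanically.  A trivial sketch (the names are invented for illustration, not data):

```python
# The "all poodles are dogs, not all dogs are poodles" relationship,
# expressed as sets.  Names are purely illustrative.
smart = {"Ada", "Ben", "Cara", "Dev"}     # the "dogs"
good_testers = {"Ada", "Ben"}             # the "poodles"

assert good_testers <= smart              # all poodles are dogs
assert not (smart <= good_testers)        # not all dogs are poodles
assert "Cara" in smart - good_testers     # smart without being a good tester
```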

Robert Sternberg, the Provost of Oklahoma State University, wrote a most eloquent piece about some of the conceptual problems with standardized testing.  I hope people at the College Board and proponents of standardized testing read it and consider that maybe everything they’ve come to believe about the value of tests might be wrong: not for everyone, not for every college, not in every situation, but for a substantial percentage of our students.  These students are bright, capable, talented, motivated, eager to learn, accomplished as students … but maybe not especially proficient at picking out the right answer.  And they shouldn’t be measured by a test created by someone who’s never taught them.

These students and the colleges who want them to become productive, educated people should really pose no threat to the College Board.  If you want to lead educational reform, start by acting educated.

2 thoughts on “Explaining Test-optional with (almost) no statistics”

  1. I’ve worked at two test-optional institutions. At both places, our decision to become test-optional was based primarily on research that demonstrated that high school performance, in the context of the courses taken, was the best predictor of success in the college classroom. The added validity provided by standardized tests was not significant enough to justify requiring the test of every applicant. Jon is correct that test-optional policies may not be for every institution; it’s unfortunate that Kathryn Juric’s position suggests that there is an absolute right and wrong on this issue.

