Wednesday, December 7, 2022

Test-optional Admissions and Data

by Patrick O'Connor, Ph.D.

Test-optional admissions is back in the news, as a prominent college announced last week that it will once again require test scores from everyone applying in Fall 2023. This announcement adds to the small, but in some cases notable, number of colleges that have announced a return to required testing.

Like the announcements before it, this notice is raising the hackles of many respected members of the college counseling community. Their rationale is fairly sound: the tests have long been accused of being racially and socially biased, and there are claims that students can prepare for the tests in a way that improves their scores but not their understanding of the content.

I’ve been a big champion of test-optional admissions, and still am—if test scores really don’t add anything relevant to the application, why are we asking students to take the tests? At the same time, both sides of the test-optional movement have been, and continue to be, extremely lax in presenting data that supports their side of the issue. Counselors know numbers don’t always tell the whole story of a student’s ability to learn and contribute to a college campus, but taking a stand on this issue without a serious look at the data ignores a big part of the story.

And why doesn’t data play a more prominent role? Well…

Colleges simply don’t have it, Part I. Many test-optional colleges made their decision to abandon testing because of COVID. Their rationale was largely pragmatic: if a bright student can’t take the test, they can’t earn a score—and if we require a score, we can’t admit them. The dire state of testing access at the height of COVID made the test-optional decision easier; without going test optional, no score meant no students, and no students means no college.

Colleges simply don’t have it, Part II. Since then, the colleges that went test optional have accumulated all kinds of data they could use to measure the need for tests. Once a college settles on an institutional definition of student success, it wouldn’t be hard to compare grades, test scores, degree completion, satisfaction with the college experience, and more. The college announcing its intention to require testing claims to have done just this—kind of. The press release it sent said it had “evidence” test scores helped predict student success, but anyone who’s taken a stats course knows there’s a huge gap between evidence and data. The two simply aren’t the same. (It turns out the college did use data, making one wonder why they didn’t say so in the press release.)
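To show how modest an analysis this would be, here is a minimal sketch of the kind of comparison described above: does adding a test score improve prediction of a chosen success measure beyond high school GPA? The column names, numbers, and the success measure (first-year GPA) are all invented placeholders, not any college's actual data; a real study would use the institution's own records.

```python
# Hypothetical sketch: does a test score add predictive value for a
# "student success" measure (here, first-year college GPA) beyond HS GPA?
# All numbers below are synthetic placeholders, not real admissions records.
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n = 1000

hs_gpa = rng.normal(3.3, 0.4, n)        # high school GPA (placeholder)
test_score = rng.normal(1150, 150, n)   # SAT-style score (placeholder)
# Invented relationship, purely for illustration.
first_year_gpa = 0.6 * hs_gpa + 0.0005 * test_score + rng.normal(0, 0.3, n)

X_without = hs_gpa.reshape(-1, 1)
X_with = np.column_stack([hs_gpa, test_score])

# Cross-validated R^2 with and without the test score.
r2_without = cross_val_score(LinearRegression(), X_without, first_year_gpa,
                             cv=5, scoring="r2").mean()
r2_with = cross_val_score(LinearRegression(), X_with, first_year_gpa,
                          cv=5, scoring="r2").mean()

print(f"R^2 using HS GPA alone:    {r2_without:.3f}")
print(f"R^2 adding the test score: {r2_with:.3f}")
# If the gain is negligible, the score isn't adding much predictive value.
```

A real analysis would of course have to account for things like range restriction among admitted students and look at outcomes such as degree completion and satisfaction as well; the point here is only that the comparison itself is straightforward for a college that wants to make it.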

Colleges don’t want to know what the data would show. The best defense I ever heard from a college for requiring test scores went something like this: We’re a very big school, and most students will spend almost all of their first two years in classes where the sole means of assessment is a multiple-choice test. If an applicant doesn’t “test well” on the ACT or SAT, why would we expect them to do well at our college?

That’s not a bad argument, but without data, it’s too easy to let the claim “make sense” and never test it, since any analysis suggesting there is no such relationship would put the college in a rough spot.


All of this reinforces my current stance on testing—if you think you need the tests, prove it with rigorously applied data. If you’re not going to do that, let’s continue to make college more accessible by leaving the tests out of the admissions equation.

