5 Reasons to Rethink the U.S. News College Rankings

Wednesday, September 9th was that day, again.

I’m talking, of course, about the day when U.S. News & World Report releases its yearly college rankings. As usual, the 2016 rankings were similar enough to the previous year’s to feel familiar, but different enough to get people to pay attention.

Once again, Princeton beat out Harvard and Yale for the coveted number one spot in the National University Rankings. Columbia, Stanford and Chicago tied for number four while MIT followed in seventh place.

Now, everyone knows that the U.S. News rankings are a big deal. They’re the place you go if you want to be able to slap an exact number on a university’s prestige. So influential are the rankings that colleges are routinely caught lying and falsifying data to inflate their scores.

But what does it actually mean that Princeton “beat out” Harvard, that Chicago and Stanford “tied” and that Columbia ranked higher than MIT?

It turns out the answer is a little unsettling.

U.S. News uses a formula relying on several weighted variables to calculate colleges’ scores. The factors it takes into account, and their weights, are (a rough sketch of the arithmetic follows the list):

  • Undergraduate academic reputation – 22.5%
  • Retention – 22.5%
  • Faculty resources – 20%
  • Student selectivity – 12.5%
  • Financial resources – 10%
  • Graduation rate performance – 7.5%
  • Alumni giving rate – 5%
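
To see how the weighting plays out, here’s a minimal sketch of the arithmetic in Python. The weights are the ones listed above; the per-factor scores are invented placeholders on a 0–100 scale, since U.S. News doesn’t publish raw factor scores in this form.

```python
# Minimal sketch of a weighted composite score using the weights listed
# above. The per-factor scores are invented placeholders on a 0-100
# scale, purely for illustration.

WEIGHTS = {
    "reputation": 0.225,
    "retention": 0.225,
    "faculty_resources": 0.200,
    "student_selectivity": 0.125,
    "financial_resources": 0.100,
    "graduation_rate_performance": 0.075,
    "alumni_giving": 0.050,
}

def composite_score(factor_scores: dict[str, float]) -> float:
    """Weighted sum of per-factor scores, each assumed to be on 0-100."""
    assert abs(sum(WEIGHTS.values()) - 1.0) < 1e-9  # weights sum to 100%
    return sum(weight * factor_scores[factor] for factor, weight in WEIGHTS.items())

# A hypothetical school: reputation and retention alone control 45% of
# the total, so strong numbers there dominate the final score.
example = {
    "reputation": 80,
    "retention": 80,
    "faculty_resources": 70,
    "student_selectivity": 80,
    "financial_resources": 60,
    "graduation_rate_performance": 80,
    "alumni_giving": 40,
}
print(round(composite_score(example), 1))  # -> 74.0
```

Note that nothing in this formula looks at what happens in a classroom – a point we’ll come back to.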

At first glance, this system is perfectly reasonable. For example, it makes sense that schools that have more financial resources at their disposal and attract better students should score higher.

But the way U.S. News actually puts specific numbers on these factors has some unintended consequences – consequences that matter not just because they make the U.S. News college rankings questionable at best (I mean, we already knew that!) but because they affect the kind of experience colleges end up providing for students.

Here are 5 of these unintended consequences that you’re not likely to read about in this year’s edition of U.S. News & World Report and that showcase the dark side of college rankings.

  1. The “reputation” factor turns the rankings into a popularity contest

One of the most important criteria U.S. News considers is colleges’ “undergraduate academic reputation” – colleges that people think are good get higher scores, ones that people aren’t so hot on fare worse.

And specifically, the people U.S. News surveys to get a feel for colleges’ reputations are presidents and other administrative officials in higher education. Secondarily, they also solicit input from guidance counselors at public high schools. In other words, 22.5% of a given school’s score in the U.S. News rankings is determined by how high-ranking officials in academia and a handful of guidance counselors feel about that school.

U.S. News argues that calculating 22.5% of colleges’ scores based on the opinions of administrators and guidance counselors helps the rankings “account for intangibles,” but observers of a more skeptical bent have pointed out that perhaps this way of doing things just turns 22.5% of the rankings into a big popularity contest.

There’s also the problem that in this case, the people doing the ranking are in fact also the people being ranked. There’s nothing to stop college presidents from rating competing schools poorly to bolster their own schools’ standings – in fact, considering some schools are willing to go as far as fabricating numbers to boost their scores, it would be quite shocking if this wasn’t happening at least to some extent.

And finally, there’s the sheer circularity of basing a ranking that helps determine a school’s reputation on that school’s reputation. Placing higher on the U.S. News college rankings enhances a school’s reputation, and having a good reputation makes it more likely that a school will place higher on the U.S. News college rankings. So making “reputation” 22.5% of the final score helps ensure name-brand schools remain name-brand schools and Podunk State University remains Podunk State University.

  2. The “resources” factor drives up tuition rates

Despite the name, the “financial resources” factor of the rankings isn’t a measure of how much money schools have but rather of how much they spend. It should probably be renamed the “big spender” factor.

The rationale, in the words of U.S. News, is that “generous per-student spending indicates that a college can offer a wide variety of programs and services.” So 10% of a school’s score is based on its average spending per student.

The obvious flaw with this metric is that it doesn’t really matter what schools spend this money on or whether they spend it in a way that improves the quality of the education offered (with the caveat that spending on sports, dorms and hospitals doesn’t count). It just matters that they spend money.

Buying a herd of alpacas for Zoology 101? Sure, more points in the U.S. News rankings.

Sending a group of psychology majors on an all-expenses-paid vacation to Cancun to conduct first-hand research on the emotional effects of sunbathing? Yep, that counts too.

It takes only a very rudimentary understanding of economics to see that if you give schools an incentive to spend more money, they will spend more money, and if they spend more money, they’ll have to raise their tuition fees.

At a time when the United States is in a crisis of soaring college tuition fees, U.S. News has done pretty much the least helpful thing possible: created a system for rewarding schools that spend money just for the sake of spending money.

  3. The “student selectivity” factor corrupts the admissions process

The “student selectivity” factor actually breaks down into three sub-factors: enrolled students’ SAT/ACT scores, percentage of enrolled students who graduated at the top of their high school classes and overall acceptance rates.
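
To make the incentive concrete, here’s an illustrative sketch of how a selectivity sub-score along these lines could be computed. The sub-weights and scaling are my own assumptions for the sake of the example – U.S. News doesn’t spell out those internals here – but the acceptance-rate term shows why rejecting more applicants pays off.

```python
# Illustrative sketch of a "student selectivity" sub-score built from the
# three sub-factors named above. The sub-weights and scaling are assumed
# for this example; they are not U.S. News's published internals.

def selectivity_score(avg_sat: float, pct_top_of_class: float,
                      acceptance_rate: float) -> float:
    sat_component = (avg_sat / 1600) * 100        # scale average SAT to 0-100
    class_rank_component = pct_top_of_class       # already a 0-100 percentage
    rejectivity = (1 - acceptance_rate) * 100     # lower acceptance rate -> higher score
    # Hypothetical sub-weights: test scores weighted most, acceptance rate least.
    return 0.5 * sat_component + 0.4 * class_rank_component + 0.1 * rejectivity

# The same student body looks "more selective" purely because more
# applicants were rejected:
print(round(selectivity_score(1400, 85, 0.30), 2))  # -> 84.75
print(round(selectivity_score(1400, 85, 0.10), 2))  # -> 86.75
```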

Because this metric rewards a particular definition of “student selectivity,” college applicants will feel its effects directly during the admissions process. By tinkering with the way they admit students – weighting standardized test scores and high school class standing more heavily – schools can inflate their scores on this factor of the college rankings. And don’t think for a second that they wouldn’t stoop to that level!

As long as colleges are rewarded for prizing SAT scores and class standing above all else, they’re less likely to take chances on students with outstanding accomplishments in other areas who don’t stack up well on these measures. They’re also less likely to admit lower-income applicants, who don’t perform as well on standardized tests on average. (Of course, given how the U.S. News rankings help drive up costs, these students will have a hard time paying their fees even if they do get accepted.)

Finally, since part of this factor is based on overall acceptance rate, colleges are given a stronger incentive to attract as many applicants as possible, regardless of whether the applicants they’re attracting are good fits or have any chance of getting in. The more applicants colleges can reject, the better – it makes them more “selective”!

  4. None of the factors measure educational quality

With all this talk about the different factors included in the college rankings, you might have noticed that one factor is conspicuously absent from U.S. News’ formula: any kind of measure of the educational quality these schools actually provide.

No measure of students’ educational outcomes. No measure of job prospects. No measure of whether students leave these institutions any more educated than they entered. Considering the whole point of higher education is, well, to educate people, it’s certainly a bit odd that U.S. News makes no effort to measure the educational experience the colleges it ranks actually provide.

Of course, believers in the U.S. News college rankings might argue that educational quality is just one of those “intangibles” measured by the “reputation” factor’s national survey of college administrators. But does anyone have any guesses as to how much the president of Harvard knows about the educational experience at University of Alabama–Huntsville?

It’s possible that educational outcomes are just too darn hard to measure. After all, an education is a complex thing. But if it’s the case that measuring educational effectiveness simply isn’t possible, this raises a rather unpleasant question: what are the U.S. News college rankings really measuring?

  5. It’s just too easy to game the system

Although there have been some high-profile examples of schools fudging their data in attempts to move up the U.S. News rankings, the truth is that colleges really don’t need to go so far as to fabricate numbers to game the system.

Colleges can easily manipulate their scores on almost every factor in the college rankings without making real improvements to the quality of the education they provide.

Want to score higher on “student selectivity”? Encourage applications from students who don’t have a chance of being admitted and let in applicants with the highest SAT scores.

Want to look like you have more “financial resources”? Spend money indiscriminately.

Want to do well on the “retention” factor? Make your courses easier.

In fact, pretty much the only factor that can’t be manipulated is the “reputation” factor. College presidents are always going to put big-name schools everyone knows about at the top of the list, they’re always going to be clueless about what goes on at the vast majority of the schools they’re ranking and they’re often going to mark down competing colleges. You can’t do a thing to change that!

The U.S. News college rankings can seem like a bit of harmless fun, but they have a darker, more insidious side hidden from public view. Because of the vast audience the rankings reach, colleges vie to move up in the U.S. News pecking order – which would be a good thing if this competition motivated colleges to improve educational quality.

Unfortunately, educational quality carries little weight in the rankings, and colleges instead manipulate their scores by making changes that aren’t just superficial but are actually detrimental to students’ experience. By driving up costs, encouraging admissions officers to put SAT scores front and center, emphasizing prestige over substance, rewarding schools for turning down as many applicants as possible and giving colleges an incentive to manipulate their performance on simplistic metrics rather than improve the overall educational experience they offer, the U.S. News rankings feed into many of the worst parts of today’s higher education system. And in the end, students – especially lower-income students, who take out more debt to cover high tuition fees and tend to perform worse on standardized tests – are the ones who pay.

It’s safe to say that students, parents and schools alike would be better off if the U.S. News college rankings magically disappeared from the face of the planet. But realistically, we’ll be going through this exercise again next year, and even those of us who should know better will be indulging in our annual guilty habit of perusing U.S. News & World Report. Still, at least if in September 2016 you find yourself wondering what exactly it means that Harvard “beat” Yale again this year, you’ll know the answer: pretty much nothing.

By Niels V.