
With thousands of colleges to choose from, it’s no wonder that high school students and their parents turn to college rankings to help guide their college search. Rankings are an especially helpful way for prospective first-generation college students to narrow down the options. But it’s important to understand how the rankings work and how, perhaps more often than many people care to admit, the numbers get skewed.

The Practice of Rating Universities and Colleges

In an August 2013 speech on the affordability of college, President Obama addressed the practice of organizations ranking universities and their programs. He said, “Right now, private rankings like U.S. News and World Report puts out each year … encourage a lot of colleges to focus on ways to … game the numbers.” One solution he suggested for changing this practice was to revamp the ratings system and rank higher education institutions by their quality and value. The President stated that doing so would stop some universities from continuing to raise their tuition and fees, as well as keep overall tuition costs down at schools that wanted government awards and incentives.

As the system stands now, these statistics and rankings are released annually, and every year students, parents and educators refer to them when making essential decisions about the future. While there’s no doubt that simple numbers cannot capture every aspect of college life or the reputation of a particular department or degree, the question remains whether these rankings serve any real purpose at all.

The U.S. News & World Report University Ranking System

The most cited, most publicized higher education ratings system is published annually by U.S. News & World Report. This publication gave a brief summary of its methodology in “How U.S. News Calculates Its Best College Rankings.” According to this article, the steps are:

  1. Categorize the institution by its mission.
  2. Gather data for 16 areas regarding academic performance.
  3. Create a composite, weighted score for each school.
  4. Rank the institutions against peers in their category.

Schools are not ranked if they do not use SAT or ACT scores, if they receive inadequate numbers of assessments from peers, if fewer than 200 students are enrolled, if nontraditional students comprise a large percentage of the student body or if there are no first-year students.
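The composite, weighted scoring in step 3 can be sketched in a few lines. The schools, indicators and weights below are invented for illustration; they are not the actual U.S. News data or formula, which spreads weight across 16 indicators.

```python
# Illustrative composite-score ranking (steps 2-4 above).
# All school names, statistics and weights are hypothetical.

schools = {
    "College A": {"graduation_rate": 0.91, "peer_assessment": 4.2, "acceptance_rate": 0.18},
    "College B": {"graduation_rate": 0.84, "peer_assessment": 3.8, "acceptance_rate": 0.35},
    "College C": {"graduation_rate": 0.88, "peer_assessment": 4.5, "acceptance_rate": 0.22},
}

# Hypothetical weights; a real methodology distributes weight across many more indicators.
weights = {"graduation_rate": 0.5, "peer_assessment": 0.3, "acceptance_rate": 0.2}

def normalize(values):
    """Rescale raw indicator values to 0-1 so different units can be combined."""
    lo, hi = min(values), max(values)
    return [(v - lo) / (hi - lo) if hi > lo else 0.0 for v in values]

def composite_scores(schools, weights):
    names = list(schools)
    scores = {name: 0.0 for name in names}
    for indicator, w in weights.items():
        norm = normalize([schools[n][indicator] for n in names])
        # A lower acceptance rate counts as "better", so invert that indicator.
        if indicator == "acceptance_rate":
            norm = [1 - x for x in norm]
        for n, v in zip(names, norm):
            scores[n] += w * v
    # Rank against peers: highest composite score first.
    return sorted(scores.items(), key=lambda kv: kv[1], reverse=True)

for rank, (name, score) in enumerate(composite_scores(schools, weights), 1):
    print(rank, name, round(score, 3))
```

Even this toy version makes the critics’ point concrete: the final order depends entirely on which indicators are included and how each one is weighted.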

Critiques of the U.S. News & World Report Ranking System

While the system for determining these rankings seems relatively thorough at first glance, it has generated fierce criticism. In her blog, higher education journalist Lynn O’Shaughnessy wrote, “Despite the annual hoopla over these college rankings, U.S. News is a shell of its former self. It’s not even a magazine anymore. It now rates all sorts of stuff for its survival … U.S. News ranks such things as hospitals, cars, diets, vacations and health insurers!” She critiques its methodology by saying it does not measure learning and educational experience. Instead, O’Shaughnessy says, the rankings encourage universities and colleges to inflate their rejection rate by marketing to students who are ineligible for entry.

The U.S. News algorithm has even prompted some institutions to cheat by providing false data. In The Washington Post article “Five colleges misreported data to U.S. News, raising concerns about rankings, reputation,” Nick Anderson reported that Tulane University, Bucknell University, Claremont McKenna College, Emory University and George Washington University, all well-regarded schools, admitted to inflating test scores or other information about their incoming students.

Another glaring issue concerns finances. The rankings by U.S. News & World Report ignore compelling factors such as an institution’s affordability and the average amount of debt students carry upon graduation. As a result, the rankings favor elite institutions that can afford to spend more, often at the expense of students who graduate with heavy debt.

In “The Order of Things,” published in The New Yorker, Malcolm Gladwell argues against the practice of ranking educational institutions, opining that there is no truly objective way to measure quality. For example, nearly a quarter of an institution’s final score on the U.S. News & World Report scaling system depends upon its reputation. Yet the way this score is formulated entails having deans and provosts rate their peer institutions. The irony here is that most deans and provosts do not track the changes at their peer institutions of higher learning and depend instead on the existing reputation of the college. Gladwell points out that the primary factor driving these institutions’ reputation is the existing U.S. News rankings. Thus, there is little to no change in these ratings from year to year. Gladwell also notes that the rating system pays no attention to whether an elite institution is worth the high tuition it charges and that the rating algorithm does not really weigh educational outcomes.

Alternative University and College Ranking Systems

Dissatisfaction with the U.S. News & World Report rating process has sparked varying responses in the educational community. Some institutions of higher education refuse to provide statistics for the annual survey. For example, the reputable liberal arts institution Reed College has refused to participate for years. In a 2009 interview published in The Oregonian, Reed College President Colin Diver dismissed the U.S. News & World Report ratings system as ignorance-based and gossip-driven. This stance has not stopped U.S. News from procuring as much data as it can independently of Reed College and listing the school as a top contender in several categories.

Another response involves creating algorithms to determine the quality of schools. Some organizations have created their own higher education ranking systems, utilizing criteria they feel more accurately reflects student concerns. They include:

  • Best Bang for the Buck Rankings: Washington Monthly assembles this list of colleges that offer students education at an affordable price. Statistics on the list include the percentage of students receiving Pell Grants, graduation rate, default rate and the school’s net price. This system has generated a good deal of media attention and addresses President Obama’s concern about ranking systems that do not take cost and debt into account.
  • Alumni Factor: Graduate satisfaction and success are the main areas of focus of this ranking system. The Alumni Factor is a scaling system that rates schools based on 15 characteristics, including career preparation, overall happiness, financial value, intellectual development and how likely graduates are to recommend the school to friends.
  • Parchment Student Choice College Rankings: This rating system is based on one key question: When students are accepted to multiple institutions, which one do they pick? The site uses over 250,000 survey results to rank the most-chosen colleges, and allows would-be applicants to search schools by student characteristics (such as class rank or major) so that they can get a more accurate picture of student life.

The Arbitrary Nature of Ranking

In 2010, The Daily Beast released its evaluation of what it deemed to be the “100 Happiest Colleges.” While the exercise was partly for fun and games, it pointed to a couple of issues. First, quality of life can drive many factors in a school’s success, including academic performance and student retention rates. The second issue, however, was a bit of a twist. The Daily Beast created its pool of institutions for ranking from the top 50 schools in several categories as determined by U.S. News & World Report. Rather than reworking an inaccurate or broken rating system, this one reinforced it.

The Happiest Colleges list was a follow-up to another much-publicized ranking on The Daily Beast, that of “The 50 Most Stressful Colleges.” To create this list, the authors used notably different criteria, including:

  • The cost: Using information from the National Center for Education Statistics, The Daily Beast tallied not just the cost of tuition, but also of room and board. The inclusion of this information is valuable considering that the cost of living, even as a student, is far greater in New York City or San Francisco than in a small college town.
  • Competitiveness: Using statistics from the U.S. News & World Report rankings, The Daily Beast measured an institution’s academic rigor.
  • Acceptance rate: Based on the belief that competitive schools tend to have more competitive students, The Daily Beast factored in data on acceptance rates from U.S. News & World Report.
  • Engineering: In an interesting twist, The Daily Beast considered schools more stressful if they had notable graduate programs in engineering that were considered rigorous.
  • Crime on campus: The Daily Beast used data from the U.S. Department of Education to rank the level of crime at each school relative to the other institutions in the study.

All of this creativity in ranking points to a core issue: selecting the criteria is itself subjective, as is deciding how much weight each criterion should carry. That is why Gladwell goes a step further than some critics in his criticism of U.S. News & World Report, saying that ranking systems are arbitrary in general and should not carry significant weight in determining educational quality or decisions.

To support his argument, Gladwell cites Indiana University law school professor Jeffrey Stake’s project, The Ranking Game. The game consists of a spreadsheet filled with statistics on every law school in the United States. Players select their criteria, weigh it and formulate their own ranking algorithm. With repeated adjustments to criteria and the weight given to each criterion, users discover how the rankings change dramatically at each turn. 
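The effect Stake’s game demonstrates is easy to reproduce: rank the same statistics under two different weightings and watch the order flip. The schools, statistics and weights below are made up for illustration; they are not Stake’s actual data.

```python
# A toy version of the "Ranking Game" idea: identical (hypothetical) data,
# ranked under two different arbitrary weightings.

data = {
    "School X": {"reputation": 0.9, "cost": 0.3, "outcomes": 0.60},
    "School Y": {"reputation": 0.5, "cost": 0.9, "outcomes": 0.80},
    "School Z": {"reputation": 0.7, "cost": 0.6, "outcomes": 0.85},
}

def rank(data, weights):
    """Score each school as a weighted sum, then sort best-first."""
    scored = {
        name: sum(weights[k] * stats[k] for k in weights)
        for name, stats in data.items()
    }
    return [name for name, _ in sorted(scored.items(), key=lambda kv: -kv[1])]

# Weight reputation heavily, as the U.S. News formula effectively does...
print(rank(data, {"reputation": 0.6, "cost": 0.1, "outcomes": 0.3}))
# ...then weight affordability and outcomes instead: the order reverses.
print(rank(data, {"reputation": 0.1, "cost": 0.5, "outcomes": 0.4}))
```

Nothing about the underlying schools changes between the two calls; only the weighting does, yet the first and last places swap.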

Ranking the Rankers

Ultimately, there is no single, objective list that captures every desirable quality in an institution and accurately rates it against its peers. In the end, it’s up to students, families and educators to decide how much weight, if any, to give the various college ranking systems. As Valerie Strauss points out in her article “The problem with the U.S. News college rankings,” an educational institution’s value can never be reduced to a number.