Criticism of college and university rankings refers to critiques, voiced by faculty and administrators at institutions of higher education in both the United States and Canada as well as in media reports, of various rankings publications.
Arguments critical of U.S. News & World Report Best Colleges Rankings include that it is not possible to arrive at a single number which characterizes university performance; ratings can be easily manipulated; and ratings may include subjective characteristics, like "reputation", as determined by surveying university administrators, such as chancellors or deans.[1] Critics say rankings have incentivized institutions to encourage more unqualified students to apply (in order to increase selectivity) and are a better measure of the abilities students had when they arrived than what they learned from higher education. In 2023, a third of the 196 law schools annually surveyed had withdrawn cooperation from the U.S. News rankings.
In 2006, 26 of 47 universities in Canada refused to complete the annual Maclean's Guide to Canadian Universities surveys. Subsequently, 11 Canadian universities issued a joint statement describing the rankings as "over-simplified and arbitrary".[2]
In 1995, Reed College refused to participate in the U.S. News & World Report annual survey. According to Reed's Office of Admissions, "Reed College has actively questioned the methodology and usefulness of college rankings ever since the magazine's best-colleges list first appeared in 1983, despite the fact that the issue ranked Reed among the top ten national liberal arts colleges. Reed's concern intensified with disclosures in 1994 by The Wall Street Journal about institutions flagrantly manipulating data in order to move up in the rankings in U.S. News and other popular college guides. This led Reed's then-president Steven Koblik to inform the editors of U.S. News that he didn't find their project credible, and that the college would not be returning any of their surveys."[3]
Rolling Stone, in its October 16, 1997, issue, argued that Reed's rankings were artificially decreased by U.S. News after the college stopped submitting data to the magazine.[4] Reed has also made the same claim.[3] In discussing Reed's decision, President Colin Diver wrote in an article for the November 2005 issue of the Atlantic Monthly, "by far the most important consequence of sitting out the rankings game, however, is the freedom to pursue our own educational philosophy, not that of some newsmagazine."[5]
Associated Students of Stanford University (ASSU) Vice-President Nicholas Thompson founded FUNC, the "Forget U.S. News Coalition", in 1996 as a show of support for Reed College's decision not to participate in the U.S. News & World Report survey.[6][7] FUNC eventually spread to other colleges and universities and was composed of a "group of students at universities across the country who argue that ranking something as complex and variable as a college education with a single number is an oversimplification. FUNC claims that the process makes college administrations focus on numerical rankings rather than on educating students."[8] FUNC also involved then-Stanford President Gerhard Casper. On September 23, 1996, Casper sent a letter to James Fallows, editor of U.S. News & World Report, stating, "As the president of a university that is among the top-ranked universities, I hope I have the standing to persuade you that much about these rankings - particularly their specious formulas and spurious precision - is utterly misleading."[9]
In January 1997, then-president of Alma College, Alan Stone, asked 480 colleges to boycott the U.S. News & World Report rankings because of the peer assessment survey, which accounts for 22.5% of a college's ranking.[10] According to the Chronicle of Higher Education, in 1996, Alma College surveyed 158 colleges about the rankings. The results indicated that "84 per cent of the respondents admitted that they were unfamiliar with some of the institutions they had been asked to rank. Almost 44 per cent indicated that they 'tended to leave responses for unfamiliar schools blank.'" Stone stated, "this makes me wonder just how many votes are being considered for each school's academic-reputation ranking."[11][12]
In February 1997, Stanford University contemplated following both Reed and Alma by not filling out the ranking survey, a move advocated by FUNC.[13] On April 18, 1997, Casper issued a letter critical of U.S. News & World Report college rankings titled "An alternative to the U.S. News & World Report College Survey".[14] Casper's letter circulated among college presidents and led to Stanford's decision to "submit objective data to U.S. News" while withholding "subjective reputational votes".[15] Stanford also announced at this time that it would post information about the university on its website.[16] In 1998, Stanford posted an alternative database on its website, stating: "This page is offered in contrast to commercial guides that purport to 'rank' colleges; such rankings are inherently misleading and inaccurate. Stanford believes the following information, presented without arbitrary formulas, provides a better foundation for prospective students and their families to begin comparing and contrasting schools."[17] It has since been posted annually as the "Stanford University Common Data Set".[18] FUNC eventually disbanded and Stanford currently participates in the survey.[19]
St. John's College, which since 1937 has followed the Great Books Program, runs counter to the usual emphasis on rankings and selectivity. St. John's has chosen not to participate in any collegiate rankings surveys and has not sent the surveying publications the information they request. Nevertheless, the school is still included in the U.S. News college ranking guide, where it ranks in the third tier; this placement may reflect the school's decision not to send information to U.S. News. President Christopher B. Nelson stated that, "in principle, St. John's is opposed to rankings." He notes that:
In September 2006, in a movement led since 2005 by the University of Toronto, 26 of the 47 universities annually surveyed in Canada jointly refused to participate in the national Maclean's University Rankings survey.[20] The president of the University of Alberta then stated that:
Criticism of college and university rankings has been voiced by a 2007 movement which developed among faculty and administrators in American institutions of higher education. It follows previous movements in the U.S. and Canada (by schools in the 1990s such as Reed College, Stanford University, and Alma College, as well as a number of universities in Canada in 2006) which have criticized the practice of college rankings. Those who criticize the rankings argue that it is not possible to come up with a single number that characterizes university performance. Ratings, as argued by academic institutions and their leaders, can be easily manipulated and include such subjective characteristics as "reputation", determined by surveying university administrators such as chancellors or deans.[21] The methodology of many rankings (e.g., U.S. News & World Report 2015 Best Engineering Schools Rankings) emphasizes research expenditures (such as grants and contracts) as the only measure of scientific accomplishment, despite the concern that measuring science by the amount of money spent, rather than by the importance and impact of scientific discoveries or the depth of the ideas, could encourage costly projects that are not necessarily scientifically sound.[22]
In 2007, educators in the United States began to question the impact of rankings on the college admissions process, prompted in part by the Washington Post article "The Cost of Bucking College Rankings"[26] by Michele Tolela Myers (the former president of Sarah Lawrence College). Because Sarah Lawrence College dropped its SAT test score submission requirement for its undergraduate applicants in 2003[27] (thus joining the SAT optional movement for undergraduate admission), the college has no SAT data to send to U.S. News for its national survey. Of this decision, Myers states, "We are a writing-intensive school, and the information produced by SAT scores added little to our ability to predict how a student would do at our college; it did, however, do much to bias admission in favor of those who could afford expensive coaching sessions."[26][28]
As a result of this policy, in the same Washington Post article, Myers stated that: "I was recently informed by the director of data research at U.S. News, the person at the magazine who has a lot to say about how the rankings are computed, that absent students' SAT scores, the magazine will calculate the college's ranking by assuming an arbitrary average SAT score of one standard deviation (roughly 200 points) below the average score of our peer group. In other words, in the absence of real data, they will make up a number. He made clear to me that he believes that schools that do not use SAT scores in their admission process are admitting less capable students and therefore should lose points on their selectivity index."[26][29]
Myers further stated that "several faculty members and deans suggested that perhaps it was time to stop playing ranking roulette and opt out of the survey."[26] Myers then noted that at the NEAIR (North East Association for Institutional Research) 33rd Annual Conference in 2006, a talk given by U.S. News[30] "indicated that if a school stops sending data, the default assumption will be that it performs one standard deviation below the mean on numerous factors for which U.S. News can't find published data. Again, making up the numbers it can't get. The message is clear. Unless we are willing to be badly misrepresented, we had better send the information the magazine wants."[26][29]
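The policy Myers describes amounts to a simple imputation rule: a school that withholds a statistic is assigned the peer-group average minus one standard deviation. The following sketch is only a hypothetical illustration of that arithmetic; the function name and all figures are invented, not U.S. News's actual code or data.

```python
# Hypothetical illustration of the imputation rule Myers describes:
# a school reporting no SAT data is assumed to score one standard
# deviation (roughly 200 points, per her account) below the peer-group
# average.

def imputed_sat(peer_mean: float, peer_sd: float = 200.0) -> float:
    """Score assumed for a school that withholds SAT data."""
    return peer_mean - peer_sd

# With a made-up peer-group average of 1310, the assumed score lands
# well below every actual peer:
assumed = imputed_sat(1310.0)
print(assumed)  # 1110.0
```

Because the imputed value sits a full standard deviation below the mean by construction, a non-reporting school is automatically ranked near the bottom of its peer group on that factor, which is exactly the penalty Myers objects to.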
U.S. News & World Report issued a response to this article on March 12, 2007, which stated: "Sarah Lawrence's decision is unique, and the magazine's handling of it is still under consideration. Some colleges have made SAT or ACT scores optional in the admissions process, but to our knowledge, no other major college has decided to disregard them completely. Our rankings are painstakingly tabulated, using the best data available. U.S. News data researchers regularly participate in briefings and conferences where the most complicated nuances of the process are discussed with the ranked institutions. We regularly adjust to changes in the educational environment, and we plan to address this circumstance in a similar manner."[28]
The Presidents' Letter (dated May 10, 2007), developed by Lloyd Thacker of the Education Conservancy, was sent to college and university presidents in the United States in May 2007, concerning the U.S. News & World Report college rankings. The letter does not ask for a full boycott but rather states that:
"while we believe colleges and universities may want to cooperate in providing data to publications for the purposes of rankings, we believe such data provision should be limited to data which is collected in accord with clear, shared professional standards (not the idiosyncratic standards of any single publication), and to data which is required to be reported to state or federal officials or which the institution believes (in accord with good accountability) should routinely be made available to any member of the public who seeks it."[31]
Instead, it asks presidents not to participate in the "reputational survey" portion of the overall survey (which accounts for 25 percent of the total rank and asks college presidents to give their subjective opinion of other colleges). The letter also asks presidents not to use the rankings as a form of publicity:
Twelve college and university presidents signed the letter in early May 2007.[32] The letter has since gathered sixty-one signatures, and others may be added at a later date.[33]
On June 19, 2007, during the annual meeting of the Annapolis Group, which represents over 100 liberal arts colleges, members discussed the letter to college presidents. As a result, "a majority of the approximately 80 presidents at the meeting said that they did not intend to participate in the U.S. News reputational rankings in the future."[34] However, the decision whether to fill out the reputational survey will be left to each individual college, since "the Annapolis Group is not a legislative body and any decision about participating in the US News rankings rests with the individual institutions."[24]
The statement also said that its members "have agreed to participate in the development of an alternative common format that presents information about their colleges for students and their families to use in the college search process."[24] This database will be web-based and developed in conjunction with higher education organizations including the National Association of Independent Colleges and Universities and the Council of Independent Colleges.
The new database was described in Time magazine as "a web-based alternative to the rankings that is being spearheaded by the 900-member National Association of Independent Colleges and Universities. NAICU's easy-to-read template, which is expected to be rolled out by hundreds of schools in September, allows students and their families to pull up extensive information organized in an objective format that includes such data as what percentage of students graduate in four years compared to those who graduate in five or six years."[35]
On June 22, 2007, U.S. News & World Report editor Robert Morse issued a response in which he argued, "in terms of the peer assessment survey, we at U.S. News firmly believe the survey has significant value because it allows us to measure the 'intangibles' of a college that we can't measure through statistical data. Plus, the reputation of a school can help get that all-important first job and plays a key part in which grad school someone will be able to get into. The peer survey is by nature subjective, but the technique of asking industry leaders to rate their competitors is a commonly accepted practice. The results from the peer survey also can act to level the playing field between private and public colleges."[25]
In reference to the alternative database discussed by the Annapolis Group, Morse also argued, "it's important to point out that the Annapolis Group's stated goal of presenting college data in a common format has been tried before ... U.S. News has been supplying this exact college information for many years already. And it appears that NAICU will be doing it with significantly less comparability and functionality. U.S. News first collects all these data (using an agreed-upon set of definitions from the Common Data Set). Then we post the data on our website in easily accessible, comparable tables. In other words, the Annapolis Group and the others in the NAICU initiative actually are following the lead of U.S. News."[25]
A debate on this issue was published as a podcast in the June 25, 2007, issue of Inside Higher Ed. The debate was between Lloyd Thacker, director of the Education Conservancy and a well-known critic of the U.S. News rankings, and U.S. News editor Brian Kelly. It was moderated by Inside Higher Ed reporter Scott Jaschik.[36]
Katherine Haley Will, chair of the Annapolis Group and president of Gettysburg College, discussed this decision further in a July 9, 2007, article for The Washington Post. In the article, Will stated that this decision was not based upon "a lack of concern about providing accurate, comprehensive information to help students and their families make decisions about college." Rather, she argued against the methodology of the U.S. News rankings. In particular, she argued against "the largest single factor in the U.S. News rating formula", the reputational survey, as "it is unrealistic to expect academic officials to know enough about hundreds of institutions to fairly evaluate the quality of their programs." Will then argued that, "by contrast, 1 percent of the U.S. News ratings formula is assigned to student-to-faculty ratios, which many faculty members and students consider the most important factor in educational experience." Will stated that the members of the Annapolis Group will offer the same information in an alternative, free format which will not rank schools, as "an educational experience can't be reduced to one number, a school's so-called rank. The simplicity of a rank is understandably more appealing than spending hours poring over college catalogues and visiting campuses, but myriad complex variables can't be reduced to a single number." Instead, Will asked students and parents to "compare schools on a variety of factors ... they should visit campuses and go on what feels like a good match rather than relying on filtered or secondhand information. We must encourage students to look inside their hearts and trust their instincts when it comes to choosing a college, not whether parents or friends think a university is cool or prestigious."[37]
A number of presidents have issued responses to these events. One of them, Presbyterian College president John Griffith, compared this movement to a form of revolution: "I have lived long enough to come to the conclusion that major shifts occur every quarter century or so in the way American culture approaches matters of importance. We often call those shifts revolutions because people revolt against old and outmoded ways of doing things in favor of new approaches, new technologies and new ideas that better meet the needs of the time. We have experienced revolutions in information technology, travel and communication. There is one going on now that is symbolized by the introduction of iPhones this past week; we know what this one is about. But there is another revolution going on related to choosing a college -- and the role that public rankings play in that choice -- that may be less clear."[38]
Presidents have also discussed the role of endowment, correlating a high ranking on the survey with institutional wealth. Muhlenberg College president Peyton Helm argued that "most of the other factors weighted by U.S. News in their rankings (in a secret formula they will not reveal, that is changed every year, and that independent researchers have been unable to replicate) are based, ultimately, on institutional wealth ... A trustee once asked me what it would take for Muhlenberg to be ranked in the top five by U.S. News. My answer was simple: A check for $800 million placed directly in the endowment would do it -- even if we never changed another thing we were doing." Helm also noted that, "what you won't read in U.S. News is that most of the data they use is public information, readily available on the Web sites of most colleges and universities, as well as on the U.S. Department of Education Web site. There is no single formula for weighting these factors -- they will have different significance for different students and families. So, next year I and many other leaders of our nation's best colleges and universities will be working on a new and better Web-based tool for families engaged in the college search."[39] According to one account, Millsaps College president Frances Lucas "previously had paid little attention to the rankings debate because her own institution was rated highly in U.S. News. But after learning more about the magazine's methodology and discussing the issue with colleagues at this week's meeting, she concluded that the rankings were based too heavily on measurements determined by institutional wealth."[40]
Walter Kimbrough, then president of the historically black Philander Smith College, argued that U.S. News "focuses on institutional resources, student selectivity and graduation rates to select the top institutions. But since many HBCUs struggle with these issues, he says the rankings in effect discourage students from going to those schools... If there are people looking at the rankings as a measurement of the quality of an institution, they think [HBCUs] do not have any type of qualities... [The rankings] do not tell you who the best schools are, just the most privileged."[41][42]
Former president of Sarah Lawrence College, Michele Tolela Myers, in discussing her decision to no longer submit information to U.S. News, stated, "they will do what they will do, ... we will do what we will do. And we want to do it in a principled way."[43] Myers also indicated in a press release for the college magazine, Sarah Lawrence, that the college will be involved in developing the new database of colleges discussed in the Annapolis Group statement as they "believe in accountability and openness, and that the public has a right to solid and reliable information about the important decisions involved in choosing a college." The press release also indicated that Sarah Lawrence "plans not to participate in the peer reputational survey or data collection for U.S. News and World Report's rankings" as, according to Myers, "by submitting data and the peer reputation survey we have tacitly been endorsing these rankings ... all the information we have provided to U.S. News in the past will be available to the public through other channels."[44]
Other presidents have also commented on the reputational survey. Former Scripps College president Nancy Y. Bekavac stated in a press release on the college website that Scripps will no longer submit the reputational survey to U.S. News as "for years we have known of flaws in the methodology; many of us have spoken with editors at U.S. News in an attempt to improve its approach ... but nothing can really improve a system that seeks to reduce 3,300 educational programs in American higher education to one set of numbers, and then rank them. College presidents, academic deans and deans of admission do not know enough about other institutions to make meaningful comparisons. This gives a false sense of reliability to what is a ranking system without any real validity."[45] Sweet Briar College president Elizabeth S. Muhlenfeld stated that, "one of our colleagues likened it to trying to rank composers. It's a great analogy. How can you say that Beethoven and Brahms are better than Mahler or Mozart?"[46] Trinity Washington University president Patricia McGuire argued that, "the survey asks me to 'rate the academic quality of undergraduate programs,' assigning each school a single score using a 1-to-5 scale from 'marginal' to 'distinguished.' That I have little real information about these 181 institutions does not seem to matter to the U.S. News editors ... Some of the actual best colleges in this nation do not fare well in the U.S. News survey because they do not have the wealth, big-time sports notoriety or public relations clout to influence the peer voting system."[47] Finally, DePauw University president Robert G. Bottoms argued, "I, in fact, did not fill out the reputational survey for this past year. I came to the conclusion that I am not in a position to make judgments on other schools, many of which I have little or no familiarity with. The fact that one quarter of a college's ranking is based upon what is, in essence, its popularity, is very disturbing and we choose not to be a part of the process."[48]
Catharine Bond Hill, president of Vassar College, argued that, "many of us in higher education dislike popular college rankings such as the annual academic beauty pageant from US News & World Report. But expecting them to go away is naive, and attempting to undermine them is unwise since students and families could perceive that as petulant and paternalistic. Worse, it could seem as if we have something to hide." Rather than withholding the reputation survey, she argued, it would be of value to focus on "a third-party non-profit or foundation", sending them "the same data that we already submit to US News and other rating organizations." On this point, she argues, "a one-size ranking does not fit all, because students and families care about different things ... What if a school doesn't use the SAT in making admissions decisions and therefore doesn't collect or report these data? In a new system, that school couldn't be ranked if a student chose a positive weight for the SATs. Students would know that the school doesn't value that piece of information. They could then run the rankings with other information (maybe class rank and other indicators of academic achievement), excluding the SAT, and see what those rankings look like. Alternatively, they could decide they actually do care about the average SATs of the student body and decide to look at other schools. Fair enough."[49]
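Hill's proposal is, in effect, a small user-weighted scoring system: students choose the weights, and a school that does not report a metric the student weights positively is simply excluded from that student's ranking. The sketch below is a hypothetical illustration of that idea only; the school names, metrics, and weights are invented, and no such service is described in the source.

```python
# A sketch of the "choose your own weights" ranking Hill proposes.
# All school data and weights below are hypothetical.

def rank_schools(schools, weights):
    """Rank schools by a user-chosen weighted score.

    A school missing any metric the user weights positively is
    excluded, signalling that the school doesn't value that datum.
    """
    scored = []
    for name, metrics in schools.items():
        if any(w > 0 and m not in metrics for m, w in weights.items()):
            continue  # drop schools lacking a metric the user cares about
        score = sum(w * metrics[m] for m, w in weights.items() if w > 0)
        scored.append((name, score))
    return sorted(scored, key=lambda pair: pair[1], reverse=True)

schools = {
    "College A": {"grad_rate": 0.92, "avg_sat": 0.85},
    "College B": {"grad_rate": 0.88},  # SAT-optional: reports no SAT data
}

# Weighting the SAT excludes College B from the ranking entirely;
# setting that weight to zero restores it.
print(rank_schools(schools, {"grad_rate": 1.0, "avg_sat": 1.0}))
print(rank_schools(schools, {"grad_rate": 1.0, "avg_sat": 0.0}))
```

The design choice mirrors Hill's argument directly: rather than the publisher imputing a penalty score for missing data (as Myers reported U.S. News doing), the student's own weights decide whether a non-reporting school appears at all.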
Other academic administrators have discussed the correlation between economics, college choice, and the rankings. David McGrath, emeritus professor of English, College of DuPage, discussed his own decision to attend Chicago State University in the July 24, 2007, Chicago Tribune article, "Ode to a fourth-tier college". Of this decision, he noted that, "I qualified for admission elsewhere, but CSU was close to my part-time job, and it was cheap ... I never required a student loan since I earned enough as a grocery bagger to pay tuition and fees in 1970 that totaled $300 per year. All told, a pretty good value, even for a fourth-tier school." McGrath considered it a "good value", because, "CSU eventually led to a teaching career, and my working alongside professors from Princeton, Northwestern, and the University of Chicago. I earned the same ample salary and benefits as they, and, more important, was privileged to engage in the same kind of fulfilling work." He also referenced the 2000 Krueger-Dale study (which compared groups of students who received the same SAT scores, attended both high and low-income schools, and found no difference in post-graduation success rates[50]) and noted that, "too often, it seems, students choose colleges the way they choose jeans or athletic shoes. They would rather bust the family budget than be caught dead in sweats bearing an unrecognizable school logo. But it's their ability, work ethic and dedication that determine the height of their achievement."[51]
Author and journalist Peter Sacks narrows the argument by suggesting a direct correlation between the wealth of a school and its rank. He suggests that "the ranking amounts to little more than a pseudo-scientific and yet popularly legitimate tool for perpetuating inequality between educational haves and have nots -- the rich families from the poor ones, and the well-endowed schools from the poorly endowed ones. Toss in the most heavily weighted factor in the U.S. News survey, the assessment of deans, college presidents, admissions officials and others regarding their peer institutions (a beauty contest that constitutes a full 25 percent of the U.S. News ranking), and you get the perfect recipe for a self-perpetuating, class-based rankings system driven by brand names, marketing hype, and prestige."[52]
In the summer of 1996, Marshall University sociology professor William Westbrook had a conversation with a recent master's graduate. The graduate asked about Marshall's traditional admissions policy, which had an apparently lower SAT/ACT requirement than his undergraduate school, Shepherd University, which had a "selective" admissions policy and an apparently higher SAT/ACT requirement. The graduate had drawn an incorrect inference from the averages, assuming that Shepherd arbitrarily turned away applicants below some fixed ACT or SAT score. Westbrook clarified the situation by first explaining that every university or college will meet its enrollment quota with the best students available, and then asking where Marshall and Shepherd got their students. Marshall typically recruited from Cabell, Logan, Wayne, and Putnam Counties in West Virginia, and Lawrence County, Ohio. Shepherd typically recruited both from West Virginia (locally in the Eastern Panhandle and elsewhere in the state) and from the comparatively wealthy Maryland and Virginia suburbs surrounding Washington, D.C. Westbrook made the point that SAT and ACT scores are an indirect measure of socioeconomic status: county school systems from kindergarten through twelfth grade are funded by property taxes assessed on home values, and near the end of high school a college-bound student takes the SAT or ACT. The difference between the two universities is that Shepherd recruited more heavily from areas where home values were higher, homeowners paid more in property taxes, and school systems were better funded; students benefited as a result, took the SAT or ACT, and some of them applied to Shepherd, were accepted, and enrolled. Colleges and universities, then, have less control over their apparent ACT and SAT requirements than the graduate assumed and than most parents might imagine.
Parents need to understand that large-scale political and economic forces, such as the level of Federal employment and commensurate incomes around Washington, or other regional differences in living standards, have more to do with setting apparent ACT and SAT requirements than the decisions of college committees trying to enroll the best students they can under circumstances they do not control.[53]
Sarah E. Wald, assistant to the dean at the University of North Carolina School of Law, noted, "the rankings purport to give an overall order to colleges and graduate schools to help students make the best decisions about where to attend school. But universities all know how misleading and even destructive these rankings can be. It's common knowledge how the statistics can be 'gamed.' Colleges can solicit applications from students with little chance of acceptance to boost how selective they appear. Schools can adjust when they allow faculty to take leave in order to raise the faculty/student ratio. And admitting more 'risky' students on transfer rather than in the initial class results in a higher freshman SAT average."[54]
Professor Marty Kaplan of the USC Annenberg School for Communication further argued that, "the problem with U.S. News' college rankings isn't that institutions of higher education shouldn't be held accountable for the quality of services they provide ... The problem is that the fierce competition among colleges to raise their rankings torques the priorities of colleges toward the criteria that U.S. News uses ... So this week, when an association of 80 liberal arts college presidents, including Barnard, Sarah Lawrence and Kenyon, announced that a majority of them would no longer participate in the U.S. News annual survey, and that they would fashion their own way to collect and report common data, it was bad news for the magazine, but good news for families. It's also good news for American higher education, some of whose institutions may now become less timid about accepting the quirky applicant, less nuts about generating journalistic puff pieces, and more bold about declaring (and living up to) unique educational missions that don't derive from focus groups."[55]
Alexander C. McCormick, senior scholar at The Carnegie Foundation for the Advancement of Teaching, adds to this discussion by arguing against the way the Carnegie Classification of Institutions of Higher Education is used in creating the U.S. News rankings. The problem with this use, he argues, is that there is "no basis for inferring national versus regional focus, because it's not a factor in the classification criteria. So it should come as no surprise that the national and regional lists contain a great many inconsistencies and bizarre placements ... By continuing to rely on the Carnegie Classification, they avoid the tough job of defining their terms."[56]
Other media outlets have offered rebuttals to this criticism.
U.S. News & World Report editor Robert Morse argued that "a couple of journalists are making the case for the U.S. News rankings, explaining why the actions of a group of college presidents who have signed the letter boycotting the U.S. News peer survey may not be in the best interests of prospective students and their parents."[57] Morse pointed to a June 28, 2007 article in the conservative magazine National Review, "They Protest Too Much", in which John J. Miller states, "the magazine's editors and writers aren't interfering with higher education so much as responding to a consumer demand for more information about it. The demand exists because colleges and universities are among the least accountable institutions in American life ... the U.S. News rankings indisputably measure something—and something is better than nothing, which is why parents of high school students pore over the magazine's tables and charts. This is rational behavior for people on the verge of spending more huge sums of money on the education of a single child. Like wise investors, they want to know if they're getting a good deal."[58] He also pointed to Robert Samuelson's June 27, 2007 Washington Post op-ed, "A College Course in Cynicism", which states, "[w]hat's so shameful about this campaign against the rankings is its anti-intellectualism. Much information is in some way incomplete or imperfect. The proper response to evidence that you dislike or dispute is to supplement or discredit it with better evidence. The wrong response is to suppress it. And yet, that's the agenda of these college presidents. By not cooperating with the U.S. News survey, they hope to sabotage the rankings. They say they'll provide superior information. But they want to control what parents and students see. This is soft censorship.
What their students will learn, if they're paying attention, is a life lesson in cynicism: how eminent authorities cloak their self-interest in high-sounding, deceptive rhetoric."[59]
Neil Weissman, provost and dean of Dickinson College, responded to Samuelson in a June 30, 2007 letter to the editor of The Washington Post, "College Rankings Are Lame Science", in which he states, "when Dickinson College chose not to participate in the U.S. News & World Report rankings of colleges, I imagined the decision would evoke some criticism, but never the charge Robert J. Samuelson made of 'anti-intellectualism' [op-ed, June 27]. 'Intellectual' to me means thoughtful. The problem with the U.S. News rankings is that they are not 'intellectual.' They are, as some higher education experts label them, lame science. Mr. Samuelson also missed the point in suggesting nonparticipating colleges are trying to censor U.S. News. The magazine is of course free to continue its rankings, as are others. We are simply saying that we will not participate in an exercise that, in our view, misleads prospective students more than it helps and drives up college costs by encouraging spending in pursuit of rankings on a fictional prestige ladder invented by U.S. News."[60]
Michael Skube, professor of journalism at Elon University, argued in the editorial "The No. 1 reason to rank colleges" against arguments made by Sarah Lawrence College president Michele Tolela Myers in her March 11, 2007 Washington Post article "The Cost of Bucking College Rankings".[26] Skube states that these arguments, while having some merit, were "partly beside the point ... U.S. News survey, for all its imperfections, performs the useful service of comparing apples with academic apples. In some ways, one might even argue that its nuts-and-bolts consumer information is at least as practical as the bar charts and numbers a car buyer might find in Consumer Reports or Car and Driver. What factors go into the rankings? Student retention accounts for 25% at schools U.S. News calls master's level and those that provide primarily the bachelor's degree (called 'comprehensive' schools, oddly enough)." Skube also notes objections made to the reputation survey portion of the U.S. News survey and responds by stating that, "one can see why." However, he argues, "sometimes just the facts will do, and the U.S. News manual offers them in great heaps ... Sarah Lawrence, for example, does not take into consideration SAT or ACT scores. Don't even send 'em, it tells high school students. That tells me all I need to know about Sarah Lawrence. It tells me that Sarah Lawrence doesn't take aptitude as seriously as I'd like. The university depends far more on high school grades, which, as anyone who has taught at the college level knows, cannot be trusted. If last year's freshman classes at several colleges all had composite high school grade point averages of 3.6 to 3.8, I don't know how the intellectual caliber of one differs from another. But if one college attracted high school students whose SATs averaged 1100 to 1200, and another attracted students with SATs averaging 1300 to 1400, I know the latter is more selective.
Sarah Lawrence might not care about such things, but I do."[61]
Michele Tolela Myers, former president of Sarah Lawrence College, responded to Skube's rebuttal in a July 12, 2007 letter to the editor of the Los Angeles Times, "Argument may be a rank disgrace". On the general topic of U.S. News methodology, she states, "what many of us dispute is the validity of a single score computed by using 'data points' to which weights are arbitrarily ascribed (why should retention count for 20% instead of 30%; why is peer assessment 25% instead of 10%; and who decides?). How can a single measure be valid when, in some cases, values are made up when they are not provided (the case of the missing SATs at Sarah Lawrence — the point of my Washington Post Op-Ed)? However, that's exactly what U.S. News does each year. Professional statisticians have reported that the methodology used by the magazine is seriously flawed and cannot be trusted." She also responds to Skube's discussion of Sarah Lawrence's decision not to consider SAT or ACT scores by stating, "Skube says he knows 'all he needs to know about Sarah Lawrence' because the college does not use SAT scores in its admission process, and therefore he infers we don't take aptitude seriously. Perhaps he doesn't know the research showing that SAT tests do not measure aptitude and at best provide a guess about academic performance in the first year of college. I do not think Elon University's SAT scores tell all there is to know about Elon. To think so would be falling into the trap of using one single measure as a proxy for the complex nature of any college. Which is precisely why the rankings are flawed."[62]
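Myers's objection about arbitrarily ascribed weights can be illustrated with a minimal sketch. The schools, criteria, scores, and weights below are hypothetical, not actual U.S. News data; the point is only that a composite ranking built from a weighted sum can reverse entirely when the weights change:

```python
# Hypothetical schools with normalized (0-1) scores on two criteria.
# All names and numbers are illustrative, not real ranking data.
schools = {
    "College A": {"retention": 0.95, "peer_assessment": 0.60},
    "College B": {"retention": 0.80, "peer_assessment": 0.90},
}

def composite(scores, weights):
    """Weighted sum of criterion scores; weights are assumed to sum to 1."""
    return sum(scores[criterion] * w for criterion, w in weights.items())

def rank(weights):
    """Order schools from highest to lowest composite score."""
    return sorted(schools, key=lambda s: composite(schools[s], weights), reverse=True)

# Scheme 1: retention dominates -> College A ranks first.
print(rank({"retention": 0.7, "peer_assessment": 0.3}))  # ['College A', 'College B']

# Scheme 2: peer assessment dominates -> the order flips.
print(rank({"retention": 0.3, "peer_assessment": 0.7}))  # ['College B', 'College A']
```

Since neither school changed in any way between the two runs, the ordering is a property of the chosen weights as much as of the institutions, which is the substance of the "who decides?" criticism.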
A weakness of many ranking systems is that they rely on information and data provided by the universities themselves, and these numbers are usually not independently verified (or verifiable). In 2021 and 2022, evidence emerged that the enormous financial consequences of the rankings had tempted university administrators to manipulate[63][64] or criminally falsify[65] data submitted to ranking agencies.
In 2022, United States Secretary of Education Miguel Cardona declared ranking systems like that of U.S. News to be "a joke."[66] In November 2022, Yale Law School, closely followed by Harvard Law School, withdrew cooperation from U.S. News & World Report's annual rankings. A year later, more than a third of the 196 law schools annually ranked had declined to provide data to U.S. News for the previous year, according to The Washington Post.[67]
Speaking at a March 2023 conference organized by Yale and Harvard Law Schools amid the backlash over the influential law school rankings, Cardona stated that U.S. News had "created an unhealthy obsession with selectivity" and that "we need a culture change".[68]