Statistics in the News: Chapter 20 Index
Under Fire: The Listing of "Best" U. S. Colleges
More often than not, and despite textbook warnings to the
contrary, index numbers are used to quantify something
that is in fact unquantifiable. The Chapter 20 discussion
of the United Nations' Human Development Index provides
a vivid example. Given the availability of hundreds of possible
criteria, how can we possibly determine the "best" country
in which to live, along with precise comparative rankings
of all other countries on earth?
The listing of "America's Best Colleges," published annually
by U. S. News & World Report, provides another example
of people summarizing complex circumstances by a few index
numbers that could just as well be constructed in totally
different ways. Consider what Bard College president Leon
Botstein had to say on the subject: "The criteria are ludicrous.
It is the most successful journalistic scam I have seen in
my entire adult lifetime. A catastrophic fraud. Corrupt, intellectually
bankrupt and revolting." (Bard College was ranked #38 on the list.)
Call it what you want, the magazine's best-colleges issue
(the latest one hit the newsstands on September 17, 2001)
has generated massive sales ever since its debut in 1983. But
the annual listing is also coming under fire. Thus, Amy Graham--who
oversaw the list at U. S. News & World Report for two
years until her resignation in 1999--recently argued in the
Washington Monthly that the manner in which the magazine
ranks colleges "defies common sense" because it "pays scant
attention to measures of learning or good educational practices."
As she sees it, the rankings primarily register a school's
wealth (along with the generosity of its alumni giving), a
college's reputation (the opinions of college and university
presidents who rank peer institutions make up 25 percent of
an institution's score) and the achievement of the high school
students it admits (as measured by College Board scores).
Table A shows the 2001 ranking categories for U.S. Liberal
Arts Colleges conferring bachelor's degrees, along with the
weights used to derive the overall ranking.
Table A: U.S. News & World Report's 2001 Methodology for Ranking U. S. Liberal Arts Colleges
[Table entries not reproduced here. The table listed each ranking category and its weight, alongside results for the #1-ranked Amherst College, such as a score of 4.8 out of 5 and a rank of 4 on graduation rate performance.]
First consider the meaning of the various ranking categories; a small numerical sketch of how they combine into an overall score follows the list:
- Academic reputation was determined by surveying
the presidents, provosts, and deans of admissions of all
colleges. Each individual was asked to rate peer schools'
undergraduate academic programs on a scale from 1 (marginal)
to 5 (distinguished). Those individuals who did not know
enough about a school to evaluate it fairly were asked to
mark "don't know." A school's score is the average score
of all the respondents who rated it. Responses of "don't
know" did not count for or against a school.
- Faculty resources were measured by a variety of
factors, including the percentage of classes with fewer than 20 students,
the percentage of classes with 50 or more students, the overall student/faculty
ratio, and the percentage of faculty teaching full time.
- Retention rate was measured by the percentage
of freshmen who returned to the same college the following
fall as well as the percentage who ultimately graduated.
- Student selectivity was assessed by the SAT/ACT 25th-75th
percentile range of the incoming class, the percentage of freshmen
from the top 10% of their high school class, and the college's
acceptance rate (total admitted divided by total applicants).
- Financial resources were measured by the average
spending per full-time-equivalent student on instruction,
research, public service, academic support,
student services, and institutional support.
- Alumni giving was measured by the average percent
of undergraduate alumni of record who donated money to the
college. The percent of alumni giving serves as a proxy
for how satisfied students are with the school.
- Graduation rate performance measured the difference
between the actual six-year graduation rate for students
entering in the fall of 1994 and the predicted graduation
rate, based upon characteristics of the entering class as
well as characteristics of the institution.
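To make the arithmetic of such a composite index concrete, the short Python sketch below combines category scores for two hypothetical colleges into a single weighted overall score. The weights and scores are assumptions made up for illustration; only the 25 percent weight on academic reputation is taken from the discussion above.

# Minimal sketch of a weighted composite index in the spirit of the
# methodology described above. All weights and scores are hypothetical,
# except the 25 percent weight on academic reputation quoted in the text.
weights = {
    "academic reputation": 0.25,          # quoted in the text
    "faculty resources": 0.20,            # assumed for illustration
    "retention rate": 0.20,               # assumed
    "student selectivity": 0.15,          # assumed
    "financial resources": 0.10,          # assumed
    "alumni giving": 0.05,                # assumed
    "graduation rate performance": 0.05,  # assumed
}
assert abs(sum(weights.values()) - 1.0) < 1e-9  # weights must sum to 1

# Hypothetical category scores, each already rescaled to a 0-100 basis.
colleges = {
    "College A": {"academic reputation": 96, "faculty resources": 90,
                  "retention rate": 97, "student selectivity": 93,
                  "financial resources": 88, "alumni giving": 60,
                  "graduation rate performance": 75},
    "College B": {"academic reputation": 80, "faculty resources": 95,
                  "retention rate": 90, "student selectivity": 85,
                  "financial resources": 92, "alumni giving": 45,
                  "graduation rate performance": 82},
}

def overall_score(scores, weights):
    # Weighted sum of the category scores.
    return sum(weights[category] * scores[category] for category in weights)

for name, scores in sorted(colleges.items(),
                           key=lambda item: overall_score(item[1], weights),
                           reverse=True):
    print(f"{name}: {overall_score(scores, weights):.1f}")

Every other choice hidden in this calculation (which measures enter at all, and how each raw measure is rescaled to a common 0-100 basis) could just as reasonably have been made differently.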
Criticizing the kind of indexes created with the help of
the above criteria is child's play: We can criticize the raw
data because of high nonresponse rates in the surveys that
produced them. (For example, the academic reputation
question produced a nonresponse rate of 33 percent.) We can
argue that the ranking categories overlap. (For example, academic
reputation is supposed to measure the faculty's dedication
to teaching and, thus, the likelihood that students learn
what they need and turn into satisfied graduates. But student
satisfaction is also supposed to be measured by the retention
rate, alumni giving, and more.) We can argue that entirely
different categories should be used. The list goes on.
And most of all, we can argue that a different weighting
scheme could easily produce different overall rankings. Just
consider the last column of Table A and ask yourself what
would happen if faculty resources had been given a
weight of 80 percent, with correspondingly smaller weights
for the other categories. Would Amherst College (the author's
academic home) retain its #1 rank? We can seriously doubt it.
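A few lines of Python make the point. The self-contained sketch below (again with made-up scores, reduced to three categories for brevity) ranks two hypothetical colleges under one weighting scheme and then under an alternative that gives faculty resources 80 percent of the weight; the ordering reverses.

# Self-contained sketch of how re-weighting the same category scores can
# reverse a ranking. All numbers are hypothetical.
colleges = {
    "College A": {"reputation": 96, "faculty resources": 90, "selectivity": 93},
    "College B": {"reputation": 80, "faculty resources": 95, "selectivity": 85},
}

def ranking(weights):
    # Rank colleges by their weighted overall score, best first.
    scores = {name: sum(weights[c] * vals[c] for c in weights)
              for name, vals in colleges.items()}
    return sorted(scores, key=scores.get, reverse=True)

original = {"reputation": 0.50, "faculty resources": 0.10, "selectivity": 0.40}
alternative = {"reputation": 0.10, "faculty resources": 0.80, "selectivity": 0.10}

print(ranking(original))     # ['College A', 'College B']
print(ranking(alternative))  # ['College B', 'College A']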
Sources: Adapted from Alex Kuczynski, "'Best' List
For Colleges By U. S. News Is Under Fire," The New York
Times, August 20, 2001, pp. C1 and C9; Rachel Hartigan
Shea and David L. Marcus, "Special Report: America's Best
Colleges," and Robert J. Morse and Samuel M. Flanigan, "How
We Rank Schools," U. S. News & World Report, September
17, 2001, pp. 89-116; and http://www.usnews.com.
© 2003 South-Western. All Rights Reserved.