Some Thoughts on the Recent Controversies Concerning Law (and Med) School Rankings: Part I in a Series


Prominent rankings of institutions Americans care about are invariably controversial. Take, for example, teams (and their fans) left out of the two national semi-final bowl games because of the College Football Playoff's final-week rankings. (I won't weigh in on whether Alabama had a compelling argument this past year that, given its strength of schedule and the closeness of its losses, the Crimson Tide should have edged out Ohio State even though the Tide had suffered two defeats compared to the Buckeyes' one, a lopsided loss to Michigan in Columbus.) And during college basketball season (my favorite sports time of the year), each week I cringe at the injustices visited upon teams by Monday's release of the AP Top 25 rankings.

While sports rankings have always generated much disagreement, in recent months the rankings phenomenon has been even more contentious in the realm of academic, especially professional, institutions. Many well-known law schools (with Yale and Harvard leading the charge) announced late last year that they weren't going to "participate" in the annual U.S. News & World Report law-school rankings; since U.S. News doesn't need a school's permission to include that school in the magazine's rankings, a school's refusal to participate really consisted of publicly criticizing the U.S. News system and withholding non-public information that U.S. News had historically used in its rankings methodology. For its part, U.S. News responded by announcing that, beginning this spring, it would use only publicly available data to rank law schools, and would also tweak its methodology somewhat, even as it would continue to rank all ABA-approved schools. Only time will tell whether the U.S. News law-school rankings will continue to carry the weight among law-school aspirants and others that they have for the three-plus decades they have been around, but many observers think that while U.S. News may lose some luster as a result of this skirmish, it will ultimately continue to have meaningful influence among people who have reason to compare law schools.

And it's not only law schools that have been pushing back. In recent weeks, several prominent medical schools have also harshly criticized the U.S. News med-school rankings. These schools have announced that they, too, will no longer provide the magazine with information to be used in the yearly rankings.

To be clear, even the schools that are boycotting, so to speak, the U.S. News ratings don't seem opposed to broad comparative assessments of the nation's professional schools. That is, they don't seem to believe, in some fundamental sense, that all professional schools are equally strong. For example, the first sentence of the "About Us" section of the Stanford Medical School website (Stanford being one of the schools that recently announced a withdrawal from U.S. News data-collection participation) boasts that the Med School is "[a] leader in the biomedical revolution." For an institution to be a leader, there must be other institutions that are following more than leading; implicitly, Stanford is saying that other institutions are not leading as much as Stanford Medical School is. Indeed, on that same front page, Stanford proudly proclaims that its Medical Center is "consistently ranked among the top hospitals in the nation for innovative programs" in various medical specialties. So Stanford's criticism, at least, appears to be directed not at the very concept of rankings (although there is some of that), but at the U.S. News system for ranking med schools in particular.

And there is no doubt that certain specific aspects of U.S. News's methodology have drawn fire from law- and medical-school critics. One recurring criticism has been that the rankings convey a false sense of precision because they rank hundreds of schools ordinally. But even if U.S. News (or other rankings) were simply to cluster groups of schools in different categories of overall strength, there would still be controversy concerning where the category lines were drawn and among schools that fall on the wrong side of any line. Notwithstanding the famous suggestion of American League outfielder John Lowenstein to the contrary, "mov[ing] first base back a foot [would not] avoid all those close plays."

Another very loud source of criticism of U.S. News these days—that it attaches weight to the standardized test scores of enrolled students (as an indicator of the academic strength of the student body)—has, I must admit, puzzled me a bit. I fully understand that standardized test scores have a disparate impact along racial lines and for that reason must be used in admissions with a great deal of care. But the boycotting medical schools and law schools themselves attach a fair amount of significance to standardized test scores in their admissions processes (at least for now). Medical schools are under no regulatory requirement to do so, yet they have consistently made MCAT scores a relevant factor. And even as the ABA considers a proposal (recently remanded from the ABA's House of Delegates to the Council of the Section of Legal Education and Admissions to the Bar) to remove the requirement that law schools use standardized tests in admissions, a group of 60 or so law deans, including many of the most progressive and diversity-focused law deans in the country, sent a letter to the ABA last fall opposing the elimination of such a requirement (at least without more study of the matter). As the letter pointed out, "standardized tests—including the LSAT—can be useful as one of several criteria by which to assess whether applicants are capable of succeeding in law school and to enhance the diversity of our incoming classes. . . . Used properly, as one factor in a holistic admissions process, this index score can help identify students who are capable of performing at a satisfactory level . . ." Moreover, the letter observed, if standardized test scores were not used (and if some schools stopped using them, others would be pressured to follow suit), then other factors, such as college GPAs, might assume even greater weight in admissions decisions, and those other factors might have an equal or greater disparate impact along racial lines. In this regard, it should be remembered that standardized test scores came into wide use a few generations ago in significant part because other admissions criteria—college attended, letters of recommendation, extracurricular activities, etc.—seemed to many to provide unfair advantages to people from well-educated, well-heeled, and well-connected families. Standardized test scores were added to the mix in part to level the playing field.

To this I would add that college GPAs, one of the other criteria that may gain significance if test scores are weighted less, have become increasingly unreliable indicators, as a general matter, of academic preparation and strength, both because of the massive and well-documented grade inflation (and thus grade compression) in American colleges over the past few decades and because of the very large differences in academic rigor among courses of study within and across universities. LSAT (or other standardized test) scores, by contrast, correlate more strongly with first-year law-school performance and with bar-passage rates, both things every law school should care about, than does any other single admissions factor.

None of this is to say that U.S. News (or any other ranking system that considers the strength of each school's student body) ought not to be sensitive to the incentives its methodology creates with respect to diversity. One respect in which U.S. News is sensitive, but probably doesn't get the credit it deserves, is the magazine's decision to use median, rather than average, standardized test scores. Because, once a class roster is put to bed, a test score far below the school's median drags the median down no more than a score slightly below it (which would not be true of the school's average), schools have more leeway (that is, less disincentive) to enroll students whose scores fall considerably below the institution's mainstream, as the short numeric sketch below illustrates.
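For readers who want to see the arithmetic behind the median-versus-mean point, here is a minimal sketch; the scores are hypothetical and chosen purely for illustration:

```python
# A minimal sketch, with hypothetical LSAT scores, of why reporting the
# median (rather than the mean) leaves room to admit a low-scoring applicant.
from statistics import mean, median

baseline = [160, 165, 170, 171, 172]        # median 170, mean 167.6
with_outlier = [145, 165, 170, 171, 172]    # swap the 160 for a 145

print(median(baseline), median(with_outlier))  # 170 170    (median unchanged)
print(mean(baseline), mean(with_outlier))      # 167.6 164.6 (mean drops)
```

The one far-below-median score leaves the reported median untouched, whereas an average-based metric would penalize the school for enrolling that student.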

In any event, as I have argued at length elsewhere, to the extent diversity is an important attribute of a law school's overall quality that deserves consideration when evaluating the school, diversity should be factored into U.S. News more directly. To be sure, figuring out how to measure diversity isn't always easy, and some schools, because of their locations, have a more naturally diverse applicant pool and are thus at an advantage. But nothing about rankings methodology is easy, and some schools enjoy natural advantages of location in many other regards as well. For example, it is an advantage to be a law school within a university whose undergraduate units (the largest feeders of the law school) have rampant grade inflation. The big point is that if test scores tell us something relevant (and no one I know argues that they tell us everything) about the strength of a student body (and thus an institution), then they shouldn't simply be discarded in a ratings (or admissions) system, especially because there are other ways, if one desires, to encourage schools to take diversity into account when admitting applicants. For this reason, suggestions by some deans to the effect that U.S. News rankings (as one med-school dean put it) "measure the wrong things" by taking test scores into account are simply over the top, even though the relative weights of various factors (in admissions or in a ranking system) are always open to reasonable debate.

And that brings us to a huge problem: as if the list of relevant criteria for a rankings system weren't contentious enough, there will never be full agreement among the ranked entities about the relative weight each factor should bear. Even if well more than half of the roughly 200 law deans in the country think the U.S. News methodology is meaningfully flawed (and I would count myself in that group), there is hardly consensus on what the methodology should be. I don't know of many deans who are going to say, "We are currently overranked compared to where we really deserve to be, and we propose changes that would consistently and predictably lower our ranking."

So part of the difficulty here is that at any point in time, a significant percentage of ranked entities won't like where they stand under whatever methodology is being used. (None of this is inconsistent with the fact that the most prominent law schools have been the ones leading the charge against U.S. News; some of those schools have seen, or could soon have seen if things kept going as they were, their places in U.S. News shift a bit. And a bit matters a lot to these keenly competitive schools. Moreover, if the U.S. News rankings were to go away entirely, as some elite schools might prefer, their already locked-in elite reputations would continue to do work for them. One virtue of a rankings system, or at least of a well-conceived one, is that it allows the outside world to more easily appreciate the progress made by schools that traditionally weren't among the most highly regarded but that now deserve a fresh look.)

Regardless of dissatisfaction with rankings in general, or with any particular rankings methodology, prominent rankings are likely to persist, simply because they serve a demand. Going back to college sports (where I will wallow in Part II of this series), there simply has to be some way to decide which four (or eight) teams make the College Football Playoff. Or which 68 teams (I personally prefer a smaller, 32-team field, as existed when I was a young child) are invited to participate in March Madness. Moving from sports to professional schools, prospective students, employers, alums, provosts, donors, and others want something beyond each school's own use of words like "leading" and "excellent." So like it or not, rankings of professional schools are, I expect, here to stay.

So one naturally wonders: are there ways, as a general matter, we can agree upon to make rankings operate better? In Part II of this series, I draw on several developments in the world of sports rankings (especially college basketball and college football) to see what lessons that world might offer the world of academic rankings. To be sure, sports is a distinctive realm; sports teams compete against each other to visibly generate (usually) clear winners and losers in ways that academic units do not. Yet even in sports, because no sport has every team play every other team in a full round-robin, home-and-home format, we need ways of assessing teams that go beyond mere win-loss records (which themselves might be misleading for other reasons, such as the temporary injury or suspension of players). Moreover, academic institutions compete with each other in more than abstract ways as well; for example, graduates of one law school go head-to-head with graduates of other law schools in court and in the job market, and institutions go head-to-head for students, faculty members, and donor dollars. To be sure, competitions among academic institutions often yield less visible and more contested results, and that may increase the need for rankings to help fill the informational gaps. But as I will try to show in Part II, as different as the sports world may be, it is farther along in thinking through rational rankings systems than are many academic institutions or their evaluators.

Posted in: Education

Tags: Law School, rankings, U.S. News
