What a Recently Released Study Ranking Law School Faculties by Scholarly Impact Reveals, and Why Both Would-Be Students and Current/Prospective Professors Should Care

Posted in: Education

As one law-student-admissions and law-faculty-hiring cycle winds down and another one begins to crank up, yet another law-school ranking was released last month, this one focusing on the “scholarly impact of law faculties, ranking the top third of ABA-accredited law schools.”  Below, I discuss and analyze some of the seeming surprises in the ranking, and also explore why prospective law school students and law faculty members should care about this kind of ranking.

Ranking Methodology

The newly released ranking builds on the work of University of Chicago Professor Brian Leiter, who for years has argued that the most prominent law school ratings—the U.S. News & World Report annual rankings—are seriously flawed, and has explored alternative ways of evaluating and comparing schools of law.  One important gauge, Leiter has suggested, is the academic prominence and influence of each law school’s faculty, as indicated by the number of times the scholarly works of each school’s faculty are cited in other scholarly works within the universe of legal scholarship.  Last month’s study undertook just such an examination, over the years 2007–2011 (inclusive), and then rated each school according to a formula that takes into account both the mean and the median citation-frequency of its tenured faculty.
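To make the formula concrete, here is a minimal sketch in Python of how such a score might be computed. The citation counts are invented, and the exact 2-to-1 weighting of mean over median is an illustrative assumption; as the next paragraph notes, the study's formula weights the mean more heavily than the median, but the precise ratio is not given here.

```python
from statistics import mean, median

def impact_score(citations, mean_weight=2.0, median_weight=1.0):
    """Weighted blend of mean and median per-faculty citation counts.

    The 2:1 weighting is an assumption for illustration only; the
    article says just that the mean counts more than the median.
    """
    return mean_weight * mean(citations) + median_weight * median(citations)

# Invented five-year (2007-2011) citation counts for two hypothetical faculties.
school_a = [120, 95, 80, 60, 45]   # citations spread fairly evenly
school_b = [400, 40, 35, 30, 25]   # one exceptionally well-cited scholar

print(impact_score(school_a))  # 240.0  (mean 80, median 80)
print(impact_score(school_b))  # 247.0  (mean 106, median 35)
```

Note that under this assumed weighting, school_b edges out school_a even though four of its five members are cited far less, which previews the skew discussed below.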

There are many limitations inherent in any ranking methodology, and this one is no exception.  There is some unavoidable arbitrariness in deciding which legal scholarly journals should be scoured to conduct citation counts.  Also, some faculty members who are included in the tally serve only part-time on a law faculty, and therefore may not tend to be cited in—or may not aim to be cited in—legal journals, as distinguished from journals catering to other disciplines; yet these faculty members are counted as law faculty for purposes of a law school's mean/median citation counts.  (And law schools vary considerably in the percentage of such jointly appointed faculty.)  In addition, one or two exceptionally well-cited members of a given faculty can significantly improve (and perhaps skew) a school's mean score (which counts more in the ranking's formula than its median score does), especially if the school has a smallish faculty.  And to mention just one other quirk, sometimes an article or book is cited in legal journals frequently because so many people find it a good example of wrong-headed thinking; a citation, it bears remembering, is a mention, not an endorsement.
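To put a number on that outlier effect, here is a quick illustration using invented counts for a hypothetical small faculty: adding a single exceptionally cited scholar more than triples the mean while barely moving the median.

```python
from statistics import mean, median

faculty = [40, 35, 30, 25, 20]   # hypothetical small faculty's citation counts
with_star = faculty + [500]      # add one exceptionally well-cited member

print(mean(faculty), median(faculty))        # 30 30
print(mean(with_star), median(with_star))    # 108.33... 32.5
```

Because the formula weights the mean more heavily, one star hire can move a small school's overall score dramatically.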

Notwithstanding these limits, the Leiter-style rankings of faculty impact (with the implication that impact tracks quality) are second in prominence among law school rankings, behind only the U.S. News ratings.  And what do the most recent Leiter-style scores indicate?

What the Most Recent Leiter Ratings Show

In some respects, the Leiter ranking results were not surprising.  Only about 70 of the close-to-200 ABA-approved schools made the rankings list, and those at the top were the usual suspects:  Yale, Harvard, Chicago and Stanford were the top four (in that order), and that conforms pretty closely with the consistent placement of these schools in U.S. News yearly rankings.  (In the spirit of full disclosure, I should mention that, by Leiter’s measure, the law school at which I teach, UC Davis, was ranked 23rd, which is very close to its placement in U.S. News ratings in recent years—indeed, it was exactly #23 in 2011.)  Yet other schools fared much better in the recent Leiter survey than they do in U.S. News:  these included Brooklyn, Cardozo, Case Western, Chapman, UNLV, Seattle, and the University of St. Thomas.  And there were others.

Conversely, some schools did not do as well in this scholarly-impact survey as they tend to do in U.S. News.  Prominent among these are the University of Washington and Pepperdine (neither of which made the top 70), Boston College, Georgia and Iowa.  Again, others could be added to this list.

What does the divergence between the two rankings—U.S. News and Leiter-style citation counts—for many schools tell us?  For one thing, it should remind us that the U.S. News ranking uses a much broader set of criteria that includes, but also goes beyond, assessments of faculty quality.  For better or worse (and it is often for worse), U.S. News takes account of median LSAT scores and undergraduate GPAs, recent graduate employment rates, the views of a small number of lawyers and judges, the amount of money per student expended on academic matters, etc.   Perhaps the folks who run the U.S. News ratings should make faculty quality a bigger component than it currently is, but given the breadth of the inputs into the U.S. News rating, we should expect that its results would differ significantly from the Leiter approach.

Still, it is interesting to note that even when one focuses on the single component of the U.S. News ranking that asks faculty members at each school to rate other schools, there is significant divergence between that measure and the Leiter-style impact study.  Again, for some schools the ratings are similar (e.g., Yale and Harvard are rated in the top two by law professors in the U.S. News survey, and my own UC Davis has tended to be rated about #24 by other professors queried by U.S. News, closely matching the impact study's bottom lines).  But many, many other schools' ratings by the law professors who fill out the U.S. News survey do not correlate well with the results of the scholarly-impact report.  Why does that seeming anomaly exist?  First, there is undeniable ignorance about the scholarly productivity and quality of law faculties, even among the few thousand law professors throughout the country.  Most law professors know a fair bit about the faculty at two or three dozen national schools at best (and then perhaps also about other law schools located in close geographic proximity to their own), but do not have a good aggregate sense of dozens and dozens of other law faculties.  The Leiter-style survey is designed, in part, to provide an easy information source to remedy some of that ignorance.

Second, the faculty who are asked to fill out each year’s U.S. News survey are not a perfect sampling of law professors nationwide.  Each school gets four ballots—one for its Dean, one for its Associate Dean, one for the Chair of its Appointments Committee, and one for its most newly tenured faculty member.  The first three categories tend to include more senior faculty members, who (because they haven’t been on the entry-level job market themselves for years) may know less about the scholarly productivity of schools that were not prominent when they joined the ranks of the academy.  Relatedly, some of the more senior faculty may not think scholarly impact—or at least scholarly impact as measured by citation counts—is a good index of faculty quality, especially if they became professors at a time when scholarship wasn’t quite the dominant part of most law professors’ jobs in the way that it is today.

Third, and perhaps most importantly, the U.S. News survey does not ask law professors to rate other law school faculties; it asks professors to rate other law schools.  And even those who think that the scholarly impact and prominence of a faculty is an important element in assessing a school could not deny that other elements—the quality of the student body; the school's reputation and credibility among lawyers, judges and other prospective employers; and the quality of the overall institution with which a law school is connected, to name but a few—are also important.

So, for example, even though my recently established sister UC law school, UC Irvine, fares slightly better on the scholarly-impact study than UC Berkeley law does (in significant measure because Irvine Dean Erwin Chemerinsky, the most-cited academic, pulls up the average tremendously, and would do so even if we were to assume an Irvine faculty twice its current size), it would be extremely hard to argue that Irvine is yet close to comparable to Berkeley (or the other established UC-campus law schools) given Irvine's lack of any alumni base, the absence of any track record of graduates, etc.  As a result, Irvine's assessment among law professors participating in the U.S. News survey may significantly lag behind its scholarly-impact-survey performance.

(Why) Should Prospective Law Students and Law Faculty Care About Any of This?

At first glance, it would be easy to dismiss the Leiter-style ratings as being of interest only to a small group of ego-driven law professors who need to be cited by other academics in order to feel useful.  This critique has more force than it may have had in the past, given the divergence between legal scholarship and legal practice over the last generation; lawyers and judges made far greater use of what law journals published forty years ago than they do today, as legal scholarship has become more abstract and more interdisciplinary.  With things as they are today, some may wonder how much it matters, in the real world, whether a faculty's work product is being read and mentioned often by other professors.

Let me offer three answers, one idealistic and two practical.  First, I like to think that while teaching and scholarship don’t always go together (insofar as some great scholars are lousy teachers and some non-scholars are great teachers, and insofar as some scholarship is too esoteric or technical for many students), quite often great teaching involves the incorporation of cutting-edge scholarly ideas into the classroom. All things being equal, a classroom that blends theory with practical rules (and the two aren’t remotely incompatible) is better than a classroom that focuses on the rules alone.

Of course, one can be aware of and fluent in cutting-edge scholarship without actually engaging in it, but the higher the volume of important scholarship that one does, the more likely one is to be on top of the latest intellectual developments, which then can be included in one’s teaching.

Second, as universities (public and private) confront resource constraints, the law schools that succeed in getting more of the pie at their home institutions are the ones that tend to be able to demonstrate scholarly influence.  So each law school has an incentive to focus more and more on faculty impact, and the ones that succeed will be rewarded financially, which will then enable them to do more for their faculties and for their students in myriad ways.

Third, for better or worse, the single biggest long-run reputational input for a law school, one bearing on the overall credibility and marketability of its degrees and programs, is the perception of its quality by law faculty at other schools across the country.  In turn, that perception tends to drive, over time, what lawyers and judges think (even if with some lag), because lawyers and judges (rightly?) assume that law professors, as a group, will tend to know more about other law schools than lawyers and judges have the chance to.  Moreover, the top law students who go on to lead the top legal institutions tend to come from the law schools whose faculty (mentors) care about scholarship and scholarly influence.  So unless and until law professors as a group can be convinced that scholarly impact shouldn't matter much (and the trend is the opposite), scholarly impact will remain an increasingly important measure, one that existing faculty, prospective faculty, and prospective students alike cannot afford to ignore.


4 responses to “What a Recently Released Study Ranking Law School Faculties by Scholarly Impact Reveals, and Why Both Would-Be Students and Current/Prospective Professors Should Care”

  1. Arnie Wuhrman, Murrieta, CA says:

    I must respectfully disagree with Professor Amar, at least insofar as the general practice of law is concerned. Even after 26 years of practice, I marvel at how impressed my new clients are when I tell them (or they see on my wall) that I attended UCLA Law School. My clients are not interested in the scholarship of the faculty — they just know that I succeeded in getting educated at a school that is well-renowned and well-known for being difficult to get admitted to. The USNWR rankings give me credibility on a general population level. The Leiter rankings may be useful for the academic world, but they will never see the light of day in the “real world” in which I practice. For those seeking career success in general legal practice and who have no intention to enter academia, careful consideration of the USNWR rankings is the way to go.

    • Anon says:

      Part of what Dean Amar is suggesting, though, is that the Leiter Rankings give you some indication of where the USNWR rankings may ultimately be headed. Schools with better faculty scholarship may be more likely to rise in USNWR over time. And schools whose USNWR ranking is higher than their scholarly reputation would seem to warrant may be more likely to fall over time.

  2. R.Dernister says:

    An interesting article (thank you, Dean Amar) but as the article points out, legal scholarship and legal practice have little to do with each other these days. Since most law school graduates tend to be practitioners rather than scholars, “scholarly impact” will be minimal where the rubber meets the road.

  3. fafo says:

    Law school rankings are totally irrelevant for lawyers who actually practice law and no one cares an iota about them. utter mental masturbation