James Phillips and John Yoo sent along a link to their paper, The Cite Stuff: Inventing a Better Law Faculty Relevance Measure (September 3, 2012). I quote the abstract:
Citation rankings as a measure of scholarly quality are both controversial and popular. They provide a quantitative, albeit imperfect, measure of intellectual impact and productivity. But the number of times a scholar has been cited by his peers confers more than just bragging rights. They arguably help form the reputation of American legal scholars to the point that they have allegedly influenced faculty hiring decisions, and their collective impact may well shape the ranking of law schools themselves.
Arguably, the most well-known such metric for legal scholarship is the method used by Brian Leiter of the University of Chicago Law School and posted on his website. Despite the methodological rigorousness of the Leiter system, it suffers from some well-recognized limitations. It is biased in favor of schools with older, smaller faculties, for example, and against schools that do not produce as much peer-reviewed scholarship.
This study seeks to improve on earlier efforts by producing a more relevant and accurate citation-based ranking system. It produces a measure that explains 81 percent of the variation in the U.S. News academic peer rankings, implicitly revealing how schools could boost those rankings, and lists the most cited professors based on this new ranking methodology, both overall, amongst younger scholars, and in 20 areas of law. This allows for the top school in each area of law to be calculated, which could be useful to aspiring JD students who desire to know the best school in the area(s) of law they are most interested in. Finally, this study proposes an alternative faculty ranking system focusing on the percentage of a law school faculty that are “All-Stars” (ranked in the top 10 in citations per year in an area of law). This alternative ranking system improves upon some of the weaknesses of previous faculty quality ranking methodologies and argues that citation-based studies do measure something important - relevance of scholarship.
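For those curious about the mechanics, the “All-Star” measure is simple enough to sketch in a few lines of code. The roster, areas, and citation figures below are invented purely for illustration; the paper’s actual data set and area classifications are, of course, far larger and more careful.

```python
# Illustrative sketch of an "All-Star" percentage ranking (hypothetical data).
# A scholar counts as an "All-Star" if he or she ranks in the top 10 in
# citations per year within an area of law; a school's score is the share of
# its faculty who qualify. Names, areas, and figures are invented.
from collections import defaultdict

faculty = [
    # (name, school, area, total_citations, years_in_teaching)
    ("Scholar A", "School X", "Corporate Law",      3200, 20),
    ("Scholar B", "School X", "Constitutional Law", 1500, 10),
    ("Scholar C", "School Y", "Corporate Law",       900,  5),
    ("Scholar D", "School Y", "Constitutional Law", 2400, 30),
]

TOP_N = 10  # the paper's cutoff: top 10 in citations per year per area

# Rank scholars within each area by citations per year.
by_area = defaultdict(list)
for name, school, area, cites, years in faculty:
    by_area[area].append((cites / years, name, school))

# Everyone in the top N of an area is an "All-Star" (with this toy roster,
# that is everyone; a real data set would be far larger).
all_stars = set()
for scholars in by_area.values():
    scholars.sort(reverse=True)
    all_stars.update((name, school) for _, name, school in scholars[:TOP_N])

# A school's ranking score is the percentage of its faculty who are All-Stars.
totals, stars = defaultdict(int), defaultdict(int)
for name, school, *_ in faculty:
    totals[school] += 1
    stars[school] += (name, school) in all_stars

for school in sorted(totals):
    print(f"{school}: {100 * stars[school] / totals[school]:.0f}% All-Stars")
```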
What grabs me about the paper is that it is explicitly framed (albeit just in part) as a response to my blog posts on other citation-counting systems:
This study thus reduces some of the problems in citation studies as identified by Professor Stephen Bainbridge:
They reward longevity and prolificacy. An older author with 100 articles that have each been cited 10 times will have a higher count than an author who has published 1 article that has been cited 900 times. Yet, might one not fairly argue that the latter is the more influential?
They disregard immediacy: An article that is being [cited] 500 times in the first year of its publication is probably more influential than an article that’s being [cited] 500 times over 20 years.
They disregard the half-life of citation rates, which might be a very useful proxy for influence.
They usually rely on one database, typically a legally oriented one, which limits measurement of interdisciplinary impact. [The accompanying footnote reads: "http://www.professorbainbridge.com/professorbainbridgecom/2010/05/ranking-faculty-quality.html".]
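To see how much those choices matter, here is a back-of-the-envelope sketch of the per-year and half-life measures mentioned in the criticisms quoted above. The helper functions and citation histories are hypothetical, not taken from the paper; the point is only to show why raw lifetime counts can mislead.

```python
# Back-of-the-envelope comparison of lifetime counts, citations per year,
# and citation half-life, using two invented publication records.

def citations_per_year(yearly_counts):
    """Average citations per year since publication."""
    return sum(yearly_counts) / len(yearly_counts)

def citation_half_life(yearly_counts):
    """Years needed to accumulate half of the article's total citations."""
    target = sum(yearly_counts) / 2
    running = 0
    for year, count in enumerate(yearly_counts, start=1):
        running += count
        if running >= target:
            return year
    return len(yearly_counts)

# The older, prolific record: 100 articles cited 10 times each over 20 years.
prolific_total = 100 * 10                     # 1,000 lifetime citations
print("Prolific author:", prolific_total, "citations,",
      prolific_total / 20, "per year")

# The single blockbuster: 900 citations, most of them arriving early.
blockbuster = [500, 200, 100, 50, 30, 20]     # six years of citation counts
print("Blockbuster article:", sum(blockbuster), "citations,",
      round(citations_per_year(blockbuster)), "per year, half-life of",
      citation_half_life(blockbuster), "year(s)")
```

On these made-up numbers the blockbuster author has fewer lifetime citations but three times the citations-per-year rate, and a one-year half-life, which is exactly the sort of influence a raw count obscures.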
And, of course, I was interested to note my ranking among the top corporate law scholars:
And also how UCLA did:
Meanwhile, speaking of Brian Leiter, here's an excerpt from his analysis of the new methodology:
The two most interesting things they do are consult citations in the "Web of Science" database (to pick up citations for interdisciplinary scholars--this database includes social science and humanities journals) and calculate a citations-per-year score for individual faculty. A couple of caveats: (1) they look at only the top 16 schools according to the U.S. News reputation data, so not all law schools, and not even a few dozen law schools; and (2) they make some contentious--bordering in some cases on absurd--choices about what "area" to count a faculty member for. ...
... A couple of readers asked whether I thought, per the title of the Phillips & Yoo piece, that their citation study method was "better." I guess I think it's neither better nor worse, just different, but having different metrics is good, as long as they're basically sensible, and this one certainly is.