Brian Leiter posts:
Is it through scholarly impact/citation studies (like this one)? Or reputational surveys (such as here or here)? Or SSRN downloads (e.g., here)? Or is there no reliable way to gauge this? Take the poll.
UPDATE: A comment from a reader leads me to realize that I should emphasize that you should please interpret the categories broadly: so, e.g., "scholarly impact/citation studies" doesn't commit one to my way of doing them--it could include impact/citation studies that used different databases, that included books, Google scholar, etc. Similarly, a reputational survey needn't mean the way US News does them, but could include a survey in which experts are asked to evaluate recent work by colleagues and give their assessment of it.
The core problem is to separate out three levels of metrics: those that measure the quality of an individual article (article level metrics), those that measure the quality of an individual scholar (individual level metrics), and those that measure the quality of a law school as a whole (institution level metrics). It's not self-evident to me that metrics useful at one level tell us anything about the other levels.
Let's start by thinking about individual faculty members. Most measures of a scholar's quality are built on some form of article level metric. Citation count rankings simply add up all the citations to all of a scholar's articles; SSRN download rankings do the same for an author's cumulative SSRN postings. These article level metrics are all problematic (see the sketch after this list for the basic arithmetic and a couple of possible adjustments):
- They reward longevity and prolificacy. An older author with 100 articles that have each been cited 10 times (1,000 citations in all) will have a higher cite count than an author who has published 1 article that has been cited 900 times. Yet might one not fairly argue that the latter is the more influential?
- They reward monumental stupidity: an article so dumb that a host of people decide to take shots at it. Conversely, they can also reward massive mediocrity. Although this is probably less true in law than in some other disciplines, I suspect it is still possible to increase one's stats by publishing a dull but comprehensive literature review or non-analytical treatise that becomes the standard citation for the sort of propositions for which law review editors insist on a supporting citation, no matter how obvious those propositions may be.
- They disregard immediacy: An article that is downloaded 500 times in the first year after publication is probably more influential than an article that has been downloaded 500 times over 20 years.
- They disregard the half-life of article citation rates, which might be a very useful proxy for influence.
- They disregard the quality of the citation. An article that gets cited by the Supreme Court as providing the basis for a decision would be a lot more impressive (at least to me) than an article that is cited 500 times by fourth tier authors in fourth tier law reviews.
- They usually rely on one database, typically a legally oriented one, which limits measurement of interdisciplinary impact.
- I'm not aware of any that measure impact on judicial opinions, which strikes me as the really relevant question. (Sadly, it will strike all too many of my fellow law professors as irrelevant or even counting against one.)
- They don't make any use of new media (blog references?) or social networking.
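To make the complaint concrete, here is a minimal sketch in Python, using entirely made-up data structures, of what the naive metric computes and how a couple of the adjustments suggested above (immediacy and citation quality) might be layered onto it. The field names, source categories, and weights are my own illustrative assumptions, not anyone's actual methodology.

```python
# A toy model of article level metrics. Everything here is hypothetical.

from dataclasses import dataclass, field
from typing import Dict, List

@dataclass
class Citation:
    year: int       # year the citing work appeared
    source: str     # e.g. "court", "top_review", "other" (assumed categories)

@dataclass
class Article:
    published: int
    citations: List[Citation] = field(default_factory=list)

# Naive article level metric: just count the citations.
def raw_count(article: Article) -> int:
    return len(article.citations)

# Immediacy adjustment: citations per year since publication.
def citations_per_year(article: Article, current_year: int) -> float:
    age = max(current_year - article.published, 1)
    return len(article.citations) / age

# Quality adjustment: weight citations by who is doing the citing.
# The weights are purely illustrative.
SOURCE_WEIGHTS: Dict[str, float] = {"court": 5.0, "top_review": 2.0, "other": 1.0}

def weighted_count(article: Article) -> float:
    return sum(SOURCE_WEIGHTS.get(c.source, 1.0) for c in article.citations)

# The usual scholar level metric is just the sum of the naive counts.
def scholar_raw_total(articles: List[Article]) -> int:
    return sum(raw_count(a) for a in articles)
```

Even this toy version makes the point: as soon as you adjust for age or weight citations by source, you are making contestable judgment calls that the raw count quietly conceals.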
We then compound the problem by assuming that simply aggregating some article level metric across a scholar's work tells us something useful about the reputation of the scholar. (Not that that has ever stopped me from blogging about measures by which I do well.) Why doing so should tell us very much is not immediately apparent. In particular, I see no reason to think that the errors introduced by article level metrics are sufficiently randomized that they cancel one another out when aggregated.
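For the skeptical, here is a toy simulation (standard-library Python, invented numbers) of why that matters: independent, mean-zero errors largely wash out when averaged over a scholar's articles, but any shared bias, say a field that is systematically over-cited, survives aggregation untouched.

```python
# Each article's measured score is its "true" quality plus noise; the scholar's
# score averages the measurements. Independent noise mostly cancels; a shared
# bias does not. All numbers are invented for illustration.

import random
import statistics

random.seed(1)

N_ARTICLES = 50
TRUE_QUALITY = 10.0   # same true quality for every article, for simplicity

def scholar_error(shared_bias: float) -> float:
    """Error in one simulated scholar's average score."""
    measured = [TRUE_QUALITY + shared_bias + random.gauss(0, 3)
                for _ in range(N_ARTICLES)]
    return statistics.mean(measured) - TRUE_QUALITY

# Independent noise only: per-article errors largely cancel in the average.
independent = [abs(scholar_error(shared_bias=0.0)) for _ in range(1000)]

# Same noise plus a constant bias of 2: the bias survives aggregation intact.
biased = [abs(scholar_error(shared_bias=2.0)) for _ in range(1000)]

print(f"typical error, independent noise: {statistics.mean(independent):.2f}")
print(f"typical error, shared bias:       {statistics.mean(biased):.2f}")
```

The independent-noise error shrinks toward zero as the number of articles grows; the shared bias comes through at essentially full size. Aggregation cures only the errors that happen to be random.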
In sum, what we need is a faculty member quality metric that transcends aggregates of article level metrics.
Turning to ranking law schools by the quality of their faculty as a group, most database-based metrics are really just a summation of the aggregate article level metrics of the entire faculty: all citations to all the articles by all the faculty members, for example. Again, one wonders why such indices should be revelatory. In particular, I again see no reason to think that the errors introduced by article level metrics are sufficiently randomized that they cancel one another out when aggregated at two successive levels.
US News' reputation score avoids that problem by creating an institution level metric. But the flaws in that metric are well documented. The answer, however, is to come up with a better institution level metric, not to keep relying on aggregates of article level metrics.
I'm not sure what the right solution would be. We could try adapting a multi-factor index, such as a variant of the Faculty Scholarly Productivity Index, but it would have to be highly refined to be useful for law. The FSPI relies on factors that simply don't apply to law, such as peer-reviewed journals (we all know that issue), research awards (few in law), and federal grant awards (ditto).
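Purely to fix ideas, here is what a law-adapted multi-factor index might look like in skeletal form: a weighted combination of normalized factors. The factor names and weights below are placeholders I made up for illustration, not a proposal.

```python
# A hypothetical multi-factor index for law faculties. Factors and weights
# are illustrative assumptions only.

from typing import Dict

# Assumed law-relevant factors and weights (weights sum to 1.0).
WEIGHTS: Dict[str, float] = {
    "citations_in_law_reviews": 0.30,
    "citations_in_judicial_opinions": 0.30,
    "interdisciplinary_citations": 0.15,
    "books_and_treatises": 0.15,
    "peer_assessment_survey": 0.10,
}

def faculty_index(raw: Dict[str, float], maxima: Dict[str, float]) -> float:
    """Combine a school's raw factor values into a single 0-100 score.

    Each factor is normalized against the highest value observed across all
    schools (maxima), then weighted and summed.
    """
    score = 0.0
    for factor, weight in WEIGHTS.items():
        denom = maxima.get(factor, 0.0)
        normalized = raw.get(factor, 0.0) / denom if denom else 0.0
        score += weight * normalized
    return 100.0 * score
```

The hard part, of course, is not the arithmetic but defending any particular choice of factors and weights.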
Or we could let a thousand flowers bloom. Folks who want to measure faculty quality should use a bunch of different metrics, report them all, and let users make of the results what they will.