By: D. Deitch
The concept of ranking things is one that invites great debate. The first known use of the word "ranking" dates to 1836. Ever since, people have been ranking everything from military designations to college football teams, and everything in between. The key to any standardized competitive ranking is a set of criteria against which all of the ranked items can be evaluated. No matter how simple or complex, the criteria behind the rankings are what provoke the discussion. Law journals have been ranked for years by multiple sources using a range of different criteria.
How are intellectual property law journals ranked? Which organizations rank them? And how do these organizations design the criteria by which the rankings are determined?
The search for these answers prompted a broad look into the landscape of law journal rankings and opened more questions that have yet to be answered.
Who Ranks Journals?
Before looking into how IP journals are ranked, it is important to understand who is behind journal rankings generally. What makes understanding IP journal rankings difficult is the lack of reference points available. Law journals as a whole are not evaluated by a multitude of sources, which can lead to reliance on information crafted by a small subset of the community. The lack of information also encourages the fallacy that there is a direct connection between where a law school ranks nationally and how esteemed its law journals are. While prestige matters, it is naïve to simply rely on law school rankings as a proxy for where a school's particular journal should rank among its peers.
Washington and Lee School of Law has established itself as one of, if not the, premier law journal ranking organizations. In addition to Washington and Lee, Google Scholar has cemented itself as one of the central platforms for scholarly literature. Washington and Lee was one of the first organizations to create a function for filtering subject-specific journals (such as intellectual property) in its rankings. It took until November of 2012 for Google Scholar to announce that it, too, would add a function allowing users to search by "research area." One unfortunate difference between the Washington and Lee rankings and the Google Scholar metrics is that Google Scholar does not have a specific intellectual property category among its "research areas," instead offering a "Technology Law" category that serves as a catch-all for journals across verticals such as "International Data Security" or "Law and the Arts." Anyone who does even a minimal amount of research into journal rankings, and particularly into rankings of specific subjects such as IP, will find that they end up asking more questions than they arrived with.
Washington and Lee's ranking methodology collects data from Westlaw using Boolean searches. For every law journal, it tallies "citing documents": court cases from all U.S. jurisdictions as well as law review and journal articles. A citing document is one that cites journal volumes published in the previous five years. Washington and Lee recently reduced this window from eight years to five in order to continue promoting the citation of current scholarship and to prevent any bias the rankings might show toward long-published journals.
Washington and Lee uses two crucial metrics to create its final rankings. The first is "Impact Factor," defined as the average number of annual citations per article in a given journal, rounded to two decimal places. The weighting in the Combined Score metric (described below) helps offset any bias against journals that publish a larger number of shorter articles. The other critical metric is "Total Cites," simply the combined number of journal and case citations in the given time period.
Together, these metrics feed the "Combined Score" that determines the ultimate rankings. Combined Score is a composite of each journal's Impact Factor and Total Cites, calculated by weighting Impact Factor at approximately one-third and Total Cites at two-thirds. The weighted total is then normalized to a 100-point scale to create a relative ranking of all journals. The Combined Score rankings are based on Ronen Perry's observation that neither an Impact Factor metric nor a Total Cites metric is independently sufficient to create a holistic set of rankings. Perry's premise was that the real difficulty lies in determining what weight to give the underlying factors within a combined ranking methodology. Washington and Lee ultimately decided on 0.33 because that weight gives Harvard the highest combined rank for the better part of the period dating from 1988 onward.
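The weighting just described can be sketched in a few lines of code. The journal figures below are hypothetical, and one step is an assumption worth flagging: Washington and Lee does not publish its exact formula, so this sketch scales each metric to its maximum to make Impact Factor and Total Cites comparable before applying the one-third/two-thirds weights.

```python
# Illustrative sketch of a Combined Score calculation, NOT W&L's actual code.
# Assumption: each metric is scaled to its maximum before weighting, since
# the raw metrics are on very different scales.

def combined_scores(journals: dict[str, tuple[int, int]]) -> dict[str, float]:
    """journals maps a name to (total cites, articles published)."""
    # Impact Factor: average citations per article, to two decimal places.
    ifs = {name: round(c / arts, 2) for name, (c, arts) in journals.items()}
    cites = {name: c for name, (c, _) in journals.items()}
    max_if, max_cites = max(ifs.values()), max(cites.values())
    # Weight Impact Factor at roughly one-third, Total Cites at two-thirds.
    raw = {
        name: 0.33 * (ifs[name] / max_if) + 0.67 * (cites[name] / max_cites)
        for name in journals
    }
    # Normalize so the top journal scores 100.
    top = max(raw.values())
    return {name: round(100 * score / top, 1) for name, score in raw.items()}

# Hypothetical data: (total cites, articles) over the five-year window.
journals = {
    "Journal A": (1200, 150),
    "Journal B": (900, 60),
    "Journal C": (300, 40),
}
print(combined_scores(journals))
```

Note how the weighting plays out: Journal B has nearly double Journal A's Impact Factor, but Journal A's larger citation total still puts it on top, which is exactly the tension Perry identified in choosing the weights.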
As for Google Scholar, its rankings also rely on two key metrics: "h5-index" and "h5-median." The h-index Google uses is an alternative to Impact Factor for measuring a given journal's importance.
Google defines them both as follows:
“h5-median for a publication is the median number of citations for the articles that make up its h5-index.
h5-index is the h-index for articles published in the last 5 complete years. It is the largest number h such that h articles published in 2014-2018 have at least h citations each.”
While there is much debate over which of the two metrics (Impact Factor or h-index) is "better," there may be no single right answer, because they serve two different purposes. Impact Factor is geared toward recognizing so-called 'prestige,' while the h-index is designed as more of an author- or researcher-based metric. In other words, Impact Factor gauges the reputation of a given journal but does not necessarily capture the impact of individual articles or their authors. The h-index, on the other hand, measures a publication record and its impact by looking at both the number of papers an author has published and the citations those papers have received.
Some question whether either Washington and Lee or Google Scholar offers a foolproof system for ranking law journals. As a result, several sources have decided to create their own aggregate rankings.
Bryce Clayton Newell, Assistant Professor of Media Law and Policy at the University of Oregon, has gained recognition with his "Meta-Rankings," first posted in 2016. He considers a weighted combination of resources that much of the community finds reliable: Washington and Lee's law journal rankings and their separate Impact Factor rankings, Google Scholar's metrics, and U.S. News peer reputation scores as well as U.S. News overall school rankings. His "MetaRank" gives 25% weight each to Washington and Lee's journal rankings, Google Scholar's rankings, the U.S. News peer reputation score, and the U.S. News overall school rankings, leaving Washington and Lee's Impact Factor out of the final calculation. This exercise, or something like it, has gained popularity in recent years as many people search for the best process to rank law journals. There is certainly disagreement about what to measure and how to weight certain categories, but the premise remains constant: trying to find the best way to rank law journals on an even playing field.
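The meta-ranking idea reduces to a weighted average of a journal's rank across the four sources. A minimal sketch, with hypothetical journal names and ranks (the source labels and figures are placeholders, not Newell's actual data):

```python
# Illustrative meta-ranking: average a journal's rank across four sources
# at 25% weight each, as the text describes. Lower scores rank better.

WEIGHTS = {
    "wlu_journal_rank": 0.25,     # Washington and Lee journal rank
    "google_scholar_rank": 0.25,  # Google Scholar Metrics rank
    "usnews_peer_rank": 0.25,     # U.S. News peer reputation rank
    "usnews_overall_rank": 0.25,  # U.S. News overall school rank
}

def meta_rank(ranks_by_source: dict[str, dict[str, int]]) -> list[tuple[str, float]]:
    """Combine per-source ranks into a weighted average, best first."""
    journals = next(iter(ranks_by_source.values())).keys()
    scores = {
        j: sum(w * ranks_by_source[src][j] for src, w in WEIGHTS.items())
        for j in journals
    }
    return sorted(scores.items(), key=lambda kv: kv[1])

# Hypothetical ranks for three journals in each source.
ranks = {
    "wlu_journal_rank":    {"Journal A": 1, "Journal B": 3, "Journal C": 2},
    "google_scholar_rank": {"Journal A": 2, "Journal B": 1, "Journal C": 3},
    "usnews_peer_rank":    {"Journal A": 1, "Journal B": 2, "Journal C": 3},
    "usnews_overall_rank": {"Journal A": 2, "Journal B": 1, "Journal C": 3},
}
print(meta_rank(ranks))
```

The design choice worth noting is that equal weights make no source dominant: a journal that tops one list but lags in the others can still be overtaken, which is precisely the evening-out effect aggregate rankings aim for.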
As interest in ranking law journals has increased, many believe the journal landscape will receive more exposure as a result. More and more organizations and resources are being dedicated to studying journals and how citations are tracked within legal documents. The hope is that with fresh faces exploring the subject, old methods will be challenged and new methods implemented. While it won't happen overnight, it will certainly be interesting to see where it goes.
 Ranking, Merriam-Webster Dictionary. (11th ed. 2009).
 Washington and Lee School of Law, W&L Law Journal Rankings, Washington and Lee University (Sept. 3, 2019), https://managementtools4.wlu.edu/LawJournals/Default.aspx.
 Google Scholar Top Publications, Google (Nov. 2019), https://scholar.google.com/citations?view_op=top_venues&hl=en&vq=soc_law.
 W&L Law Journal Rankings, supra note 1.
 Helder Suzuki, Updated Scholar Metrics: Now Grouped by Research Area, Google Scholar Blog (Nov. 15, 2012), https://scholar.googleblog.com/2012/11/updated-scholar-metrics-now-grouped-by.html.
 Google Scholar Top Publications: Technology Law, Google (Nov. 2019), https://scholar.google.com/citations?view_op=top_venues&hl=en&vq=soc_technologylaw.
 Washington and Lee School of Law, W&L Law Journal Rankings: Ranking Methodology, Washington and Lee University (Sept. 2019), https://managementtools4.wlu.edu/LawJournals/Default3.aspx.
 Washington and Lee School of Law, W&L Law Journal Rankings: Combined-Score Ranking, Washington and Lee University (Sept. 2019), https://managementtools4.wlu.edu/LawJournals/Default4.aspx.
 Washington and Lee School of Law, W&L Law Journal Rankings: Impact Factor, Washington and Lee University (Sept. 2019), https://managementtools4.wlu.edu/LawJournals/Default5.aspx.
 W&L Law Journal Rankings: Combined-Score Ranking, supra note 10.
 Ronen Perry, The Relative Value of American Law Reviews: Refinement and Implementation, 39 Conn. L. Rev. 1 (2007) https://papers.ssrn.com/sol3/papers.cfm?abstract_id=897063.
 W&L Law Journal Rankings: Combined-Score Ranking, supra note 10.
 Google Scholar Metrics, Google (July 2019), https://scholar.google.com/intl/en/scholar/metrics.html#metrics.
 Lutz Bornmann & Hans-Dieter Daniel, The State of h Index Research, 10 EMBO Reports 2 (2009), https://www.embopress.org/doi/abs/10.1038/embor.2008.233.
 Bryce Clayton Newell, Law Journal Meta-Ranking, 2019 Edition, University of Oregon Blog, (July 23, 2019), https://blogs.uoregon.edu/bcnewell/meta-ranking/.
 Paul Caron, 2019 Meta-Ranking of Flagship U.S. Law Reviews, TaxProf Blog, (July 25, 2019), https://taxprof.typepad.com/taxprof_blog/2019/07/2019-meta-ranking-of-flagship-us-law-reviews.html.