

Rick Hess Straight Up

Education policy maven Rick Hess of the American Enterprise Institute think tank offers straight talk on matters of policy, politics, research, and reform.


The RHSU Edu-Scholar Public Influence Scoring Rubric

By Rick Hess — January 06, 2015

Tomorrow, I’ll be unveiling the 2015 RHSU Edu-Scholar Public Influence Rankings, honoring the 200 university-based education scholars who had the biggest influence on the nation’s education discourse last year. Today, I want to run through the scoring rubric for those rankings. The Edu-Scholar rankings employ metrics that are publicly available, readily comparable, and replicable by third parties. This obviously limits the nuance and sophistication of the measures, but such is life.

Given that there are well over 20,000 university-based faculty tackling educational questions in the U.S., even making the Edu-Scholar list is an honor—and cracking the top 100 is quite an accomplishment in its own right. So, who made the list? Eligible are university-based scholars who have a focus wholly or primarily on educational questions. The rankings include the top 150 finishers from last year, augmented by 50 “at-large” additions named by a selection committee of 31 accomplished and disciplinarily, intellectually, and geographically diverse scholars. The selection committee (composed of members who were already assured an automatic bid by dint of their 2014 ranking) first nominated individuals for inclusion, then voted on whom to include from that slate of nominees.

I’m indebted to the members of the committee for their assistance, especially given that they’re all extraordinarily busy folks. So, I’d like to acknowledge the members of the 2015 RHSU Selection Committee:

Deborah Ball (U. Michigan), Camilla Benbow (Vanderbilt), Dominic Brewer (NYU), Linda Darling-Hammond (Stanford), Susan Dynarski (U. Michigan), Ronald Ferguson (Harvard), Susan Fuhrman (Columbia), Dan Goldhaber (U. Washington), Sara Goldrick-Rab (U. Wisconsin), Jay Greene (U. Arkansas), Margaret Grogan (Claremont), Eric Hanushek (Stanford), Doug Harris (Tulane), Jeff Henig (Columbia), Thomas Kane (Harvard), Gloria Ladson-Billings (U. Wisconsin), Susanna Loeb (Stanford), Bridget Terry Long (Harvard), Pedro Noguera (NYU), Gary Orfield (UCLA), Robert Pianta (U. Virginia), Andy Porter (UPenn), Jim Ryan (Harvard), Marcelo Suarez-Orozco (UCLA), Sarah Turner (U. Virginia), Jacob Vigdor (U. Washington), Kevin Welner (CU Boulder), Marty West (Harvard), Daniel Willingham (U. Virginia), Yong Zhao (U. Oregon), and Jonathan Zimmerman (NYU).

Okay, so that’s how the list of scholars was compiled. How were they ranked? Each scholar was scored in eight categories, yielding a maximum possible score of 200. No one scored a 200. Surveying the results shows that a 100 puts one safely in the top 20, a 75 suffices to crack the top 40, and a 50 will get someone into the top 100.

Scores are calculated as follows:

Google Scholar Score: This figure gauges the number of articles, books, or papers a scholar has authored that are widely cited. A neat, common way to measure the breadth and influence of a scholar’s work is to list their works in descending order of how often each is cited, and then find the largest number of works that have each been cited at least that many times. (This is known to aficionados as the h-index.) For instance, a scholar who had 20 works that were each cited at least 20 times, but whose 21st most-frequently cited work was cited just 10 times, would score a 20. The measure recognizes that bodies of scholarship matter because they influence how important questions are understood and discussed. It helps ensure that results recognize deep influence, not just research that was buzzworthy last year. The search was conducted using the advanced search “author” filter in Google Scholar. A hand search culled out works by other, similarly named, individuals. For those scholars who had been proactive enough to create a Google Scholar account, their h-index was available at a glance. While Google Scholar is less precise than more specialized citation databases, it has the virtue of being multidisciplinary and publicly accessible. Points were capped at 50; if a scholar’s score exceeded that, they received a 50. This score offers a quick way to gauge both the expanse and influence of a scholar’s body of work. (This search was conducted on December 8-9.)
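To make the h-index mechanics concrete, here is a minimal sketch of how this component might be computed from a list of per-work citation counts. The function names and the input list are illustrative assumptions, not part of the actual tally.

```python
def h_index(citation_counts):
    """Largest h such that the scholar has h works cited at least h times."""
    counts = sorted(citation_counts, reverse=True)
    h = 0
    for rank, cites in enumerate(counts, start=1):
        if cites >= rank:
            h = rank
        else:
            break
    return h

def google_scholar_points(citation_counts, cap=50):
    """Convert an h-index into Edu-Scholar points, capped at 50."""
    return min(h_index(citation_counts), cap)

# The rubric's own example: 20 works cited 20+ times, the 21st cited only 10 times.
counts = [20] * 20 + [10]
print(google_scholar_points(counts))  # -> 20
```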

Book Points: An author search on Amazon tallied the number of books a scholar has authored, co-authored, or edited. Scholars received 2 points for a single-authored book, 1 point for a coauthored book in which they were the lead author, a half-point for coauthored books in which they were not the lead author, and a half-point for any edited volume. The search was conducted using an “Advanced Books Search” for the scholar’s first and last name. (On a few occasions, a middle initial or name was used to avoid duplication with authors who had the same name, e.g., “David Cohen” became “David K. Cohen,” and “Deborah Ball” became “Deborah Loewenberg Ball.”) We only searched for “Printed Books” (one of several formats Amazon allows you to pick from) so as to avoid double-counting books that are also available as e-books. This obviously means that books released only as e-books are omitted. However, as of early 2015, this still seems appropriate, given that few relevant books are, as yet, released solely as e-books (this will likely change before long, but we’ll cross that bridge when we come to it). “Out of print” volumes were excluded. This measure reflects the conviction that book-length contributions can shape and anchor discussion in an outsized fashion. Book points were capped at 25. (This search was conducted on December 9-10.)
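As a rough illustration of the book-point arithmetic (not the actual Amazon search workflow), here is a hedged sketch; the category labels are my own shorthand for the four cases described above.

```python
# Hypothetical per-book point values, drawn from the rubric described above.
BOOK_POINTS = {
    "single_authored": 2.0,
    "coauthored_lead": 1.0,
    "coauthored_not_lead": 0.5,
    "edited_volume": 0.5,
}

def book_points(books, cap=25.0):
    """books: a list of category labels, one per in-print printed book found."""
    total = sum(BOOK_POINTS[kind] for kind in books)
    return min(total, cap)

# e.g., two solo books, one lead-authored book, and one edited volume -> 5.5 points
print(book_points(["single_authored", "single_authored", "coauthored_lead", "edited_volume"]))
```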

Highest Amazon Ranking: This reflects the author’s highest-ranked book on Amazon. The sales rank of that book was subtracted from 400,000, and the result was divided by 20,000. This yielded a maximum score of 20. Given the nature of Amazon’s ranking algorithm, this can be volatile and is biased in favor of more recent works. For instance, a book may have been very influential a decade ago and continue to influence citation counts and a scholar’s larger profile, but produce few or no ranking points this year. The result is a decidedly imperfect measure, but one that conveys real information about whether a scholar has penned a book that is shaping the conversation. To that point, a number of books that stoked public discussion in recent years score well. (This search was conducted on December 10.)
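In formula terms, the conversion described above works out to (400,000 minus the sales rank) divided by 20,000. A small sketch follows, with one assumption of my own: a rank worse than 400,000 simply earns zero.

```python
def amazon_rank_points(best_sales_rank):
    """Convert a scholar's best Amazon sales rank into points (max of 20)."""
    points = (400_000 - best_sales_rank) / 20_000
    return max(points, 0.0)  # assumption: ranks beyond 400,000 earn nothing

print(round(amazon_rank_points(1), 2))        # ~20.0 for a chart-topping book
print(round(amazon_rank_points(100_000), 2))  # 15.0
```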

Education Press Mentions: This reflects the total number of times the scholar was quoted or mentioned in Education Week, the Chronicle of Higher Education, or Inside Higher Ed during 2014. The search was conducted using each scholar’s first and last name. If applicable, we also searched names using a common diminutive, and with and without a middle initial. In each instance, the highest result was recorded. To get the total number of Ed Press points, the number of appearances in the Chronicle and Inside Higher Ed was averaged; that average was then added to the number of appearances in EdWeek, and the sum was divided by 2. This ensures that K-12 and higher ed get equal weight in this metric. Ed Press points were capped at 30. This, like the next couple of categories, seeks to use a “wisdom of crowds” metric to gauge a scholar’s ubiquity and relevance to public discourse last year. (This search was conducted on December 16.)
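Expressed as a formula, the paragraph above amounts to: Ed Press points = (EdWeek mentions + the average of Chronicle and Inside Higher Ed mentions) / 2, capped at 30. A minimal sketch, with illustrative counts:

```python
def ed_press_points(edweek, chronicle, inside_higher_ed, cap=30.0):
    """Average the two higher-ed outlets, then average that with EdWeek; cap at 30."""
    higher_ed_avg = (chronicle + inside_higher_ed) / 2
    return min((edweek + higher_ed_avg) / 2, cap)

# e.g., 12 EdWeek mentions, 6 Chronicle mentions, 2 Inside Higher Ed mentions -> 8.0
print(ed_press_points(12, 6, 2))
```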

Web Mentions: This reflects the number of times a scholar was quoted, mentioned, or otherwise discussed online in 2014. The search was conducted using Google. The search terms were each scholar’s name and university affiliation (e.g., “Bill Smith” and “Rutgers”). Using affiliation serves a dual purpose: it avoids confusion due to common names and it increases the likelihood that the mentions are related to their university-affiliated role, rather than their activity in some other capacity. If a scholar is mentioned sans affiliation, that mention is omitted here. (That likely tamps down the scores of well-known scholars for whom affiliation may seem unnecessary. However, since the Darling-Hammonds, Ravitches, and Hanusheks fare just fine, I’m not concerned about the results.) Because social media and online discussions tend toward the informal, the search also included common diminutives (e.g., “Bob Pianta” as well as “Robert Pianta”). Searches were also run with and without middle initial. For each scholar, we used the single highest score from among these various configurations. (We didn’t sum them up, because that would have padded some scores with a lot of duplication.) Points were calculated by dividing total mentions by 30. Scores were capped at 30. (This search was conducted on December 15.)
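The arithmetic here is simply the total mention count divided by 30, capped at 30 points; a one-function sketch, taking the single best count from among the name-and-affiliation variants as input:

```python
def web_mention_points(best_mention_count, divisor=30, cap=30.0):
    """Highest Google mention count across name variants, divided by 30, capped at 30."""
    return min(best_mention_count / divisor, cap)

print(web_mention_points(450))   # -> 15.0
print(web_mention_points(2000))  # capped -> 30.0
```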

Newspaper Mentions: A Lexis Nexis search was used to determine the number of times a scholar was quoted or mentioned in U.S. newspapers. As with Web Mentions, the search was conducted using each scholar’s name and affiliation. Searches were run with and without middle initial, and the search also included common diminutives. For those in the top 150 who changed institutional affiliations in 2014, we were aware of the change and ran the names paired with each affiliation. In each instance, the highest result was recorded. Points were calculated by dividing the total number of mentions by two, and were capped at 30. (The search was conducted on December 16.)
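Likewise, newspaper points are the best Lexis Nexis count divided by two, capped at 30; a minimal sketch with illustrative inputs:

```python
def newspaper_points(best_mention_count, cap=30.0):
    """Highest Lexis Nexis mention count, divided by 2, capped at 30."""
    return min(best_mention_count / 2, cap)

print(newspaper_points(24))   # -> 12.0
print(newspaper_points(100))  # capped -> 30.0
```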

Congressional Record Mentions: We conducted a simple name search in the Congressional Record for 2014 to determine whether a scholar had testified or if their work was referenced by a member of Congress. Qualifying scholars received five points. (This search was conducted on December 11.)

Klout Score: A Twitter search determined whether a given scholar had a Twitter profile, with a hand search ruling out similarly named individuals. The score was then calculated from a scholar’s Klout score, which is a number between 0 and 100 that reflects a scholar’s online presence, primarily how often their Twitter activity is retweeted, mentioned, followed, listed, and answered. The Klout score was divided by 10 to calculate points earned, yielding a maximum score of 10. If a scholar was on Twitter but did not have a Klout score, then they received a zero. (This search was conducted on December 11.)
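Putting the eight components together, a hedged sketch of the overall tally might look like the following. The category caps sum to the 200-point maximum mentioned above (50 + 25 + 20 + 30 + 30 + 30 + 5 + 10 = 200); the structure and the example scholar are my own illustration, not the actual spreadsheet.

```python
def klout_points(klout_score):
    """Klout score (0-100) divided by 10; scholars without a Klout score get zero."""
    return 0.0 if klout_score is None else klout_score / 10

def congressional_points(mentioned_in_record):
    """Flat five points if the scholar appears in the 2014 Congressional Record."""
    return 5.0 if mentioned_in_record else 0.0

def total_score(gs, books, amazon, ed_press, web, newspapers, congress, klout):
    """Sum of the eight capped category scores (maximum possible: 200)."""
    return gs + books + amazon + ed_press + web + newspapers + congress + klout

# e.g., a hypothetical scholar roughly in "top 40" territory per the thresholds above
print(total_score(35, 10, 12, 8, 6, 4, 0, 4.5))  # -> 79.5
```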

The scoring rubric is intended to acknowledge both scholars whose widely referenced body of work influences our thinking on edu-questions and scholars who are actively engaged in public discourse and in writing and speaking to pressing concerns. That’s why the scoring is designed to discount, for instance, academic publications that have rarely been cited or books that are unread or out of print. Generally speaking, the scholars who rank highest are those who are both influential researchers and also active in the public square.

There are obviously lots of provisos when perusing the results. Different disciplines approach books and articles differently. Senior scholars obviously have had more opportunity to build a substantial body of work and influence (which is why the results unapologetically favor sustained accomplishment). And readers may care more for some categories than others. That’s all well and good. The whole point is to spur discussion about the nature of responsible public engagement: who’s doing a good job of it, how much these things matter, and how to gauge a scholar’s contribution. If the results help prompt such conversation, then we’re all good.

It’s worth noting that some academics dabble (very successfully) in education, but that it’s only a sideline for them. Two economists who come to mind are James Heckman and Raj Chetty. These accomplished individuals, for all their gifts, are not eligible for the Edu-Scholar rankings. (I’m sure they’ll survive.) For a scholar to be eligible for inclusion, education must constitute a substantial majority of their research and publication. Otherwise, among other challenges, the acclaim a Heckman or Chetty has earned writing about larger economic questions would play havoc with the rankings. This decision helps ensure that the rankings serve as something of an apples-to-apples comparison among scholars who focus on education.

Two questions commonly arise: Can somebody game this rubric? And am I concerned that this exercise will encourage academics to chase publicity? As for gaming, color me unconcerned. If scholars (against all odds) are motivated to write more relevant articles, pen more books that might sell, or be more aggressive about communicating their thinking in an accessible fashion, I think that’s great. That’s not “gaming,” it’s just good public scholarship. If I help encourage that: sweet. (This is why it always matters if metrics actually measure the things you value.) As for academics working harder to communicate beyond the academy, well, there’s obviously a point where public engagement becomes sleazy PR, but most academics are so immensely far from that point that I’m not unduly concerned.

Tomorrow’s list is obviously only a sliver of the faculty across the nation who are tackling education or education policy. For those interested in scoring additional scholars (or themselves!), it’s a straightforward task to do so using the scoring rubric. Indeed, the exercise was designed so that anyone can generate a comparable rating for a given scholar in no more than 15-20 minutes.

And a final note of thanks: For the arduous task of coordinating the selection committee and then spending dozens of hours crunching and double-checking all of this data for 200 scholars, I owe an immense shout-out to my ubertalented, indefatigable, and eagle-eyed research assistant Jenn Hatfield.

The opinions expressed in Rick Hess Straight Up are strictly those of the author(s) and do not reflect the opinions or endorsement of Editorial Projects in Education, or any of its publications.