

Rick Hess Straight Up

Education policy maven Rick Hess of the American Enterprise Institute think tank offers straight talk on matters of policy, politics, research, and reform.


When Education Research Training Is Like Giving a Power Saw to a 5th Grader

By Rick Hess — September 24, 2018

Last week, I wrote about the problem with “thin” expertise in education research. The issue: Too many grad students are training to be education policy scholars in programs that cultivate expertise in research methods but not in the stuff of education. Today, I want to say a few more words on this, prompted by an especially illuminating line of response: the contention, shared by several readers, that doctoral students should focus single-mindedly on mastering methods and statistical tools, because all that other stuff can be picked up via energetic reading and some late-night reflection.

Now, I wholly buy the notion that grad students need rigorous methodological training. After all, a quarter-century at this has required me to consume more than my share of junk education research. And, while you might not know it from my scribblings today, back when I did my PhD, the Harvard government department insisted on serious training in methods and research design. I say all this only to make clear that, yes, I agree that aspiring researchers should be rigorously trained in methods.

And yet I was struck to see it argued that technical/methodological expertise is all that really matters—that knowing about education, policymaking, and the rest is nice but not essential for education policy researchers. All the same, I wasn’t terribly surprised, because I’ve seen too much of this sort of thing, both from clever econometricians and from scholar-activists content to treat education as a series of simple levers they’re trying to jiggle as they seek to fix larger social ills.

The problem: Putting impressive-sounding, attention-getting analytic tools in the hands of education researchers who don’t understand education is like putting a power saw in the hands of a 5th grader. That saw is more likely to lead to an emergency room visit than to elegant carpentry. Competent education policy researchers need expertise in both methods and substance.

Imagine that researchers are trying to gauge how much test scores matter for long-term outcomes like employment, earnings, or civic engagement. And let’s say they use data from the pre-accountability era, in the 1990s, when test results mattered less to schools. It would be important, of course, to recognize and to note that the relationship of test scores to long-term outcomes in a low-stakes environment could change dramatically when the stakes are ramped up. In a high-stakes context, schools and educators may scramble to inflate scores in ways that lessen their predictive utility. Worse, in such an environment, the educators taking shortcuts and posting impressive results may not be the same ones who were previously delivering authentic gains, muddying the findings. It would be problematic, even counterproductive, if researchers didn’t carefully explain all of this.

Or imagine that researchers are studying high school graduation rates and find substantial gains fueled by credit recovery programs. Let’s presume that those researchers treat graduation rates as an obviously desirable outcome, which means interventions that boost graduation rates get described as “effective.” Of course, it would be vital for researchers to note that credit recovery might boost graduation rates either by helping students stay on track and overcome missteps or by amounting to a standards-free alternative path to a meaningless degree. Distinguishing between the two would require researchers to grasp the ins and outs of credit recovery, to understand the incentives at play, and to find ways to separate the wheat from the chaff. Otherwise, they might offer recommendations that accidentally encourage gimmicks, gamesmanship, and lowered standards.

Or consider what happens when number crunchers with state-of-the-art tools assume that it’s someone else’s job to ensure that numbers measure what they’re supposed to. As Georgetown University’s terrific Nora Gordon has observed regarding the data in the U.S. Department of Education’s Civil Rights Data Collection, “I feel like there is a lot of policy attention to that data source, with good reason, but I don’t know how those data are vetted in any way.” The result, Gordon notes, is that “there are a lot of crazy outliers you’ll see in there, so you have to come up with some decisions about how you’re going to trim them.” Such decisions can hugely impact results, but making them requires context, substantive expertise, and informed judgment.
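To make that concrete, here is a minimal, purely hypothetical sketch (invented numbers, not the actual Civil Rights Data Collection) of how the choice of a trimming rule can move a headline figure. Whether a value like 25 per 100 is a genuinely high rate or a reporting error is exactly the kind of judgment that requires knowing the data and the schools behind it.

```python
# Hypothetical illustration with made-up district "rates per 100 students."
# A few implausible values are mixed in with otherwise ordinary ones.
rates = [3.1, 4.5, 2.8, 5.0, 3.7, 25.0, 2.9, 3.4, 61.0, 0.0, 3.9, 78.5]

def trimmed_mean(values, cap):
    """Drop any value above `cap` before averaging -- one possible trimming rule."""
    kept = [v for v in values if v <= cap]
    return sum(kept) / len(kept)

print(round(sum(rates) / len(rates), 1))   # no trimming:        ~16.2
print(round(trimmed_mean(rates, 50), 1))   # trim values > 50:   ~5.4
print(round(trimmed_mean(rates, 20), 1))   # trim values > 20:   ~3.3
```

Three defensible-sounding rules, three very different answers; the statistics alone cannot say which one is right.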

A big part of the problem is that the professional rewards in education research today go to those who use cool tools to study big data sets in order to offer simple answers to complex questions. Addressing that is not just a question of training or skills. But ensuring that education research is useful, wary of blind spots, and reflective about what it can and can’t say begins with equipping researchers to ask the right questions and to know what they don’t know.

The opinions expressed in Rick Hess Straight Up are strictly those of the author(s) and do not reflect the opinions or endorsement of Editorial Projects in Education, or any of its publications.