School & District Management

If at First Your Study Succeeds, Try, Try Again, Education Research Agency Says

By Sarah D. Sparks — August 10, 2017 4 min read

Over the past few years, social science research has been roiled by promising interventions that sputtered out when tried in other schools and by exciting results later found to be statistical noise.

With more districts and states seeking strong research on which to base school improvement, the Institute of Education Sciences has proposed prioritizing funding for studies that confirm or unpack existing findings.

“There is growing recognition in the education research community that for us to grow as a field and a science, we need to put more emphasis in replication,” said Thomas Brock, IES’ acting director.

“I think ESSA does increase the demand,” Brock said. “The What Works Clearinghouse has a growing archive [of intervention studies] ... but most of them have [been] conducted in only one place or one time. As states and school districts look to research for what works, they will want to see, frankly, a broader body of evidence, in contexts that come closer to the characteristics of their own.”

IES asked for feedback on how it can encourage more general replication studies and more nuance in studies of interventions’ effectiveness—which are likely to be in more demand as states and districts seek school improvement and other interventions under the Every Student Succeeds Act. In particular, it asked:

  • How important is it that an intervention is evaluated by someone who did not create or distribute the program?
  • How should evaluators define the routine conditions under which they study an intervention: the typical practice in classrooms, schools, and districts; the variety of students; and the usual level of support for implementation?
  • How should IES support researchers collecting and analyzing data on how a program is implemented?
  • How should IES support research examining what caused a particular result and how results may vary when a program is implemented in different environments?

Building Enthusiasm for Replications

IES’ move is part of a wave of new interest in replication studies in education research, after high-profile problems with existing studies.

For example, one analysis of 100 studies published in three top psychology journals found that only about a third of them showed the same results when the experiments were repeated. Sketchy results are not unique to education. The Food and Drug Administration requires all new medicines to show at least two large randomized controlled trials supporting their benefits, in part because the results of the first trial don’t show up in the second one more than half the time.

But Brock and Jon Baron, vice president for evidence-based policy at the Laura and John Arnold Foundation, both agreed that repeating someone else’s study is often thankless work. A landmark 2014 analysis of the top 100 education journals found that over five years, only 0.14 percent of the education articles published involved replication.

“One researcher told me doing a replication study is a good way to kill your career,” Baron said. “If you succeed in replicating the findings, it’s considered unoriginal. On the other hand, if you don’t reproduce it, you end up getting in a fight with the original researcher. There are not big incentives for researchers to do replication studies.”

Earlier this month, the American Educational Research Association convened representatives from more than 50 education research journals, foundations, and organizations to discuss ways to encourage more replication and more nuanced evaluation studies, according to Felice Levine, executive director of AERA.

“What you are hearing and seeing from IES has been part of the conversation in the lead science agencies ...” Levine said. “The ethos on the replication side is not to have ‘gotcha’ [results], but how can we further advance knowledge and learning through stimulating replication research and attention to the reproducibility of research.”

And like IES, the Arnold Foundation has started prioritizing grants for replication studies. Baron said a third of the foundation’s 40 ongoing experimental studies are replications, such as a study to expand and update MDRC’s evaluation of career academies in California, and another examining whether book fairs support reading achievement for low-income students.

Exploring New Contexts

Back in 2010, a randomized controlled trial published in the journal Reading Psychology found that book fairs allowing mostly poor, black, and Hispanic elementary students to choose a dozen free books just before summer break led to a boost in reading achievement of 35 to 40 percent of a grade level three years later. It suggested a promising way to counter summertime slump, but as Baron noted, “The only way to find out for certain is to do a replication study in another population; there really is no shortcut.”

That’s why the foundation is repeating the experiment as closely as possible in a rural, poor, but mostly white community in East Tennessee. “The hope is if it works with urban low-income black students and it works with rural low-income white students, you’ve got a pretty good sense it will be more generalizable across populations,” he said. “Without that kind of evidence, you really don’t know if that first finding might be a fluke.”

IES’ Brock said that based on the response to the request for comment, the research agency could create new grants or restructure existing ones. “All of that could go a long way to signal to researchers and [journal] reviewers that this is something we’re very serious about,” he said.

Comments can be sent to Comments.Research@ed.gov by Monday, Oct. 2, 2017.


A version of this news article first appeared in the Inside School Research blog.