Teaching

What Role Will Research Play in ESSA?

By Sarah D. Sparks — December 04, 2015

With the Senate expected to easily approve the mammoth Every Student Succeeds Act next week, it’s worth taking a look at how big a role research will play in guiding the next phase of federal education policy.

The answer: potentially a broader role—but a much quieter one—than in the No Child Left Behind era.

It’s clear that Congress is done trying to demand a “gold standard” for research. While the No Child Left Behind Act famously cited “scientifically based research” more than 100 times, and earlier reauthorization attempts spoke of “scientifically valid” research, the ESSA bill calls simply for programs and interventions to be “evidence based.” It also leaves most of the responsibility to states to decide the quality of that evidence.

Tiered Evidence Makes a Comeback

So, let’s break down what “evidence based” actually means. Buried in the bowels of the bill, it’s defined as an activity, strategy, or intervention that shows a statistically significant effect on improving student outcomes or other relevant outcomes. So far, so good.

The bill describes tiered levels of evidence, modeled on the Investing in Innovation grant program:

  • Strong evidence: includes at least one well-designed and -implemented experimental study, meaning a randomized controlled trial.
  • Moderate evidence: includes at least one well-designed and -implemented quasi-experimental study, like a regression discontinuity analysis.
  • Promising evidence: includes at least one well-designed and -implemented correlational study that controls for selection bias.

Programs and interventions have to meet those tiers of evidence if they are part of the law’s accountability-related school improvement section. However, the bill doesn’t actually make a distinction among these levels—all of them are considered “evidence.”

“If you only apply strong and moderate there are a lot fewer options, so I think [lawmakers] were trying to do a balancing act and give people more flexibility to try new things,” said Michele McLaughlin, the president of the Knowledge Alliance, which represents federal and private research groups. “If you’re a state and you don’t feel like buying into the evidence agenda, you can definitely go into the lowest tier ... but this is the era we’re in; it’s coaxing the people into doing things rather than telling them you have to do all this or bad things will happen to you.”

Everything else in the law that must be “evidence based” can use those tiers. But if nothing meets any of those levels, there’s a fourth tier: if a state or provider can show that a program’s rationale is based on high-quality research or a positive evaluation suggesting it is likely to improve student or other important outcomes, that rationale is enough, as long as the program or intervention also undergoes ongoing self-evaluation.

Grover “Russ” Whitehurst, a Brookings Institution fellow and the first Institute of Education Sciences director, said it makes sense for Congress to take a step back from requiring specific types of research in ESSA.

“One error of NCLB was demanding the use of [scientifically based research] when it didn’t exist, thus debasing the currency. Another was getting Congress into the business of defining SBR in education—research methodologists they are not,” Whitehurst said. “Perhaps these were justified at the time because the state of education research was really awful. But neither makes sense today.”

The tiers also mirror evidence structures already required in other IES research grants and in programs at several other agencies.

Implementation Hinges on State Capacity

Like the rest of the bill, ESSA’s approach to research returns much authority and responsibility to the states. For example, states and districts decide whether evidence on a given subject is “reasonably available” when determining whether evidence-based approaches should be required. Expect that standard to come into play in areas like teacher professional development, where research on effectiveness has been mixed.

“With the power in the hands of the states the enforcement mechanisms for use of [scientifically based research] at the federal level are severely diminished,” Whitehurst said. “In that context, it doesn’t make a lot of sense for Congress to demand the use of SBR.”

However, across the board, research experts said many states don’t have the capacity to regularly evaluate research breadth and quality. That’s likely why the regional educational laboratory system has been noted in ESSA, even as other longstanding programs were consolidated and after several years of budget fights over the labs’ future. [Correction: An earlier version of this post mischaracterized how RELs are authorized. ESSA refers to them, but they would be officially reauthorized in the law reauthorizing the Education Sciences Reform Act.]

“States are going to need help,” McLaughlin said. “I think programs like [RELs] that people in the past have seen as more marginal will become critical, because who else is going to help these guys?”

In addition, ESSA would allow the Institute of Education Sciences to pool money set aside for evaluations in individual programs. Previously, the 0.5 percent set aside in many smaller programs didn’t add up to enough to fund a full experimental-design evaluation of each. Pooling that money will allow IES to plan more comprehensive studies—something the National Board for Education Sciences, the American Educational Research Association, and others have been requesting for years.

Innovation and Aspiration

President Obama’s Investing in Innovation grant program, which lent its tiered-evidence framework to the bill, has also survived in a more targeted form: Education Innovation and Research grants.

The competition will, like i3, include early-phase grants to develop, implement, and test new and promising programs. Second-level grants would support full, rigorous evaluations of programs that have already been successfully implemented.

The third level of grants would target programs that have shown “sizable, important impacts” in those evaluations. These grants would support both scaling up and, interestingly, replication studies to make sure the original results were correct and “identify the conditions in which the program is most effective.”

The innovation and research grants were a big win for Jon Baron, the vice president of evidence-based policy at the Laura and John Arnold Foundation. “I think it’s an important toehold and potentially at some point it is a step forward,” he said.


A version of this news article first appeared in the Inside School Research blog.