Focus on Fade-Out (Part 2): What We Learn From Lost Effects in Education

By Sarah D. Sparks — January 20, 2016

This is the second in a series looking at new research on why the effects of successful education interventions often diminish over time. You can read the first installment here.

If there were a CSI: Education, researchers like Drew Bailey and John Protzko would be the ones dusting the body for fingerprints and lining up ballistics trajectories.

In two separate studies, the researchers and their colleagues are piecing together narratives of what happened after more than 100 studies’ worth of promising education programs showed benefits that later faltered and faded over the years.

“One problem is people don’t look at the trajectories of [students in] treatment and control groups separately; they just look at the difference between them,” said Bailey, an assistant education professor at the University of California, Irvine. “If the treatment is fading out because the treatment group is getting worse, that means something quite different than if the control group is catching up. The process through which persistent effects happen is important to understand.”
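Bailey’s distinction is easy to miss when only the treatment-control difference is reported. As a minimal sketch, using made-up effect sizes in standard-deviation units chosen purely for illustration, the two scenarios below produce an identical shrinking gap even though the underlying trajectories tell opposite stories:

```python
# Toy illustration of Bailey's point: the same fading treatment-control
# gap can hide two very different processes. All numbers are hypothetical.
years = [0, 1, 2, 3]

# Scenario A: the control group catches up to a stable treatment group.
treatment_a = [0.30, 0.30, 0.30, 0.30]  # scores in SD units above baseline
control_a   = [0.00, 0.15, 0.23, 0.27]

# Scenario B: the treatment group loses its gains; the control holds steady.
treatment_b = [0.30, 0.15, 0.07, 0.03]
control_b   = [0.00, 0.00, 0.00, 0.00]

for t in years:
    gap_a = treatment_a[t] - control_a[t]
    gap_b = treatment_b[t] - control_b[t]
    # Reported alone, the gaps are indistinguishable year by year.
    print(f"year {t}: gap A = {gap_a:.2f} SD, gap B = {gap_b:.2f} SD")
```

Only by tracking the treatment and control trajectories separately, as Bailey urges, can the two mechanisms be told apart.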

In a 2015 meta-analysis in the journal Intelligence, Protzko, a postdoctoral scholar in cognitive science at the University of California, Santa Barbara, analyzed nearly 7,600 participants in 39 randomized controlled trials of programs designed to improve students’ intelligence. The programs included educational interventions, like preschool, summer school, and reading supports, but also nonacademic programs like nutritional supplements for pregnant women and direct training in cognitive attention. In particular, Protzko looked at how students fared after participating in programs shown to produce statistically significant gains in participants’ intelligence immediately after the programs ended.

Over time, the benefits of every program faded, regardless of how long or intense it was initially. Moreover, interventions that started earlier in a child’s life “were no more effective and lasted no longer than interventions that started later,” Protzko found.

After modeling the patterns of how results faded, Protzko suggested most gains in IQ fade within a year or two, not because other students caught up, but because the students who saw results from the intervention lost their IQ gains over time.

“It’s easy to think of this in terms of loss—you raised intelligence and now these kids have lost it,” Protzko said. “But adaptation is a better way to think about it. When you remove the more challenging environment [of the intervention], the students adapt to the level of cognitive challenge they have. If you have an education intervention, it’s not enough to introduce an intervention and when it’s over, return students to the level of cognitive challenge they had to start with. You have to keep it going.”

Nearly all of the students in the experiments Protzko studied were from low-income backgrounds, and he noted that it’s impossible to tell from his analysis whether the students did not look for, or simply could not find, ways to continue to challenge themselves after the intervention was over. Either way, their gains in intelligence seemed to shrink over time, like the muscles of a runner who trained for a marathon and then did no more than amble.

“The permanency of the gains doesn’t play into whether the program is effective or not. It’s about the decisions the kids make, or are able to make, after the program is over. It tells us about what the children are doing with the gains they made,” Protzko said. “It should alter the terms of the [program effectiveness] debate on both sides.”

Keeping Up Initial Momentum

Bailey and colleagues at the University of California, Irvine, and Duke University suggested interventions must be seen as pieces in the ongoing process of education rather than as attempts to create silver bullets.

“What everyone would like is to have a one-time intervention that permanently, positively changes children’s learning trajectories, such that children who receive the intervention don’t only know more the following year, but they learn more,” in the years to come, Bailey said.

By contrast, Bailey and his colleagues found students who participated in a highly effective early-math intervention continued for some time to know more than children who had not participated—but the control group of students learned faster in the years following the intervention than the students who had participated earlier.

In a working paper, Bailey and fellow researchers analyzed 67 early-childhood intervention programs, conducted between 1960 and 2007, that showed significant effects for students. (Several of these overlapped with the programs Protzko studied.) As in Protzko’s analysis, those gains dropped by half in the first 12 months following the interventions, then declined by half again over the next year or two, becoming statistically insignificant. Unlike in Protzko’s study, however, the researchers from UC-Irvine and Duke found that the narrowing gap between students who received an intervention and those who did not resulted from the control-group students learning faster in the years that followed, not from the intervention students slowing their pace of learning.
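The decay schedule the working paper describes is easy to put in concrete terms. In this rough sketch, the starting effect size of 0.30 standard deviations is hypothetical; only the halving pattern comes from the findings above:

```python
# Hypothetical starting effect size; the halving schedule follows the
# fade-out pattern described in the working paper.
effect = 0.30                # end-of-program advantage, in SD units
print(f"end of program:    {effect:.3f} SD")

effect /= 2                  # drops by half within the first 12 months
print(f"after one year:    {effect:.3f} SD")

effect /= 2                  # halves again over the next year or two
print(f"after 2-3 years:   {effect:.3f} SD (typically no longer significant)")
```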

“Regardless of the reason, children are going to need persistently better instruction to continue to have an advantage over the control group children,” Bailey said. “It’s kind of a boring solution, but if the goal is persistent effects on children’s cognitive skills, kids will need different types of intervention throughout their development.”

Finding What Works in Education

Based on the patterns of which interventions tended to last longest, Bailey and his colleagues are developing a framework for identifying when an education intervention is likely to produce longer-lasting results. We’ll dig into that framework tomorrow.

A version of this news article first appeared in the Inside School Research blog.