School & District Management

What i3’s Successes (and Failures) Tell Us About Implementing Federal Evidence Standards

By Sarah D. Sparks — January 19, 2017 6 min read

The federal Investing in Innovation program helped new interventions build evidence of effectiveness, but also highlighted how much local education groups need support from regional and national experts to build successful interventions.

That was the takeaway from an evaluation of the program, known as i3, released this afternoon by the Social Innovation Research Center, a nonprofit think tank supported by New Profit, a national venture philanthropy fund.

The findings raise concerns about states’ and districts’ ability to develop and study their own interventions under the Every Student Succeeds Act, which gives districts far more flexibility in choosing school improvement and other interventions but requires them to meet evidence standards modeled on those of i3.

“With all these things happening under ESSA with new evidence definitions, if you are going to just throw that out there and hope the locals will do it with no assistance, you are dreaming,” said Patrick Lester, director of the Social Innovation Research Center. “These [i3 grantees] are in the top 3 percent of [i3] applicants, they are supposed to be the cream of the crop, the elite of school districts ... and we see what the results look like. For the most part, school districts were out of their depth.”

The Obama administration launched the $1.4 billion i3 program in 2009 as part of the economic-stimulus education spending that also brought the much bigger Race to the Top and School Improvement Grants programs. But i3 was the only one of those massive federal competitive grants to be codified in the Every Student Succeeds Act, as the revamped Education Innovation and Research program. Both iterations of the grants are intended to support the development, testing, and scaling up of effective interventions for school improvement, early-childhood education, dropout prevention, and other areas of education.

Thomas Brock, the acting director of the Education Department’s research agency, the Institute of Education Sciences, said he agreed with the study’s findings. District and state research capacity “has been front on my mind since ESSA was passed, because it’s clear it is trying to push this [evidence-based] work,” he said. “My worry is ... there may just not be the readiness yet for states and localities to undertake this work, and even when there is readiness, it’s still just very hard work.”

Capacity Issues

About a third of the 44 interventions developed and evaluated under i3 grants to date showed significant benefits, and others showed some positive results, according to the study.

That’s more than double the average success rate for research and development in education: a 2013 study by the Coalition for Evidence-Based Policy found that only 12 percent of well-conducted experimental evaluations of education interventions showed positive effects. Final evaluations have been released only for projects from the program’s first three years, 2010-12, but if the current success rate holds, Lester estimates that 52 of the program’s 172 total projects will show evidence of success.

The i3 interventions helped fill gaps in the field on school turnaround strategies, science assessments, and the use of data to improve student achievement, the study found. However, while 12 projects focused on teacher and principal professional development, only three found benefits from their interventions, and one of those, on the Knowledge Is Power Program charter schools, was not directly focused on professional development.

“It seems to me these should be more focused on a problem of practice,” said John Q. Easton, the vice president for programs at the Spencer Foundation and former director of IES. “You would think these development grants would be at their best when there was a common agreement about this is what we are trying to improve. It does support the argument for research-practice partnerships pretty strongly, to help districts on the evidence side, the analysis side, maybe even the data collection side.”

Elements of Success

However, success skewed heavily toward experienced, well-funded organizations built around their own interventions, rather than toward grass-roots interventions launched by school districts, Lester found. Of the 16 districts that received i3 grants directly, only three showed positive effects for their interventions. By contrast, three out of four university-led grants showed evidence of success for their interventions.

Part of that gap stems from the fact that all but one of the district-led interventions received development grants, which required the lowest initial evidence base and were also the least likely to show benefits.

Caitlin Scott, a manager in research and evaluation at the research group Education Northwest, said she has seen a similar lack of training and resources among districts she has worked with as an evaluator, but argued that most districts don’t have full-time research staff and shouldn’t necessarily prioritize doing research on their own. She agreed with the evaluation’s recommendation that districts engaging in education research and development receive better access to national experts and research partners who can take on the technical side of the projects.

"[Districts] are in the business of educating students; this is not their daily mission,” Scott said. “If they get an i3 grant or are using Tier 4 [school improvement intervention research] under ESSA, they may need a research partner to do that, but at the end of the period ... they don’t necessarily need to continue to construct ongoing [experimental] or quasi-experimental studies.”

Moving Forward

Successful i3 projects tended to have a more modest focus and to be run by organizations with deep experience implementing the intervention and careful plans for using their staff and money and for expanding the program over time. The study found that organizations that started out with significant experience and resources had fewer problems at every stage, from design to implementation fidelity to sustainability.

“True innovation tended to come from people who knew the field very well and so they understood that they had something that was genuinely innovative,” Lester said. “I would say failure was more typically associated with lack of capacity; they didn’t even know whether they were being innovative and how to measure it well.”

For example, interventions evaluated through randomized controlled trials were as likely to show benefits as interventions studied with other evaluation methods, but groups with more resources and expertise were more likely to undertake such so-called “gold standard” experimental studies in the first place.

“Ironically, it’s the research departments that are the first to be gutted when there are budget cuts,” said Vivian Tseng, vice president for programs at the William T. Grant Foundation, which funds initiatives to increase the use of evidence in policymaking. “The very entity that could help districts learn how to improve and make better resource allocation decisions [is] too often the first on the chopping block.”


A version of this news article first appeared in the Inside School Research blog.