
Dismal Federal Study of School Improvement Grant Program Missed the Boat, Report Says

By Alyson Klein — September 13, 2018

U.S. Secretary of Education Betsy DeVos often points to a gloomy federal analysis of the Obama administration's multibillion-dollar School Improvement Grant program to make the case that big federal spending and direction don't make a difference.

The analysis, which was commissioned by the Institute of Education Sciences, the Education Department's research arm, found that the SIG program, which poured more than $7 billion into low-performing schools, had no significant impact on math and reading scores or on high school graduation rates.

But did that analysis give an accurate picture of the program? Two former Education Department officials—Alan Ginsburg and Marshall S. Smith—argue in a new report released by FutureEd, a non-partisan think tank at Georgetown University, that it missed the boat.

The IES analysis ignored more-localized studies that, Ginsburg and Smith say, painted a brighter picture of SIG's success. It had some clear design flaws, they argue. And it missed the chance to communicate best practices from states and districts where SIG worked, practices that could be useful as educators and policymakers begin rolling out new school improvement plans under ESSA.

The IES study’s authors dispute those findings. (More below).

Ginsburg was a career staffer who served both Republican and Democratic administrations. He retired as the department's director of policy and program studies. Smith was a Democratic political appointee, serving in different roles during the Obama, Clinton, and Carter administrations. He was undersecretary under Clinton and is now a senior fellow at FutureEd.

The IES study was conducted by two nonpartisan research organizations, Mathematica and the American Institutes for Research. It was released at the end of the Obama administration, just before the Trump team took office.

So what were the flaws in the analysis, according to Smith and Ginsburg? For one thing, it had a relatively small sample size and required students to make "unrealistically large gains" in order to show "statistically significant improvement." Mathematica and AIR set a higher benchmark for that level of improvement than IES has used in other studies of schools, including a look at the KIPP charter network, the report says.
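To see why a small sample forces that kind of bar, here's a minimal sketch of a standard power calculation. It is not drawn from the IES study; the group sizes, significance level, and power target below are illustrative assumptions. It shows how the smallest effect a two-group comparison can reliably detect grows as the number of schools shrinks:

```python
# Illustrative only: how sample size drives the smallest effect a study
# can reliably detect, using the standard two-group power formula.
# The numbers here are hypothetical, not the SIG study's actual parameters.
from math import sqrt
from scipy.stats import norm

def minimum_detectable_effect(n_per_group, alpha=0.05, power=0.80):
    """Smallest true difference (in standard-deviation units) that a
    two-group comparison detects at the given significance and power."""
    z_alpha = norm.ppf(1 - alpha / 2)   # two-sided significance threshold
    z_power = norm.ppf(power)           # threshold for the desired power
    return (z_alpha + z_power) * sqrt(2.0 / n_per_group)

for n in (25, 100, 400):
    mde = minimum_detectable_effect(n)
    print(f"n = {n:>3} schools per group -> detectable effect = {mde:.2f} SD")
```

Under those illustrative assumptions, a study with only about 25 schools per group can reliably detect only gains approaching 0.8 standard deviations, a very large effect in education research, while a sample of 400 per group can detect gains a quarter that size. That is the heart of Ginsburg and Smith's sample-size critique.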

What's more, schools in the federal study weren't representative of SIG schools nationally, Ginsburg and Smith note. Mathematica and AIR's sample included a higher percentage of urban schools and of students from disadvantaged backgrounds than the SIG program did overall. That skewed the results, Ginsburg and Smith say.

And the findings contradict other, more-localized looks at SIG. Ginsburg and Smith examined studies of SIG schools' performance in states such as California and Ohio. Based on a dozen of those studies, the report concluded that "many SIG programs did indeed produce significant improvements in student achievement." Four of those studies used research methods deemed most rigorous by IES, and three of them showed gains.

“The fact they didn’t recognize the differences in context is pretty serious,” Ginsburg said in an interview. “It creates a lot of noise and can bury the positive findings.”

Mathematica, though, says the study presented an accurate picture of the program.

“Our study had sufficient statistical power to detect meaningful effects. It’s unlikely that there were substantively important impacts that were undetected by the study,” said Joanne Pfleiderer, a spokeswoman for Mathematica.

Dana Tofig, a spokesman for AIR, concurred and said his organization stands by the study. He added that AIR had conducted one of the smaller-scale studies referenced in the report and had written a blog post putting it in the context of the federal findings.

Here's a snippet from the FutureEd report showing net scores at schools included in the more-localized studies, compared with the IES findings.

The federal researchers also missed an opportunity to take a deeper look at why SIG failed in some places and succeeded in others, Ginsburg and Smith say. That information, they argue, could help states and districts figure out how to implement evidence-based interventions under ESSA.

For example, SIG schools in Houston, San Francisco, and Massachusetts put in place "comprehensive, research-based approaches" and had positive results, Ginsburg said.

And Ginsburg wishes that the department had released state-by-state test score data showing how SIG schools performed. He would love to see that kind of data made available for schools flagged for extra help under ESSA, too.