Education Department Clarifies ESSA Rules on School Improvement Evidence
Using research to improve schools means more than just finding a successful evaluation of a program. Tailoring interventions to students, implementing them well, and evaluating them carefully all make the difference between a program that has worked somewhere and a program that works in your school.
The U.S. Education Department hopes to get states, districts, and researchers thinking more deeply about evidence use, through new rules that apply standards of research evidence to school improvement and other projects under the Every Student Succeeds Act. In a Federal Register notice published Monday morning, the department lays out new requirements for direct grants under the Education Department General Administrative Regulations, or EDGAR, to bring them in line with the tiered evidence standards outlined in ESSA.
For the most part, the rules tweak or clarify existing regulations to incorporate ESSA's four increasingly rigorous levels of evidence. As a refresher on those evidence tiers: To use an intervention or approach for school improvement under ESSA, it must be backed by evidence that is strong (experimental trials), moderate (quasi-experimental studies), or promising (studies that don't meet the higher standards of rigor but still statistically control for differences between the students using an intervention and those in a comparison group). For topics other than school turnaround, where rigorous research simply doesn't exist, states and districts can test an intervention while conducting their own study of it. Under current rules, ED can already set an absolute grant priority or a competitive preference for projects that meet those evidence tiers.
In ESSA, "there's pretty broad discretion, with a few exceptions, mostly around evidence-based practice," said Carrie Phillips, a school improvement and evidence-use expert for the Council of Chief State School Officers, at a discussion of ESSA's evidence requirements last week. "That is an area where states and districts know there is a lot of opportunity, but where they will also be judged. The onus is on the locals to make change, rather than a federal program."
The new rules would take a similar approach to Obama-era guidance issued last fall and later clawed back under the Trump administration. Not only must the studies meet the tiered standards of rigor, they must also be "relevant": "There must be a link between the proposed activities, strategies, and interventions and specific statistically significant effects," the proposed rules say. So, for example, if you want to use a reading intervention with English-language learners, your supporting study should show benefits for English-learners, or at least for a group that included them.
Moreover, a proposed plan would have to show that a district's implementation would be a "faithful adaptation of the evidence" it cites to support using that intervention. If you cite a study of a math program that requires one-on-one tutoring, and your district plans to use the program only in groups of 10, that study might not be kosher.
And districts or states that plan to build and test their own interventions would be required to submit their evaluator's qualifications and the resources they would dedicate to the evaluation. Prior research has shown that evaluations tend to be more effective when they are planned from the start and the researchers are brought in early.
ESSA specifically called for strong evidence to meet the standards of the federal What Works Clearinghouse, and the proposed rules flesh out those standards, which are detailed in a handbook available to researchers and educators.
ED will accept comments on the proposed rules for the next 30 days at www.regulations.gov.