Will ESSA's Evidence Requirements Spur Actual 'Best Practices'? (No.)
Note: This week, Eric Kalenze, author and Director of Education Solutions at the Search Institute, will be guest-blogging. See his earlier post here.
My introductory guest post began with a paragraph from Rick's first-ever RHSU column, one that rightly called out the tired shields often raised in the education conversation (i.e., "the mantra of 'best practices'" and "vapid assertions about our love for kids") to slow honest debate, truly critical analysis, and the making of tough choices.
Appropriate as this assessment is for U.S. education's 'big "R" (structural) Reformers', it should also ring loudly for those of us more focused on reforming instruction and practices.
[NOTE: This is not meant to exclude any 'big "R" Reformers' who might be reading. Really, and for the good of the education enterprise, it's way past time we came together on these kinds of things. If that's you, I do hope you consider reading on.]
Because yeah, we throw the term 'best practices' around a lot, saying we're abiding by them and their foundational ideals/philosophies and all. But seriously: if the practices and ideals that feel right, and that we continually work to execute, are indeed 'best', how can we continue to perform weakly in international comparisons, leave scores of the same populations behind, and fail to see progress climb more steeply over time on internal measurements? And with all the coaching, PD, and oversight across multiple decades? Really, how?
Simply, it's far past time for us to take more critical and honest looks at the practices—and their underlying foundational ideals/philosophies—we've long accepted as 'best'.
A large chunk of my book Education is Upside-Down contemplates this idea, and that chunk is far from being the first argument of its kind. On the contrary: in the last four decades or so, veritable piles of research, evidence, and analysis have accumulated to objectively oppose many truths of practice educators have long held as self-evident.
Still, unfortunately for both system students and professionals, we've historically shown that we don't really know how (or care?) to put all that research, evidence, and analysis to work. As the late Jeanne Chall, an educator/researcher considered among the 20th century's foremost experts in reading instruction, observed in 2000's posthumously released The Academic Achievement Challenge: What Really Works in the Classroom?, educators tend to opt for practices that go "in a direction opposite from the existing research evidence." Though Chall offered many reasons why this might be (including inabilities to properly process and translate research findings into practices, ideological commitments to certain approaches that block consideration of others, etc.), the fact for her remained that insights from research—even those with strong evidence bases—are often ignored or bypassed for...well, the other things.
And for what it's worth, my field experience and study would certainly confirm Chall's assertion. Indeed, it would be hard for me to count how many times I've seen teacher teams, school leaders, and district decision-makers opt for improvement strategies they have no verified reason to put their faith in. (You have too, reader, if you know of any schools/districts that have attempted to improve results by making large investments in educational technology—and that's but one example. Just sayin'.)
Taking these concerns about 'best practices' as prelude, I have to say I was pleased overall to see provisions and guidance in the new Every Student Succeeds Act (ESSA) encouraging selection of evidence-based improvement strategies/interventions. As it could well be the best policy-level steering we've ever seen toward choosing research-verified practices, I consider it a laudable step forward.
I'd stop well short of expecting it to actually move practice, however, and for several reasons. Of those, here are three big ones:
1. The Limits of Cataloging Interventions
Though the Institute of Education Sciences (IES)'s What Works Clearinghouse (WWC) has done great work rebuilding its tools to aid decision-makers in selecting evidence-supported interventions, a recent analysis appearing in AERA's Educational Researcher found that "Most interventions were found to have little or no support from technically adequate research studies, and intervention effect sizes were of questionable magnitude to meet education policy goals." It'll be hard to pick effective interventions, in other words, when so few adequately studied and verified-effective interventions exist to be picked.
2. Ed Decision-Makers Still the Key—and They Need Better Research Literacy First
Classification of evidence-strength and relative improvement indices aside, ESSA's provisions and guidance toward evidence-based interventions overestimate many decision-makers' (a) command of educational research and (b) strategic knowledge of instructional strategies for their contexts. The new 'intervention shopping' capability provided by WWC will only be as good, after all, as decision-makers' assessments of their contexts' needs and their abilities to select accordingly appropriate interventions. (Past mismatches could very well be contributing to the low effect sizes noted by AERA researchers in #1, above: if chosen interventions match poorly to a school's actual needs, little positive effect should figure to result.)
3. 'Flexibility With Responsibility' Won't Move Needles on Leaders' Research Literacy
With no prescriptive, recommending, or regulatory elements in ESSA's evidence-based interventions provisions, it's hard to see how the guidelines will have much of a shaping effect on enterprise-common notions of 'best practices'. At bottom, only one additional wordsmithing hoop exists before SEAs and LEAs are free to enact whichever improvement interventions and strategies they preferred to begin with. Though some additional reflection on current practices and exploration of alternative approaches could be spurred by such hoop-jumping, it's hard to imagine many districts (especially larger ones) going sufficiently deep on such work without greater direction, consultation, or prescription. In the end, basically, it becomes a teachable moment, deferred.
Without ways to build leaders' research literacy and facility, both in properly assessing the needs of their contexts and in choosing appropriate instructional strategies, it's hard for me to envision ESSA's evidence requirements effecting much movement on widely accepted (and too often wrong) notions of 'best practices'. While tools and guidance bring at least some consideration of proof into leaders' action-planning, I'm more skeptical about the potential of these provisions than I am hopeful.