Researching 'What Works' in Education Isn't Working
For a couple more weeks, Rick will be out discussing his new edited volume, Bush-Obama School Reform: Lessons Learned. While he's away, several of the contributors are stopping by and offering their reflections on what we've learned from the Bush-Obama era. This week, you'll hear from Bob Pianta, the dean at the University of Virginia's Curry School of Education, and Tara Hofkens, a postdoctoral research associate at UVA. They will discuss the attempts by the Bush and Obama administrations to enhance educational research, what those efforts yielded, and what lessons we should learn.
"What works?" This tagline was a motto for the Bush and Obama vision for federal investment in education: Fund research that develops empirically proven solutions that improve student achievement and create infrastructure to make it available to educators. The What Works Clearinghouse (WWC) would become the go-to place for practitioners in need of tools.
In retrospect, "what works" paints an incomplete, misinformed, and perhaps even misleading picture of what we can glean from education research during the Bush and Obama years. Although much was learned, and several high-leverage programs were developed (particularly in reading), the research funded during this time provided relatively limited insights about "what works," particularly at scale, where most large evaluations yielded small, mixed, or null effects. And we learned little about why interventions did or did not work within and across educational settings.
Other versions of "what works" attempt to address differential effectiveness of interventions, like "what works for whom" or "what works under what conditions." The assumption is that there are "different types" of students (students who are not average, but are similar to one another). From this perspective, the challenge of finding what works at scale—and the challenge of making sense of null effects of interventions—is figuring out who the "different types" of students are and distributing an appropriate response to "those types" of students (across and within school systems).
These taglines have a lot of value: They highlight important variation in what predicts achievement and reflect educators' commitment to supporting all students. But while they take us beyond "what works," significantly moving student achievement requires not only highlighting variation but also explaining it. One of the lessons learned from the Bush-Obama era is that differential effectiveness may be related to the processes by which teaching and learning unfold in classrooms. Post-Bush and Obama, we might want to consider a tagline that sharpens our focus on process in context.
"How does it happen?" This tagline asks the question: How does teaching and learning shape achievement in educational settings? Admittedly, it lacks the punch of "what works." There's an understandable urgency to improve achievement, and it is a deeply motivating and unifying goal for researchers and policymakers. The goal is still to find out what works—with a deeper scientific understanding of the "how" of the process and the setting, or the process in settings.
From the research done in the Bush-Obama era, we have a good sense of what processes we need to better understand (e.g., self-regulation and the dynamic relations between teacher and student behavior—see our first post from earlier this week).
We know much less about the role of local factors that shape effective instruction and learning. Unlike physiological processes that unfold similarly across contexts (from the lab to the real world), teaching and learning are embedded in local factors in an ontological way. Local culture; the climate and safety of neighborhoods and schools; parental involvement; education policies and resources (the number and types of schools in the system, feeder patterns, funding, etc.); school curriculum; instructional reforms; and teacher backgrounds are just some of the factors that may work together to determine how effective instruction and learning unfold in those settings.
We could study these factors as the context for effective teaching and learning. Implementation science, for example, has shone a bright light on the local factors that influence the implementation and effectiveness of interventions. A more radical idea, though, is to treat local factors as the target, rather than the context, of education research and interventions. In other words, we could study the local conditions under which teaching and learning improve achievement, and intervene to support those conditions.
Learning isn't something that educators do to children, and it's not something they can make happen or make children do. What educators can do is set up the conditions under which learning is likely to happen and use measures that accurately and robustly capture what was learned (see our last blog for more thoughts on measurement). Sleep is an example. Parents can't actually make their children fall asleep. It's not a behavior that they can demand or directly control. Instead, parents focus on setting up the conditions under which the processes of falling and staying asleep are likely to happen.
In education, what are the conditions under which the processes of effective teaching and learning are likely to happen? To what extent are these conditions shaped or determined by characteristics that are specific to local settings, and to what extent are those characteristics common across settings? By starting local and building up, we may build a foundation for knowing "what works" and, critically, how and where it works.
—Bob Pianta and Tara Hofkens