Broader Evidence for Bigger Impact?
Harvard's Lisbeth Schorr is one of America's most thoughtful observers of social innovations. In a recent article she discusses her concerns about the growing focus in government on programs with evidence from randomized experiments. She's glad to see the rise of experimentation to evaluate well-defined interventions with clear theories of action, but worries that a focus on experimentally proven programs will overly limit reformers to approaches that lend themselves to experiments.
Most of Schorr's concerns are valid; there are indeed some kinds of programs that appear to be effective but are just too complex or localized to be readily evaluated in randomized experiments. She gives the Harlem Children's Zone as an example. Yet I would also argue that she is worrying far too soon.
The evidence-based revolution that I write about in this blog is far from dominant in education or any other area of children's services. In fact, it's hardly gotten started. In education, the only serious investment in evidence-based reform is Investing in Innovation (i3), which is building up the capacity and evidence base of proven and promising practices but does not require or even provide incentives for schools to use proven programs. Quite to the contrary, the really big federal programs, such as Title I, School Improvement Grants (SIG), and Race to the Top, contain no encouragement, much less any mandate, for schools to use proven programs. With the possible exception of programs resembling Nurse-Family Partnerships, programs for children outside of PK-16 education are also just dipping a toe in the evidence pool, at best.
While Schorr is correct in saying that not everything lends itself to experiments, far more does. Examples include instructional reforms in every subject and at all grade levels, dropout prevention, programs to prevent the need for special education, programs for English language learners, whole-school reform, and more. No one is arguing that schools should be required to use proven programs; advocates of evidence-based reform are arguing that there should be incentives for schools to use programs proven to be effective.
Just as Schorr says, we need to find an appropriate place for programs that cannot readily be evaluated in randomized experiments. But excessive worry about over-reliance on experimental evaluations has held back progress for decades. I hope we can embrace the exceptions without undoing the promising but still fragile progress that has been made in evidence-based reform in recent years.