
RCTs: A Glass Half Full or Half Empty?


I have a story in today's online edition of Education Week that describes a spate of disappointing findings coming out of the large-scale, randomized studies that the Institute of Education Sciences (IES) has been underwriting in recent years.

Experts contend that randomized studies—in other words, experiments in which participants are randomly assigned to either treatment or control groups—are the "gold standard" for determining what works in education and in many other fields. So there was much hope that this new generation of studies would point to some strong programs that practitioners would feel confident about using in their own schools.

But of the eight such study reports that posted results this year, six offer findings that can be characterized as showing mostly no effects for the programs tested.

But "no effects" are in the eye of the beholder, I guess. As one reader pointed out to me, four of the eight studies contain at least one positive finding. For example, one study tested 10 different commercial software programs, finding one that consistently worked better than what teachers were already doing in their classrooms. Well, that's certainly positive for that particular program but not so much so for the other nine.

Still, this reader notes, a "hit rate" of 50 percent is high compared to some other fields that have embraced randomized controlled trials, or RCTs. In the pharmaceutical industry, it's estimated that only 10 percent to 12 percent of studies of new drugs produce positive effects in clinical trials.

What do you think? Is the high rate of "no effects" coming out of these federal education studies cause for concern or for celebration?


Researchers need to be trained in child development. Children are hardwired to progress through stages that dictate appropriate subjects and concepts to teach in the classroom. If teachers work with the developmental phases, the benefits will be seen years later. The comparison to drug testing is enlightening. Human beings are each unique and learn in unique ways. Each human being has his/her own gifts that can't be unearthed through standardized tests.

Karen Green
high school math teacher

I found your article on the lack of significant effects in most IES studies very informative, mainly because of the response to the news. It's not clear to me why the results should be disappointing, or why anyone would think that the lack of findings faults the design of the studies. To the contrary, it is extremely worthwhile finding out that many purported innovations do not meet the test of empirical evidence. The findings suggest one of several possibilities: (1) that most educational research is too poor to judge the effectiveness of new ideas/technology; (2) that schools should be far more skeptical about buying into so-called innovative ideas/materials without some hard evidence; or (3) that the solutions to many of our educational problems do not lie in the ideas/materials that have been investigated. People who think there are magic bullets or Holy Grails to be found will never be satisfied. But those who think that a variety of broad cultural factors now shape school achievement in America may well feel vindicated. I don't know why we as a nation would want to place greater faith in less scientifically rigorous methods; instead, we should wonder why less rigorous methods would not point in the same direction as more rigorous methods.

Sandra Stotsky
University of Arkansas
