
Revisiting Reading First


MDRC has just put out a policy brief summarizing the impact of Reading First, the flagship reading program under the No Child Left Behind Act for which funding was eliminated in fiscal 2009.

Since I'm new to the curriculum beat, I walked past two rows of office cubicles to the desk of my colleague Kathleen Kennedy Manzo, who covered reading for Education Week for 12 years. I asked if the policy brief had any new information. The short answer from her is "no."

It summarizes the federal impact study for Reading First (see Kathleen's article about that research here), which found that students in Reading First schools did not, on average, have higher reading comprehension scores than students in schools that didn't participate in the program.

But the brief makes a couple of points about the impact of Reading First that I hadn't picked up on earlier. It says that the federal program led to schools' having more reading coaches and more professional development and more support for struggling readers in participating schools than had previously been the case. The brief also says that the program "did appear" to increase reading comprehension for students in schools that bumped up reading instruction time beyond the average for participating schools. On average, the impact study showed, schools increased reading instruction by 7 to 10 minutes per day on top of the 50 minutes already committed to teaching reading in "scientifically based" ways.


"It says that the federal program led to schools' having more reading coaches and more professional development and more support for struggling readers in participating schools than had previously been the case."

Well, yeah. But the thing is, these initiatives had no effect.

"On average, Reading First did not
meaningfully improve students’ reading
comprehension test scores."

The brief ends with the whimper, "More research is needed."

It's possible to do better than that. Try "The Non-Impact of Reading First - Where to Go From Here" at

The Policy Brief deserves some comment.

The "early-award" schools, we are told, did better, suggesting that better application of the principles of the National Reading Panel yielded better results. But the difference between early- and late-award schools was only significant for grade 2.

The Policy Brief suggests that the reason for the disappointing results (no advantage for Reading First in reading comprehension) is that aspects of the principles of the National Reading Panel were already in place in the comparison schools. But the Reading First schools had more time devoted to teaching based on these principles.

The Policy Brief suggests that this extra time devoted to reading might not have been enough to make a difference. The Reading First children had an extra 35 minutes a week in grade 1 and an extra 50 minutes a week in grade 3. That's a lot: about an extra 50 days over three years, assuming 90 minutes a day of reading instruction.

The early-award sites had an extra 15 minutes a day, one-sixth more time than the comparisons. That's an extra semester over three years.
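To spell out the arithmetic behind those estimates (the 36-week, 180-day school year and the grade 2 figure, taken as the midpoint of grades 1 and 3, are assumptions on my part; the weekly minutes are just the impact study's 7 and 10 extra minutes per day times a 5-day week):

Reading First vs. comparisons: (35 + 42.5 + 50) min/week x 36 weeks = about 4,590 extra minutes, and 4,590 / 90 = about 51 days' worth of 90-minute reading instruction.

Early-award sites: 15 min/day x 180 days x 3 years = 8,100 extra minutes, and 8,100 / 90 = 90 days, half of a 180-day school year, or roughly a semester.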

Never considered in the Policy Brief is the possibility that the National Reading Panel's conclusions were wrong. The Policy Brief accepts the correctness of the Panel's report uncritically, with no mention of the many critiques in the professional literature: no mention, for example, of the books and articles by Allington, Coles, Garan, and this writer.

Never mentioned is the possibility that the results mean that Smith and Goodman have been right all along: We learn to read by reading, a hypothesis that the Policy Brief, the National Reading Panel, Reading First, and even the International Reading Association appear to be allergic to (Krashen, 2009).

Herlihy, C., Kemple, J., Bloom, H., Zhu, P., and Berlin, G. 2009. Understanding Reading First. Policy brief. New York: MDRC.

Krashen, S. 2009. Policy paper slights book. Reading Today (February/March).

What the Report, the Policy Brief, and Krashen all overlook is that over the course of the study the mean scores of BOTH the Reading First and the comparison ("non") groups remained stable at around the 40th percentile, with wide variability within each group. Irrespective of what was done in the classroom, a sizable proportion of kids were not taught to read, as measured by the test.

Also swept under the rug is another well-conducted study under IES auspices that compared the best four programs for teaching "reading comprehension" to 5th graders. The only statistically significant difference from the "non-treated" kids was that one of the "interventions" did worse than no instruction.

"Effectiveness of Selected Supplemental Reading Comprehension Interventions: Impacts on a First Cohort of Fifth-Grade Students"


Add a third IES study:

"Closing the Reading Gap, Findings from a Randomized Trial of Four Reading Interventions for Striving Readers"


The bottom line is that kids in grades 3 and 5 who started out below the mean on the battery of tests at the beginning of the year ended up below the mean at the end of the year. Kids without treatment "gained" less (on some tests they scored worse at the end of the year than at the beginning).

When our "best instruction" is ineffectual with a sizable proportion of kids, "Change we can believe in" will require transparent accountability and responsibility on the part of the unaccountables at the top of the EdChain.
