
What Do Rising Title I Achievement Scores Really Mean?

By Sarah D. Sparks — August 10, 2011

On Tuesday, I reported on a new analysis from the Center on Education Policy that found rising math and reading scores and proficiency levels for students participating in federal Title I poverty education programs in many states.

Commenter LaToshaDC was typical of those skeptical of the findings, writing:

Is this based on NAEP data or state tests[?] I mean most of those state tests are so easy my cat could pass. I don't think passing a state test is an indicator that Title I is working for low income students. It's probably more of an indicator that those states are gaming the system.

There have been numerous criticisms over the years that reported student progress owes as much to states and districts gaming the system as to legitimate student learning. In this case, the National Center for Education Statistics has just come out with a new analysis of the rigor of state proficiency levels that backs up her concern.

As my colleague Steve Sawchuk reports, while many states lowered their test cut-off scores for proficiency levels from 2005 to 2007, eight states increased the rigor of 21 different tests from 2007 to 2009. Even so, by 2009, 35 states still set their benchmark for “proficient” below what the Nation’s Report Card considers “basic” understanding of math and reading, according to the NCES report. The NCES study focuses on 2007 to 2009 but also covers trends since 2005; the CEP study, for its part, looks at three-year trends from 2002 to 2009.

Moreover, the NCES report specifically notes that “Changes in achievement between 2005 and 2009 in state tests are not corroborated by changes in achievement measured by NAEP.”

If we look specifically at the 19 states in CEP’s analysis, a dozen of them showed higher 4th or 8th grade reading achievement on their state tests from 2007 to 2009 than NAEP scores for the same period, suggesting possible score inflation. By contrast, five states—Colorado, Kentucky, New Hampshire, Rhode Island, and Utah—had NAEP reading scores that backed up their state test gains.

Granted, the CEP and NCES reports are not directly comparable. CEP’s findings are based on state accountability reporting under No Child Left Behind rather than on the NAEP, which disaggregates student performance by income based on a student’s free or reduced-price lunch eligibility—an indicator that may not be exactly the same as eligibility for the Title I program. Moreover, CEP found achievement gaps between low-income and wealthier students—the groups that would be compared in NAEP—were larger than the gaps between Title I and non-Title I students, suggesting that NAEP may be looking at a different set of students than those served by Title I.

Still, at a time when lawmakers are trying to determine what Title I, the largest federal education program in American history, should look like after the next reauthorization of the Elementary and Secondary Education Act, the contrast between these reports suggests researchers need better ways of differentiating the tools used to judge a program’s effectiveness from those intended to track student progress.

A version of this news article first appeared in the Inside School Research blog.