
Analysis of NCLB Could Be Crystal Ball for Common Core

By Holly Kurtz — May 08, 2014

The year is 2001, the place Houston, Texas. No Child Left Behind is still a doorstop of a legal document, its effects yet to be felt in classrooms or schools. And common-core reforms? They are barely a gleam in a governor’s eye.

It is the stuff of ancient history, at least in the world of education reform where 13 years represents a child’s entire school career.

Yet a new study of this era could help educators and policymakers understand what they can expect from the future, as common-core-aligned assessments become the new measuring stick for the accountability provisions of No Child Left Behind. An article based on the study appears in the current issue of the peer-refereed journal Sociology of Education.

“In the two states that have implemented these [common core] assessments, Kentucky and New York, approximately 70 percent of students scored below proficient,” write Jennifer Jennings (a New York University assistant professor who once blogged under the pseudonym Eduwonkette) and Heeju Sohn, a doctoral student in sociology at the University of Pennsylvania. “Our findings predict that high-proficiency standards have two consequences: They produce increases in average achievement and increase inequality in high-stakes achievement between higher- and lower-performing students.”

Jennings and Sohn base their conclusions on a comparison of high-stakes and low-stakes test results for approximately 17,000 pupils who were middle school students in the Houston Independent School District between 2001 and 2004. Half were 6th graders in 2001, the year before NCLB took effect; the other half were 6th graders in 2002, the year the act was signed into law.

Texas set the NCLB proficiency bar higher for math than for reading, and Jennings and Sohn found very different results for the two subjects. They also found big differences between the results of the Texas Assessment of Knowledge and Skills, or TAKS, and the Stanford Achievement Test. Although students took both exams prior to 2002, NCLB raised the stakes of the TAKS when state officials decided to use the test to determine whether schools, districts, and states met accountability goals (such as making “Adequate Yearly Progress”) outlined by the federal law. By contrast, the Stanford exam was not counted for the purposes of NCLB.

On the high-stakes TAKS, it was tougher to pass the math test than the reading exam. So, in the wake of NCLB, educators devoted more time to math than to reading, Jennings and Sohn suggest. They also engaged in what the study authors call “educational triage”: trying to get the most bang for their buck by focusing attention on the students who hovered just below the cutoff for proficiency. Because the math cut point was relatively high, those “bubble” students were relatively high achievers. As a result, higher-achieving students made gains while lower-achieving students lost ground, widening the math achievement gap between the two groups.

But only on TAKS.

On the low-stakes Stanford math exam, the achievement gap remained the same as it was before NCLB, but everybody’s achievement rose. Jennings and Sohn suggested that this was because higher- and lower-achieving students alike benefited from the increased attention to math.

In reading, the results were reversed. Because it was easier to pass the reading test than the math test, the so-called “bubble” kids just shy of the proficiency line were lower-achieving students.

As with math, Jennings and Sohn suggested that these lower-achieving bubble kids got more attention than their peers. As a result, lower-achieving students’ reading scores increased while higher-achieving students lost ground, which in turn narrowed the reading gap between higher- and lower-achieving students.

But, again, only on TAKS.

On the Stanford reading exam, everybody lost ground after NCLB.

“We believe part of the negative effect in reading, which is surprisingly large by education policy standards, is attributable to greater emphasis on math over reading more generally,” Jennings and Sohn write. “The binding constraint faced by schools in their efforts to make NCLB AYP [Adequate Yearly Progress] targets was clearly the math test, which could lead to a greater emphasis on math at the expense of reading.”

So what explains the gap between high-stakes and low-stakes results?

“Paying more attention to content that predictably appears on high-stakes tests, or coaching students to respond to predictable question structures, may bolster scores without improving other measures of achievement,” write Jennings and Sohn. “We call this practice instructional triage. As a result of instructional triage, studies using high-stakes measures may tell a different story about the effects of accountability on inequality than does research using other measures of achievement.”

The study’s findings suggest that both instructional and educational triage were more pronounced at lower-performing schools, which are more likely to face the negative consequences of NCLB.
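
To make the “triage” idea concrete, here is a purely illustrative sketch in Python. Nothing in it comes from Jennings and Sohn’s data or methods; the cut score, the width of the “bubble” band, and the student scores are all invented for the example. It simply shows the kind of calculation a school might do to flag students sitting just below a proficiency cutoff.

    # Illustrative only: a toy version of "educational triage."
    # The cut score, band width, and student scores are hypothetical,
    # not values from the study.

    CUT_SCORE = 2100     # hypothetical proficiency cut score
    BUBBLE_BAND = 100    # how far below the cut still counts as "close"

    def flag_bubble_students(scores):
        """Return students scoring just below the proficiency cut score.

        These are the students an accountability-minded school might target,
        because small gains would move them across the proficiency line.
        """
        return {
            student: score
            for student, score in scores.items()
            if CUT_SCORE - BUBBLE_BAND <= score < CUT_SCORE
        }

    # Hypothetical benchmark results for a handful of students.
    benchmark_scores = {
        "student_a": 1850,  # far below the cut: likely to get less attention
        "student_b": 2040,  # just below the cut: a "bubble" student
        "student_c": 2090,  # just below the cut: a "bubble" student
        "student_d": 2250,  # already proficient: likely to get less attention
    }

    print(flag_bubble_students(benchmark_scores))
    # {'student_b': 2040, 'student_c': 2090}

When the cut score is set relatively high, as it was for math on the TAKS, the students inside that band are relatively high achievers; when it is set lower, as in reading, they are relatively low achievers. That asymmetry is what the study says caused the gap to widen in math and narrow in reading on the high-stakes test.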

As Jennings and Sohn note, it remains to be seen whether findings gleaned from their study of NCLB are relevant to the rollout of the common-core assessments and reforms. One important change since 2001 is that many states now have NCLB waivers permitting them to use “growth models,” which assess accountability by comparing changes in achievement among groups of students who attained similar results the previous school year.
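
As a rough illustration of that growth-model logic, the hypothetical sketch below groups students by their prior-year score band and compares each student’s current score with the average for peers who started in the same band. The banding rule and the numbers are invented for the example; actual state growth models are considerably more elaborate.

    # Illustrative only: a toy growth-model calculation.
    # The score bands and data are hypothetical, not any state's actual formula.

    from collections import defaultdict
    from statistics import mean

    def growth_vs_similar_peers(records, band_width=100):
        """Compare each student's current score with the average current score
        of students who attained similar results the previous school year."""
        bands = defaultdict(list)
        for rec in records:
            bands[rec["last_year"] // band_width].append(rec["this_year"])

        return {
            rec["student"]: rec["this_year"] - mean(bands[rec["last_year"] // band_width])
            for rec in records
        }

    # Hypothetical scores: two students who started near 1750, two near 2250.
    records = [
        {"student": "a", "last_year": 1720, "this_year": 1900},
        {"student": "b", "last_year": 1780, "this_year": 1820},
        {"student": "c", "last_year": 2210, "this_year": 2310},
        {"student": "d", "last_year": 2260, "this_year": 2230},
    ]

    print(growth_vs_similar_peers(records))
    # Positive values mean a student grew more than peers with similar prior scores.

The contrast with NCLB-style accountability is that what counts here is growth relative to similarly situated students, not whether a student clears a single proficiency bar.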

Even with these models, Jennings noted that “there may still be students who are believed to offer a higher ‘return on investment’ than others.”

“Whether teachers believe that these are the higher or lower performing students is an empirical question,” she said.

Additionally, old-fashioned, NCLB-style proficiency rates are still, at the very least, one part of the accountability systems in most states.

In these states, here’s what Jennings predicts:

“At least in the short-term, higher proficiency rates are likely to increase inequality in achievement between lower and higher performing students, and lower proficiency rates are likely to decrease inequality between lower and higher achieving students.”

A version of this news article first appeared in the Inside School Research blog.