

Rick Hess Straight Up

Education policy maven Rick Hess of the American Enterprise Institute think tank offers straight talk on matters of policy, politics, research, and reform.


Three Reflections on the NAEP 2017 Talkfest

By Rick Hess — April 17, 2018

Last week, the 2017 National Assessment of Educational Progress (NAEP) results were released. They mostly showed flat lines in reading and math, though scores for high-performing students were up a bit and those for low-performing students were down a bit. The results occasioned a lot of fanfare—some instructive, a lot bordering on the silly. Rather than add to the cacophony about “what the scores mean,” I’ll just share three quick thoughts prompted by the whole ritual.

First, I kept flashing on the old trope, “I’m not a doctor, but I play one on TV.” There was some serious commentary by skilled psychometricians, commentators who understand the methodology and design of NAEP, and analysts with real statistical training. (I feel comfortable saying that our Ed Next forum met that bar.) But there was also a lot of dubious opining by folks who may not have always known what they were talking about, but energetically played the part of NAEP savant. Such is life, but it’s useful to remember that “experts” can show up on op-ed pages, webinars, or the radio without necessarily knowing what they’re talking about.

Second, selective skepticism makes otherwise sensible concerns less convincing. The NAEP just switched to digital administration. Before the release, as word trickled out that the 2017 results would be grim, an array of state chiefs and advocates started to ask whether the switch could have lowered scores and helped produce the “meh” results. It’s a valid question. Yet, I couldn’t help but recall that, when this same question arose a few years back with regard to Smarter Balanced and PARCC—which were going to have to test some students in each state via paper-and-pencil and others via computer—some of the same folks who are troubled by this issue today seemed to wave it away as a distraction from their focus on implementing the Common Core and teacher evaluation. Concern for technical fidelity becomes less credible when it seems driven by convenience and larger agendas.

Third, fueled by NAEP’s reading and math results, a general narrative of 21st-century schooling has gradually taken shape: The first decade was one of big gains, the second decade one of stagnation. The question thus becomes, what went right in the first decade and/or wrong in the second? But I wonder if the foundation for all of this hypothesizing may be shakier than is generally assumed.

After all, we don’t actually know that learning went up and then flat-lined—what we know is that fourth- and eighth-grade reading and math scores on a high-quality national assessment did. And there are a bunch of possible reasons why scores move, not all of which reflect student learning. As I noted last June, those reasons can include:


  • Students may be learning more reading and math. The tests are simply picking that up. All good.
  • Students may be learning more in general. And the reading and math scores are a proxy for that. Even better.
  • Instructional effort is being shifted from untested subjects and activities to the tested ones (e.g., to reading and math).
  • Schools are focused on preparing kids for the tests themselves, so that scores improve even if students aren’t learning.

It seems wholly plausible, for instance, that the first decade under No Child Left Behind (NCLB) saw scores go up—in some part—because schools were devoting enormous attention to reading and math instruction or test preparation, at the expense of other subjects and skills. If so, it’s a fair question how much of the NAEP gains reflected students learning “more” and how much was simply a product of a shift in instructional energy, attention, and focus. If that were the case, it would also cast the results of the past decade in a rather different light.

But we’ve devoted curiously little energy to such queries. Indeed, the pundits tend to skip by such considerations when divining the grand significance of NAEP results. At that point, a valuable exercise can start to turn into one more platform for the “TV docs” to pitch their wares.


The opinions expressed in Rick Hess Straight Up are strictly those of the author(s) and do not reflect the opinions or endorsement of Editorial Projects in Education, or any of its publications.