Opinion

NAEP Results: Gaps in Opportunities to Learn?

By Robert Rothman — May 03, 2016

The recently released results of the 2015 National Assessment of Educational Progress for twelfth graders have sparked some concern. Scores in reading remained flat, as they have for years. But scores in math declined slightly, just as scores for fourth- and eighth-grade mathematics and eighth-grade reading did in results released last fall. Disturbingly, scores for low-performing students dropped substantially.

Some commentators have suggested that the results reflect the upheaval in instruction caused by the shift to the Common Core State Standards over the past few years. Others have suggested that there is a mismatch between what NAEP tests and what’s in the Common Core, so the results might not be an accurate reflection of what students know and are able to do. Still others have pointed out that the results might actually represent good news: graduation rates have gone up, and students in twelfth grade who might have dropped out in previous years took the test in 2015, so the overall averages represent a broader range of student abilities than in the past.

The simple truth is that NAEP is not designed to provide causal explanations. It’s a test given every two years to a representative sample of students who happen to be in fourth, eighth, or twelfth grade that particular year. It does not follow students over time, so it’s impossible to say that a policy or practice “caused” the results.

Nevertheless, the assessment is the best national measure of school performance, and the results bear further scrutiny. Moreover, they are consistent with other assessments, such as the Programme for International Student Assessment (PISA), also given to high school students, in which U.S. student performance has remained flat or declined slightly over the past decade.

NAEP is important for other reasons as well. In addition to the test itself, NAEP administers surveys to students and teachers that provide a wealth of information about classroom practices. These data provide context for the test results. Again, they suggest correlation, not causation, so the information must be interpreted with caution. But they point to some areas for further study.

For example, the 2015 reading assessment asked students whether they explained what they read in class. Those who said they never or hardly ever did so scored 263 on a 500-point scale. Those who said they explained what they read in class every day scored substantially higher: 295.

Similarly, the results show that students who made presentations in class about something they read six times or more outperformed students who said they never did so, and students who did projects about something they read six times or more performed slightly better than those who never did.

Again, these are correlations. It might not be the case that explaining what you read or doing a project on it makes you a better reader and improves your performance on the assessment. Perhaps students who were already high performers simply had more opportunities to explain their reading or do projects. But if that’s the case, it’s worth asking why those opportunities are not distributed more equitably.

Another survey question offers some additional provocative information. Students who said they prepared for state assessments “not at all” scored 291 on the reading assessment, while those who said they prepared for the state tests “to a great extent” scored 282. This finding provides yet more evidence to suggest that low-performing students spend more time on test prep than high performers. (A forthcoming survey from the Center on Education Policy has some additional information on this score.)

Put together, the findings paint a picture of unequal opportunities to learn challenging content. Low-performing students spend less time explaining their reading or doing projects, and more time on test prep. Once again, these are correlations: they do not suggest that these patterns caused the low performance. But why do they exist? What can be done about them? That’s the challenge for educators and policy makers.


The opinions expressed in Learning Deeply are strictly those of the author(s) and do not reflect the opinions or endorsement of Editorial Projects in Education, or any of its publications.