
Performance Assessment 2.0: Lessons From the Last Century

By Robert Rothman — June 08, 2015

Contributors to this blog (see, for example, here and here) have often discussed the role of performance assessment in schools and school systems that are organized for deeper learning. Performance assessment, or assessment that asks students to complete a task that produces a product, taps into student competencies that more conventional assessments, like multiple-choice tests, seldom measure. At the same time, the use of performance assessment signals the need for classroom experiences that enable students to develop their abilities to use knowledge to solve problems and think critically.

To those of us with long memories, these posts might sound familiar. Didn’t schools engage in performance assessments way back in the last century?

Indeed--and even whole states included performance assessments in their statewide assessment systems. As history books will recount, states like Kentucky and Vermont used portfolios of student work in their assessments, and Maryland’s, Connecticut’s, and several other states’ end-of-year tests included performance tasks.

Just about all of those programs ended early in this century, after the No Child Left Behind Act required all states to administer reading and mathematics tests to all students in grades 3 through 8. This requirement made performance assessments, which cost more than multiple-choice and short-answer tests because they require human scorers, difficult to sustain.

Now that there is growing interest in performance assessment, what can the experience of the 1990s suggest as states move forward? A new report from the Stanford Center for Assessment, Learning and Equity (SCALE) provides some answers.

The report, by Ruth Chung Wei, Raymond L. Pecheone, and Katherine L. Wilczak, looks at the experiences of eight states, along with a national initiative known as the New Standards Project, to examine their successes and challenges. Their findings yield important lessons for schools and state officials in 2015.

The authors look at three factors that supported or hindered the success of performance assessment systems:

  • political contexts and the role of leadership, communication, and public support;
  • technical quality and design; and
  • practical issues such as cost and implementation.

The research shows that the financial constraints imposed by NCLB were only one of the challenges that ultimately doomed the earlier generation of performance assessments. While some of the programs (such as Connecticut's) benefited from political and public support, others, such as California's, fell victim to opposition from political leaders.

In addition, some of the programs were unable to maintain the technical quality needed for the assessments to be used in accountability systems. Vermont's portfolio assessment, for example, initially showed relatively low levels of reliability, although the quality improved over time.

Wei, Pecheone, and Wilczak show that these issues are not insurmountable, and that recent developments in the field have made performance assessments sounder and cheaper. For example, technology substantially lowers the cost of administering performance assessments and can lower the cost of scoring by making virtual scoring possible. States no longer have to pay teachers to travel to score the assessments. At the same time, the formation of consortia of states can lower development costs by spreading them out over more students.

Researchers and educators have also made strides in improving scoring systems and other factors that limited the technical quality of earlier assessment systems. The rubrics used to evaluate student work are stronger and training protocols have improved, so teachers can make more reliable judgments about the quality of student products.

Political and public support remains a challenge. Policy makers remember the challenges of the 1990s and are reluctant to endorse systems that they believe might be more costly or less reliable. The decisions by several states to drop out of the Partnership for Assessment of Readiness for College and Careers (PARCC) and the Smarter Balanced Assessment Consortium--whose tests include performance tasks--might be a sign that policy makers are shying away from these kinds of assessments.

The good news is that there is more of an imperative for performance assessment. The Common Core State Standards and other standards for college and career readiness call for students to demonstrate competencies that are unlikely to be tapped by multiple-choice tests exclusively. The state of the art in performance assessment has advanced considerably. And the examples of schools like those described on this blog show the power of this kind of assessment. A look back at what worked and what did not in “Performance Assessment 1.0” can help point the way toward the future.

The opinions expressed in Learning Deeply are strictly those of the author(s) and do not reflect the opinions or endorsement of Editorial Projects in Education, or any of its publications.