PARCC Approves Test Performance-Level Descriptions by Grade, Subject

By Catherine Gewertz — June 27, 2013

Arlington, Va.

PARCC, one of two groups of states designing tests for the common standards, has approved descriptions of the skills and knowledge that students must have, at each grade level and in each subject, to demonstrate specific levels of mastery on the assessments.

At its quarterly meeting yesterday, the PARCC governing board approved its “performance-level descriptors.” The 215-page document goes grade by grade, from 3 through 11, in math and English/language arts, detailing the “claims” and “subclaims” the test seeks to make about students’ mastery and how that mastery should be demonstrated.

It’s a more granular version of the “policy-level descriptors” PARCC approved last fall. The policy-level descriptors lay out broadly what achievement means at each of the five scoring levels of the test. A 5 reflects “distinguished” command of the subject, while a 4 shows a “strong” command, and a 3 shows a “moderate” command. Scoring at level 2 shows a “partial” command of the subject, and level 1 a “minimal” command. Each is linked to a description of how likely a student is to perform well in entry-level, credit-bearing college courses.

The grade- and subject-specific performance-level descriptors get into much more detail. In English/language arts, for instance, they track how well students must perform in various ways with very complex text, moderately complex text, and “readily accessible” text at each level of the test.

Jeffrey Nellhaus, PARCC’s assessment director, told the assembled PARCC K-12 and higher-education representatives that a few key factors distinguish the five levels of student performance. In math, he said, those are the relative complexity of the standard being assessed, a student’s use of “stimulus materials” such as graphs and tables, and the extent to which students can construct solutions to problems.

On the reading portion of the English/language arts test, a student’s placement on the 1-5 scale depends on the complexity of the text in a given question, the accuracy of the student’s responses, and the evidence cited in them. On the writing portion, it depends on how well students develop their ideas, how well they organize their responses, and their use of language and conventions.

These PLDs, as the consortium wonks call them, are scheduled for release on July 17, after final, minor tweaks and editing. But you can see the version presented to the governing board on a special page of PARCC’s website. You will want to scroll down to the bottom of the page, where a bunch of documents are listed. Midway down that list is a cluster of three documents that were presented at yesterday’s meeting: “Draft PLDs,” “PLD PPT” (that translates to “PLD PowerPoint,” for those of you who don’t love all this alphabet soup), and “PLD Memo.”

The governing board made it clear that the descriptors are still subject to change if feedback from the field tests scheduled for 2014 warrants it.

You might recall that PARCC released these for public comment in April.

The other state assessment group, Smarter Balanced, approved its version of the performance descriptors—which it calls “achievement level descriptors”—in March.

The PARCC governing board adopted the grade- and subject-specific descriptors unanimously. Higher education representatives voted along with K-12 representatives from each state, because PARCC policy treats this vote as a key decision bearing on college readiness.

Some interesting discussion surfaced as the board considered the descriptors. Part of it circled back to the question of how colleges can be assured that a 4 on the PARCC test—the level at which a student is labeled “college ready”—truly signals a good chance of success in entry-level, credit-bearing college courses.

Arizona’s superintendent of public instruction, John Huppenthal, posed the question. Is it really clear, he asked, that a “moderate” level of mastery—a 3 on the PARCC test—falls short of college readiness, while a “strong” level—a 4—constitutes it?

This question goes to the heart of one of the biggest issues causing uneasiness in the assessment consortia right now: how colleges and universities can verify that the achievement levels on the PARCC and Smarter Balanced exams are predictive of good performance in entry-level, credit-bearing college courses. This is a big chunk of the two groups’ research agendas for the years after the first test administration in 2015.

But the trouble is that colleges and universities want to know sooner whether they can rely on the “college-ready” cut scores to reflect real readiness for the rigors of college coursework. Consortium insiders are discussing other kinds of research that could be done earlier in the game to validate those cutoff levels.

In response to Huppenthal’s question yesterday, Nellhaus had this to say: “To be candid, we don’t know yet. We need to do the research.” Massachusetts Commissioner Mitchell Chester, the chairman of PARCC’s governing board, said the group will gather “empirical evidence” validating the college-readiness cutoff levels, not only by tracking students into college, but by other means, such as administering the test to college freshmen.

“It won’t be just a content judgment,” he said.

Higher education will also have raw scale scores to review when considering students for course placement, Nellhaus said.

The scale, and the various cut points delineating one achievement level from another, have yet to be determined.

A version of this news article first appeared in the Curriculum Matters blog.