College & Workforce Readiness

Assessment Consortium Moves to Build Higher Ed. Links

By Catherine Gewertz — April 04, 2012

From Alexandria, Va.

The Partnership for Assessment of Readiness for College and Careers made a key move yesterday: It decided that higher education representatives from its member states will become voting members of PARCC on core decisions about how the forthcoming tests will reflect college readiness.

At its quarterly meeting yesterday, PARCC’s governing board—the education chiefs of its 24 member states—decided unanimously to allow higher ed. representatives to vote on a handful of issues: who will set the cutoff score for the tests, what evidence will be used to decide on cutoff scores, how to describe the expected performance levels on the test, and the million-dollar question: what the cutoff score will be.

Let’s stop for a second to do a quick refresher for those of you whose eyes are already glazing over: PARCC is one of two big groups of states that have federal Race to the Top money to design assessments (and instructional resources and other stuff) for the common standards. The other group is the SMARTER Balanced Assessment Consortium.

When the U.S. Department of Education handed out $360 million to these two groups, it wanted a lot of things for its money, as we told you when the competition opened in April 2010. It wanted tests that can measure student achievement as well as student growth, tests that can be used to judge teacher and school performance, and tests that offer teachers formative feedback to help them guide instruction. But it also wanted tests that reflect whether students are ready—or are on track to be ready—to make smooth transitions into college and good jobs.

A key basket of work in this regard is getting colleges and universities on board to accept the two consortia’s tests as solid indicators that students can skip remedial work and go right into entry-level, credit-bearing courses. What, after all, does a test of college readiness mean if colleges don’t agree that it indeed connotes readiness for college-level work?

The idea, then, was that involvement of higher education in the consortia’s work would be crucial to reaching consensus on what a test would need to do to show that a student who passes it is college-ready. One of the federal government’s requirements in offering the Race to the Top money, in fact, was that state consortia show a hefty pledge of support for their work from public college and university systems. Both did so.

But it’s one thing for a university system to pledge support for the work of test design, and another to see it unfold in such a way that it can be embraced as a substitute for entry-level course placement. This is what the PARCC vote aims to address.

The move puts the consortium’s Advisory Committee on College Readiness (ACCR) at the voting table, right alongside its decisionmaking body, the governing board, when it comes to the most pivotal issues about how the tests reflect college readiness. Established last July, the committee is composed mostly of the highest-ranking official from one state system of colleges or universities in each PARCC state.

The ACCR and PARCC’s Higher Education Leadership Team, made up of additional postsecondary officials, are key vehicles for the consortium’s outreach strategy to build college and university support for, and confidence in, its tests.

Bringing top university officials to the voting table as the college-readiness decisions are made represents “a huge step toward operationalizing” the consortium’s work on that aspect of the assessment, Massachusetts Commissioner of Education Mitchell D. Chester, the chair of PARCC’s governing board, told the group yesterday just before the vote.

Discussion later in the meeting offered hints about how thorny the process will be. Working its way down the agenda, the governing board took on the issue of how to set performance standards (cutoff scores) for the tests. Mary Ann Snider, Rhode Island’s chief of educator quality, solicited feedback on this question from the board, with the hope that guidelines for performance-standard-setting might be voted on at the board’s June meeting.

To get a quick gauge of the board’s inclinations, she asked how many performance levels they thought the test should have: three, four, five, or some other number. Most states voted for four levels, largely mirroring the current practice in most PARCC states. Then she asked them when indicators of being “on track” for college readiness should first appear on test results: elementary school, middle school, or high school. Most voted for elementary school.

Snider also asked whether the test should indicate only how well students have mastered material from their current grade level, or whether it should show how well they’ve mastered content from the previous grade level, too. These answers came back deeply divided.

The question was aimed at a key part of the dialogue about the new assessment systems: how to design them so they show parents, teachers, and others how students are progressing over time, rather than offering a simple determination of proficiency (or lack thereof) at a given moment. But the prospect of having, say, 4th grade tests reflect students’ mastery of 3rd grade content raised some serious doubts.

“If I’m a 5th grade teacher, am I now responsible for 4th grade content in my evaluation?” asked James Palmer, an interim division administrator in student assessment at the Illinois state board of education.

Gayle Potter, director of student assessment in Arkansas, said that it’s important to give parents and teachers information about where students are in their learning. But she also said she worried about “giving teachers mixed signals” about their responsibility for lower grades’ content.

Some board members noted that indicators of mastery of the previous year’s content would be helpful in adjusting instruction. But others expressed doubt about whether a summative test was the best way to do that. Perhaps, they said, that function is better handled by other portions of the planned assessment system, such as the optional midyear assessments.

The question of when to put career-readiness indicators on the test also proved thorny. Deborah Gist, Rhode Island’s schools chief, said that placing an “on track to career readiness” indicator on an elementary student’s test results worries her.

“It starts to feel like tracking when kids are still so young,” she said. Educators and parents do need to know whether students are on a productive pathway to success in college and work, she said, “but labeling makes me nervous.”

Precisely what the test will say about career readiness also is not yet clear. Michael Cohen, the president of Achieve, a Washington-based group serving as PARCC’s managing partner, urged the governing board to be cautious about how it writes the career-readiness “claims” for the test. “It’s hard to imagine” employers using the test results to judge candidates’ readiness for work, he said.

He explained after the meeting that care should be taken to avoid creating the impression that employers will use the test results to determine if applicants are qualified for a given job. The tests will not measure every single skill that employers and professors seek, only the academic skills, he said. So making accurate claims about what the tests do and don’t measure will be critical, Cohen said.

A version of this news article first appeared in the Curriculum Matters blog.