Assessment

Common Assessments: More Details Emerge

By Catherine Gewertz — January 03, 2012

Happy New Year, and welcome to the Year of the Common Assessments. Or at least the year of common-assessment procurements.

I know; what a nerdy way to usher in a new year, right? Sorry; we can’t help it. It’s part of our job here at EdWeek. One of our ongoing resolutions is to keep you informed about the activities of the two big groups of states that are designing tests for the common standards. And we have some updates for you.

The two consortia—which, you probably recall, are working with federal Race to the Top money—have released documents that shed a bit more light on what the tests might look like when they’re fully operational in 2014-15. We say “might” because there is a very long road to travel between these documents and the final tests—lots of tweaking, field-testing, revising, reviewing. But the accumulating stack of documents offers interesting glimpses.

So what do we have here? First of all, the Partnership for Assessment of Readiness for College and Careers, or PARCC, issued an “invitation to negotiate” for development of its test items. (That’s what most folks call an RFP, or request for proposals. But in Florida, which is PARCC’s fiscal agent, they call it an invitation to negotiate, or ITN.)

You can find PARCC’s announcement here, and the ITN itself here. The ITN goes hand-in-hand with PARCC’s model content frameworks, which were released in November and can be found here, along with webinars walking you through them.

Earlier PARCC invitations to negotiate, including one for systems architecture and another seeking information on artificial intelligence used in assessments, are here, along with the consortium’s procurement timeline and descriptions of all its anticipated procurements.

What’s in PARCC’s newest ITN?

It covers development of the group’s summative, end-of-year test, as well as its midyear formative assessments. It doesn’t include the consortium’s planned early-year diagnostic assessment or its assessment for speaking and listening skills.

PARCC discusses the “innovations” it seeks in the tests. On the English/language arts assessment, for instance, it seeks “enhanced comprehension” reading items that can measure deeper, more nuanced types of understanding and require students to read complex passages and cite evidence to support their answers. There will also be a focus on measuring students’ “academic vocabulary.”

It gives some hints of what test items might look like as it describes the “task generation models” that winning vendors will use to design items. In a section of the English/language arts test probing students’ research skills, for instance, students could be asked to draw on a speech from a historical figure and several related informational texts about the speech. Another task could ask them to do something similar in science, since the standards’ literacy skills reach across disciplines.

The ITN also previews things like how long the tests might take. The group’s performance-based assessment in English/language arts, for instance, one of the two components of its summative test, is envisioned as three sessions over two days, with one focusing on a research simulation and another on a literary analysis. The end-of-year, computer-based test will ask students to read about six passages and respond to machine-scorable items, including pairs of readings that enable comparison and synthesis, and “innovative” items designed to deepen students’ text analysis as they move through the test. Reading passages will be 200-800 words long for grades 3-5, 400-1,000 words for grades 6-8, and 500-1,500 words for high school students.

The math test will include items with single or multiple prompts. Some will be machine-scorable, and some will require hand scoring. The performance-based assessment in math will include tasks that demand written arguments or justifications of students’ answers, or critiques of reasoning. It will also include problems that involve real-world scenarios. The range of tasks is, at least at this point, pretty broad: between 11 and 66 in grades 3-8 on the end-of-year assessment and the first section of the performance-based assessment, for instance, with an additional 2-4 tasks in each of the second and third sections of the performance-based assessment.

The ITN also says that PARCC will develop two series of end-of-course tests in math at the high school level: one for the traditional course sequence—Algebra 1, Geometry, and Algebra 2—and another for a course sequence that would integrate those topics.

There is a lot more detail in the ITN, as well as in its appendices. We’re still wading through it all.

But a notable feature of the PARCC ITN is that this big chunk of work isn’t going to just one vendor. The document outlines two phases: the first phase, in which “multiple” vendors will develop half the test items, and a second phase, in which contractors from Phase I who have “shown the ability to work collaboratively and deliver high quality, innovative assessment items and tasks on deadline” will have their contracts renewed to finish the job. The contracts for this work are slated to be awarded in April or May.

A few new tidbits are trickling out of the other consortium, too. The SMARTER Balanced Assessment Consortium has issued a cluster of requests for proposals, or RFPs, recently.

You can see its master work plan here, and its procurement schedule—listing each anticipated RFP and what it’s for—here. You can see each released RFP and its status here, but you’ll see that they are mixed in with all of the RFPs for Washington state, which is SMARTER Balanced’s fiscal agent. You can pick out which ones are the consortium RFPs by looking for “SBAC” in the title.

Back in August, we told you about two documents from SMARTER Balanced: its content-mapping-and-specifications document, and RFP #4, which was for item specifications. In a more recent flurry, there are RFPs for developing accessibility and accommodation policies (RFP #6); building the software to support the item bank (RFP #7); and developing style guidelines for reviewing test items for content alignment, bias, and sensitivity, along with training materials for item writers (RFP #8).

RFP #14, for item writing (among other things), offers a few new details, but doesn’t provide the kinds of test specifics included in PARCC’s ITN. The August documents come closer to providing those. But we do learn in RFP #14 that SMARTER Balanced seeks development of 10,000 selected-response or constructed-response items and 420 performance tasks in math and English/language arts, to facilitate pilot-testing in the 2012-13 school year. Most will be scored automatically, the RFP says.

Part of the work will be conducting research to find out which types of items are best suited to automated scoring and which must be scored by hand. The winning vendor(s) will conduct “cognitive labs” and small-scale trials to test innovative approaches to test items.

An interesting aspect of this RFP is that it asks the prospective vendor to hire and train teachers from SMARTER Balanced states to write items and tasks and to review items for content alignment, accessibility, and bias.

A version of this news article first appeared in the Curriculum Matters blog.