If you want an advance look at the testing systems being sketched out by the consortia of states applying for Race to the Top money, you have to get on a plane to Detroit.
That's what I found out by flying up here for the Council of Chief State School Officers' annual conference on student assessment. One of the sessions at this event was a presentation by the two main consortia that we know are applying for $320 million in RTT money to design comprehensive assessment systems. (Another $30 million is being awarded for high school level exams, but no one gave a presentation on that here. And by the way, we've heard that the career-tech-ed consortium, one of the two groups that filed an intent to apply for the high school exam money, decided not to apply after all. That leaves a consortium organized by the National Center on Education and the Economy as the sole applicant, unless a surprise contender pops up.)
We'll have more complete details of what all the consortia have planned once they release their applications, which are due on June 23. But in the meantime, some interesting tidbits emerged at the presentation here in Detroit.
We found out that the Smarter/Balanced consortium has 30 states, and the Partnership for Assessment of Readiness for College and Careers (PARCC) has 26. Each has a subset of governing states that must commit to full implementation of the assessment systems its group designs.
The two consortia have elements in common. Both are capitalizing on technology to make better tests. Both will design not just one test, but a family of summative, formative, and interim assessments that will work together to provide data for many purposes, from state accountability to adjusting instruction in real time. Both are using the idea of a "distributive" approach to testing, meaning that it's not just one shot and you're done. They're talking about incorporating into their testing systems student work that spans multiple periods, days, or weeks.
Susan Gendron, Maine's former commissioner of education and now policy director of the Smarter/Balanced consortium, and Laura Slover, who oversees research for Achieve, the managing partner of the PARCC consortium, pointed out that while the two consortia's plans have common elements—and the groups plan to work together on some things if both win grants—key differences exist between them. While the Smarter/Balanced folks are developing computer-adaptive tests, the PARCC group is developing computer-delivered tests—not the same thing. Deep teacher involvement in developing and scoring assessments is central to the Smarter/Balanced group's work, whereas PARCC's approach uses both human and automated scoring and lets states decide how involved teachers will be, Gendron and Slover said.
The extent of the change these consortia are attempting to bring about was reflected in the two women's answers to a question about test security. They said that it was an important question, with much grappling yet to come. But Slover said that part of the answer could involve "turning the security thing on its head" by moving in a direction that might actually require less secrecy. Gendron seemed to suggest that being explicit about what students are expected to know, and asking them to apply that knowledge in new ways, might shift the security question significantly, since the answers aren't just fill-in-the-bubble responses, but analytical ones.
Assessment is, of course, big business, so one audience member asked when the groups anticipated releasing requests for proposals for the work. (I gathered that many test-developer types were in the full-to-overflowing room, since ripples of laughter and nods accompanied this question.) PARCC plans to have its RFPs out the door in February, Slover said, and Smarter/Balanced anticipates having its own done by this fall. (All this planning without even knowing if you will get some of that federal money. Just imagine.)