Who will answer the phone at one minute after midnight on Oct. 1, 2014?
That's a lighthearted way of raising a very serious question in the testing world: How will the two assessment consortia survive to oversee the tests once federal funding runs out at the end of September 2014?
It's a question that's on the minds of the education chiefs of the Smarter Balanced Assessment Consortium as they meet here this week. Joe Willhoft, SBAC's executive director, opened the discussion on Wednesday with that joke, but the chiefs (or their delegates, if chiefs were absent) quickly turned their attention to approving a plan for doing something concrete about it. They voted to approve participation in a partnership to explore ways to keep Smarter Balanced and the other assessment consortium, the Partnership for Assessment of Readiness for College and Careers, or PARCC, alive after the fall of 2014.
The two consortia will work with two groups that are very familiar to you by now if you follow the common-standards work, because they spearheaded that initiative: the Council of Chief State School Officers and the National Governors Association. Those organizations will work to find funding to help sustain the two consortia after the $360 million in Race to the Top money runs out.
You might recall that the federal money was meant specifically for the design of the two testing systems, not for administration of the tests, which is supposed to begin in the 2014-15 academic year. That has been a source of worry for both consortia from the beginning, as states asked how they would pay for the new tests, who would update them from time to time, and other things.
As we've reported to you, sustainability has been under active discussion in both consortia for some time now. But the discussion here was the first I'd heard about an official partnership. PARCC has voted to participate in that partnership, too, officials there tell me.
Also on the chiefs' agenda was how to move forward on designing "achievement level descriptors" for the consortium's tests. Those are the paragraph-long statements describing what each level of achievement on the test means. Smarter Balanced has contracted with CTB/McGraw-Hill to facilitate that process, but the consortium needed the chiefs' approval of a timetable and a process for including the input of K-12 teachers, higher-education faculty and other experts.
The group approved a process in which 30 K-12 educators and 21 higher-education faculty will be nominated by the states to help shape the writing of the ALDs. The consortium's math and English/language arts directors, Shelbi Cole and Barbara Kapinus, respectively, will choose three experts in each content area to participate, as well. Drafts will be revised after rounds of feedback from psychometricians, the public, SBAC work groups and others, and be considered for final approval next March.
If you are asking how there can be such a fuss over paragraph-length descriptions, it's worth knowing that these things carry a lot of weight. These are the descriptions on which important decisions are based, such as whether high school students are ready to skip remedial work and enroll in entry-level, credit-bearing college courses. PARCC's discussions on the same topic offer a vivid demonstration of how tricky it can be.
Another unresolved question within both consortia is how to define career readiness. At the Smarter Balanced meeting, the states waded into that territory, endorsing a policy statement about career readiness that was developed by the Career Readiness Partner Council, which is trying to hammer out shared ideas of career-ready skills and infuse them into common-core instruction. The catch? The consortium couldn't release the statement itself to the public, since the Council doesn't plan to release it until mid-October. It wasn't clear precisely what role this statement will have in the consortium's work.
Also on the agenda for the Smarter Balanced meeting was an item about how to sample populations of students in the consortium's planned spring 2013 pilot test of 10,000-plus items and tasks. Since K-12 enrollment in the group's states varies so radically, from 87,000 in Vermont to 6.2 million in California, what would be the best way to get appropriate representation in a pilot test? The chiefs considered "oversampling" the smaller states, but ultimately decided to go with an approach that samples the same percentage of students in each state.
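For readers curious about the arithmetic behind that choice, here is a minimal sketch of the two approaches the chiefs weighed. The enrollment figures come from the article; the 1 percent sampling rate and the oversampling floor are illustrative assumptions, not details of SBAC's actual plan.

```python
# Enrollment figures cited in the article; the rate and floor below are
# hypothetical numbers chosen only to illustrate the trade-off.
enrollments = {"Vermont": 87_000, "California": 6_200_000}

def equal_percentage(enrollments, rate):
    """The approach chosen: sample the same fraction of students in every state."""
    return {state: round(n * rate) for state, n in enrollments.items()}

def oversample_small(enrollments, rate, floor):
    """The alternative considered: guarantee each state at least `floor` students,
    boosting small states above their proportional share."""
    return {state: max(round(n * rate), floor) for state, n in enrollments.items()}

print(equal_percentage(enrollments, 0.01))   # Vermont contributes far fewer students
print(oversample_small(enrollments, 0.01, 2_000))  # small states get boosted to the floor
```

The sketch shows why oversampling came up at all: under equal-percentage sampling, a small state like Vermont contributes only a sliver of the pilot sample, while oversampling would pad small states at the cost of weighting their students more heavily than their share of total enrollment.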
Most of the four-day gathering was spent in private meetings with vendors working on more than a dozen contracts for the test, from the test engine to item development. There have been some interesting developments on the test design, as well, but those haven't yet reached the agenda for the public portion of the chiefs' meetings. I will be reporting to you on that shortly, however; stay tuned.