Federal

How Reviewers Rated the Race to Top Assessment Contenders

By Catherine Gewertz — September 03, 2010 1 min read

Those of you nerdy enough to yearn for nitty-gritty details of the RTT assessment competition will want to curl up with the reviewers’ score sheets on the competing state consortia.

The summary chart of the three contenders’ scores shows that in the “comprehensive” category (for K-12 assessment systems), the PARCC consortium took home the most points: 164 out of 220. The SMARTER Balanced group nabbed 151. (We’re told, however, that this score difference is not the reason the PARCC group received a $170 million grant while SMARTER Balanced got $160 million.)

In the high school category, the SCOBES consortium earned just under 126 of an available 220 points. It was the only group vying for money in that category, but it did not get funded.

A more detailed score sheet for the SCOBES consortium shows that the peer reviewers held radically differing opinions about the group’s proposal. (Reviewer #6 seemed to be considering sending flowers, but it’s amazing that Reviewer #9 didn’t throw darts. Check out those wildly varying totals.)

The score sheets of PARCC and SMARTER Balanced show variations of 71 to 73 points among the nine reviewers.

Each reviewer goes into more detail on the judging in their individual comments, which are listed on the RTT assessment scoring main page.

We know who the peer reviewers are, but which ones gave which ratings isn’t disclosed. The reviewers listed with asterisks were alternates, who evaluated the applications but whose appraisals were not counted in the end, according to the Education Department.

A version of this news article first appeared in the Curriculum Matters blog.