Common-Assessment Groups to Undergo New Federal Review Process
The U.S. Department of Education has created a technical-review process for the two state consortia that are designing assessments for the common standards.
The technical review will focus on two aspects of the consortia's work: item design and validation. That distinguishes it from the program-review process the department began when the two consortia first received federal Race to the Top funding in 2010, which monitors how the states are progressing with the work they outlined in their original applications.
The department outlines the new technical-review work and lists the panelists who will conduct it in a notice on the Race to the Top-Assessment website. A review guide on that same page details how the department has been conducting its program review, and also includes its Year One reports on each of the two consortia—the Smarter Balanced Assessment Consortium and the Partnership for Assessment of Readiness for College and Careers, or PARCC.
The new technical review for the Race to the Top Assessment program is part of the department's bid to find better ways to work with grantees, find out what works and what doesn't, and revise as projects progress, Ann Whalen, the department's director of policy and implementation, told me last week. It will focus on the quality of the tests being crafted and on ensuring that the groups have a sound research plan in place to validate the tests as proxies for college readiness.
The first meeting in the new review process will take place later this month, when consortia representatives will meet for two days with department officials and the technical-review panelists here in Washington, Whalen said. The idea isn't for panelists to reach consensus on the consortia's work, she said. Instead, they will share their thoughts individually with the department to guide it as it works with the two groups. The panel's feedback will also be available, in a yet-to-be-determined form, to the public, Whalen said.
The department's website goes into much more detail about the seven panelists who will serve as the technical reviewers. But here is a quick list:
• Peter Behuniak, who was Connecticut's assessment director and has advised more than a dozen states on their assessment systems. He was an adviser to former President Bill Clinton in his bid to create a voluntary national test. Behuniak is now a professor in the educational psychology department at the University of Connecticut.
• Gregory Cizek, a professor of educational measurement and evaluation at the University of North Carolina who serves on the Smarter Balanced technical advisory committee. Among his focus areas in assessment are standard-setting, test validity, and test policy.
• Rebecca Kopriva, a senior scientist at the University of Wisconsin's Center for Educational Research who focuses on making assessments accessible for all students.
• Suzanne Lane, a professor in the University of Pittsburgh's research-methodology program. A member of the PARCC technical-advisory committee, Lane focuses on test validity and design in large-scale assessment programs.
• James Pellegrino, a professor of education at the University of Illinois at Chicago who focuses on the application of cognitive research findings to assessment and instructional practice. He serves on both the PARCC and Smarter Balanced technical-advisory committees.
• Kathleen Porter-Magee, who oversees the academic standards program at the Thomas B. Fordham Institute in Washington. A former middle and high school teacher, Porter-Magee oversaw curriculum and professional development, and led the development of an interim-assessment program, at the charter school network Achievement First.
• William Schmidt, a professor at Michigan State University and director of its Center for the Study of Curriculum. Schmidt is widely known for his studies of mathematics curriculum, which found U.S. curricula to be "a mile wide and an inch deep."