'Smarter Balanced' Common Core Test Gets Thumbs Up from the Feds

Nevada's use of the Smarter Balanced test—one of the major assessments measuring the Common Core State Standards—has met all of the U.S. Department of Education's requirements for grades 3-8.

It's the first time that the Smarter Balanced test, which was crafted by a consortium of states with the help of an initial infusion of federal cash, has received the thumbs up from the feds. The approval is part of the wonky "peer review" process, a requirement under the Every Student Succeeds Act meant to make sure the tests are technically sound.

If you poke around the Education Department's assessment website, you'll find that most states are still in peer-review purgatory despite ESSA now coming up to its third anniversary. 

So why are we making a big deal about this one approval? Well, as one of the two major testing consortia that emerged after the common core rolled out, Smarter Balanced has faced a lot of scrutiny, especially over its adaptive design, which "adjusts," or gives students questions of different levels of difficulty. Some folks continue to charge that the exam isn't measuring everything it's supposed to measure.

The most recent peer review process started back in 2015. Nevada first started submitting its materials in December 2017. 

"It's an affirmation of the work that started over eight years ago among a group of states that banded together to build an assessment system from the ground up to measure college and career readiness," said Tony Alpert, the executive director of the Smarter Balanced Assessment Consortium. "It's a good bookend on a long process." 

State of Assessment Peer Review

Admittedly, testing peer review doesn't exactly scream "party time" to most people, unless you're a big geek like your trusty Curriculum Matters reporters. But here's why it matters. 

Under ESSA, as under the law it replaced, states must submit technical evidence to show that the tests used to generate accountability ratings under the law produce valid, reliable, and comparable information on student performance. The tests must adequately cover all of the state's content standards. Theoretically, the Education Department can withhold cash from states' federal education allocations if they don't get their act together on the tests. (The agency made a few serious threats to this end back in 2006, but doesn't appear to have ever actually pulled the trigger.)

In reality, the peer-review process is pretty technical and involves a lot of negotiating back and forth between the department and the states that can take months, if not years, to complete. For example, states have to submit research and studies showing that when they translate the test into other languages, it still produces accurate student results. And they have to show that different "forms" or versions of the test produce equally accurate measurements.

Note that peer review doesn't really get into the strengths or weaknesses of specific test items—it's focused on whether the test measures the cognitive depth of the state's content requirements, not on how it stacks up against competing exams. 

Also, while the SBAC folks submitted technical material to the Education Department on behalf of the 12 states that use its tests, most of those states are still in the middle of peer review. California, Delaware, Hawaii, Idaho, Oregon, South Dakota, Vermont, and Washington state all "substantially meet" the regulatory requirements, the department's peer reviewers have concluded, but they have a little further to go. That's because despite using the same exam, the states hired different contractors to administer and score the results, and have to show that those processes were sound. 

In Nevada's case, the peer reviewers asked for more information on aspects like how the state planned to report on student subgroups, on student privacy protocols, and on the composition of the state's technical advisory committee, said Peter Zutz, who oversees the state's office of assessment and accountability management. That was enough to get the state past the finish line.

"I think what peer review provides for us is verification that, indeed, our assessments are meeting the needs of Nevada's students," Zutz said. "They're aligned and rigorous, and assess and report what they claim to assess and report on." 

Another common-core test, PARCC, won federal approval this past January as part of Maryland's peer-review process. 

Nevada's approval covers only Smarter Balanced's grades 3-8 tests. The state will submit its alternative assessments for students with disabilities, its high school exams, and its science tests later this year.

In general, high school tests are becoming something of an Achilles' heel for many states. As my colleague Catherine Gewertz wrote a year and a half ago, many states are interested in using college-entrance exams to meet high school testing requirements in a two-birds-one-stone type of arrangement. But the U.S. Department of Education's peer reviewers have demanded more information on those tests' technical properties.

Six states use Smarter Balanced at the high school level, but those tests haven't been approved just yet.

Image: Getty
