
A Glimpse of Technology-Enhanced Tests

By Catherine Gewertz — May 08, 2012

Experts who work on technology-enhanced assessment have a few ideas to replace those tiresome multiple-choice tests that so many people complain about. Take this one, for instance:

A middle school student sits down at a computer and watches an animation of a spring that powers a racecar in a pinball machine. Prompts lead her to think about what gives the spring its power: Is it the thickness of the wire? The number of coils? She has to choose a hypothesis and explain what leads her to think it could be correct. She designs an experiment to test her hypothesis, inquiring into how the thickness of the wire and the number of coils affect the spring’s ability to propel the racecar.

From this, she generates a table of outcomes. She explains how the different variables shed light on her hypothesis and discusses whether it was correct. She reflects on how the experiment could be changed to produce more enlightening data, and then she reworks it and discusses her conclusions: Given everything she’s learned, which spring will make the pinball racecar go faster, and why?

This assessment task was just one of those showcased yesterday at a conference on technology-enhanced assessment in Oxon Hill, Md., during a session on ways to measure skills that have traditionally been tricky to assess. The pinball car task was designed by a team from SRI International. The team was showing it off not only to demonstrate the potential depth and engagement that such tasks offer, but also to show how they can be designed from the ground up, rather than adapted, according to the principles of Universal Design for Learning and evidence-centered design.

The gathering was organized by the Center for K-12 Assessment and Performance Management at ETS and the Council of Chief State School Officers, which helps its state members sort through assessment issues via its SCASS system, a network of collaboratives on various standards and testing topics. It was sponsored by some of the biggest names in the testing industry. All the papers from the conference can be found on a special page of the center’s website.

While the SRI team presented the prototype task in science, a team from ETS displayed a mathematics task that it has been piloting in New Jersey as part of its CBAL initiative, which envisions testing as a form of instruction. They presented a task called “proportional punch,” which leads students through exercises in ratios and proportions as they figure out how to make cherry punch of varying concentrations. As they try various recipes of water and punch mix, they can watch a “sweetness meter” change. Developing tables of data, students have to theorize about how different ratios will influence the punch, and must explain their answers. Which will be sweeter: a punch made with 7 scoops of mix and 9 cups of water or one made with 10 scoops and 13 cups of water? Why? The task also offers special-needs adaptations such as text-to-speech and haptic, or vibrating, sensations to aid students with visual impairments.
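For the curious, a quick back-of-the-envelope check of that question, assuming for illustration that sweetness simply tracks the ratio of scoops of mix to cups of water (the relationship the task asks students to reason about):

\[
\frac{7}{9} \approx 0.778 \qquad > \qquad \frac{10}{13} \approx 0.769
\]

Under that assumption, the 7-scoop, 9-cup recipe comes out slightly sweeter.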

An English/language arts task, showcased by a team from CTB/McGraw-Hill, has students read an article, conduct research, and write a series of extended and brief responses to prompts. They must evaluate the credibility of their research sources. They can take notes on a yellow pad on the screen, have what they write read back to them, and respond orally if they choose. The students’ performance generates feedback data for the teacher, and the task includes tailored suggestions about instructional next steps.

One theme that kept arising during the presentations and accompanying discussions was how technology offers the possibility of making tests more of a learning experience for students. Juan D’Brot, West Virginia’s assessment director, noted after the presentations that he was struck by their potential to be “a blend of instruction and assessment.”

“We’re really blending this summative-formative-interim continuum,” he said.

Sue Rigney, an assessment expert with the U.S. Department of Education who co-moderated the morning’s presentations, noted the blurring of those lines, as well.

“We are seeing a narrowing of the distance between summative assessment and instruction,” she said. “I don’t know where it will end.”

The excitement about the potential of technology-enhanced tests, however, was tempered throughout the discussions by a sense of how far there is to go before such tests are refined and available, and of the challenges they will pose once they are ready.

Experts in the room repeatedly cautioned one another to avoid being seduced by the “coolness factor” of technology and to stay focused on the instructional and measurement rationale behind each design feature. Additionally, many questions still hover over the work on technology-enhanced tests: How do educators manage and understand the flood of new kinds of data they will produce? How do those data dovetail with states’ accountability systems? Not every possible set of skills can be assessed, so which ones are most important to assess? And what process will guide educators and policymakers as they decide which tests to use for classroom-based instruction and which to use in large-scale settings like state and federal accountability?

The ship, as CTB/McGraw-Hill’s Karen Barton said, is not quite ready to launch.

A version of this news article first appeared in the Curriculum Matters blog.