Opinion

What kind of testing is best?

By Diane Ravitch — May 18, 2007

Deb,

You will not be surprised to learn that I agree with you about the value of a road test for licensing future drivers. If you can’t actually operate a car with safety and confidence, then you should not be licensed to drive, no matter how well you score on the written exam.

As it happens, many of the subjects that are taught in school are not comparable to driving a car. Many of them involve not only “habits of mind” but the acquisition of skills and knowledge that cannot be evaluated in any way that is akin to a road test. How should we test a student’s ability to read? We might have students read aloud. That is a good thing for a teacher to do with regularity. For testing purposes, though, it would be very time-consuming, and for an entire class it might take days to listen to each student read selections of varying levels of difficulty. Or we might give the students a test of reading comprehension in which they read an essay or poem or story and then answer questions to demonstrate that they understand what they have read. The latter method, in the eyes of most school officials, is preferable because it takes less time and money to administer and turns out to be a reliable indicator of student reading ability. It also makes it possible to compare student performance and to gauge whether students are making progress in relation to what students of their age typically know.

I think the same argument could be made for assessing students’ knowledge of mathematics, science, and other subjects. I prefer to see students writing research papers in history, to be sure. Most fill-in-the-bubble history questions are extremely superficial. And yet, superficial as they are, such questions (if they aren’t too stupid or too vapid) can quickly identify students who really don’t have a clue about whatever history they studied, and they can be designed to tap different levels of difficulty and knowledge. For example, the latest NAEP test of U.S. history has a 12th-grade question that shows a map of the continental U.S. around 1800, on which a dotted line traces a route. Students are asked to identify whether “the expedition whose route is shown was undertaken to explore the: a) lands taken in the Mexican War; b) lands taken from England in the War of 1812; c) Louisiana Purchase; or d) Gadsden Purchase.” Students need to be able to look at that map and know that they are looking at the route of the expedition that explored the Louisiana Purchase. There is no way to fake it, other than a lucky guess.

I am all in favor of exhibitions, research papers, and other means of demonstrating what students know and can do. These forms of assessment give teachers an in-depth look at what students have learned. These are the right tools for the individual teacher and for Sizer-style schools. More power to them and to you.

But you acknowledge that these are not the right tools for a district or a state or a nation that is trying to see how well students in fourth grade or eighth grade or twelfth grade are doing. You suggest that large-scale standardized testing should be sample-based, like NAEP. This would ensure that there are no “stakes” for any individual student. Quite honestly, I don’t know what the right answer is. I know that there are testing experts and education economists who argue that stakes are very important, that they create incentives for higher performance, and that one reason NAEP 12th-grade results are so poor is that students know the test has no stakes. Al Shanker often told audiences that his students would ask him, “Will it be on the test?” If he said no, they didn’t bother to learn what he was teaching; if he said yes, they were very attentive.

Maybe our readers will weigh in and help me on this one. Or we should call in some psychometricians.

Diane

The opinions expressed in Bridging Differences are strictly those of the author(s) and do not reflect the opinions or endorsement of Editorial Projects in Education, or any of its publications.