The successful lawsuit that black firefighters filed against New York City, claiming that two entrance exams used in 1999 and 2002 intentionally discriminated against them, surfaced once again on May 26, when a federal judge accused the city of obstructionism in redesigning the tainted exams ("Racial Bias in Fire Exams Can Lurk in the Details," July 24, 2009). The controversy holds a lesson that applies to standardized test construction in general and to teacher certification in particular.
The lesson boils down to the importance of collecting varied kinds of validity evidence to determine how much confidence to place in test takers' scores. Black firefighter candidates argued that the written exam contained too many items measuring knowledge unnecessary for fighting fires. They cited the number of items requiring applicants to read and understand long passages containing technical terms. In other words, they claimed the exam disproportionately measured reading comprehension rather than firefighting skills. The federal judge in the case agreed, ruling that there was "only a minimal relationship between the content of the examinations and the content of the job of firefighter."
This ruling has direct relevance to teacher licensing. In attempting to increase the number of "highly qualified" teachers in every classroom, states have designed standardized tests that purport to allow valid inferences to be made about future teacher effectiveness. Yet upon scrutiny, these paper-and-pencil tests have severe limitations for the task at hand. Although they may accurately measure a candidate's knowledge of subject matter—an indispensable prerequisite—it's doubtful they can identify those applicants who will be successful teachers in a classroom. In short, they have low predictive value.
It's always been my belief that performance assessment is a far more defensible strategy. It's been the basis for auditions for decades. If you want to know whether someone can sing or act, you listen and watch. You don't give them a written exam. Once their subject-matter knowledge has been assessed, candidates for a teaching credential are not that different. They can be given a description of the students, the subject matter to be taught on the day they are observed, and any other pertinent information. Then they can be evaluated by a panel of experienced teachers in the same field.
The latter point is too often ignored. When principals observe a probationary teacher, chances are they are not certified in the teacher's subject field. This disconnect makes a mockery of the entire process. Can principals who do not speak Spanish, for example, be counted on to evaluate a teacher's lesson on the use of the subjunctive? Can principals who have never taken a course in chemistry competently evaluate a laboratory lesson?
Performance assessment, admittedly, is costly and slow, and it can be subjective if not carefully implemented. But these disadvantages must be weighed against the advantage of gaining a new generation of talented teachers. I say the latter far outweighs the former.