A Written Test Can't Measure Effective Teaching
Trying to understand court decisions regarding public education has always been a challenge. Consider the latest ruling by a federal judge who reversed her own earlier determinations on the same subject ("Judge Rules New York Teacher Exam Did Not Discriminate Against Minorities," The New York Times, Aug. 8).
The issue was whether the Academic Literacy Skills Test, one of four tests new teachers in New York State must take to receive a license, discriminated against minority applicants. Judge Kimba Wood had previously held that two earlier exams, called the Liberal Arts and Sciences Test, had a disparate impact and, therefore, could not be used. But this time, she ruled that the "content of the ALST is representative of the content of a New York State public-school teacher's job."
What's confusing to me is whether the ALST allows valid inferences to be made about an applicant's effectiveness in the classroom. It's not that I oppose diversity in the teaching force. On the contrary, I believe it is vital. But all that matters is whether the test in question has predictive value for teachers of any race. Judge Wood relied exclusively on assurances made by New York State and Pearson, which designed the test. Neither is a disinterested party.
I think the fairest and most reliable way of determining who will be effective in the classroom is through performance assessment. That's why student teaching exists in the first place: it puts applicants for a teaching license in front of a class of real students. How student teachers perform there is far better evidence than any written test.