Teacher Performance Assessment Under Scrutiny

By Stephen Sawchuk — May 11, 2012

A column from The New York Times takes aim at the Teacher Performance Assessment (TPA), a performance-based licensing test that about 200 teacher preparation programs across 25 states are now piloting.

In essence, the story says that a number of students and faculty members at the University of Massachusetts are refusing to participate because they object to Pearson, the New York City-based educational publishing and testing company, arranging the scoring process rather than teachers and faculty members.

Pearson has been caught up in “Pineapplegate”—a wave of criticism over an apparently bungled question on a New York state reading test. But that issue doesn’t even show up in the column, which instead seems to play into general fears about the “corporatization” of public education. “There is a whole education industry that is flourishing because it is built on the denigration of public schoolteachers,” its author writes.

The irony is that the exam was developed by Stanford University researchers and teacher educators (Linda Darling-Hammond is a proponent) and by the American Association of Colleges for Teacher Education (AACTE), among others. These are not the groups that most observers in the education community would lump into the “corporate” camp; they don’t tend to be big fans of alternative certification, for instance.

Apparently to defend its participation in the exam, AACTE has issued a statement that appears to respond directly to the criticisms raised in the NYT column. Its basic point is that teacher educators have been deeply involved in providing feedback on the exam’s design, that the exam will give programs a baseline for determining how well they’re training graduates, and that it will help create a national database for discussing beginning-teacher practice.

There has been a wave of policy interest in teacher education—for example, the Education Department’s negotiated rulemaking on Title II of the Higher Education Act. During that process, it was pretty clear that many teacher educators opposed outcome measures tied to students’ standardized test scores. But they have been increasingly pressed to come up with alternatives that can reliably measure teacher competence, and the TPA appears to be the main tool many are pinning their hopes on.

So a lot rides on the pilot now underway. The TPA builds on other performance-based teacher exams, such as National Board Certification and the Performance Assessment for California Teachers, both of which appear to have some relationship to student achievement. But the TPA is a different, more streamlined version of those assessments, and many of its technical properties remain unknown. (There are also questions about just how the six states that have already committed to adopting the TPA will use it in licensing. Will teacher-candidates get to take it multiple times?)

The researchers conducting the TPA pilot are collecting outcome data, such as GPAs, scores on licensing tests, and standardized-test results for the students taught by the pilot teachers, to figure out how scores on the exam relate to those other measures. Some of those data should begin to flow within the next two years, as the pilot wraps up.

A version of this news article first appeared in the Teacher Beat blog.