
Teacher-Evaluation Fears Playing Out in New York

By Liana Loewus — October 15, 2013

A Syracuse high school is serving as a case study for how apparent glitches in an evaluation system can affect teacher morale—and potentially jeopardize teachers’ careers.

Teachers at Henninger High School, who picked up their performance evaluations last week, told The Post-Standard of Syracuse that their scores were lower than they should have been. All 100 teachers at the school received zeros on the 20-point school-improvement component tied to the Regents exams, but the teachers said they should have received 12 points because Henninger students improved in three subject areas.

According to The Post-Standard:

The issue drove some teachers from an “effective” rating to “developing,” and others from “developing” to “ineffective.” Teachers in the developing and ineffective categories are required to create improvement plans. Those rated “ineffective” two years in a row are subject to an expedited dismissal procedure.

One veteran teacher told the paper, “There were a lot of devastated looks on people’s faces.” Another said, “There were teachers crying. ... People were furious.”

The local teachers union president agreed there was a problem, but as of yesterday the district had not confirmed whether a mistake had been made.

However, according to sociologist Aaron Pallas, the evaluation problems extend well beyond Henninger. In a blog post for The Hechinger Report, he notes that preliminary teacher-evaluation results for the entire Syracuse district indicated that 40 percent of teachers were classified as “developing” or “ineffective,” “categories that suggest low levels of teaching performance, the need for teacher improvement plans, and the threat of eventual dismissal. Not a single elementary or middle-school teacher in the entire district was rated highly effective.”

Pallas goes on to blast discrepancies in the results, stating that most teachers were “in the effective range on the state growth scores” and were rated effective in observations, but that schoolwide measures of achievement brought scores way down. So what happened? Why the dismal local scores? Pallas says that Syracuse’s evaluation plan used growth measures but did not account for the fact that “the 2013 tests were aligned with Common Core curricular standards, whereas the 2012 tests were not.” Expecting teachers to attain higher rates of proficiency on 2013’s more difficult tests in order to be deemed effective, he writes, is a “wildly inappropriate metric.”

Pallas has been warning people about this potential flaw in evaluations for a while—and he’s not the only one who’s been worried. Oh, and it looks like Rochester is having similar evaluation outcomes. What say you, teachers?

A version of this news article first appeared in the Teaching Now blog.