Opinion

Teacher Evaluation Means Nothing Without Teacher Buy-In

By Cristina Duncan Evans — April 06, 2015

“The scores are too low!!” This message landed on my phone early one recent morning from a distressed math teacher worried that her annual evaluation ranking would be crushed because her students didn’t perform well on a recent classroom test. Her students’ performance on that test currently makes up more than a third of her evaluation.

The students’ recent scores had put this teacher’s ranking at a 1. On the 4-point scale, a 1 means students did not approach the proficiency goal, while a 4 means they exceeded it. “I taught my butt off for this unit! We did three days of review before the test, and a lot of the scores went up, but the kids who struggle the most still aren’t passing.” I wasn’t sure how to answer her.

Here are some of the things I could have texted back:

“Retest the kids who failed one more time during class today. Scores aren’t due until midnight tonight, so you have time to give them another chance.”

“Eliminate the zeros for kids who didn’t show up for the test. That should bring the average of the entire class up, and it’ll increase the percentage of kids who passed.”

“You got to pick your own goal and write the test, so next time make your goal much lower so that your kids can easily pass it.”

“This entire part of your evaluation is self-reported. You don’t have to turn in the students’ tests or answer sheets, so you can report whatever you want and take the gamble that no one from our school administration or the district office will have the time to confirm your scores before the end of the year.”

These four options represent an escalating scale of corruption. And while I’m certain the last option is cheating, I don’t know if the first one is. Does giving students another chance at the test fall under the category of “remediation” or is it a blatant attempt to pad scores?

This uncertainty is no small matter—last week 11 Atlanta teachers were convicted on racketeering charges and could each face up to 20 years in prison. Teachers in Baltimore, where I teach, have essentially been tasked with designing their own evaluation tool, and then self-reporting their ranking. Adding another wrinkle to this situation, our evaluation is tied to salary increases, and highly effective teachers are the only ones who automatically get raises each year.

We haven’t been properly trained to do this part of our evaluation. The rules were established in the middle of the school year, and the lack of oversight for this process is troubling. Principals already struggle to complete classroom observations for all of their teachers, so I’m not sure how they’re going to validate and approve all of this student data in a meaningful way.

On the other hand, if principals actually are able to catch mistakes (or cheating), is it fair to hold teachers accountable for them when we weren’t given proper training on how to do this correctly? The most likely scenario is that this year district leadership turns a blind eye to how student data was collected, because they know how lengthy and expensive an investigation would be, and that inaction sets an important precedent. At best, this precedent sends a message to teachers that the SLO (student learning objective) part of their evaluation is meaningless, a compliance exercise rather than one that is actually about the quality of teaching. At worst, ignoring this problem means the district is complicit in future cheating.

And what of the fact that the evaluation of the teacher next door to me is based on data from 100 of her students, while my evaluation is based on 40 of my students? Does that difference influence the statistical reliability and fairness of the process?

What about the fact that teachers in different schools were given different rules about what assessments they could and could not use?

What of the fact that some teachers set a goal based on one Common Core standard, while others created goals based on almost a dozen standards?

And what about the widespread belief among teachers that this part of the evaluation is a “gotcha” system designed to drive down evaluation scores, and therefore slow down increases in salary costs for the district? If teachers don’t believe the process is valid and worthwhile, then they won’t take it seriously and the potential positive effects will never trickle down to students.

Along with the unresolved questions, there have been important bright spots in this process. When I picked my assessment, I chose to focus on students’ writing, and I’ve been very pleased with how that decision has influenced my instruction and with the growth I’ve seen in my students’ skills.

Allowing teachers the flexibility to create assessments that meet their students’ current needs is a powerful way to closely study the impact of instructional methods on student learning.

It is also important to make sure that an assessment used for a teacher’s evaluation genuinely benefits students and aligns with the instruction the teacher is already giving, rather than adding more content or standards to be taught.

This part of our evaluation has the potential to be transformative for teaching, but, like anything, it needs to be done well. Teachers need more training on both ethics and pedagogy to meet this new challenge and make the most of it.


The opinions expressed in Connecting the Dots: Ideas and Practice in Teaching are strictly those of the author(s) and do not reflect the opinions or endorsement of Editorial Projects in Education, or any of its publications.