
Widening the Hoop on Teacher Evaluation

I jump through hoops. We all do. For our jobs, our families, and all the mundane parts of a modern life, we jump through hoops. My particular hoop of the moment is trying to take the complex work I do in the classroom and fit it into quick jargon-y buzzwords. I do this because it's worth it. The more people who know about the fantastic work my kids are doing, the more freedom I get to explore instruction and content in exciting ways.

For a long time, teacher evaluation was just another hoop to jump through. Teachers cared about their work, and they cared about their evaluations, but the two were seen as only tangentially connected. Our observation system was built in a way that produced a lot of inaccurate results, so it was easy to lose faith in it and see it as a meaningless exercise. For me, evaluation was two observations a year, based on a pretty vague description of teaching. We jumped through hoops, but the hoop was wide enough that almost anyone could fit through.

Now evaluation has swung in the opposite direction. The advent of multiple-measure evaluations means that each measure refines the target. Getting the highest rank in my evaluation system means that an exhaustive list of indicators must be present in a 45-minute class period. It also means that my students must outpace their projected growth on state assessments, including assessments outside of my content area. It further means that my high school's graduation rate and attendance must remain suitably high or increase each year. The hoop has gotten narrower and narrower.

At the same time, evaluations mean more than ever. In my district, evaluations are tied to pay. And many suggest that teachers' job protections should be weakened so that evaluations affect whether or not teachers keep their jobs. In fact, Colorado has passed legislation mandating that consecutive poor evaluations automatically trigger the loss of tenure.

Think about the conflict inherent in those last two points. We're simultaneously raising the bar and the stakes. That would be like me dreaming up an incredibly challenging and tricky project for my students in May, then making a score of 95 percent a requirement to pass my class and graduate. That would be incredibly unfair. But that's essentially what we're doing to teachers. We're setting up a system where the number of teachers deemed ineffective is about to increase substantially, at the same time as we make it easier to fire ineffective teachers. We need to think through the practical implications of this.

The next teacher shortage may be the one caused by this confluence of policies. This will be especially true in urban districts and hard-to-staff schools. My urban school district of 82,000 students began the school year with close to 200 teacher vacancies, mostly in high-needs areas like math, science, and special education. Since failing schools tend to be clustered in urban areas, and new evaluations are designed to equate school failure with teacher failure, it follows that urban areas are going to have more teachers deemed ineffective than their wealthier neighbors. We already struggle to find teachers in Baltimore City—how are we going to fill even more openings?

We need to widen the hoop. In the first few years of PARCC, we can't make it so difficult to get a good evaluation. PARCC scores are expected to plummet, and if they drive down evaluation ratings, more teachers will be at risk of losing their jobs. We're in a climate where talking about lowering standards is anathema, but I'm saying it: It needs to be easier to get a good evaluation. For practical reasons, we can't create an evaluation system that fires more teachers than can be replaced.

This is not to say that we should accept poor-quality instruction for the neediest schools and students. Far from it: Challenging environments require the best instruction possible. But an evaluation system that is primarily focused on weeding out poor instructors isn't going to help the schools that lack the positive working conditions needed to attract talented teachers. Instead, evaluation needs to focus on developing instructional capacity. One evaluation system that successfully supports struggling teachers while fairly removing ineffective ones is Montgomery County, Maryland's Peer Assistance and Review (PAR) program. In this program, mentor teachers observe and support new and struggling teachers, voting to remove those who do not show improvement. One notable and laudable feature of PAR is that it factors student achievement into the evaluation process without using test scores.

PAR is widely lauded, but it's been slow to spread to other districts, even within the state of Maryland. PAR seems to be a system that has successfully gotten the size of the evaluation hoop just right—why aren't more places following its example?
