Education Opinion

Disagreeing About the Use of Value-Added Measures

By Michelle Rhee | April 23, 2014

On Monday, Michelle and Jack began discussing the divide over standardized tests. They continue their conversation today, addressing the use of such tests in evaluating teachers.

RHEE: We ended Monday with you arguing that value-added measures (VAM) shouldn’t be used to directly evaluate teachers. And you raised an important point: that VAM tends to scare teachers.

I don’t disagree with you that teachers are scared. But I think that’s largely because evaluations that take student academic progress into consideration are currently seen by many as a punitive measure. The reality is that very, very few teachers are removed from the classroom as a result of the new evaluation systems. The vast majority of teachers are now being given much more robust information and data about their teaching and the progress of their students than they ever have been. And yet, that’s not what we’re talking about, unfortunately.

SCHNEIDER: But there’s a reason we’re not talking about that. I hear so much bombast about how if we allow principals to conduct evaluations, they’ll simply give “effective” ratings to all of their teachers. And then I hear that the solution for this ostensible problem is to use VAM calculations. So the “problem” in this scenario is that not enough teachers are being identified as ineffective? And the “solution” is to remove the human element?

If I’m a teacher, that scares the hell out of me. It’s like a machine is being designed to fire me.

RHEE: First of all, the reason you hear that rhetoric, and the reason so many teachers are scared, is that they are being told they will be assessed solely on the basis of test scores. Yet I do not know a single person, policymaker, state, or educational leader who is advocating for doing that. In fact, if there were, I would be that person's most vocal opponent. I've found that when teachers understand how the new systems work and see how student growth is actually being used, they are okay with it. Most effective teachers aren't afraid of using student growth as part of their evaluation; they just want to know the system will be fair. When you explain the regression model and how it controls for factors outside of the teacher's purview, teachers are much more comfortable with it.
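[Editor's note: To make that point concrete, a value-added model is, at its core, a regression that predicts each student's current score from prior scores and background factors, and attributes the remaining difference to the teacher. The sketch below is purely illustrative; the data, column names, and controls are invented, and real VAM implementations, including DC's, are more elaborate.]

```python
# Illustrative value-added-style regression on invented data.
# Current scores are modeled from prior scores plus student background controls;
# the teacher fixed effects that remain are the (rough) "value added" estimates.
import pandas as pd
import statsmodels.formula.api as smf

df = pd.DataFrame({
    "score_2014": [310, 295, 330, 280, 340, 300, 315, 290, 325, 305, 288, 335],
    "score_2013": [300, 290, 320, 275, 330, 295, 305, 285, 315, 298, 280, 325],
    "free_lunch": [1, 1, 0, 1, 0, 1, 0, 1, 0, 1, 1, 0],  # proxy for family income
    "ell":        [0, 1, 0, 1, 0, 0, 1, 0, 0, 1, 0, 0],  # English-language learner
    "teacher":    ["A", "A", "A", "B", "B", "B", "C", "C", "C", "D", "D", "D"],
})

# Prior achievement and the background controls absorb factors outside the
# teacher's purview; the C(teacher) dummies pick up what is left over.
model = smf.ols("score_2014 ~ score_2013 + free_lunch + ell + C(teacher)",
                data=df).fit()
print(model.params.filter(like="teacher"))
```

[The instability Schneider raises shows up in exactly this kind of estimate: with small classes, a handful of students can move a teacher's estimated effect substantially from one year to the next.]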

SCHNEIDER: But my question is why base the evaluation on those scores at all, especially given how unstable they tend to be from year to year? Why not use those scores as a way of telling instructional leaders and trained coaches where they need to be looking, and then let them make the call? It's the instability and unreliability of the tool that make teachers uncomfortable, not just the weighting.

RHEE: Say evaluators see the weaknesses year in and year out. The coaches and administrators are telling the teacher what they need to do to improve their practice, but nothing changes. Then what? If there is no tie between what we’re seeing in terms of student academic growth and a teacher’s evaluation, it may be possible for an ineffective teacher to remain in the classroom for far too long.

SCHNEIDER: I’m not saying you keep ineffective teachers in the classroom. What I am saying, however, is that you don’t need to build VAM into personnel decisions. Those decisions can be made by trained professionals who work closely with teachers, and who use all kinds of data—VAM included—to help them figure out what they’re looking at.

The rhetoric right now is about VAM being a kind of silver bullet. So what teachers fear is that if 30% of their evaluations are based on VAM today, five years from now that figure will be 100%. And I’m totally uncomfortable with that. Because when the numbers are wrong—and they will be wrong quite frequently—they’ll be really wrong.

RHEE: VAM should never be used as 100% of a teacher's evaluation. But are you arguing that we should develop policy based on fear about the future? Take DC, for example. We started off basing 50% of a teacher's evaluation on VAM calculated from the District of Columbia Comprehensive Assessment System (DCCAS). Over time, based on teacher feedback, this was tweaked. Now 50% is still based on student academic growth, but 35% comes from the DCCAS and 15% from other standardized measures selected by teachers and schools (such as DIBELS). That kind of constant improvement of the model based on educator feedback is what should be happening.
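[Editor's note: As a back-of-the-envelope illustration of how such weights combine. The component scores and the 1-to-4 scale below are invented for illustration, not the actual IMPACT rubric.]

```python
# Hypothetical composite under the weights described above: 35% DCCAS-based
# growth, 15% other standardized measures, 50% everything else in the evaluation.
weights = {"dccas_growth": 0.35, "other_measures": 0.15, "non_growth_components": 0.50}
scores  = {"dccas_growth": 3.2,  "other_measures": 3.6,  "non_growth_components": 3.0}

composite = sum(weights[k] * scores[k] for k in weights)
print(f"Composite rating: {composite:.2f}")  # 1.12 + 0.54 + 1.50 = 3.16
```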

SCHNEIDER: Educator feedback is essential. I think we’re on the same page about that. But I’m still uncomfortable with the instability of the measure.

I’m also concerned about its narrowness.

I’d like to know, for instance, where we’re going to include a range of other factors in a teacher’s evaluation—factors like the kinds of caring relationships that great teachers have with kids. Because that has also been lost in the conversation about teacher evaluation.

RHEE: I agree that a good evaluation system includes multiple measures. Those relationships can be picked up in what is sometimes referred to as "contributions to school community" (which we had as part of DC IMPACT), in student surveys, and in parent surveys. A well-trained observer can also see those relationships playing out positively in the classroom and will factor that into the observation rating.

SCHNEIDER: Then why not let that well-trained observer do all of a teacher’s evaluation?

RHEE: Because any good evaluation should look at a teacher’s practice through multiple lenses, not just the eyes of one person.

To be continued...

The opinions expressed in K-12 Schools: Beyond the Rhetoric are strictly those of the author(s) and do not reflect the opinions or endorsement of Editorial Projects in Education, or any of its publications.