Teaching Profession

Research on ‘Value Added’ Highlights Tracking Problems

By Stephen Sawchuk — October 25, 2012

“Value added” gauges of teacher performance at the middle and high school levels seem prone to certain kinds of bias, I report in a story in this week’s edition of Education Week.

The culprit appears to be tracking, the sorting of students into classes by academic ability, which begins in earnest in middle school and is in high gear by high school.

This is an important addition to the expanding literature on value-added, because most value-added research concerns elementary-level students. Those students tend to be taught all subjects by a single teacher, so there are fewer explicit tracks separating them by academic ability.
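To make the mechanism concrete, here is a minimal simulation sketch of my own, not drawn from the study or the story, with invented numbers: when students are sorted into classes by ability, a naive value-added estimate (the average test-score gain in a teacher's class) partly reflects which track the teacher happened to be handed rather than the teacher's actual contribution.

```python
# Toy simulation (illustrative only; all parameters invented).
# Under tracking, class composition -- not just the teacher -- drives average gains,
# so a naive per-teacher "value added" estimate gets confounded.

import numpy as np

rng = np.random.default_rng(0)

n_teachers = 100
class_size = 25

true_effect = rng.normal(0.0, 0.1, n_teachers)            # each teacher's real contribution
ability = rng.normal(0.0, 1.0, (n_teachers, class_size))  # latent student ability


def naive_value_added_corr(tracked: bool) -> float:
    """Correlation between naive value-added estimates and true teacher effects."""
    abil = ability
    if tracked:
        # Tracking: sort every student by ability, then fill classes in order,
        # so each teacher gets a homogeneous high- or low-ability group.
        abil = np.sort(ability.ravel()).reshape(n_teachers, class_size)
    prior = abil + rng.normal(0.0, 0.3, abil.shape)
    # Assume higher-ability students also gain a bit more on their own (0.2 * ability),
    # an unobserved difference that a simple gain score cannot strip out.
    post = 1.2 * abil + true_effect[:, None] + rng.normal(0.0, 0.3, abil.shape)
    estimate = (post - prior).mean(axis=1)                 # naive value added per teacher
    return float(np.corrcoef(estimate, true_effect)[0, 1])


print("untracked:", round(naive_value_added_corr(False), 2))
print("tracked:  ", round(naive_value_added_corr(True), 2))
```

In this toy setup, the estimates line up with true teacher quality noticeably less well when classes are tracked, which is the kind of distortion the new research flags at the secondary level.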

But states and districts are already applying these methods in middle school and, in limited instances (think end-of-course testing), in high school, where tracking appears to be a cause for concern.

Teacher performance remains a difficult and contested thing to gauge. Other measures, such as classroom observations, have also been shown to be somewhat unreliable from year to year unless they’re performed frequently by trained observers. And while student surveys are more reliable from year to year, they’re not all that predictive of teacher performance over time.

All of this underscores that administrators and policymakers face a tough task in designing these systems carefully, balancing fairness to teachers against fairness to the students who have historically received weaker instruction.

A version of this news article first appeared in the Teacher Beat blog.