Teacher Preparation

Research Group Latest to Caution Use of ‘Value Added’ for Teachers

By Stephen Sawchuk — November 11, 2015 2 min read

The membership group representing education researchers has released a statement warning of the “potentially serious negative consequences” of using “value added” models to judge individual teachers or teacher-preparation programs.

Not only do such models, which are based on growth in students’ test scores over time, potentially misidentify teachers and programs if not used carefully, they could also result in resources being misdirected and “the educational system as a whole being degraded,” according to the American Educational Research Association.

“Only if such indicators are based on high-quality, audited test data and supported by sound validating evidence for the specific purposes proposed, can they be appropriately used, along with other relevant indicators, for professional-development purposes or for educator evaluation,” the statement concludes.

The statement goes on to detail eight technical requirements that AERA says must be considered for using VAM. Among them, VAM should be based only on valid and reliable assessments; draw on multiple years of data from a sufficient number of students; never be used alone; and be calculated only on tests that are comparable over time.

There have been reams of studies and commentaries written about the technical properties of value-added and its use, from the positive to the neutral to the critical; earlier this year, AERA itself had an entire issue of Educational Researcher devoted to the topic. Other groups, representing principals all the way up through statisticians, have put out their own statements. All of that leaves the education field far from consensus, and it’s certainly fair to say that the use of these models has outpaced the research literature on them.

States already use VAM for different purposes. Many states use some form of student-growth data in their teacher-evaluation systems; others use it, in the aggregate, as a component in reviewing teacher-preparation programs.

The statement comes just as the U.S. Department of Education is putting the finishing touches on a regulation to toughen accountability for teacher colleges. Under the draft, student achievement would be one of the required indicators for judging programs, although the department has indicated it might soften that requirement.



A version of this news article first appeared in the Teacher Beat blog.