Principals' Group Latest to Criticize 'Value Added' for Teacher Evaluations
The National Association of Secondary School Principals has entered the loud fray over teacher evaluation, giving preliminary approval to a statement that says test-score-based algorithms for measuring teacher quality aren't appropriate.
In addition to criticizing the research on such "value added" systems, the statement says they arrive at a terrible time, just as schools adjust to demands from the Common Core State Standards and other difficult new expectations for K-12 students.
"New teacher evaluation systems demand the inclusion of student data at a time when scores on new assessments are dropping. The fears accompanying any new evaluation system have been magnified by the inclusion of data that will get worse before it gets better," the statement reads in part. "Principals are concerned that the new evaluation systems are eroding trust and are detrimental to building a culture of collaboration and continuous improvement necessary to successfully raise student performance to college and career-ready levels."
The resolution was given provisional approval at the organization's November board of directors meeting. A public comment period follows, and the statement will go before the NASSP's board for final approval at its February 2015 meeting.
The statement cites several studies and reports critical of value-added measures and a statement by the American Statistical Association. (There is no mention of research that supports the opposite view—that the estimates do pick up some useful information about teachers—such as the Gates Foundation-funded Measures of Effective Teaching project.)
Value-added systems, the statement concludes, should be used to measure school improvement and help determine the effectiveness of some programs and instructional methods; they could even be used to tailor professional development. But they shouldn't be used to make "key personnel decisions" about individual teachers.
The statement appears to reflect principals' suspicion of value-added systems. As my crack colleague Denisa Superville has reported, principals often don't use value-added data even where it exists, largely because many of them don't trust it.
It's important to note that other ways of measuring teacher effectiveness, such as observations of teachers' practice, have also been shown to contain bias. But those measures haven't kicked up nearly as much concern among educators, and they are generally far more popular among principals.