More Thoughts on D.C.'s Teacher Evaluations
Over at The Washington Post, Jay Mathews has done a great job of digging into the District of Columbia's IMPACT teacher evaluations, which couple five performance-based observations with value-added (growth-based) test scores and other measures.
His latest column examines the quality of the feedback that district evaluators are giving as part of the observations. Regarding an evaluation that one teacher sent to him, Mathews says that the written feedback was somewhat vague and didn't cite examples gleaned from the observation. The IMPACT system's key manager, Jason Kamras, responded here, noting that the written feedback is supplemented by a dialogue with the evaluator.
It may seem fairly obvious that high-quality feedback with lots of supporting examples is crucial for a good evaluation, but let's not forget that many districts barely even have performance standards for teachers. IMPACT, like other evaluation systems based on standards, is complex. It will probably take a while to get everything calibrated to the point where it functions with few snafus. Ultimately, details such as the quality of feedback will determine whether teachers embrace the system as a legitimate pathway for improving their teaching, or eschew it as arbitrary or unfair.
So let's hear your thoughts about what aspects of IMPACT, or about teacher evaluations in general, you'd like to see more reporting on from Mr. Mathews and yours truly.
I'll start off the conversation with some thoughts I've been having: Is it appropriate to base so much of an evaluation on student growth? Will D.C. get rid of the "learning styles" requirement now that scientists say there's little research supporting them, as colleague Debbie Viadero writes here? Are there too many performance standards for a teacher to meet in a 30-minute observation?