Rosy Indiana Evaluation Results Trigger Soul-Searching
Indiana is the latest state to unveil results from an overhauled teacher-evaluation system, and, as in many other states, the results are almost entirely rosy.
The Associated Press reported that 88 percent of teachers and administrators were rated either effective or highly effective under the system; only about 2 percent were rated as needing improvement, and less than half a percent were deemed ineffective. About 10 percent of teachers weren't rated because their collective-bargaining agreements hadn't yet been updated.
It's the first year the system has included some measure of student progress, such as standardized test scores.
Lawmakers don't seem very happy with this turn of events. After all, the new systems are costlier and demand much more of principals' time than their predecessors.
"Obviously we were hoping to have a true reflection of where everybody fell. And it's hard to believe, that, I mean it's good that everybody would be in those two categories, but it's probably not realistic," Rep. Robert Behning, the chairman of the House education committee, told WISHTV.com.
To be clear, all we know at this point is the overall pattern of scores, and there's plenty of room to argue about what they mean. State Superintendent Glenda Ritz, a former teacher, and the state teachers' unions argue that the scores reflect that most teachers are, in fact, doing a good job. Ritz also suggested that the high scores might stem from the state's plans to tie the ratings to compensation.
What's more, each district has the flexibility to tailor the evaluation process, so the results aren't strictly comparable from district to district.
As I've written before, there is no consensus, professional or empirical, on what the breakdown in these figures "ought" to look like. There's a qualitative argument to consider, too: If, as is hoped, the feedback generated from these reviews is more helpful to teaching and learning, then perhaps the year-end score isn't the most important thing to consider.
Still, the results suggest that we are only beginning to understand how these systems work. And as a researcher friend of mine pointed out, this isn't just a matter of the value-added, test-score-based portions of the evaluation. Classroom observations potentially suffer from similar problems, such as instability or bias due to student characteristics. They just haven't been studied as much.