The 'Widget Effect' Endures: Teachers Still Rated High
A new report by Education Sector, a Washington-based think tank, takes a deep dive into school evaluation systems in Washington state, finding that nearly every school there failed to distinguish between effective and ineffective teachers and principals. By and large, the report, entitled "The Evergreen Effect: Washington's Poor Evaluation System Revealed," corroborates the findings of TNTP's widely referenced 2009 study "The Widget Effect," but from within a single state.
The report's author, Chad Aldeman, a former Ed Sector policy analyst who now works for Bellwether Education, looked at evaluation data from the 2010-11 school year that Washington submitted to the federal government. (The report does not cover changes stemming from a Washington law, signed last year, that will overhaul teacher-evaluation requirements).
Aldeman chose Washington, he explained by email, because it had made "public a rich dataset with new variables that no one had looked at for an entire state before." The data cover 2,251 public schools, including 54,781 teachers and 2,470 principals.
The Ed Sector analysis finds that 99 percent of teachers and principals were rated as satisfactory during that time. The majority of schools did not identify even one unsatisfactory principal or teacher.
The report, unlike TNTP's, also looks at the language used in the evaluations, and finds that it skews toward the positive. Aldeman writes:
About half as many categories describe poor performance as describe good performance. Many of the negative words, such as "weak" or "fails" or "unacceptable," send a strong, clear message. But there is also quite a bit of overlap between positive and negative terms. Words like "emerging," "developing," and "adequate" convey progress or sufficiency, belying the fact that they are applied only to the bottom 1 percent of educators with unsatisfactory ratings.
(Check out the fun word clouds associated with this section on page 6 of the report.)
Aldeman argues that failure to distinguish between the high and low performers hurts both schools and students. He says it encourages schools to make decisions based on seniority rather than quality, which, according to research he cites from the Center for Education Data and Research, "would result in a less-skilled workforce."
As Aldeman acknowledges toward the end of the report, however, a lot has changed in the last few years in the teacher-evaluation arena. According to the National Council on Teacher Quality's most recent State of the States report, 20 states required student achievement to be a significant factor in teacher evaluations in 2012, up from only four states in 2009. Washington's own new evaluation protocol, which uses a four-level ratings system and factors in student-achievement growth data, will take effect next school year.
The issue to follow up on will be whether this new evaluation makes it easier to differentiate performance. Education Week reporter Stephen Sawchuk noted in February that, at least in a few states, revamped evaluations have not produced more differentiated results, with effective ratings continuing to hover around 98 percent in Michigan, Florida, and Tennessee. Generally, Sawchuk writes, it's the observation scores that have stayed high. Considering that this more subjective component of teacher evaluations is likely here to stay, it will be interesting to see where the evaluation conversation heads next, especially if the broadly high ratings in Washington and other states hold fast as well. Or will the strict-accountability proponents cede to those who argue the results are high simply because the majority of teachers are doing great work? Seems unlikely.