A Look at Principal Ratings Systems in California
There's a ton of conversation swirling around how principal evaluations should look in the future—but how do school leader evaluations look right now? The Regional Educational Laboratory West, which is managed by the San Francisco-based educational research group WestEd, took a look at this question in a recent report based on a 2010 survey of close to 1,500 districts and charter schools. Among the group's findings:
For principal evaluation, 70 percent of the survey respondents reported that they used student achievement outcomes or growth data as partial evidence for evaluating a principal's performance. Nine percent said that student achievement was the primary measure used for judging principals. And 21 percent said that student outcome data was not used at all for evaluating principals. (For teachers, 53 percent used student outcome data as a partial measure of effectiveness, 4 percent as the primary measure of effectiveness, and 43 percent did not use student outcome data for evaluating teachers.) The data demonstrates that, at least among the survey respondents, student outcomes mattered more to a principal's rating than to a teacher's.
When the researchers broke those responses down to compare charter schools to school districts, they found that charter schools were more likely to use student achievement or growth data as partial or primary evidence in teacher and principal evaluations. For principal evaluation, 85 percent of responding charter schools and 76 percent of responding districts used such data as a partial or primary measure of a principal's performance. (For teachers, those figures were 82 percent of charter schools and 45 percent of districts.)
Principal groups have promoted the idea of evaluations being used to steer professional development to school leaders, rather than solely for punitive reasons (an idea I explored in this article). Among the survey respondents, however, only 23 percent said they used evaluations with principals primarily for that purpose. In contrast, 41 percent used evaluations as a primary factor in removal decisions, 38 percent as a primary factor in retention decisions, 13 percent when determining promotions, and 5 percent as a primary factor in determining compensation. Across all those areas, however, respondents reported that evaluations played at least some part in the decision-making process.
Finally, the report looked at the structure of educator evaluations. Forty percent of the districts that responded had a two-step rating system for principals—essentially, satisfactory or unsatisfactory. Thirty percent had a three-step rating system, such as "highly effective," "effective," and "ineffective." In districts with a two-step rating system, 97 percent of principals were in the highest category; in districts with three ratings, 83 percent of principals were in the highest category. "When districts have five or six rating levels, you can see a little bit more rating distribution," said Melissa Eiler White, the lead author of the report. "But most districts aren't using those multiple category rating systems."
The findings suggest there's a lot of room to revamp these evaluation systems to make them more useful to districts and to school leaders, which is the goal of a recent set of reports and documents from the American Institutes for Research.