
Mailbag: How Are HR Departments Supporting Educator Evaluations?


Since I began writing this blog back in October, I have gotten several interesting questions and comments from readers via social media, email, and through comments on K-12 Talent Manager. On occasion, I plan to use this space to respond to these questions as a learning and engagement opportunity for talent managers and educators across the country.



Recently, user "JPaz" asked, "In your opinion, how are leading HR departments contributing to the redesign and implementation of educator evaluation systems?"

JPaz, thank you for your question! Over the past three years, I have had the opportunity to work with teachers, principals, and central office staff in Alaska, Arizona, Colorado, Florida, Indiana, New York, North Carolina, Ohio, Oklahoma, Pennsylvania, Texas, Tennessee, Virginia, and Wisconsin. Evaluation is by far the hottest topic on educators' minds at every level. In two previous posts, I outlined state-level policy changes pertaining to evaluation and the use of data in these systems. But passage of legislation is only part of the story. Every state is at a different point in designing and implementing its model. For instance, Ohio (my home state) is still setting up plans, rubrics, and processes, while Tennessee is leading the way in using value-added and other hard data in evaluations for teachers of both tested and non-tested subjects.

I have seen state and local education leaders turn to many of the same tools in developing their evaluation systems, including Charlotte Danielson's Framework for Teaching; TAP; the Marzano Causal Teacher Evaluation Model; the Classroom Assessment Scoring System (CLASS), developed at the University of Virginia; Stanford's Protocol for Language Arts Teaching Observations tool; Harvard and the University of Michigan's Mathematical Quality of Instruction tool; and the UTeach Teacher Observation Protocol, developed at the University of Texas. On the other hand, many districts believe they can and should create their own tools to evaluate their teachers. There are positives and negatives in either decision.

No matter which educator evaluation tool they decide to use, everyone is struggling with the HOW question. HOW can a principal who previously evaluated staff only once every three years now visit every classroom two, three, four, or five times for an observation? HOW can we be sure everyone is trained correctly and tested for reliability? HOW can and should we evaluate the physical education, music, and art teachers? HOW should we incorporate student growth data, such as value-added measures and AYP, alongside peer evaluations, student perceptions, graduation rates, attendance, and so on?

In response, leading human resources and human capital groups are creating multi-stakeholder teams, facilitating conversations, and providing a framework for thinking about and reviewing tools, processes, and measures related to the design and implementation of educator evaluation systems. These groups are also working to ensure strategic alignment of the work as well as assist in the set-up of evaluation tools by integrating data from current staff information systems and processes.

A few weeks ago, a teacher (from a state I've never worked in) told me that her district's new evaluation process was "horrible." I asked if she knew whether teachers had been involved in the design and/or implementation of the new model. She was unaware, but noted that if the district had asked, she might not have been 100 percent pleased, but she would at least have felt more informed and not so in the dark. (This is why communication and stakeholder involvement are so critical when leading change!)

Tulsa Public Schools (TPS) in Oklahoma has done some promising work on its Teacher and Leader Effectiveness (TLE) Observation and Evaluation System. TPS knew that its evaluation process needed to be designed by teachers, for teachers. The tool was created in 2010 with input from the teachers' union. A field test was conducted, and after tweaks were made, principals went through more than 40 hours of training and coaching. The system was used only to evaluate teachers in 2010-2011, but this school year, speech/language therapists, counselors, nurses, psychologists, and librarians were also included in the evaluation process.

Last fall, the TLE Observation and Evaluation System was chosen as the default evaluation tool for the state of Oklahoma. (This means that if districts don't create or select their own tool, they must use the TLE model.) I encourage you to visit the TPS website for more information and to view the district's evaluation rubrics, handbook, process, and lessons learned. The TPS Office of Teacher and Leader Effectiveness, which is in charge of teacher evaluations, operates in collaboration with the district's Office of Human Capital, teachers, the teachers' union, and other stakeholders. This is a great example of successful collaboration around the design and implementation of an evaluation tool!

Have a question? Post it in the comments below or tweet me @EmilyDouglasHC. I will do my best to answer every question, and may feature yours in a future post.


The opinions expressed in K-12 Talent Manager are strictly those of the author(s) and do not reflect the opinions or endorsement of Editorial Projects in Education, or any of its publications.
