Assessment

Tennessee Teachers Have Warmed to Evaluation System, But Not State Tests

By Sarah Schwartz — October 04, 2018

When Tennessee quickly implemented a wholesale change to its teacher-evaluation system in 2011, the backlash was swift.

Teachers said the process was opaque and frustrating, and they argued that the time required to prepare for observations was unreasonable. Other critics said the new system had been rushed through without a thorough enough evaluation of its pilot program.

But now, most teachers feel differently about these evaluations. Annual statewide surveys conducted by the state’s education department have shown a continued increase in positive teacher opinion—in 2018, about three-quarters of teachers said the evaluations improved their practice. And a recent study from Brown University researchers found that teacher improvement in the state has been more pronounced in recent years.

How did Tennessee see these successes in teacher improvement, given such precarious beginnings? And what’s responsible for the shift in teacher opinion? A new report from the education policy think tank FutureEd attempts to answer these questions.

The report, based on interviews with teachers, administrators, and state-level leaders, chronicles the changes to the evaluation system since 2011 and the development of teacher-training and leadership initiatives.

It concludes that continuous improvement to teacher training and evaluation systems, driven by data and educator feedback, was key to sustainable change. The report also highlights the teacher-leadership networks that the state used to scale training and adapt initiatives to local contexts.

But despite teachers’ more favorable outlook on the evaluation system, and gains in student achievement measured by scores on the National Assessment of Educational Progress, educators are still concerned about the role value-added measures play in the teacher-evaluation process.

‘We Should Have Trained the Teachers’

The original teacher-evaluation system, put in place after the state won federal Race to the Top funding in 2010, based half of a teacher’s score on observations, 35 percent on student achievement, and 15 percent on other measures. Principals were required to observe teachers at least four times a year, and the results of those evaluations were tied to the awarding of teacher tenure.
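
Under those weights, the composite works out to a simple weighted sum, assuming each component is normalized to a common scale. The sketch below is purely illustrative; the component names, the 1-to-5 scale, and the scoring function are hypothetical stand-ins, not the state’s actual scoring mechanics.

```python
# Purely illustrative: combining the original 2011 weights into one score,
# assuming all three components are normalized to a common 1-5 scale.
WEIGHTS = {
    "observation": 0.50,          # classroom observations
    "student_achievement": 0.35,  # student achievement/growth data
    "other_measures": 0.15,       # other locally chosen measures
}

def composite_score(components: dict) -> float:
    """Weighted sum of component scores (hypothetical, for illustration)."""
    return sum(WEIGHTS[name] * score for name, score in components.items())

# A teacher rated 4.0 on observations, 3.0 on achievement, 3.5 on other measures:
# 0.50 * 4.0 + 0.35 * 3.0 + 0.15 * 3.5 = 3.575
print(composite_score({"observation": 4.0,
                       "student_achievement": 3.0,
                       "other_measures": 3.5}))
```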

Kevin Huffman, who became Tennessee’s education commissioner in 2011, acknowledged in the report that the state should have communicated better about the system at the outset. “We should have trained the teachers, not the principals, or we should have trained both,” he said. “And we should have done it all in-house, and with Tennessee people leading the way.”

After outcry from the state’s educators, Tennessee’s General Assembly directed the education department to conduct an internal review of the evaluation system. The state opened an electronic helpline, organized meetings and presentations that reached 7,500 teachers, met with superintendents, and surveyed teachers across the state, according to the report.

The resulting changes to the process, based in part on educator feedback, “saved the system and became a hallmark of how the state department would approach implementation going forward,” the report argues.

Among the modifications: High-scoring teachers received fewer observations, and about one-third of districts were allowed to customize their evaluation plans. The report also says “shining a spotlight on the quality of instruction” helped lead to Tennessee’s 2013 NAEP gains.

The state applied lessons learned in the battle over teacher evaluations to other initiatives, said Lynn Olson, a senior fellow at FutureEd and the report’s author. “When they started to roll out state standards, they actually recruited teachers who were already doing good work in their classrooms to lead that effort, and trained them to teach other teachers,” she said. “They understood that teachers trust teachers most.”

Teachers and principals are both much more comfortable and familiar with the observation process now, Josh Arrowood, a history teacher at Nolachuckey Elementary School in Greeneville, Tenn., said in an interview with Education Week. Principals across his district also created resources to support teachers’ success, like lesson plan formats that would help them meet the scoring objectives, he said. “It seemed like at first it was a learning curve for everybody.”

But the scoring system can still be confusingly opaque, said Arrowood. He thinks observers may sometimes score a lesson based on how well they expect a teacher’s students to do on the test, so that the full evaluation looks consistent, rather than objectively evaluating the performance in the moment.

Pushback on Value-Added

When first implemented, the state’s evaluation system used schoolwide growth to evaluate teachers in untested subjects and grades—a choice that many teachers found unfair. For teachers in these grades and subjects, the state reduced the weight given to growth data from 35 to 25 percent of the evaluation score, and also allowed teachers to use student portfolios as measures of student growth instead.

Last year, kindergarten and 1st grade teachers started using this portfolio system, and complaints echoed many of the same concerns that Tennessee saw when it first implemented new evaluations in 2011: Teachers found the system overly complicated and felt they hadn’t gotten enough training in how to use it, the Tennessean reports.

And for some teachers in untested subjects, schoolwide data are still used as a measure of their performance, a part of the evaluation system that remains unpopular, said Arrowood. “[Teachers] feel like they have really minimal impact on what their scores are,” he said.

The Tennessee Value-Added Assessment System, known as TVAAS, measures teacher performance through students’ growth on standardized test scores and is used for the student-growth portion of the teacher evaluation.
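
The report doesn’t unpack the statistics behind TVAAS, which rests on a proprietary mixed-effects methodology. As a rough illustration of the general value-added idea, one simplified approach predicts each student’s current score from a prior score and credits the teacher with the average deviation from that prediction. Everything in the sketch below, including the data, names, and the least-squares model, is a hypothetical stand-in, not the TVAAS model.

```python
# Heavily simplified value-added illustration. The real TVAAS uses a
# proprietary multivariate mixed-effects methodology; this is not it.
import numpy as np

def simple_value_added(prior, current, teacher_ids):
    """Fit current ~ prior by least squares, then average each teacher's residuals."""
    prior = np.asarray(prior, dtype=float)
    current = np.asarray(current, dtype=float)
    X = np.column_stack([np.ones_like(prior), prior])   # intercept + prior score
    coef, *_ = np.linalg.lstsq(X, current, rcond=None)  # fitted "expected score" line
    residuals = current - X @ coef                      # above/below expectation
    ids = np.asarray(teacher_ids)
    return {t: residuals[ids == t].mean() for t in sorted(set(teacher_ids))}

# Hypothetical data: two teachers, three students each.
print(simple_value_added(
    prior=[40, 50, 60, 45, 55, 65],
    current=[48, 57, 66, 44, 53, 63],
    teacher_ids=["A", "A", "A", "B", "B", "B"],
))
```

Real value-added models adjust for far more, including multiple prior years of scores, measurement error, and missing data, which is part of why the methodology can feel opaque to the teachers being rated.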

In 2014, the report notes, the Tennessee Education Association filed a lawsuit alleging that the state’s use of value-added models in teacher evaluations was unfair.

“Most teachers don’t trust the TVAAS scoring, because we have never really been told how that works, how they actually measure growth,” said Arrowood. What Arrowood perceives as a lack of transparency in the system, combined with recent testing snafus, has led him and other teachers he knows to lose faith in the assessment process, he said.

Over the past three years, implementation problems with Tennessee’s online testing system prevented thousands of students from submitting or finishing their tests. As a result, the state has given districts some flexibility in how the data are used and instructed that teachers not face consequences for low student scores.

In a 2018 survey by the department, 61 percent of teachers said the information gleaned from the tests wasn’t worth the time and effort it took to administer them.

Thomas Toch, the director of FutureEd, cautions against making “the perfect the enemy of the good when it comes to teacher evaluations.” Before the new system was put in place, tenured teachers were observed once every five years. “We’re much better off with the system that we have now, despite its shortcomings,” he said. “And the evidence from teacher surveys in Tennessee suggests that teachers and principals agree.”

There aren’t “easy lessons” when it comes to using value-added measures as part of evaluations in the state, said Olson. She thinks it’s an issue that the department will continue to evaluate.


A version of this news article first appeared in the Teaching Now blog.