Teaching Profession

Teacher Classroom Observations Are a Waste of Money, Economist Argues

By Emmanuel Felton — December 13, 2016

Teacher observations cost the American education system nearly $1.4 billion a year, but the feedback teachers receive from them isn’t improving student achievement, argues Mark Dynarski, an economist who has spent decades trying to get education policymakers to use data to inform policy.

Like many researchers and policymakers, Dynarski points to the divergence between students’ test scores—in this case, scores from the National Assessment of Educational Progress, or NAEP, which tests representative samples of students across states and districts—and educators’ ratings on teacher evaluation systems. He points out that while the majority of American students fail to hit the proficiency mark on NAEP’s reading and math assessments—a fact that has remained relatively constant through reform efforts like the institution of teacher evaluations—nearly every teacher is rated effective under their state’s rating system.

Dynarski points to a 2015 report from the National Council on Teacher Quality, which tallied up teacher effectiveness ratings around the country:

“Measures of teacher effectiveness vary state by state but results are consistent--nearly every teacher is effective. This consistency was named the ‘widget effect’ in a 2009 report (all widgets are the same). A number of states implemented new teacher evaluation systems in the last ten years, and there is still a widget effect. In Florida, 98 percent of teachers are effective; New York: 95 percent; Tennessee: 98 percent; Michigan: 98 percent.”

Often, those systems combine measures of students’ growth on standardized tests with observations by a principal or outside evaluator. Dynarski argues that the latter is too squishy to provide any real insight.

“The system is spending time and effort rating teachers using criteria that do not have a basis in research showing how teaching practices improve student learning,” he writes in a new article for the Brookings Institution, a nonpartisan think tank.

He continues: “But what principals observe is whether teachers are teaching. The crucial question is whether students are learning. To answer that, we need some measure of learning: a test.”

Dynarski’s article comes at a time when his view is falling out of favor with policymakers, as states across the country re-evaluate whether students’ growth on standardized tests is an appropriate measure of teacher effectiveness. Teachers and their unions have been effective at questioning the scientific validity of directly linking students’ results on standardized tests to educators’ teaching prowess. They often point to a 2014 statement by the American Statistical Association, the main professional organization for the country’s statisticians, which concluded that “the majority of the variation in test scores is attributable to factors outside of the teacher’s control such as student and family background, poverty, curriculum, and unmeasured influences.”

But rather than rehash the familiar fight over whether student test scores should be used to evaluate teachers, Dynarski tallies up the costs of the alternative method that teachers often argue is more valid: having evaluators observe and rate educator performance. He estimates that administrators spend an average of 10 hours a year observing and rating each educator, and writes:

“In 2015, there were 3.1 million K-12 public school teachers, which means 31 million hours spent annually on observations. The average school principal salary is $45 an hour. Applying this hourly rate to the number of hours, the system is spending $1.4 billion a year to observe teachers. This is spending a lot of money to find that nearly all teachers are effective and to generate teacher feedback that does not improve student learning.”
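Dynarski’s figure is a straightforward back-of-the-envelope calculation, and it can be reproduced in a few lines. The sketch below, in Python, simply multiplies the inputs quoted above (3.1 million teachers, 10 observation hours per teacher, $45 per principal hour); the variable names are illustrative, not Dynarski’s.

```python
# Back-of-the-envelope reproduction of Dynarski's cost estimate,
# using the figures quoted in his Brookings article.
teachers = 3_100_000          # K-12 public school teachers in 2015
hours_per_teacher = 10        # estimated annual observation hours per teacher
principal_hourly_rate = 45    # average principal salary, dollars per hour

total_hours = teachers * hours_per_teacher         # 31,000,000 hours
annual_cost = total_hours * principal_hourly_rate  # 1,395,000,000 dollars

print(f"Total observation hours per year: {total_hours:,}")
print(f"Estimated annual cost: ${annual_cost:,}")  # roughly $1.4 billion
```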

Dynarski doesn’t rule out the possibility that classroom observations could yield useful information, but he concludes that we first need to learn much more about what kind of teaching actually raises student achievement:

“We need a more solid research-and-measurement foundation about what aspects of teaching improve learning,” he concludes. “Until we build that foundation, observing teachers and rating them is pointless, or worse—the ratings signal that all is fine, but it’s not.”



A version of this news article first appeared in the Teacher Beat blog.