Education

Teacher Observations: More ‘Accurate’ Data, Less ‘Cupcake Bias’

By Liana Loewus — December 07, 2010

Live from the Learning Forward annual conference in Atlanta.

In hopes of getting some insight from administrators about the sensitive nature of post-observation discussions with teachers, I attended a session titled “Classroom Observations: Reducing Defensiveness and Increasing Professional Dialogue.”

Walking in, I noticed several attendees leave as soon as they looked at the session materials they’d been handed. I discovered why when I glanced at my own: the handouts were brightly colored, set in a comic-book-style typeface, and looked like an advertisement for a software program called eCove. I decided to stick around and give the presenter, John Tenny, a chance to reel me back in.

Tenny, a retired classroom instructor now at Willamette University in Salem, Ore., explained the process of conducting data-based observations, in which the evaluator counts and times teacher actions to come up with “defensible data.” It sounds similar to a functional behavioral assessment for a child with special needs (for instance, tracking when and how often a student screams during class to determine what the student is trying to achieve by screaming). Tenny says observations using the eCove software are effective for gathering data about teaching practices, curriculum, individualized education plans, and response to intervention.

He focused mainly on teacher evaluations, though. Data-based observations take the judgment out of information gathering, he said. They’re a tool for both “protecting teachers and gathering data on ineffective ones.” Even rookie teachers can conduct observations of experienced teachers using this system, said Tenny.

Once evaluators come to an agreement about key terms—for instance, what “learning time” looks like—they simply have to “recognize the data and punch the right button,” Tenny claimed. He asserted more than once that the observations are “accurate.”

But isn’t “recognizing” the data chock-full of subjective decision-making? Even with a shared definition, aren’t evaluators forced to make judgments about what counts? While I haven’t used the software myself, Tenny’s process seems to amount to using a detailed rubric. Many schools are already doing this, and yet the controversy over teacher evaluations is going strong (take D.C.’s IMPACT evaluation system, for instance).

One thing I did pick up from Tenny, and hope to find use for again, is the term “cupcake bias”—the tendency for teachers who bring in cupcakes to share with colleagues to be seen as more effective instructors and to score higher on their observations. He said focusing on concrete data eliminates this bias. (Though isn’t it possible to be lenient in determining the data?)

Tenny also said that his mantra for evaluators is, “Don’t praise, don’t criticize, don’t solve the problem.” He advises them simply to present the data and then have a professional discussion following an observation. The “no praise” part struck me as a bit harsh, especially after a morning spent in a teacher-led collaboration session in which the presenters emphasized the importance of giving “warm” feedback before “cool” feedback. Also, would anyone advise a teacher to use this mantra with students? Don’t adults, for the most part, react favorably to the same kinds of rewards and incentives (read: cupcakes) as kids?

A version of this news article first appeared in the Teaching Now blog.