Collecting the Wrong Data: Fundamental Attribution Error in Teaching Quality
Is teaching quality the same as teacher quality? Kim Marshall recently pointed me to this excellent article by Mary M. Kennedy of Michigan State University, which makes a strong case for focusing more on the conditions of teacher practice than on stable teacher characteristics.
As studies such as MET have shown, teacher value-added scores are highly unstable from year to year, and even between different sections of the same class taught by the same teacher. I've argued that this is because value-added is a poor technique for measuring teacher quality, but Kennedy's article makes a more intriguing assertion: VAM scores are unstable because teaching performance reflects teaching practice, not fixed characteristics of the teacher, and practice is heavily shaped by the conditions under which it occurs.
This makes great intuitive sense, and I can't imagine any teacher finding fault with the idea that working conditions have a powerful influence on the results one achieves with students. As any teacher knows, the students themselves have a substantial impact on the learning environment; this helps explain MET's finding that value-added ratings vary between different sections of the same subject taught by one teacher.
Most interestingly, though, Kennedy ties her claims to the psychological principle known as the Fundamental Attribution Error: our tendency to attribute others' behavior to stable personal characteristics rather than to the situations they face. Instead of concluding that a teacher isn't very good, perhaps we should look at how many different subjects the teacher has to prepare for, how much planning time they actually have, how many reforms and disruptions they have to contend with, and so on.
But we don't. We don't collect data on how many subjects someone teaches, how little prep time they have, how often we interrupt this prep time with meetings and bus duty, and the myriad other non-instructional responsibilities that characterize work in US schools. We analyze test scores and jump to conclusions in an effort to be data-driven, while ignoring perhaps the most important data of all: data on the contexts in which teaching and learning take place.
Teaching quality matters tremendously, and I welcome new efforts to define and measure it more rigorously and meaningfully. But as Kennedy's article suggests, perhaps it's wise to think of teaching quality as a function of both the teacher and the ever-changing conditions of practice, and to make judgments about teacher quality accordingly.