
Insignificant Figures: Mathematica Scores TFA Teachers

By Ilana Garon — September 16, 2013

Over the weekend, my cousin sent me policy wonk Dylan Matthews’ blog entry in the Washington Post, entitled, “Teach for America is a deeply divisive program. It also works.” Well, there you have it! The article discusses a new study by Mathematica Policy Research, which supposedly vindicates TFA against its critics by showing that the gains produced by teachers in the TFA program--based on their students’ performance on state math tests--surpass not only those produced by teachers in other emergency-certification programs (such as Teaching Fellows) but also those produced by teachers who entered the profession through traditional means (education programs at the college level, etc.). Proponents of TFA believe this study provides irrefutable evidence that the program does “active good.”

Better statistical minds than mine (such as the awesome Dr. Julian Vasquez Heilig, an education policy professor at UT-Austin) have already found major flaws in this study, some of which I’ll summarize here. First, the sample included only secondary math teachers, yet the majority of TFA-trained teachers teach elementary school--thus, the study cannot claim to represent TFA as a whole. Second, student gains in reading are inherently far more difficult to produce than gains in math, and reading is ignored in Mathematica’s study. Third, as interpreted in Matthews’ article, the study seems to imply that EXPERIENCE, teacher training, study in one’s own subject, and work towards advanced certification actually detract from student performance, which makes literally NO sense--and should already call aspects of the study’s methodology into question.

But all of that is ancillary to the main claim of the study (the one TFA is crowing about): that TFA teachers produced gains in their students of 0.07 standard deviations above the norm (average yearly growth is 0.27 standard deviations), which roughly translates to one quarter of a school year, or about 2.6 extra months of instruction. (You can look at the article again here--scroll down for the stats.) “These are, frankly, devastating for many critics of past positive TFA studies,” the article intones dramatically.
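For anyone who wants to check that conversion themselves, here is the back-of-the-envelope arithmetic the 2.6-month figure implies; the ten-month school year is my own assumption, not something spelled out in the article:

    # Rough check of the effect-size-to-months conversion cited above
    effect_size = 0.07     # TFA advantage, in standard deviations
    yearly_growth = 0.27   # average yearly growth, in standard deviations
    share_of_year = effect_size / yearly_growth   # about 0.26 of a year's growth
    months = share_of_year * 10                   # assuming a ten-month school year
    print(round(share_of_year, 2), round(months, 1))   # prints 0.26 2.6

In other words, the headline number amounts to about a quarter of a typical year’s growth--which is exactly the comparison I take issue with below.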

Devastating how? I’m sorry, were we reading the same study?

My criticisms of these results are not born of dislike for TFA as an organization; I’ll discuss the pros and cons of the program in a different blog post. What irks me about the excessive crowing is that this statistic is completely unimpressive! In the communities in which TFA corps members are placed, students’ skill levels often lag several YEARS behind grade level, due to interrupted schooling, language gaps, inconsistent academic reinforcement at home, and a plethora of other issues. The gains produced by these TFA teachers--2.6 extra months--are frankly nominal; students in the bottom third will stay in the bottom third, whether they have a TFA teacher or a traditionally trained one.

Not only does TFA brass generalize the results of a highly specific set of circumstances to the program as a whole; they also tout this study--with its meager positive results--as though it justifies the existence of the entire program, astronomical cost and all. Frankly, we have computer programs that produce better gains than 2.6 extra months of instruction over the course of a year. So, in the grand scheme of education reform, TFA once again proves not to be the panacea its boosters would like to believe. Its teachers are roughly on par with their peers, if you believe this study; not significantly better or worse. Now if only we could convince them to stay past the two-year mark and, ultimately, become really good--that’s the issue that really needs to be discussed, if the program is to improve student outcomes in a meaningful, ongoing way.

The opinions expressed in View From the Bronx: An Urban Teacher’s Perspective are strictly those of the author(s) and do not reflect the opinions or endorsement of Editorial Projects in Education, or any of its publications.