
Teach For America Study Wrap-Up

Some readers requested a closer look at the Urban Institute's Teach for America study presented at AERA last week. To this reader, the study is convincing, and provides strong, credible evidence against those who argue that Teach for America teachers negatively affect their kids' educations. However, I was not sold on the authors' conclusion that teacher retention should take a backseat to teacher selection.

First, what did the study find? If we take the study's most conservative estimates for all eight high school subjects (seven math and science subjects plus English I, comparing North Carolina TFA teachers with non-TFA teachers in the same school), the Teach for America advantage is .064 standard deviations, while teachers with 3-5 years of experience provide an advantage of .024 standard deviations (compared to those with fewer than three years of experience), teachers with 6-10 years of experience offer a .015 gain, and those with 11 or more years of experience offer a gain of .007 standard deviations.

The authors concluded that "the Teach for America effect, at least in the grades and subjects investigated, exceeds the impact of additional years of experience, implying that TFA teachers are more effective than experienced secondary school teachers… programs like TFA that focus on recruiting and selecting academically talented recent college graduates and placing them in schools serving disadvantaged students can help reduce the achievement gap, even if teachers stay in teaching only a few years."

But small is small. I'm all for Teach for America as a stopgap, but the achievement gap claim is fanciful thinking. Why? By comparison, the black-white gap in NAEP math achievement in grade 12 is approximately 1 standard deviation (and is likely larger because many black students have left school by grade 12). An advantage of .04 standard deviations over teachers with 3-5 years of experience in the same school is not going to significantly close the achievement gap. This is not an advantage over teachers in the nearest suburb or the best schools in the city that don't staff Teach for America teachers, and it is hardly a convincing rationale for permanently staffing tough schools with a revolving corps of academically talented 2-year teachers.
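
For concreteness, here is a back-of-the-envelope sketch (plain Python; the 1.0 SD gap is the approximation used above, and the effect sizes are the study's conservative estimates quoted earlier) of what share of the gap these advantages represent in a single year:

```python
# Rough arithmetic only: what share of an (approximate) 1.0 SD
# black-white gap do the estimated advantages represent?
gap_sd = 1.0                          # approximate grade-12 NAEP math gap, in SDs
tfa_vs_same_school = 0.064            # TFA vs. non-TFA teachers in the same school
tfa_vs_3_to_5_years = 0.064 - 0.024   # ~.04 SD advantage over 3-5 year veterans

for label, effect in [("vs. same-school non-TFA teachers", tfa_vs_same_school),
                      ("vs. teachers with 3-5 years of experience", tfa_vs_3_to_5_years)]:
    print(f"TFA advantage {label}: {effect:.3f} SD, "
          f"or {100 * effect / gap_sd:.0f}% of the gap")
```

Even taking the point estimates at face value, a single year with a TFA teacher moves a student only a few percent of the way across the gap.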

So my primary disagreement with this study stems from its conclusion, “policy makers should focus more on issues of teacher selection, and less on issues of teacher retention, if the concern is the performance of disadvantaged secondary school students especially in math and science.” For this to be true, we must assume that a school is simply an amalgam of pods in which teachers teach, such that a teacher’s decision to leave is independent of other teachers’ future efficacy. In other words, the authors presuppose that teacher turnover has no effect on the school as an organization, and that teacher quality is solely an individual attribute, rather than the joint product of individuals and organizations. (And what do we make of the tiny effects of experience? Is it possible that the most talented math and science teachers left to pursue more lucrative opportunities?)

It’s nearly impossible to build a stable school community and an ethos of sustained change in the face of regular turnover. Herein we have the classic chicken-and-egg problem in education: how do we create places where good teachers want to work – a key component of which is a stable professional community – if we can’t get strong teachers to stay? Programs like Teach for America are a fine band-aid, but they are hardly a solution.
17 Comments

This is the fairest review of the study that I've heard, so I don't have much to add.

The line about selection being more important also caught my eye but, other than that, what I'd really like to see added to the study is a comparison of teachers that teach similar students.

This may be heresy in high-level statistical circles (folks over at eduwonk certainly seemed to get upset when I suggested it) but I'm skeptical that there can be a valid comparison of performance between teachers of different types of students. I want to know if TFA teachers outperform other teachers with classes full of minority kids from poorer households. I see the utility of comparing TFA teachers to all others, but I'm not convinced that it's the most compelling comparison.

Whoa. Am I reading this study correctly? On p. 30 of the full report, it states (at the bottom of the table) that 69 unique TFA teachers were evaluated in this study, as opposed to 5,678 traditional teachers. Doesn't this imbalance of numbers skew the study a little?

Forgive me if this concern has already been brought up. I'm not sure how one would get around it--but it seems problematic.

Diana,

Nice catch -- I was wondering how many TFA teachers there actually were in the sample. I believe the presentation focused on the fact that TFA teachers taught 5,758 students, but it may have also noted the 331 classes they taught.

The difference in the size of the comparison groups doesn't bias the results in any direction that I can think of, but the relatively small sample of TFA teachers does limit how sure we can be about the generalizability of the findings.

I've heard a number of people who know a lot more about statistics than me say that the methods in this paper are very solid but, as with any paper, there are a number of reasons (many of which are out of the hands of the authors) why the paper is unable to give us a definitive answer to the question.

That is, of course, not to say it's without value.

Hi Diana,

Nice catch - certainly, this does raise some questions about external validity...

With the caveat that all I know about this study I learned from blogs...

I think there is a danger in drawing overly sweeping conclusions from small effects, that a variety of different conclusions could be drawn, and that setting up selection vs. retention as an either-or isn't the right lens through which to look at the study.

Clearly it's interesting that TFA teachers do as well as, or better than, more experienced teachers. It suggests that a strong academic background may be more important for good teaching than many educators have come to believe, and that attracting talented college graduates to teaching -- and to teaching in low-achieving schools -- may be a bigger piece of the education reform puzzle than it has been so far.

But it's hard to believe that it wouldn't be even better if these teachers were also attracted to staying in teaching, and staying at the schools where they start.

So instead of accepting, as a lot of people seemingly do, that talented college graduates would never consider teaching as more than a short side trip on their way to a "real" career, maybe more thought needs to go to how to make teaching a career that appeals to talented college graduates.

How exactly does a small number of TFA teachers in the sample bear on generalizability or external validity?

skoolboy, Always the pesky questions when I'm half asleep! I think your point is that if the population to which we want to generalize is TFA in North Carolina, we do have the entire population of NC TFA teachers; hence, there's no external validity problem.

As I read the study, the scope conditions weren't framed as TFA in NC. Corey and I are both pointing out that there are 6 years of data, during which TFA placed 2,000 teachers a year, and we have a non-random sample of 69 of the 12,000 TFA teachers, or roughly 0.6% of all TFA teachers. A tiny and non-random sample presents an external validity problem, no?

I would have thought that there wasn't any reason to believe that the small size of the TFA sample would bias it in any particular direction, but that the smaller the sample, the greater the uncertainty of the result -- in the sense that there is a greater chance that, if the experiment were repeated, the answer would be different. Unless that's somehow taken into account in the way the result is presented.
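
To make that intuition concrete, here is a minimal simulation sketch; the "true" effect and the teacher-to-teacher spread are assumed, illustrative values, not figures from the study:

```python
import numpy as np

# Illustrative simulation, not the study's data: how much does the estimated
# average effect bounce around from sample to sample, for 69 teachers versus
# a sample ten times as large?
rng = np.random.default_rng(0)
true_effect, teacher_sd = 0.064, 0.15   # assumed values for illustration

for n_teachers in (69, 690):
    estimates = [rng.normal(true_effect, teacher_sd, n_teachers).mean()
                 for _ in range(10_000)]
    spread = 2 * np.std(estimates)
    print(f"n = {n_teachers}: repeated samples typically land within "
          f"+/- {spread:.3f} SD of the long-run average")
```

The smaller sample gives the same answer on average, but any single draw of 69 teachers can easily land a few hundredths of a standard deviation away from the "true" effect -- which is large relative to the effects being compared here.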

I probably ought to read the study ... but my take is that the extent to which the sample represents the underlying population of interest is an external validity issue, but the size of the sample (and also the fraction of the population it contains) is not. If you believe that the case of TFA teachers in North Carolina is not representative of TFA teachers in other states or the nation as a whole -- because of the nature of the NC tests, or how TFA and other teachers are placed in schools, or the characteristics of the TFA teachers in NC, or the kinds of kids in NC, or anything else -- then there could be 69 TFA teachers or 6,900 TFA teachers in the sample, and either way you'd worry about external validity. Conversely, if you believe that NC is typical or representative of other states, 69 TFA teachers was sufficient to detect statistically significant effects (an internal validity issue), and there's no obvious concern about external validity.

If the authors didn't frame the scope conditions for their claims as TFA in NC, they are doing their readers a disservice. Data from multisite program evaluations often show variability across sites in program effects, even when the sites are supposed to be independent replicates of the same program. Policymakers don't want to hear this; they'd rather believe that a program has an effect, period. And policy researchers who want to be influential don't want to acknowledge the uncertainty in their estimates, because policymakers hate uncertainty. This is a variation of the point that Dan Goldhaber made in discussing the TFA paper: even in NC, there is a range of possible effects of TFA teachers on students' achievement, and statistically significant point estimates don't illuminate that range, whereas confidence intervals and kernel density estimates of the distribution of possible outcomes do.
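
As an illustration of why an interval is more informative than a bare point estimate, here is a minimal sketch; the point estimate is the figure quoted above, but the standard error is an assumed value chosen only for the example, since the post does not reproduce the study's standard errors:

```python
# Illustration only: the .064 SD point estimate is quoted above, but the
# standard error here is assumed for the sake of the example.
point_estimate = 0.064
assumed_se = 0.02

lo = point_estimate - 1.96 * assumed_se
hi = point_estimate + 1.96 * assumed_se
print(f"point estimate: {point_estimate:.3f} SD")
print(f"approximate 95% CI: ({lo:.3f}, {hi:.3f}) SD")
# The interval, not the single number, conveys the range of plausible
# effects; it widens as the standard error grows, roughly in proportion
# to 1/sqrt(number of TFA teachers in the sample).
```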

1. What did the authors do to test the simple hypothesis that teachers with higher Praxis scores are better able to teach high school students? The Praxis scores of the TFA teachers were .37 of a standard deviation higher than the traditional teachers' scores. Could it be that the effects are due solely to that difference and say nothing at all about TFA? Given the large sample of traditional teachers, the authors could have found teachers with similar Praxis scores and done a matched sample comparison of achievement gains. (A rough sketch of what such a comparison might look like appears after question 3 below.)

2. Given that the TFA teachers were working with students well below the state mean scores and the tests appear to be so top-heavy in terms of the distribution (30% or more scoring Superior), could it be that the TFA students simply had more room to grow at a thick part of the distribution? In Algebra I, 31% of traditional students scored in the highest category while only 15% of TFA students did. In Algebra II, the difference is greater, with 35% of traditional versus 15% of TFA students scoring Superior. The differences in means are stunning: TFA classrooms had students scoring .39 of a standard deviation lower on Algebra I and .55 of a standard deviation lower on Algebra II. The other tests show similar patterns. This raises several questions.

What other steps was the school taking to improve the performance of these very low-performing students assigned to TFA teachers? What percent of those students were taking the class for a second time? At the other end of the distribution, how much better can students who already score Superior do? There are test points high into that range, but does anyone actually obtain them? I understand what might motivate a student in a TFA classroom who is at risk of failing the test, but what would motivate most students to excel when a third of them already score in the highest possible category? Given the large sample of traditional teachers and students, the authors could have found similar low-performing classrooms and done a matched sample comparison ruling out those hypotheses.

3. Finally, .10 of a standard deviation is, at best, barely one multiple-choice question on an end-of-course test. If these were NCLB tests and all of the children being affected were one multiple-choice question shy of proficient and the school was about to be restructured for failing, that would seem very important. But none of that is true. Maybe we need to ask a policy question in a different way. Ask any parent: If I told you I could assign your child to a teacher guaranteed to get your child to answer one more multiple-choice question correctly on an Algebra test, would you even care? If I told you that hundreds of millions of dollars from nonprofits and your own taxes are going into a program to train that teacher, what would you say? Go ahead and ask a parent. I dare you.
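
A matched comparison along the lines suggested in points 1 and 2 might look something like the sketch below. It is only an illustration: the data frame, its column names, and the match-without-replacement choice are assumptions made for the example, not the study's method.

```python
import pandas as pd

def praxis_matched_comparison(teachers: pd.DataFrame) -> pd.DataFrame:
    """Match each TFA teacher to the traditional teacher with the closest
    Praxis score, then compare achievement gains (illustrative sketch only).

    Assumed columns: 'is_tfa' (bool), 'praxis_z' (float), 'gain' (float).
    """
    tfa = teachers[teachers["is_tfa"]]
    pool = teachers[~teachers["is_tfa"]].copy()

    rows = []
    for _, t in tfa.iterrows():
        # nearest unmatched traditional teacher by Praxis score
        idx = (pool["praxis_z"] - t["praxis_z"]).abs().idxmin()
        rows.append({"tfa_gain": t["gain"],
                     "matched_gain": pool.loc[idx, "gain"]})
        pool = pool.drop(idx)  # match without replacement

    matched = pd.DataFrame(rows)
    matched["difference"] = matched["tfa_gain"] - matched["matched_gain"]
    return matched
```

The mean of the difference column would then estimate the TFA advantage net of Praxis scores, under the strong assumption that Praxis is the only relevant difference between the groups.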

Another question for the sociologists: is it meaningful to compare a specialized group (TFA) with a non-specialized group (traditional teachers) when the ratio of one to the other is so small? The latter group represents a wide range of educational backgrounds, years of experience, and so forth, as well as family obligations and other competing responsibilities.

If one is to assess the specific effect of TFA on performance, why not compare it with other specialized groups (for example, teachers with a BA or BS, in a field other than education, from UNC)? It is altogether possible that a four-year college education at a strong college or university does as much to help student achievement as TFA participation in particular.

Given that the study didn't even control for college education (or many other variables that could come into play), it's hard to see the validity of comparing a tiny specialized group with a large general one. Please set me straight if this is incorrect, or if the ratio is irrelevant here!

Also, I wonder how many TFA teachers actually taught in a given year of the study--far fewer than 69, I would presume. I'm guessing that in a given year the ratio of TFA to traditional teachers would be even smaller than 69/5,678 (roughly 1/82).

Three Questions: I wrote the author and suggested that she do a comparison of TFA teachers and teachers teaching similar students (she was actually really nice -- she actually thanked me for writing despite the fact that I was being a pain in her butt) for many of the same reasons in addition to the ones I suggested above.

I also wondered about whether there was something special about the TFA folks other than their higher levels of performance on tests and such. I did not see a comparison of equally high-performing "regular" teachers. In fairness to the authors, they seem to be suggesting that recruiting high-performing teachers is important for schools -- not that only TFA is capable of recruiting such teachers.

Off the top of my head, I think the gains ranged from .07-.11 SD. Is that small? Absolutely. But, it's larger than in a lot of papers. So, in the world of education research it's a "big" effect. In the beginning of the paper, the authors asked the (important) question of whether TFA teachers were harming students and it seems like the answer is that in NC they're not -- they're actually helping a bit.

Skoolboy: It seems to me that the smaller the sample size, the less likely it is to be representative of the larger population. Hence, I'd hesitate to generalize to the entire country based on 69 teachers. And, actually, as Diana points out, it's probably fewer than 69 in any given year, since some (if not most) of those teachers taught more than one year. It's probably somewhere between 30 and 50 teachers, I'd guess.

Three questions--you had some questions that you dared anyone to ask a parent. I'm a parent, so I'll answer. Since nobody has ever asked me in the past who I would prefer to teach my child (I'm supposed to believe that a teacher is a teacher is a teacher--all alike, and all just as good), yes, I would like to consider the past results of their students on standardized tests.

And .10 SD might tip the balance in favor of a particular teacher, all other things being equal (as they never are). But I wouldn't assume that .10 SD is the equivalent of one multiple choice question. It would really depend on the spread of the scores, the number of questions on the test, etc. But it is hard to respond to the cost question, primarily because the "hundreds of millions" in taxes and non-profit dollars is pretty vague and lacks context. What is the cost to train one teacher plucked from the bottom of his/her high school class and run through four years of a state university? How much in stipends and tuition to move that person to the master's level? How much to mentor/coach/supervise that person on the job? And, in the end, mightn't I just prefer the TFA teacher just because they were selected for their academic ability (reflected in the .3 SD difference in Praxis scores) in the first place?
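
Margo/Mom's caveat is right: translating .10 SD into "one question" depends entirely on the test's score spread. A quick sketch with assumed, illustrative test parameters (not the actual North Carolina end-of-course tests) shows how much the answer moves:

```python
# How many raw points is .10 SD worth? The test lengths and raw-score SDs
# below are assumed for illustration, not taken from the NC end-of-course tests.
effect_in_sd = 0.10

for n_items, raw_sd in [(60, 8.0), (80, 12.0), (100, 15.0)]:
    raw_points = effect_in_sd * raw_sd
    print(f"{n_items}-item test, raw-score SD of {raw_sd:.0f}: "
          f".10 SD is about {raw_points:.1f} raw points")
```

So "about one question" is roughly right under some assumptions and an understatement under others; scaled scores complicate the picture further.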

It's embarrassing that we are hiring young teachers who have trouble writing a literate sentence, or performing basic mathematics (or statistics), who aren't curious about the workings of our legal system or government. Personally, as a parent, I don't care whether TFA or state colleges do the selection and provide the training and mentoring to become a good teacher. TFA does two things that make sense. They are selective up front, and they provide structured support for classroom experience. I say we keep on, and keep studying. Perhaps TFA is not the ultimate model for the long-term, but it certainly provides fertile ground for figuring out some of the variables that can be incorporated into more traditional teacher training.

As a TFA alum I find this conversation very interesting. As a result of my own experience, I personally decided that TFA is not the answer to closing the achievement gap. I know some TFAs who have been incredibly successful and transformative to their students' academic trajectories but they are in no way the norm.

This may not answer any of your questions. But I would guess that TFA teachers in North Carolina have a very different experience from TFAs in other regions. My TFA Summer Advisor taught in North Carolina and in the end I would say her experience was very different from the one I had teaching in NYC.

Also, is this article only focused on TFAs who teach high school, or on secondary teachers in general? In New York City, where there are at least 1,000 corps members, I would say a very small percentage teach high school (as a secondary English teacher, I can say that in my corps member class there were only two high school English teachers; the majority taught middle school), so that may explain why the author only focused on 69 teachers.

I'd like to believe that TFA teachers do not negatively impact their students. But I do agree that it is very hard to create a positive school culture when there are new people every year. But to add to that conversation: In two years I taught in three schools (this is in no way a normal TFA experience). At one of the schools more than 2/3 of the TFAs stayed for at least a third year, if not a fourth or fifth. At the other two schools the TFAs left immediately after finishing their commitment. I believe a lot of this has to do with the difference in the school communities. Even TFAs stay longer if they feel successful and empowered....

Could the presence of TFA'ers cause poorer performance in their non-TFA comparison group? I've worked in enough schools to see the subtle effects of 20-somethings with freshly minted upper-class degrees, unbridled idealism and enthusiasm, and voluminous vocabularies, who don't quite fit in with the general staff, many of whom graduated from Average Local College with a default degree in psych and $30k in debt, and may not have been immediately employable in any other field. TFA'ers are constant reminders that education is just a job in a dreary urban school, little more than a 30-year countdown to receiving pension benefits, with no hope of moving on to something more exotic, as TFA'ers inevitably do.

If the goal is to focus on teacher selection, and specifically on whether educational background is a key predictor of teacher effectiveness, why focus on TFA teachers?

Why not just look at the overall teacher population subdivided by number of years teaching, and by whatever you choose as your educational background indicator (SAT score, college selectivity, college GPA, etc.)?

I would have thought that would get rid of sample size problems, and other complications -- such as whether TFA teachers are consciously or unconsciously given more or less difficult classes than "regular" teachers.

This article gives a somewhat uneducated assessment of the TFA process. There is no doubt that teacher retention is beneficial to the success of a school, but it is not essential. As a corps member I have experienced first-hand the training these young teachers receive, and one of the core values taught is how to immediately involve yourself in the school community. A leader does not have to be a part of something for several years before he/she can stand up and lead; it happens from day one. The teachers TFA is placing into schools are an elite class of individuals and are all very capable leaders. That said, it is better for policy makers to focus on teacher selection; that way, teachers come in from day one and begin working alongside everyone around them, rather than having to wait a year or two for a qualified, but perhaps not elite, teacher to get his/her feet wet.

Comments are now closed for this post.
