
Study: TAP Schools Outperform 'Synthetic' Counterparts

Schools in the Teacher Advancement Program appear to produce higher student achievement in math than non-TAP schools with similar characteristics, concludes a recent analysis released by the Stanford Institute for Economic Policy Research.

The paper's author, Sally Hudson, a graduate student at Stanford University, looked at student growth in 151 TAP schools across 10 states. Her study uses an intriguing research methodology I'd never seen before, called "synthetic control matching," to create a control group of schools. In essence, she combined weighted data from a variety of non-TAP schools in each state to construct an ideal comparison school for each TAP school.

It's a creative solution for one of the common problems with quasi-experimental research studies: Oftentimes, the treatment and control schools vary somewhat in size, population, achievement levels or student makeup. Here, creating the synthetic controls keeps the control group as similar as possible to the TAP schools in question.
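For readers curious about the mechanics, the core idea of synthetic control matching can be sketched in a few lines of code: pick non-negative weights, summing to one, that make a weighted blend of comparison schools look as much like the treated school as possible on pre-treatment characteristics. The data and school counts below are made up for illustration and are not from Hudson's study.

```python
import numpy as np
from scipy.optimize import minimize

# Hypothetical pre-treatment covariates (e.g., enrollment, prior test
# scores, demographics) for a pool of candidate non-TAP schools.
rng = np.random.default_rng(0)
donor_pool = rng.normal(size=(8, 3))      # 8 candidate schools x 3 covariates
tap_school = donor_pool[:3].mean(axis=0)  # the TAP school we want to match

def loss(w):
    # Squared distance between the TAP school and the weighted donor blend.
    return np.sum((donor_pool.T @ w - tap_school) ** 2)

n = donor_pool.shape[0]
result = minimize(
    loss,
    x0=np.full(n, 1.0 / n),                       # start from equal weights
    bounds=[(0.0, 1.0)] * n,                      # weights are non-negative
    constraints={"type": "eq", "fun": lambda w: w.sum() - 1.0},
    method="SLSQP",
)
weights = result.x
synthetic_school = donor_pool.T @ weights  # the "synthetic" comparison school
```

The resulting synthetic school's later outcomes then serve as the counterfactual: the treatment effect is estimated as the gap between the TAP school's actual achievement and the synthetic school's.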

The study found that students in TAP schools outperformed students in the synthetic-control schools in math by about 0.15 of a standard deviation. That's roughly equivalent to a sixth to a half of a year of student growth. The study found a boost in reading scores in TAP schools, too, but that finding didn't hold up when some of the model specifications were changed.

While this study doesn't have quite the same level of "power" as a random-assignment study—after all, the comparison schools here don't actually exist in any brick-and-mortar sense—it is, nevertheless, a strong indicator that something going on in the TAP schools is producing positive results for the students in them.

The findings are good news for TAP after some less-than-stellar findings elsewhere: A random-assignment study of the program in Chicago found no differences in achievement between TAP and non-TAP schools, as I reported back in June.

There are plenty of implications here for the broader field of school improvement. But please don't go thinking that this study is somehow a counterweight to the big merit-pay study from last month, on Nashville's POINT program. POINT was a "pure" experiment, with the only major difference between treatment and control groups being the performance-pay element.

By contrast, in TAP schools, teachers also get group-based professional development, individual feedback keyed to an evaluation framework, and opportunities to take on additional roles in schools and to be compensated for them, all features that didn't apply in Nashville. So use caution in trying to compare these studies.

