
Can Value-Added Be Applied to Teacher Preparation? Scholars Weigh In

By Stephen Sawchuk — August 09, 2012

Can the value-added method be used, fairly and reliably, to differentiate among teacher preparation programs? According to scholars who have studied the issue, this remains something of an open question.

The folks at the Center for Analysis of Longitudinal Data in Education Research have put together a collection of papers on the topic, complete with a helpful, plain-language discussion from the scholars about their results.

One of the papers, by the University of Washington’s Dan Goldhaber and a colleague, found that most of Washington state’s programs produced graduates who performed neither better nor worse than teachers trained out of state. Only a small number of institutions produced graduates who, on average, had a significant, positive effect on student math scores.

A study of teachers from various training programs in New York City, however, found some “meaningful” differences among preparation routes, both traditional and alternative. The paper, by five well-known teacher-quality researchers, also found that stronger oversight of the student-teaching experience seemed to produce better teachers.

A third paper, by Cory Koedel of the University of Missouri and three colleagues, sounds the most cautious note on applying value-added to teacher preparation. Using a value-added approach, their study found virtually no aggregate differences in the effectiveness of graduates of Missouri’s teacher preparation programs. The variation within programs was so great, the scholars found, that a good number of teachers from the lowest-performing program likely would still outperform the average teacher from the highest-performing program.

“A key finding from our study, and one that we feel has not been properly highlighted in previous studies and reports, is that the measurable differences in effectiveness across teachers from different preparation programs are very small,” the authors write. “We encourage policymakers to think carefully about our findings as achievement-based evaluation systems, and associated accountability consequences, are being developed for [teacher preparation programs].”
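To see why small between-program differences matter so little when within-program variation is large, here is a rough, back-of-the-envelope simulation. All of the numbers are invented for illustration; they are not drawn from the Missouri study or any of the other papers.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical teacher value-added scores, in student-test standard deviations.
# The gap between program means is deliberately tiny relative to the spread
# within each program, echoing the pattern the Koedel paper describes.
n_per_program = 500
low_program = rng.normal(loc=-0.02, scale=0.20, size=n_per_program)   # "lowest-performing" program
high_program = rng.normal(loc=0.02, scale=0.20, size=n_per_program)   # "highest-performing" program

# Share of teachers from the lowest-performing program who still score above
# the average teacher from the highest-performing program.
share = np.mean(low_program > high_program.mean())
print(f"Low-program teachers above the high-program average: {share:.0%}")
```

With a 0.04 standard-deviation gap between program means and a 0.20 standard-deviation spread within each program, roughly four in ten teachers from the “worst” program land above the “best” program’s average, which is exactly the kind of overlap the authors are warning about.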

A final paper, primarily by researchers from the policy-evaluation nonprofit RAND, examines one of the conceptual challenges to this type of analysis: Programs are often geographically isolated, and therefore send many of their graduates to the same few schools. This makes apples-to-apples comparison of graduates difficult, since they often end up teaching in very different contexts.

The study found that, as predicted, preparation programs tended to supply particular communities and school systems, but using additional years of data and certain statistical modeling seemed to alleviate this potential problem. Like the paper on Washington state, though, it found that most of the program effects were not significant except for a few outlier programs that had particularly high or low results.
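For readers wondering what that kind of statistical modeling looks like in practice, the sketch below shows one generic way to set up the comparison: regress a student-gain measure on indicators for graduates’ preparation programs while also controlling for the school each teacher works in. The data, program names, and effect sizes are all fabricated, and this is not the RAND team’s actual model; it is only meant to show how school controls can separate program effects from the geographic clustering described above.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)

# Fabricated teacher-level data: each teacher comes from one prep program and
# teaches in one school, and programs feed disproportionately into nearby schools.
n = 600
program = rng.choice(["A", "B", "C"], size=n)
school = np.where(program == "A", rng.integers(0, 5, n),      # program A feeds schools 0-4
         np.where(program == "C", rng.integers(5, 10, n),     # program C feeds schools 5-9
                  rng.integers(0, 10, n)))                    # program B feeds all schools

school_effect = np.linspace(-0.10, 0.10, 10)[school]          # differences in school context
program_effect = pd.Series({"A": 0.00, "B": 0.01, "C": 0.02})[program].to_numpy()
gain = program_effect + school_effect + rng.normal(0, 0.20, n)  # proxy for student gains

df = pd.DataFrame({"gain": gain, "program": program, "school": school})

# A naive comparison mixes program quality with school context...
naive = smf.ols("gain ~ C(program)", data=df).fit()
# ...while adding school indicators (a crude fixed-effects control) separates the two.
adjusted = smf.ols("gain ~ C(program) + C(school)", data=df).fit()

print(naive.params.filter(like="program"))
print(adjusted.params.filter(like="program"))
```

In the naive regression, program C looks better partly because its graduates land in higher-performing schools; once school indicators are included, the estimated program differences shrink toward the small values built into the simulated data.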

Here’s one additional factor the researchers stress in their papers: It’s difficult to determine whether any of the effects observed are a function of the training graduates receive or of the population of teacher candidates a program attracts. This is an interesting quandary for the field, which is being pressed not only to upgrade the quality of training but also to raise the entry bar in the first place (i.e., the Finland or Teach For America effect).

It makes sense, of course, that the quality of a complicated enterprise like teacher preparation will depend on a lot of different variables. But it’s also a bit of a challenge for program administrators, who need to know which specific features of their programs are working well and which need to be rethought.

Policy Interest

The new research comes amid a period of heightened policy interest in teacher preparation. The U.S. Department of Education is expected shortly to release regulations that would require states to consider value-added information, among other factors, when judging program quality under a federal reporting requirement.

Louisiana appears to have had some success with using the data to help programs improve; so far, it remains the only state that has formally used the information for program accountability. More than a dozen states plan to use it in some form in the future.

A number of higher-education groups have raised concerns, based on issues like those raised above, about using value-added for high-stakes purposes. That debate is likely to continue for a while.

Perhaps one way to think about the issue is to compare value-added analysis with the other systems that purport to make judgments about program quality: states’ individual program-approval processes, national accreditation, or ratings projects such as the one under way at the National Council on Teacher Quality.

Readers and teacher educators, which of those systems, if any, would you put your faith in?

A version of this news article first appeared in the Teacher Beat blog.