
Researchers Question Data in Gates Foundation’s Teacher Study

By Sarah D. Sparks, February 14, 2013

Guest blog by Stephen Sawchuk. Cross-posted from Teacher Beat.

The recommendations in the final Measures of Effective Teaching work products may not be supported by the project’s hard data, the National Education Policy Center contends in a review of the MET project.

The review, released last week, was written by Jesse Rothstein of the University of California, Berkeley, and William Mathis of the University of Colorado at Boulder. The NEPC has taken issue with several prior releases from the MET study.

In the critique, the scholars take aim at the study’s randomization component, the basis for the MET report’s headline finding that projections based on the three measures studied, including “value added” test-score analysis, seemed to be quite accurate overall. But Rothstein and Mathis note that there was a high degree of noncompliance with the randomization, and they also suggest that teachers of certain types of students appear more likely to have dropped out of the study. (Rothstein made a similar point in Education Week’s story on the final MET results.)

The scholars also say that none of the three main measures studied (student surveys, value-added test-score growth, and observations of teachers) was particularly predictive of how teachers’ students would do on the alternative, “conceptually demanding” tasks. That’s potentially worrisome, since the tests being designed to measure the Common Core State Standards are purportedly more in line with such tasks. “There is evidently a dimension of effectiveness that affects the conceptually demanding tests that is not well captured by any of the measures examined by the MET project,” the authors write.

The scholars also question one of the very premises of the MET study: its use of growth in student test scores as the baseline standard for comparing the impact of all the measures it tested.

“It is quite possible that test-score gains are misleading about teachers’ effectiveness on other dimensions that may be equally or more important,” the paper states.

On the other hand, there’s also evidence that value-added-based teacher-quality measures are linked to students’ future earnings, as noted in a widely publicized study last year, “The Long-Term Impacts of Teachers: Teacher Value-Added and Student Outcomes in Adulthood.”

In the interest of disclosure, note that the NEPC has received funding from the National Education Association, a critic of using test scores in teacher evaluations. Also, the Bill & Melinda Gates Foundation, which financed the MET study, is well known for its support of a number of groups that favor changes to teacher evaluation, including several nonprofit advocacy organizations. It also helps support Education Week’s coverage of business and innovation.

A version of this news article first appeared in the Inside School Research blog.