Teacher Prep. Evaluation Depends on Better Longitudinal Data, Studies Find
As teacher preparation programs come under increasing pressure to track the effectiveness of their graduates, a new report by the Data Quality Campaign finds that fewer than half of states provide annual data on teacher performance to colleges of education.
"We've been asking teacher education programs to improve, but we've been asking them to do it without critical feedback," said Aimee Guidera, the campaign's founder and executive director, in a briefing on the report.
The interactive chart below shows state progress on different aspects of providing postsecondary feedback.
At the same time, the National Academy of Education's new guide for policymakers, "Evaluation of Teacher Preparation Programs: Purposes, Methods, and Policy Options," argues that most current teacher-evaluation systems fail to capture much of what would help preparation programs improve, such as differences in actual teaching practices, attitudes toward teaching, or students' noncognitive progress.
"It's difficult to gather evidence on the quality of instruction," said Robert E. Floden, a co-author of the guide and a professor of teacher education at Michigan State University, during a discussion of the guide in Washington. Moreover, he noted, "just doing an evaluation of a program doesn't make it better. In any evaluation system, there are constraints on resources, selectivity, faculty qualifications, substance of instruction, and so on."
Data Implementation Continues
The state feedback data were part of the DQC's annual evaluation of how well states are implementing 10 "critical actions" intended to make their longitudinal student-data systems relevant to parents, educators, and policymakers.
For more on the report, see my colleague Benjamin Herold's analysis.