Teacher Preparation

As Criticism Continues, NCTQ Corrects Some Teacher-Prep Scores

By Stephen Sawchuk — January 02, 2014

The National Council on Teacher Quality, a Washington-based research and advocacy group, has corrected scores for a handful of programs it rated in last year’s controversial teacher-prep review.

The council claims it made more than 16,000 ratings, so these changes represent a 0.4 percent error rate. And many of the programs that received revised scores on certain standards didn’t end up having their overall ratings changed. (The final rating was based on a subset of the standards; for elementary programs, it was the average of scores on the standards for selection criteria, early reading, common-core content in elementary math and reading, and student teaching.)

One of the NCTQ’s bigger mistakes was misinterpreting an Oklahoma math-certification policy, an error that required the council to change ratings on the elementary math standard for five programs in that state.

You can find all the details on the NCTQ’s website.

The changes aren’t likely to soothe critics of the NCTQ and its methods. As I’ve reported, from the start the project was condemned by education faculty and other groups who questioned its motives and methods. Only about 10 percent of programs included in the review voluntarily participated.

Recently, a critique of the project appeared in the peer-reviewed Journal of Teacher Education, which is published by the American Association of Colleges for Teacher Education (itself a critic of the NCTQ).

The article finds fault with NCTQ’s approach in many areas, including the council’s focus on syllabi, the development of its standards, the application of those standards, its use of rankings, and its exclusion of alternative-route programs. (Such programs will be included in this year’s edition of the review.)

Possibly the article’s most interesting criticism, though, is that the NCTQ didn’t examine the relationship between its own star ratings and outcomes, such as licensure-test results by program or “value-added” measures, which look at how much a program’s graduates boosted students’ test scores. For example, the article found no statistically significant correlation between the number of stars awarded to each program in Texas and the percentage of its candidates passing a state licensing exam.

“NCTQ’s refusal to even attempt to validate their own effort gives substantial support to those who believe NCTQ has absolutely no intention of helping traditional university-based programs and has every intention of destroying such programs and replacing them with a market-based system of providers,” the author, Edward Fuller of Penn State University, writes.

What to make of all this? For my money, teacher-preparation policy increasingly seems to be caught in a web of conflicting agendas and contexts. These include:


  1. A lack of clear, specific research linking particular aspects of teacher training to student outcomes.
  2. A lack of clear and accessible information about what’s happening in current programs.
  3. Institutional resistance to grading schemes like the NCTQ’s, and to state accountability measures, some of them crude.
  4. Frustration among lawmakers over student achievement, which creates pressure to improve teacher training, often by instituting more grading systems.

Such disparate positioning may be why teacher preparation has become such a pressure cooker of a field to write about, and it is, if anything, likely to get more so in the future.

A version of this news article first appeared in the Teacher Beat blog.