
Licensing-Test Gaps Exist in Every State, Federal Data Show

By Stephen Sawchuk — May 07, 2013 2 min read

Every state sets the cutoff score on its teacher-licensing tests below the mean score of test-takers, according to federal data, a pattern suggesting that most of the tests are probably fairly easy for the majority of candidates who take them.

Released in an annual report issued last week by the U.S. Department of Education to fulfill requirements in the Higher Education Act, the data compare the average cutoff scores on teacher exams against the average performance of the candidates taking those tests. The gaps range from a low of 10.1 points in Arizona to a high of 22.5 points in Nebraska. For the nation as a whole, the average certification-test cutoff score is set nearly 15 points below the mean score of candidates.

Use caution in comparing the gaps, though: Some states use ETS’ Praxis series for their licensing tests, while others use state-specific exams designed by Evaluation Systems Group, a Pearson entity, so the exams sit on different score scales and the gaps aren’t directly comparable from state to state.

The data represent test-taking from the 2009-10 year.

A little over a year ago, I reported on this very same phenomenon for Education Week using preliminary data and came to a similar conclusion. (It’s always nice to have one’s hunches confirmed by federal data.)

Here’s a bit of history for the policy nerds like me who want to know how we got here: State reporting of passing rates on teachers’ tests has been required since 1998. But only in the 2008 rewrite of the HEA did lawmakers require states to report both the passing rate and the average scaled score of all test-takers on each test.

The overall pattern means that states appear to set relatively low bars for passing these exams. But, as the Education Department report dutifully notes, large gaps could alternatively mean that the test-takers are generally high performing, and small gaps, relatively low performing. As I reported last year, it’s impossible to know exactly how “difficult” these tests are without knowing the spread of scores on the tests’ scales.
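To make that point concrete, here’s a rough back-of-the-envelope sketch (my own illustration, not anything from the federal report): assuming scores follow a roughly bell-shaped distribution, the same 15-point gap between the candidates’ mean and the cutoff implies very different shares of test-takers clearing the bar depending on how spread out the scores are. The mean, cutoff, and standard deviations below are made-up numbers.

```python
from math import erf, sqrt

def share_above_cutoff(mean, cutoff, std_dev):
    """Estimate the share of test-takers scoring at or above the cutoff,
    assuming scores are roughly normally distributed (a simplifying
    assumption; the federal data don't report the spread of scores)."""
    z = (cutoff - mean) / std_dev             # cutoff's distance from the mean, in SD units
    return 1 - 0.5 * (1 + erf(z / sqrt(2)))   # 1 minus the normal CDF at the cutoff

# Hypothetical test: candidates average 170 and the state sets its cutoff at 155,
# mirroring the roughly 15-point national gap described above.
mean_score, cutoff = 170, 155
gap = mean_score - cutoff                     # the "gap" reported in the federal data
for std_dev in (10, 20, 40):                  # made-up spreads, for illustration only
    pct = 100 * share_above_cutoff(mean_score, cutoff, std_dev)
    print(f"gap = {gap}, SD = {std_dev:2d}: about {pct:.0f}% of candidates clear the cutoff")
# Prints roughly 93%, 77%, and 65%: the identical gap looks easy, moderate,
# or middling depending entirely on the spread, which is why the gap alone
# can't tell you how hard a test is.
```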

Also, most states permit teachers to take certification tests multiple times, and it’s not clear from the state-generated data how that policy affects the scaled scores.

All this raises lots of questions for the state panels that determine where to set the bar on the exams.

For one thing, most of these tests merely measure whether a teacher knows some specified content, not whether he or she will actually be able to teach it, so it’s not at all clear that moving the bar higher would result in better instruction. There are some sensitive political factors at play, too: Given what we know about historical performance trends on licensing tests by teachers with different characteristics, raising the bar significantly would almost certainly mean fewer teachers of color passing the exams.

Think this is a pretty obscure policy area? Think again. It’s going to come up more and more now that states are discussing raising the bar for admissions into teacher-preparation programs, as I report in a story in this week’s issue of the newspaper.


A version of this news article first appeared in the Teacher Beat blog.