
The LA Times: Practicing Educational Research without a License

By Anthony Cody — August 20, 2010

Dr. Stephen Krashen is a professor emeritus at the University of Southern California. He has written numerous books on his research into literacy and language acquisition. In recent years he has emerged as a persistent voice pointing towards the basic steps we should take to build literacy and strong academic skills for our students. In this guest post he offers a critique of the recent Los Angeles Times article rating teachers by their test scores.

by Stephen Krashen

A recent LA Times article, “Who’s teaching L.A.'s kids?” (August 15), presented readers with the results of an LA Times-sponsored “value-added” analysis of teaching in the Los Angeles Unified School District. The statistical analysis was done by an economist, and was supplemented by classroom observations made by LA Times reporters.

“Value-added” appears to be a common-sense idea: Teachers are rated by the gains their students make on standardized tests of reading and math. The assumption is that good teachers produce large gains and poor teachers produce small gains or even cause backsliding. The Times assumes that the value-added method is a valid measure of teacher quality. It isn’t.
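To make the basic idea concrete, here is a minimal sketch in Python of a simple gain-score calculation, using invented teachers and scores. It illustrates only the general logic of rating teachers by their students' test-score gains; the Times' actual analysis was a more elaborate statistical model built by an economist, not this simple average.

# A minimal sketch of the gain-score idea behind "value-added" ratings.
# All numbers are invented for illustration only.

from statistics import mean

# Hypothetical (prior year, current year) test scores, grouped by teacher.
scores = {
    "Teacher A": [(620, 655), (580, 610), (640, 660)],
    "Teacher B": [(615, 618), (590, 585), (630, 640)],
}

for teacher, pairs in scores.items():
    gains = [post - pre for pre, post in pairs]
    print(f"{teacher}: average gain = {mean(gains):.1f} points")

Under this logic, Teacher A looks "better" than Teacher B simply because Teacher A's class gained more points on average; the problems discussed below concern whether such differences actually reflect teaching quality.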

Problems with value-added analyses

Value-added evaluations of teachers make several assumptions.

First, they assume that higher test scores are always the result of teaching. Not so. Test scores are influenced by other factors:
- We can generate higher scores by teaching “test preparation” techniques, that is, strategies for getting higher scores without learning anything, e.g. telling students when and how to guess and familiarizing them with the test format.
- We can generate higher scores by testing selectively, e.g. making sure the lower scorers are not in school on the day of the test.
- And of course we can generate higher scores by direct cheating, getting inside information about specific test questions and sharing this with students.

Second, value-added analyses assume that teachers are randomly assigned to classes. They aren’t. Some teachers are given high-achieving students who will make rapid gains on standardized tests, and some teachers are consistently assigned to teach lower-achieving students who will not show clear gains.

Third, value-added analyses assume that the value-added score for a teacher is stable, that a teacher producing high gains one year will always produce high gains. But studies show that value-added estimates for individual teachers can be unstable over time (Schochet and Chiang, NCEE 2010-4004). There is also evidence that a teacher’s value-added score can be substantially different for different reading tests (Papay, 2010, American Educational Research Journal, 47(2)).

Fourth, there is always some fluctuation in scores. Even if all teachers were equally effective in raising test scores, a value-added analysis would still find students of some teachers making higher gains than others, due to random factors.
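This fourth point can be illustrated with a short simulation, sketched below in Python with invented numbers. Every simulated teacher is assumed to be identical, and each student's score change is pure random noise, yet some classes still appear to make much larger gains than others.

# A sketch of the point above: even if every teacher were equally effective,
# random noise alone would make some classes look better than others.
# All parameters here are invented; no real test data are involved.

import random

random.seed(1)

N_TEACHERS = 50
CLASS_SIZE = 25
NOISE_SD = 15.0  # assumed random year-to-year variation in a student's score

def class_gain():
    # Average score change for one class when the only influence is noise
    # (the "true" teacher effect is zero for everyone).
    return sum(random.gauss(0.0, NOISE_SD) for _ in range(CLASS_SIZE)) / CLASS_SIZE

gains = [class_gain() for _ in range(N_TEACHERS)]
print(f"apparent best teacher:  {max(gains):+.1f} points")
print(f"apparent worst teacher: {min(gains):+.1f} points")

Even though every simulated teacher has exactly the same (zero) effect, the spread between the "best" and "worst" class averages is not zero, which is the kind of random fluctuation a value-added ranking can mistake for differences in teacher quality.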

Finally, some standardized tests focus on knowledge of specific facts and procedures. Teachers who prepare students for higher scores on such tests are not teaching; they are simply drilling students on information that will soon be forgotten.

Neglected factors

The heavy focus on measuring teacher quality can give the false impression that teacher quality is everything. Study after study, however, has shown that poverty is a stronger factor than teacher quality in predicting achievement. The best teachers in the world will have limited impact when children are undernourished, have high levels of lead in their bodies, live in noisy and dangerous environments, get too little sleep, and have no access to reading material.

Beyond Cold Fusion

The scientific world was outraged when cold fusion researchers presented their work to the public at a press conference before submitting their results for professional review. The Times has gone beyond this: it clearly has no intention of submitting its analysis to professional review, and feels entitled to present its conclusions on the front page of the Sunday newspaper.

The Times also supplemented its findings with comments from reporters who observed teachers in their classrooms. This procedure sends the message that the Times considers educational practice so straightforward that evaluating it requires no special background.

The Times is a newspaper, not a scientific journal. It has, however, been practicing educational research without a license. Would we accept this in other areas? Would we trust the Times to do a value-added analysis of brain surgery, with reporters critiquing surgical procedures?

What do you think of Dr. Krashen’s analysis? Is there any value in the LA Times’ approach? Or are their methods flawed?

Photo provided by Stephen Krashen, used by permission.

The opinions expressed in Living in Dialogue are strictly those of the author(s) and do not reflect the opinions or endorsement of Editorial Projects in Education, or any of its publications.