
Common Errors in Coverage of Education

By Walt Gardner — June 24, 2011

Once upon a time, errors in reportage and commentary about education warranted scant attention because relatively little was at stake. But today, these errors have widespread implications. I’d like to focus on three common mistakes, in the hope that readers will become more discerning about them.

The first is cherry picking. As the term suggests, this means selecting only the data that serve one’s position. Since so much of the accountability movement depends on quantifying outcomes, it’s easy to see why those with a particular agenda engage in the practice: they can then claim to have evidence supporting their position.

Charter schools provide an instructive example. In 2009, Caroline Hoxby of Stanford University looked at charter schools in New York City and concluded that students there performed better on state tests than students who attended traditional public schools. But at about the same time, Margaret Raymond of Stanford University compared the academic performance of charter schools with traditional public schools serving demographically similar students in 15 states and the District of Columbia. She found that only 17 percent of charters posted results that were significantly better, while 46 percent were similar and 37 percent were significantly worse. It’s not hard to guess which study charter supporters cite and which one charter critics cite.
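To make the mechanics concrete, here is a minimal sketch, with invented score gains for ten hypothetical schools, of how selecting a favorable subset changes the story:

```python
# Hypothetical test-score gains for ten schools (all numbers invented).
gains = [-4, -3, -1, 0, 1, 2, 3, 5, 6, 8]

# The honest figure: the average over every school.
full_average = sum(gains) / len(gains)

# The cherry-picked figure: keep only the schools that improved.
winners = [g for g in gains if g > 0]
picked_average = sum(winners) / len(winners)

print(f"All ten schools:   average gain = {full_average:+.1f}")    # +1.7
print(f"Improving schools: average gain = {picked_average:+.1f}")  # +4.2
```

Same data, two very different headlines, depending on which schools make it into the sample.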

The second is the base rate fallacy, also known as base rate neglect. It means drawing a generalization from a selected sample without bothering to check whether the same pattern exists in the general population. If it does, it’s very difficult to make a compelling case for a particular conclusion.

The release of scores on the latest National Assessment of Educational Progress illustrates the base rate fallacy. Fourth- and eighth-graders posted modest growth in reading and math achievement from 2003 to 2009. Based on these data, Joel Klein, chancellor of New York City Public Schools from 2002 to 2010, wrote an op-ed for The Washington Post claiming that the reforms he instituted were responsible for the gains in New York City (“Why great teachers matter to low-income students,” Apr. 9). But he neglected to point out that during the same period, other urban school districts that did not implement such reforms reported very similar results. The gains he cited, in other words, were roughly the base rate.
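A rough sketch of the check a skeptical reader could run, using invented district gains rather than real NAEP figures, shows why the base rate matters:

```python
# Invented score gains, 2003-2009, for five hypothetical urban districts.
gains = {
    "reforming district": 5,
    "district B": 4,
    "district C": 6,
    "district D": 5,
    "district E": 4,
}

claimed = gains["reforming district"]
others = [g for name, g in gains.items() if name != "reforming district"]
base_rate = sum(others) / len(others)

print(f"Gain in the reforming district: {claimed} points")
print(f"Average gain elsewhere:         {base_rate:.2f} points")
# A gain that merely matches the base rate is weak evidence for the reforms.
```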

The third is Simpson’s Paradox. This refers to a situation in which the trend for a whole group from one time to another runs opposite to the trend within each of its subgroups over the same period, because the composition of the subgroups has changed in the interim. For that reason, the omitted variable fallacy might be the more descriptive name.

The SAT serves as a case in point. Scores posted today cannot be compared fairly with scores posted even two decades ago, let alone in 1926, when the test was conceived. That’s because the pool of students taking the SAT has changed dramatically. When the SAT, then an acronym for Scholastic Aptitude Test, first came into being, test takers were largely an elite group. Today’s test takers, however, much more closely resemble the general student population. A broad population tends to post lower average scores than an elite one. For example, a record 1.53 million students took the SAT in 2009, about 40 percent of them minority students, compared with 29 percent in 1999. The changing sizes of the subgroups shift the overall results.
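A back-of-the-envelope calculation, with invented subgroup averages and counts, shows how a broader pool can pull the overall average down even when no subgroup’s performance changes at all:

```python
def overall_average(avg_a, n_a, avg_b, n_b):
    """Weighted mean of two subgroups' scores."""
    return (avg_a * n_a + avg_b * n_b) / (n_a + n_b)

# Earlier, narrower pool (all numbers invented): prints 522.0
print(overall_average(avg_a=540, n_a=700_000, avg_b=480, n_b=300_000))

# Later, broader pool with the SAME subgroup averages: prints 512.0
print(overall_average(avg_a=540, n_a=800_000, avg_b=480, n_b=700_000))
```

Neither group scored any worse, yet the overall average dropped ten points purely because the mix of test takers shifted.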

Another example of Simpson’s Paradox was on display in the fall of 1973 at the University of California, Berkeley. Its graduate division admitted about 44 percent of male applicants but only 35 percent of female applicants. Fearing a lawsuit, the associate dean asked a statistics professor to parse the data. He found no evidence of bias. Instead, he discovered that women had disproportionately applied to the departments that admitted the smallest percentage of applicants (“When Combined Data Reveal the Flaw of Averages,” The Wall Street Journal, Dec. 2, 2009).
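The arithmetic behind that finding is easy to reproduce in a stylized two-department sketch (the numbers below are invented, not Berkeley’s):

```python
# (applicants, admitted) by sex in two hypothetical departments.
departments = {
    "lenient dept":   {"men": (800, 480), "women": (200, 140)},  # 60% vs. 70%
    "selective dept": {"men": (200, 40),  "women": (800, 200)},  # 20% vs. 25%
}

# Women fare better WITHIN each department, yet worse overall,
# because far more women applied to the selective department.
for sex in ("men", "women"):
    applied = sum(dept[sex][0] for dept in departments.values())
    admitted = sum(dept[sex][1] for dept in departments.values())
    print(f"{sex}: {admitted}/{applied} admitted = {admitted / applied:.0%}")
```

Running it prints 52 percent for men and 34 percent for women overall, even though women are admitted at a higher rate in both departments.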

Despite these pitfalls, the media too often continue to take at face value the claims made by advocacy groups. I don’t expect education reporters and commentators to become experts in statistics, but I hope they will take the time to become more skeptical about assertions made in today’s accountability movement. Too much is on the line to do otherwise.

The opinions expressed in Walt Gardner’s Reality Check are strictly those of the author(s) and do not reflect the opinions or endorsement of Editorial Projects in Education, or any of its publications.