
Study: Suburban Districts Falter in Global Competitiveness

By Sarah D. Sparks — September 27, 2011

As policymakers debate potential changes to accountability under the No Child Left Behind Act, a new report from the George W. Bush Institute argues that even America’s top school districts are “mediocre” compared with their counterparts in other industrialized countries.

The report, published in an online preview of the Winter 2012 issue of Education Next, compares average district achievement in math and reading with the average achievement of students in industrialized countries, as measured by the Program for International Student Assessment, or PISA. By that measure, even wealthy districts in areas like Beverly Hills, Calif., and Fairfax, Va., come out in the middle of the pack or worse.

The report’s accompanying interactive website allows users to look at how 13,636 school districts would rank in student achievement in comparison to the average student performance in 25 industrialized nations in math and reading. It also breaks out comparisons of average student performance in three countries: Canada, Norway, and Singapore. In the well-to-do Washington suburb of Fairfax, Va., for example, the average math performance would place its students in the 49th percentile, just below the average of the 25 peer countries.
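To make that kind of percentile placement concrete, here is a minimal sketch in Python. The pooled international mean, standard deviation, and district average below are invented for illustration, and the normal-distribution shortcut is an assumption; the report’s actual scaling procedure is not described in this article.

```python
from statistics import NormalDist

# Hypothetical illustration: place a district's average student within the
# pooled score distribution of the 25 comparison countries. All numbers are
# invented; the report's actual linking procedure is not described here.
POOLED_MEAN = 500.0  # assumed mean of the pooled international scale
POOLED_SD = 100.0    # assumed standard deviation of that scale

def percentile_vs_pool(district_avg: float) -> float:
    """Percentile of a district's average student in the pooled
    international distribution, assuming the pool is roughly normal."""
    return 100 * NormalDist(mu=POOLED_MEAN, sigma=POOLED_SD).cdf(district_avg)

# A district whose average student scores just below the pooled mean lands
# just under the 50th percentile, as described for Fairfax above.
print(round(percentile_vs_pool(497.5)))  # -> 49
```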

“We thought it was important to name the districts, because people have an amazing ability to rationalize away the results they see. They have an amazing ability to say, well, that’s very sad, but it’s not what’s happening in my district,” said Jay P. Greene, a professor of education reform at the University of Arkansas and an institute fellow, who co-authored the report with Josh B. McGee, the vice president for public accountability initiatives at the Laura and John Arnold Foundation.

“As long as the suburban elite believe that, they won’t commit to real education reform on a large scale,” Greene told me. “To do a lot better, what’s becoming clear to me is we have to make suburbanites understand that education reform is not just about poor kids in the city, but that education reform is about their own kids.”

The report is likely to become a hot talking point in the ongoing debates about global competitiveness and international benchmarking, but education watchers may want to go through the findings with a fine-toothed comb. The report is the first to attempt to create international rankings at the district level, and researchers are already raising serious concerns about the validity of its global comparisons.

“The methodology in this report is highly questionable,” warned Jack Buckley, the commissioner of the National Center for Education Statistics.

The problems stem from the difficulty of making international comparisons in the first place. There is no test given to every student across industrialized countries, so Greene and McGee came up with their rankings by using three testing systems made for very different purposes:

• Myriad state assessments, used annually for NCLB accountability in grades 3 through 8 and once in high school, usually in 10th grade;

• The National Assessment of Educational Progress, or NAEP, which is given to a representative sample of 4th and 8th graders in math and reading in each state in odd-numbered years, including 2003, 2005, and 2007; and

• PISA math and reading exams, which are given to a representative sample of 15-year-olds in each of the participating countries in the Organization for Economic Co-operation and Development once every three years, including 2003, 2006, and 2009.

The researchers had to consolidate average proficiency scores for students across different ages and tests covering different material, which in many cases were not administered in the same years.

Greene said that while state, NAEP, and PISA tests cover different content across different grades, “the tests should correlate with each other because they are getting at some underlying amount of knowledge.

“That’s why we feel comfortable with making the comparison across tests and across grades,” he said. “If you are at the top of the heap in 6th grade, you are likely to remain at the top in 10th grade.”
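Greene’s reasoning is essentially the logic of percentile linking: if different tests rank the same students in roughly the same order, a percentile on one test can stand in for the same percentile on another. The sketch below illustrates that idea with invented scores; it is a simplified stand-in, not the report’s actual equating procedure, which the article does not spell out.

```python
# Minimal sketch of percentile linking across tests, the general idea
# behind putting state, NAEP, and PISA results on one scale. All scores
# below are invented; the report's actual method is not detailed here.

def percentile_rank(score: float, reference: list[float]) -> float:
    """Percent of the reference group scoring at or below `score`."""
    return 100 * sum(s <= score for s in reference) / len(reference)

# Hypothetical reference distributions on two different tests.
state_scores = [210, 225, 240, 255, 270, 285, 300, 315, 330, 345]
naep_scores = [220, 232, 244, 256, 268, 280, 292, 304, 316, 328]

# A district averaging 300 on the state test sits at the 70th percentile.
pr = percentile_rank(300, state_scores)
print(f"state-test percentile: {pr:.0f}")

# Under the linking assumption, that district is treated as comparable to
# a 70th-percentile performer on NAEP, whatever NAEP score that implies.
naep_equivalent = sorted(naep_scores)[int(pr / 100 * len(naep_scores)) - 1]
print(f"approximate NAEP equivalent: {naep_equivalent}")
```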

Greene noted that the rankings of 15 large urban districts on the NAEP’s Trial Urban District Assessment match up with those districts’ rankings in the current report’s calculations, but he added that the study was not able to compare its estimated PISA rankings for districts with actual PISA rankings in states or districts that took the test, such as Scarsdale, N.Y.

That is likely to be a problem, Buckley said, because without outside confirmation of the international comparisons, the study must assume that students’ relative standing does not change as they grow older and that countries do not improve or decline relative to one another over the years.

“There’s more interpolated data for PISA in this than there is actual data,” Buckley said of the report. “Simple linear modeling is unlikely to capture the true growth characteristics of the PISA countries.”
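To see what Buckley means, consider how an off-year PISA figure has to be manufactured when the test is given only every three years. The sketch below linearly interpolates a country mean for 2005 from the 2003 and 2006 administrations, using invented scores; actual score trends need not move in a straight line, which is the heart of his objection.

```python
# Illustration of the interpolation at issue: PISA is administered every
# three years, so a country's mean for an off-year such as 2005 must be
# estimated. Linear interpolation assumes steady change between tested
# years. The scores below are invented for demonstration.
pisa_math = {2003: 503, 2006: 498, 2009: 501}  # hypothetical country means

def linear_interpolate(year: int, series: dict[int, float]) -> float:
    """Estimate a score for an untested year from the nearest tested years."""
    years = sorted(series)
    lo = max(y for y in years if y <= year)
    hi = min(y for y in years if y >= year)
    if lo == hi:
        return series[lo]
    frac = (year - lo) / (hi - lo)
    return series[lo] + frac * (series[hi] - series[lo])

# The 2005 estimate falls two-thirds of the way from the 2003 mean to the
# 2006 mean, whether or not scores actually moved that way in between.
print(linear_interpolate(2005, pisa_math))  # -> 499.67 (approximately)
```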

The report is likely to stir conversation about how best to compare American students with their peers—and eventual competition—in other countries. It also may help spur the next level of international education research, going beyond rankings to look at how policies, teaching, and resources in different countries affect student achievement.

“We’re not measuring the value-added of schools or of teachers; we’re just measuring the level of achievement,” Greene said. “We’re just like a big thermometer; we want to know what temperature it is, not the weather patterns that got us that way.”

A version of this news article first appeared in the Inside School Research blog.