National Math Panel: Under A Microscope
Less than a year after a federal panel offered its blueprint for how to improve teaching and learning in math, a number of academic researchers have put some sharply worded critiques of that work in print.
Their reviews have been published in a special issue of the Educational Researcher, a journal of the American Educational Research Association. The AERA, a well-known, nonpartisan Washington organization, invited and published the essays, which examine the final report of the National Mathematics Advisory Panel, titled “Foundations for Success.”
The math panel was appointed in 2006 by President Bush to study effective strategies for improving student learning in math, particularly for preparing students for algebra. In sum, the panel, composed of academic scholars, cognitive psychologists, and others, called for a more streamlined pre-K through grade 8 math curriculum, with a strong emphasis on making sure that students master certain content in the early grades—particularly whole numbers, fractions, and aspects of geometry and measurement. The panel’s 19 voting and 5 nonvoting members reviewed about 16,000 documents over an 18-month period. The final, 90-page report, released in March, struck a conciliatory tone with regard to the so-called “math wars,” ideological disputes about how to teach math, calling for a mix of curricular approaches and teaching styles.
Many of the essays in the AERA journal, not surprisingly, take issue with one of the more controversial aspects of the panel’s work: the standards of evidence its members relied on to judge the effectiveness of math programs and curricula. The panel gave the strongest weight to scientific studies that “meet the highest methodological standards” and that have been replicated in different kinds of settings. To critics, those standards resulted in too much weight being given to a research method known as the randomized controlled trial. The panelists’ reasoning, as explained in one of the AERA essays, was that holding math programs to high standards was necessary if the panel’s recommendations were to have relevance on a national scale in schools around the country.
One of the essays, written by Paul Cobb and Kara Jackson, criticizes the panel’s “unflagging adherence” to experimental studies, which they say “adversely affects the quality and usefulness of [its] recommendations.” Another essay, whose lead author is Jere Confrey, asserts that the panel applied its own standards inconsistently from math topic to topic, resulting in “serious breaches” of the panel’s ability to produce a high-quality, objective report. (A few years ago, Confrey led a panel of the National Research Council, which produced a 2004 report on how to judge the effectiveness of math curricula. The NRC is an independent research entity chartered by Congress.)
Confrey and her co-authors also allege that the panel’s work is already “contributing to a marginalization of mathematics educators and to the neglect of decades of research on children’s learning of mathematics.”
Another essayist, Finbarr C. Sloane, of Arizona State University, has a different take. He not only questions the panel’s reliance on randomized trials but also suggests a new “working model” for studying math education.
The panel’s chair and vice chair, Larry Faulkner and Camilla Persson Benbow, respond to these critiques with their own essay defending the standards of evidence. They also seek to explain the panel’s methods and the constraints under which its members worked. They note that the panel needed to establish clear criteria for judging math research, even if definitions of what constitutes scientific evidence amount to a “moving target.” Several panelists, during the group’s open discussions, voiced surprise at the lack of research about what works in K-12 math education, despite the broad public worry about U.S. students’ mixed performance in that subject. Faulkner and Benbow write that they hope the panel’s work can direct academic research where it is most needed.
Readers of the report should see it not “as the end of an initiative,” they write, but as “the first step of a more formalized process that moves from rhetorical handwringing to the framing of initiatives and the development of future research directions.”
After you’ve sampled the AERA essays, I invite your own commentaries in this forum.