
Has the Research on Formative Assessment Been Oversold?

By Stephen Sawchuk — May 21, 2009

Over the last decade, the use of “formative assessments” by teachers has become a huge topic of interest.

Though called assessments, in practice they’re more like exercises teachers use to gather immediate feedback on whether a student is responding to an instructional technique, with reference to a particular curricular objective.

Proponents say the practice has a strong research base showing it can dramatically improve student achievement. (And now that testing companies are labeling a lot of products as “formative,” it’s a big moneymaking endeavor, too.)

But recently, some experts have suggested that it may be time to take a closer look at the practice and its research base. At an event held earlier this month by the Educational Testing Service, ETS Distinguished Presidential Appointee Randy Bennett walked attendees through the research literature.

And as in the child’s game of telephone, something seems to have gotten garbled in the retelling.

In 1998, two researchers from King’s College London, Dylan Wiliam and Paul Black, published an article in the journal Assessment in Education based on a review of hundreds of studies on formative assessment. In the article, they noted that the studies were too diverse to be meaningfully summarized through a meta-analysis (a wonky term for a scientific research synthesis) into a single effect-size statistic. In fact, they noted that only a handful of the studies were rigorous, quantitative ones.

But they went on to publish a second article, in Phi Delta Kappan, alleging that the positive effect sizes of formative assessment ranged from 0.4 to 0.7 across 40 quantitative studies, a medium-to-large gain.
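(For context, an effect size expresses the difference between two groups in standard-deviation units. A common version, and a reasonable stand-in here since the studies reviewed used varying estimators, is Cohen’s d:

$$d = \frac{\bar{x}_{\text{treatment}} - \bar{x}_{\text{control}}}{s_{\text{pooled}}}$$

By Cohen’s rough benchmarks, 0.2 is small, 0.5 medium, and 0.8 large, which is why a range of 0.4 to 0.7 reads as a medium-to-large gain.)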

Subsequently, leading experts in the testing industry have cited this article to support the practice of formative assessment. But they haven’t gotten the details of the review right. They have called the review a meta-analysis. (It wasn’t.) The cited effect sizes grew to as high as 1.0. And the number of quantitative studies supposedly reviewed jumped to 250.

And a handful of more recent studies, Mr. Bennett indicated, suffer from selection bias and other methodological issues.

“The research is not as unequivocally supportive of the effects of formative assessment as it is sometimes made to sound,” he said.

What does this mean for teachers? In essence, it means formative assessment, though promising, isn’t necessarily a silver bullet.

It’s an old theme, but there is still a lot more work to be done to make sure that such assessments are valid and well designed, and that they yield useful results for teachers.


A version of this news article first appeared in the Teacher Beat blog.