Computerized Essay-Scoring Helpful for Student Feedback?

By Liana Loewus — August 21, 2014

This article is cross-posted from the Curriculum Matters blog.

Despite the many negative associations with automated essay scoring, there is some evidence it may actually be more effective at changing student behavior than human scoring, Annie Murphy Paul wrote recently in the Hechinger Report.

For the most part, discussions about computerized grading (a.k.a. robo-scoring, machine scoring, A.I. scoring) center on how it’s used in assessment. Both PARCC and Smarter Balanced, the two state consortia developing common-core-aligned tests for next spring, considered using artificial intelligence for scoring writing tasks, but are likely to go with hand scorers for now.

The typical back-and-forth on the topic (and there's plenty of it) is about accuracy. A 2012 study found no significant differences between human and computer scorers. But can computerized scoring systems truly gauge a student's grasp of language and writing skills? Can the systems be gamed, as Les Perelman, a former director of writing at the Massachusetts Institute of Technology, suggests? Are they able to measure creativity?

Paul, in the Hechinger Report, comes at this from a different angle. She suggests computerized scoring can be a valuable teaching tool because students react better to receiving edits from a computer than from a human.

Citing a 2010 study by Khaled El Ebyary, a professor of applied linguistics at Alexandria University in Egypt, and Scott Windeatt, a senior lecturer in the Center for Research in Linguistics and Language Science at Newcastle University in England, she writes that, when interacting with the computerized editor, students were much more likely to have positive feelings and to revise their work. Some students even saw it as game-like, which boosted their motivation. Meanwhile, “comments and criticism from a human instructor actually had a negative effect on students’ attitudes about revision and on their willingness to write,” according to Paul.

Students experience a “disinhibition effect” with the technology, she said—similar to the one that makes medical patients more likely to answer health questions truthfully when on a computer than when sitting face-to-face with a practitioner.

“It’s the very non-humanness of a computer that may encourage students to experiment, to explore, to share a messy rough draft without self-consciousness or embarrassment,” Paul writes. “In return, they get feedback that is individualized, but not personal--not ‘punitive.’”

Earlier this year, Education Week contributing writer Caralee Adams wrote that the technology for automated scoring is becoming more sophisticated, and that more players are entering the market. She quotes a 7th-grade English teacher whose students use an online essay-scoring program to get feedback before handing in their work.

“Is it perfect? No,” he said. “But when I reach that 67th essay, I’m not real accurate, either. As a team, we are pretty good.”

A version of this news article first appeared in the Digital Education blog.