Study Shows How Discrimination Creeps Into Grading Practices
A study released this month by Harvard University's Kennedy School of Government uses an innovative experimental design method to get an intricate picture of how cultural discrimination plays into the grading decisions that teachers make.
For their study, researchers Rema Hanna and Leigh Linden traveled to India, a country known for its deeply entrenched caste system. They recruited elementary- and middle-school students from all levels of society to take part in a contest. Students took a battery of tests in mathematics, language, and art and were told that the highest-scoring student in each age group would win 2,500 rupees, or about $58, roughly half of their parents' average monthly income. I'd say that's pretty high stakes.
Meanwhile, 120 teachers from both government and private schools were recruited to grade the tests and were paid about $5.80 each for their efforts. The tests, however, were randomly assigned different student characteristics. One test, for instance, would list the student on a cover sheet as a member of the Brahmin caste, the highest of India's four social groups, while another might describe the student as belonging to the lowest caste, the Shudra. The tests were also graded separately by a research staff member who had no knowledge of any of the students' characteristics.
As might be expected, the results showed that teachers, on average, assigned scores 3 percent to 9 percent lower to students described as low-caste than to students described as high-caste. What was particularly interesting, though, was that teachers from low-caste groups were driving most of that discrimination; no evidence of bias was found among teachers from high-caste groups. Low-caste teachers also tended to direct their bias most often toward the lowest-performing students in the low-caste group, the students who presumably best fit the stereotype.
On the other hand, the researchers did not find any evidence that teachers were grading boys' or girls' tests any differently—except in the cases of the highest-performing girls, who were graded slightly more harshly than their high-performing male counterparts. (Incidentally, check out the Web-chat that I moderated on the gender gap at the top in mathematics and science. Could this be another possible explanation?)
Another wrinkle in this study: Teachers' caste biases seemed to lessen as they got closer to the bottom of the test pile. The researchers hypothesize:
"It appears that when grading students early in the process, when the overall distribution of scores is unknown, teachers may use the caste of a student not as a signal of performance, but rather as a signal of where the child will eventually land in the overall distribution of tests." Hanna and Linden suggest that schools can counteract some of that bias by taking time to familiarize teachers with new tests.
This is all important, of course, because a long line of studies has documented what researchers call the "Pygmalion effect" in education: children tend to fulfill their teachers' expectations of their performance. That's one strike that disadvantaged children don't need, in any society.