Should Teachers Be Grading Common-Core Tests?
Motoko Rich, intrepid education reporter at the New York Times, has a piece out called "Grading the Common Core: No Teaching Experience Required," and it raises an important question: should teachers be grading the tests associated with Common Core? Here's the set-up:
The new academic standards known as the Common Core emphasize critical thinking, complex problem-solving and writing skills, and put less stock in rote learning and memorization. So the standardized tests given in most states this year required fewer multiple choice questions and far more writing on topics like this one posed to elementary school students: Read a passage from a novel written in the first person, and a poem written in the third person, and describe how the poem might change if it were written in the first person.
The problem, of course, as Rich points out, is this: "the results are not necessarily judged by teachers."
Instead—you guessed it—the testing titans responsible for scoring the exams associated with Common Core (Rich's article focuses on Pearson) have done exactly what you would expect testing titans to do. They have attempted to cobble together a group of scorers in the most efficient (read: cost-effective) way possible:
There was a onetime wedding planner, a retired medical technologist and a former Pearson saleswoman with a master's degree in marital counseling. To get the job, like other scorers nationwide, they needed a four-year college degree with relevant coursework, but no teaching experience. They earned $12 to $14 an hour, with the possibility of small bonuses if they hit daily quality and volume targets.
On the list of things that don't make sense about education reform in the 21st century, one that ought to rank right near the top is the idea that wedding planners, retired medical technologists, marriage counselors, and people with one year of teaching experience 45 years ago should be scoring these rigorous new tests so diligently created for us by education experts to ensure that our children are not being left behind. It's fair to ask: if so much care is being taken to ensure that the tests were created by professionals for professionals to accomplish something so important, why are we entrusting the scoring of them to temporary employees found on Craigslist?
It's difficult to overstate how ridiculous this situation is, but it's not hard to try. It's even easier just to let Pearson grab a shovel and keep digging. Rich quotes a guy named Bob Sanders, who holds the impressive title of "vice president of content and scoring management at Pearson North America," and he wastes no time making the situation even worse. "From the standpoint of comparing us to a Starbucks or McDonald's, where you go into those places you know exactly what you're going to get," he says, unhelpfully.
"McDonald's has a process in place to make sure they put two patties on that Big Mac," he continued. "We do that exact same thing. We have processes to oversee our processes, and to make sure they are being followed."
Here's the thing: when most people think of McDonald's, they probably don't think about the fact that they could order one cheeseburger in Omaha and another in Ohio and get pretty much the same thing. They think about Super Size Me. Maybe I'm wrong, but whatever McDonald's is doing, it doesn't seem to be working. This is the analogy you choose?
And, anyway, we're talking about education, not cheeseburgers. The biggest problem with Sanders' comparison is that you don't know what you're going to get if you're scoring tests that are supposed to assess a student's ability to think critically. As Lindsey Siemens, a teacher quoted in Rich's article, puts it, "to take somebody who is not in the field and ask them to assess student progress or success seems a little iffy." I'll say. Even the scorers Pearson has hired talk about "going deep" to try to figure out what a student is trying to say. But without any experience in the classroom—without an understanding of the rhythms of the classroom, the culture of classroom spaces, the ways students and teachers talk to each other—it's exceptionally difficult to make those judgments. I doubt very much that it would stop most scorers from trying, but, in the end, a marriage counselor who hasn't spent a whole lot of time with fifth graders is not really in a position to even make an educated guess.
Naturally, Pearson says this is a good thing—the last thing it wants is for scorers, no matter what their backgrounds are, to be making assumptions about what students may or may not have meant when they wrote a response. But the idea that anyone reading a student essay meant to assess "critical thinking" and "complex problem solving" can do so in a "neutral, impartial way," as "independent consultant" Catherine McClellan put it in the article, suggests that we have already settled the question of what we want students to learn and what it would take for them to show it to us. Her statement actually reveals something deeply problematic about this whole enterprise: the idea that assessment of student learning can or must be neutral or impartial suggests that assessment is not about "taking sides" but about simply sorting the correct from the incorrect. Unfortunately, there is no "right" way to think critically or solve problems or, for that matter, to write. If we try too hard to define what critical thinking looks like, we change the very definition of what it means to think critically.
The flip side of McClellan's suggestion, of course, is that teachers do "take sides" when they grade student work (presumably, they take the side of the student), and therefore can't be trusted to do it right. Any good teacher will tell you that assessing student work is about making connections between where a student was when he started a learning experience and where he is now. It's about looking for progress, not penalizing deficiency, and that's not something that happens in the same way or at the same speed for every student. It's also highly dependent on the relationship between teacher and student. In that sense, of course teachers "take sides" when they grade their students. Why wouldn't they? Teachers exist to help students learn. Whose side would you expect them to take?
Of course Pearson has no real incentive to seek out professional teachers to score these tests; after all, Pearson exists for the primary purpose of earning a profit. But there's also probably another reason they don't want teachers doing this work: then those teachers would see the tests. The Great Unspeakable Fear of testing advocates is that teachers who have seen the tests will then turn around and tell their students all about what they saw—how many more ways can they say they don't trust teachers?—but there's also the possibility that teachers will realize that the product isn't quite all it was cracked up to be. I have no doubt that Pearson could recruit chronically underpaid teachers who are otherwise just "wasting" their summers on professional development to grade these tests. It seems quite apparent that they don't want to.
And that's too bad, because it bears repeating: teachers need to be seen as allies in the effort to remake education, not as enemies. Again and again, proponents of school reform overplay their hand, revealing so little respect for teachers that no effort to marginalize them from the process seems too great. Of course teachers should be grading Common Core tests; they're the ones who teach the Common Core. To pretend that they shouldn't makes assessment seem like an afterthought, which only reveals, once again, that this is not about helping students and teachers get better but simply about imposing a simplistic notion of accountability on public schools. The poverty of our thinking, sometimes, is breathtaking in its short-sightedness.