
Looking at Class Sets of Work with MathMistakes.org

Last week, I had the great pleasure of welcoming Michael Pershan of MathMistakes.org into my MIT Introduction to Education class to discuss the joys of looking closely at student data. Last year, Michael helped me run a lesson using material from his site, and this year I was lucky that he was in town for the National Council of Teachers of Mathematics conference.

Most of MathMistakes.org is organized around single problems, and we used three of those single problems last year. Michael has been noodling recently over the limitations of single problems and the value of class sets, so this year just before class, he posted 14 responses from fourth graders to four fraction problems. All four problems ask which of a pair of fractions is larger: 2/3 or 3/2, 2/3 or 3/4, 2/5 or 3/10, and finally 3/7 or 2/5.

To guide our inquiry in class, we borrowed from the ATLAS Looking at Student Work Protocol developed by the National School Reform Faculty. This protocol offers a specific series of steps for analyzing student work. We amended the protocol slightly to serve our particular purposes, but the ATLAS protocol offered a foundation.

As a starting point, our protocol asks reviewers (what I'll call people looking at student work, in this case, my own MIT students) to make two important leaps. First, we ask reviewers to keep their contextual knowledge to a minimum. Reviewers look at work without detailed knowledge of the lesson, the unit, or the composition of the classroom. The idea is to look at the student work on its own terms. Next, we ask reviewers to assume good faith on the part of students: assume that each student is doing their very best thinking and putting their best effort into the work. Not every student does, but when reviewers go down the path of "this kid's not really trying," it distracts from the more interesting question of "well, what is this kid trying to do?"

So we had class sets of 56 problems: 14 students completing 4 problems each. We then systematically examined the class sets in four steps. Each step moves progressively up the "ladder of inference." We try to begin with low-inference observations before moving on to discussing correlations, hypotheses, and interventions.

First, we asked reviewers (working in groups of four) to notice salient details. This is an extremely difficult step without practice. The goal is simply to observe what students were doing. "This student drew a circle. This student drew a rectangle. This student started with a circle, crossed it out, then drew a rectangle." Noticing without judging is actually quite difficult.

The second step was to observe patterns. Several groups noticed that students all started with circular area representations of fractions. With fifths and sevenths, however, circles proved difficult to divide evenly (unless you grew up with six brothers and sisters, it's basically impossible to divide a circle into even sevenths), so students often abandoned circular representations for rectangular ones in the last two problems. This was quite a realization for groups; it wasn't totally obvious from looking at the problems, but the pattern seems quite clear in retrospect.


Reviewers also noticed that many students answered "The Same" for problem four. Those of you who can convert fractions to decimals might note that 3/7 is in fact bigger than 2/5, but not by much. Were the students mostly right, or just wrong? If I asked you, "Would you like 2/5 or 3/7 of a candy bar?", would you care which one you got?
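For the curious, putting both fractions over the common denominator 35 makes the margin concrete. A quick check with Python's standard-library fractions module (an illustrative aside, not part of the classroom exercise):

```python
from fractions import Fraction

# Problem four: 3/7 versus 2/5, which share the common denominator 35.
a = Fraction(3, 7)   # = 15/35
b = Fraction(2, 5)   # = 14/35

print(a > b)         # True: 3/7 is indeed the larger fraction
print(a - b)         # 1/35 -- a sliver of the whole candy bar
```

So "The Same" is wrong by exactly one thirty-fifth, which is part of what makes the answer feel mostly right.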

The third step is to hypothesize student understanding, or as the ATLAS Looking at Student Work Protocol puts it, to ask the question "From the student's perspective, what is the student working on?" Here we tried to figure out which students were applying ideas of least common denominator, which were trying to ensure accurate representations, which seemed to be following some kind of protocol, and which seemed to be winging it. The most fascinating debate of this step was looking at one student who had drawn a rectangular representation of fifths, and then a second rectangular representation of tenths created by replicating the fifths and then drawing a line through the middle to create tenths. Did the student go into the problem knowing that particular strategy, or did she draw a line through the second rectangle in a flash of insight? At this stage, reviewers begin to realize how easy it is to generate reasonable competing hypotheses about student thinking. We know more than we did before starting the exercise, but we still can't say for sure what students are thinking.

Finally, we began brainstorming possible interventions. We had a long discussion about what our goals were. Of course, in the end, we want students to convert to a least common denominator and compare with automaticity, but we generally agreed with Michael that it wasn't enough to just teach them the algorithm; we wanted them to develop some intuition about what fractions are and why the algorithm works. Generally, the group settled on the need for more arrays in student practice, and on teaching students a particular graphical protocol in which they build an array from two fractions by drawing horizontal lines in a rectangle for one denominator and vertical lines in the same rectangle for the other.
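The logic behind that graphical protocol can be sketched in a few lines of code (the function name and structure here are my own, purely illustrative). Horizontal lines cut the rectangle into b rows and vertical lines cut it into d columns, so every small cell is 1/(b*d) of the whole; comparing the two fractions reduces to comparing shaded-cell counts:

```python
def grid_compare(a, b, c, d):
    """Compare a/b with c/d on a shared b-by-d rectangle.

    Horizontal lines split the rectangle into b rows, so a/b shades
    a rows = a*d small cells; vertical lines split it into d columns,
    so c/d shades c columns = c*b cells. Each cell is 1/(b*d), so
    comparing shaded-cell counts compares the fractions directly.
    """
    first, second = a * d, c * b
    if first > second:
        return ">"
    return "<" if first < second else "="

# The fourth classroom problem: 3/7 vs 2/5 on a 7-by-5 grid.
print("3/7", grid_compare(3, 7, 2, 5), "2/5")  # 15 cells vs 14 cells, so ">"
```

The cell-counting step is, of course, just cross-multiplication in disguise, which is part of the appeal: the picture carries the same information as the algorithm students will eventually automate.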

We asked Michael what he planned to do next, and a playfully pained expression fell over his face. He didn't know. There was no right answer. He had been doing lots of array work with them, with arrays of 24ths, and they still hadn't developed a good intuition for comparing area models. It was wonderfully unclear exactly what to do next, but of course by Sunday night, he would have to pick something.

In the end, I had 25 MIT undergraduates spend 75 minutes discussing the intricacies of fourth-grade mathematical thinking, exclusively by looking at answers to four problems. What the exercise suggests, more than anything else, is that very close examination of student work reveals a rich complexity in students' thinking, a complexity that raises as many questions as answers.

What MathMistakes.org teaches us is that anyone who wants to engage in conversations about this rich complexity now has friends to do it with. The Math TwitterBlogoSphere, hashtag #MTBoS, is filled with people eager to join in conversations about lesson design, student thinking, and evidence from student work. Having Michael join me in class was a great introduction, for my pre-service teachers, to all of the wonderful educators out there online who are excited to support one another's growth.

For regular updates, follow me on Twitter at @bjfr and for my publications, C.V., and online portfolio, visit EdTechResearcher.


The opinions expressed in EdTech Researcher are strictly those of the author(s) and do not reflect the opinions or endorsement of Editorial Projects in Education, or any of its publications.
