Four Surprising Truths About U.S. Schooling
The OECD recently issued its new book-length report, "Measuring Innovation in Education 2019." As I've previously noted (here and here), the authors offer some fascinating peeks at how the OECD nations compare when it comes to policy and practice around STEM and language arts. Today, I'll dig into this one last time to flag a few surprising findings that seem to challenge received wisdom.
Emphasis on testing is pretty typical by international standards: There's been much talk in recent years about the rise of "test and punish" schooling, and I certainly share the sentiment that testing mania went too far. Notably, though, for better and worse, U.S. practice during the 2007-2015 period broadly reflected OECD norms. Across the OECD in 2015, 77 percent of math teachers reported that they "put major emphasis on classroom tests to monitor students' progress"; the U.S. figure was 83 percent. Both figures had risen by roughly the same half-dozen points since 2007 (Figure 9.3). In science, the OECD average increased from 60 percent to 73 percent, while the U.S. figure rose from 52 percent to 66 percent (Figure 9.4). In other words, the U.S. was a bit above the international norm in math and a bit below it in science. While there may be too much testing across the globe, it's hard to look at this and argue that the U.S. is an outlier.
Elementary students are using computers less frequently: The rise of digital devices in schools has provoked a range of responses—from cheering the ability of ed tech to transform learning to lamenting its baleful effects. Well, in an interesting twist, it looks like elementary students were using less classroom technology in 2016 than a decade before. In 2006, 74 percent of U.S. 4th graders said they used "computers at school at least once a week" (Figure 11.6). By 2016, that share had declined 13 points to 61 percent. Meanwhile, the OECD average also fell, though by less than half as much—from 46 percent to 41 percent. It's not clear whether this means that students are using phones instead, that schools are moving to restrict the use of ed tech, or simply that the meaning of "use a computer" has changed over time.
Hardly anyone does ability grouping (or at least, admits to doing it): Heated debates over de-tracking have often given the impression that the U.S. has tended to engage in ability grouping to an unusual degree. Whatever may have once been true, that's no longer the case. In 2006, across the OECD, just 9 percent of school principals said their schools had a "policy of grouping students by ability into classes"; in 2016, that figure was 8 percent (Figure 12.1). In the U.S.? The comparable figure was 4 percent in both years. Indeed, the nations that saw the biggest jump between 2006 and 2016 were the Netherlands, South Korea, and . . . wait for it . . . Finland, which went from 0 percent to 5 percent tracking over that period. Now, trusting these results requires trusting that principals were willing to tell the truth about tracking when filling out the materials. Are they? Your guess is as good as mine.
Teachers report an increasing amount of collaboration: One complaint about the Bush-Obama years is that an overemphasis on testing and evaluation left teachers feeling alienated from one another and more isolated than ever. The data suggest a different story. In 2007, 37 percent of U.S. math teachers said that they "often or very often" collaborated with peers to plan and prepare materials, a bit under the OECD average of 40 percent (Figure 13.31). By 2015, the U.S. figure was up to 61 percent, surpassing the rising OECD average. In 2007, just 5 percent of U.S. fourth-grade teachers said they visited another classroom "often or very often." That matched the OECD norm (Figure 13.33). By 2015, the U.S. figure of 21 percent outdistanced a rising OECD average. In short, through nearly a decade of No Child Left Behind, Race to the Top, and teacher evaluation, teachers reported an increase in collaboration.
What to make of all this? I'm left with two thoughts, really. One is that a lot of our fevered education debates are fueled by assumptions that can be off-base or flat-out wrong. A second is that our efforts to determine the truth regarding so many of these questions—from tracking to collaboration to computer use—are hostage to the data we can collect and to the veracity of the information we receive. It seems to me that it's a good thing if this makes us a little more humble about what we think we know and about which policies and practices are "evidence-based."