Assessing Deeper Learning, Part I
This post is by Rafael Heller, Principal Policy Analyst at Jobs for the Future.
Last week, in this space, Bob Lenz described the encouraging results of an important new study by the American Institutes for Research, tracking the progress of students who attended high schools in the Deeper Learning Network and comparing their outcomes to those of similar students from non-Network schools.
As Lenz notes, the effort to conduct rigorous empirical research into deeper learning is still in its infancy--indeed, AIR defines this as an early "proof of concept" study. Still, the findings are exciting: relative to the comparison group, students who attended Network schools were more likely to finish high school on time, went on to four-year colleges in greater numbers, got higher scores on state achievement tests, did better on assessments of problem solving, and rated themselves higher on measures of engagement, motivation, and self-efficacy.
This is great news, suggesting that well-designed high schools can succeed at teaching to the ambitious goals collected under the banner of deeper learning. But while it's important to celebrate these findings, let's not overlook the other, equally good news to come out of AIR's study.
The other concept that this research aimed to prove is the notion that the so-called "hard-to-measure" aspects of deeper learning--the development of inter- and intra-personal competencies--can in fact be measured.
Not everybody finds methodological achievements to be all that exciting, but this aspect of AIR's study deserves a rousing cheer as well. It shows that--as James Taylor, the study's director, told me--"there's a there there." The study proved (i.e., put to the test) the concept that deeper learning's component parts can be picked out from the background noise of life in schools and measured in serious ways. And that provides a powerful rebuttal to anybody who would argue that we're chasing ephemeral, feel-good-but-fuzzy-headed educational objectives. Actually, we can reply, the research suggests that these are meaningful outcomes that can be taught, learned, and assessed.
Which brings me to a second important report, released this week: David Conley's paper--the first in Jobs for the Future's series of Deeper Learning research reports--charting a new course for assessment and accountability in the nation's high schools.
As I'll describe in my next post, Conley's analysis suggests that it won't be easy for school systems and policymakers to get over their long-standing addiction to cheap, one-dimensional achievement tests. Nor will it be a quick, simple matter to create and scale up better assessment systems that provide more useful, multi-layered information about students' progress. However, given recent research into how people learn and what it means for them to be "ready" for college and careers, it has become impossible to keep pretending that our current testing approaches are adequate. Our only choice is to commit to doing the sort of hard, slow, methodologically sophisticated work that AIR has started, and which will lead, over time, to the building of large-scale assessment systems that measure the things that really matter.