Is Ed. Data a Fitbit or a Post-Mortem?
California has dumped its single-indicator, test-score-driven accountability measure and adopted eight state priorities, some of which can be constructed partly from local measures. Now the hard part: what will the state measure, and how? And with what consequence?
Will the new data system work like a Fitbit, one of the physical activity monitors that provide constant feedback, or, as University of Oregon professor David Conley put it, like a post-mortem examination of deceased schooling? Conley's question, which he raised at a recent California State Board of Education meeting, crystallizes the design problem the state faces: a single set of indicators is incompatible with what the system needs at different levels.
There's both philosophy and plumbing here.
The philosophy flows from what I've been calling California Exceptionalism: the state's increasingly notable effort to dissociate education reform from a generation of negative incentives. The state's vision is to use its new Local Control Funding Formula to create a virtuous circle of data, resource allocation, feedback, and assistance where necessary.
The multiple indicator idea is spreading rapidly. It's present in the U.S. Senate version of legislation to replace the No Child Left Behind Act with a much belated update of the federal government's basic elementary and secondary school law. And multiple indicators are part of the 12-state Innovation Lab Network assessments.
Philosophy leads to plumbing. In response to legislation, the State Board of Education is creating evaluation rubrics, essentially the data elements that schools will be required to collect and report. It's required to have something in place by fall. The board is trying to design a dashboard of visual indicators to display its rubrics.
The dashboard data are supposed to lead districts and schools toward cycles of continuous improvement as they develop the annual spending and academic plans called the Local Control Accountability Plan (LCAP).
The first round took place last year. The results weren't perfect, but school districts universally preferred the new system to the old categorical funding formula. The new system fostered integration of financial and educational planning. Parents and community groups were more involved, at least in some places.
The LCAP system is complex (see graphic below) with lots of parts.
County offices of education are supposed to support districts in their planning when necessary, and districts send their LCAP plans to the county office for approval. Then districts implement programs of ongoing professional learning all built around the goals and data, and the cycle begins again.
But as my annotations (the red scratching from the talented 'On California' graphics department) show, folks in Sacramento face a huge design problem. Anyone assembling the dials and buttons for a dashboard confronts three requirements that are ultimately incompatible with a single set of indicators.
First, multiple indicators face a skeptical public. Most immediately, the dashboard concept draws critics who want an easy-to-understand single indicator of school performance. It will take considerable effort to extinguish that mindset.
Second, schools and districts will need to turn from documenting last year's results on the eight state priorities to using data to help their organizations improve. Practically, this means moving from post-mortem data to leading indicators, looking for the kinds of things that suggest a school will succeed in the future.
Unless they do, the new data system will become yet another exercise in compliance behavior. School districts will collect and report the required data, but the information won't help their schools perform better; or, worse, schools will target the indicator itself instead of the underlying learning goals.
Third, at the teacher and student levels, how do the dashboard indicators create information that helps students learn? States and school districts, to a certain extent, work on an annual feedback cycle as exemplified by the LCAP cycle illustrated above. The student-teacher feedback cycle requires much more frequent looping. In a Fitbit world, students would get data that they could react to, self-monitor, and perhaps take some pride in.
As it stands now, we're a lot closer to a post-mortem than a Fitbit.
(Next: The politics of multiple indicators and a skeptical public.)