
Will a Skeptical Public Accept Multiple Indicators?

By Charles Taylor Kerchner | June 03, 2015

California is designing a dashboard of indicators to replace its single-number accountability system, which was based almost entirely on student performance on a once-a-year standardized test. The underlying question is whether a skeptical public will accept it.

To implement the multiple-indicator idea, California suspended its test-score-driven Academic Performance Index, replacing the single number with eight state priorities. In addition to performance on the new Smarter Balanced Assessment Consortium tests, other pupil outcomes will be considered, including the rates at which English learners are reclassified as fluent and the percentage of students who graduate “college and career ready.”

Also, schools will be evaluated on the conditions of learning they create: Do students have access to high-quality coursework, do fully credentialed teachers teach them, and are state curriculum standards implemented? And schools will have to consider levels of student and parent engagement.

Tracking these indicators will throw off lots of data, perhaps as many as 50 different items, which then need to be disaggregated by race, ethnicity, and other student characteristics.

Dashboards provide a way for people to visualize many indicators at once, and the developers of the eight state priorities hope that educators and the public can pick out problems that need immediate attention. The critical indicators may differ from school to school. As California State Board of Education President Michael Kirst has said, “If you’re driving over the mountains, and your car starts to steam, your primary concern is not whether the car has enough gas. Schools need to look at the indicators for the problems that they are trying to solve today.”

As I wrote earlier, multiple indicators face a skeptical public.

The larger skepticism surrounds trust in local governments, a keystone of Gov. Jerry Brown’s subsidiarity policies of pushing problems down to the level of government that can deal with them most directly. “Advocacy groups are generally very supportive of multiple measures,” said Oscar Cruz, president of Families In Schools, in a recent interview. “They want to move beyond test scores.” But, still, distrust of local government is palpable among civil rights advocates, and it extends to indicators. “Unless we recognize the power dynamics, local control accountability will just be a conversation between unions and administrators,” Cruz said. “By the time it comes to public comment, it’s a done deal.”

There’s also fear that so many indicators will confuse parents, and that locally created ones will not be valid.

These parents and a skeptical public want an easy-to-understand single indicator of school performance. University of Oregon professor David Conley calls this “the realtor problem.” Realtors need an easy way to tell prospective buyers that the neighborhood schools are good. The need-for-a-single-number problem might also be called the journalist problem, the economist problem, or the civil rights advocate problem. All depend on an unambiguous indicator to make their case.

There is broad agreement that the “test and punish” mode of single-indicator accountability doesn’t work, but that does not make it any easier to create trust and working relationships among parties whose primary occupation has been suing one another.

One approach to “the realtor problem” comes from the California Office to Reform Education (CORE), the districts that banded together to seek a waiver from the requirements of the federal No Child Left Behind Act. CORE’s member districts educate about 1 million students and include some of the state’s largest. Noah Bookman, CORE’s chief accountability officer, has been working with GreatSchools, whose website contains ratings of more than 200,000 schools nationwide, to expand its rating system into a multiple-indicator dashboard.

The mockup (above), using made-up data, presents a score on CORE’s School Quality Improvement Index and a five-year trend, providing an immediate overall rating. Then it ranks the school on seven dimensions, compares it to the district and the state, and provides an arrow to illustrate the trend.

Finally, it provides a placeholder space for a state rating, should the state again adopt a single-indicator rating system.

A set of tabs at the top of the report shows that data on Latino students are being displayed. In this illustration, a user could also show data on all students or low-income students.

Rick Miller, CORE’s executive director, spoke of the philosophy behind their new indicators:

Our work is a rejection, if you will, of two central problems that we found with No Child Left Behind. It takes too narrow a look at what makes a school successful. If you are going to get a kid college and career ready, simply paying attention to test scores in two subjects is not a robust look at how successful a school is.... The second part that we think No Child Left Behind got wrong is that once you've found the measure, what happens if the school is not succeeding? What NCLB said is blame and shame, or punitive accountability. And ultimately what it says is, 'We don't know how to fix it, but we figure if we yell at you and shame you, you'll fix it.' [Part of this is bringing in outsiders, which doesn't work.] Our argument is that the only way you are going to improve the system is through capacity building. It's not through firing people; it's through working with the folks in the system and helping them learn and get better. The fundamental idea in No Child Left Behind is that if you force them to do something different they will get better, and our fundamental argument is that teachers want to get better, they want to improve.

History gives us a bit of hope, and a caution about the use of indicators and underlying school improvement efforts. The Chicago School Reform Act of 1988 seems a bit ancient, but it ushered in an era of local school reform and intensive use of data and indicators. In the process, local school councils learned to interpret statistical data and monitor budgets. A civic infrastructure developed to undertake statistical analysis of results and track them over time.

A few years later, the Los Angeles Educational Alliance for Restructuring Now (LEARN) devolved most educational and operational decisions to participating schools, and educators and parents adapted reasonably well. More than 500 schools joined the effort.

However, both reforms fell to political upheaval. The coalitions that sponsored them were not strong enough or long-lasting enough to allow the reforms to mature.

Thus, for multiple indicators, the problem is partly the dashboard display, but it’s largely about the political determination to make the system persevere.

(Next: Combining numbers and narratives into a system that helps schools and districts get smarter about their work.)

(Graphic courtesy of GreatSchools and CORE)

The opinions expressed in On California are strictly those of the author(s) and do not reflect the opinions or endorsement of Editorial Projects in Education, or any of its publications.