The inaccuracy, sloppiness, and thoughtlessness of so much education reporting still shocks me, though I know I ought to have gotten over it by now. The story you told me about the reporter who bought NYC's claim that the latest NAEP results are a sign of the DOE's success is a perfect example.
Investigative reporting is a lost art. Reporters too often see themselves as conduits for press releases, with perhaps one quote from someone "on the other side" to show impartiality. It's their style even when I'm delighted with it! E.g.: I loved the front-page story in the Boston Globe proclaiming the success of the Pilot Schools in Boston, with its obligatory demurral from one critic. But that's not good newspaper reportage.
Thanks for alerting me to the fact that NAEP scores—the only national comparisons available—demonstrate that NYC hasn't moved forward or back since Klein and Bloomberg took over, except on one test at one grade level. Since tests are all the DOE cares about, they can't, as I do, claim that testing is not the only way to judge their work.
Another example. I ran into a headline today—in the NY Times, I believe—proclaiming that while we're not doing badly compared to most European rivals, we are being badly beaten by Asia. Yes, Japan and Singapore do better. But Asia? Neither China nor India is doing better—their scores aren't even included in these international comparisons. Americans assume the headline must be true because China and India are the primary destinations for outsourcing, and we've bought the lie that outsourcing is the fault of our poor public educational system—rather than of the appallingly low wages at the outsourced sites. The headlines shouldn't obscure these realities.
Yes, wouldn't it be nice if there were something comparable to Consumer Reports for education. Maybe we can interest them in taking this on?
Such a "simple" idea. Reporting that is not self-interested, that tries to explain the complexity of automobiles (toasters, etc.) in ways that acknowledge that we're not all looking for the same thing, and that we might want to easily scan the alternatives to see what the trade-offs are. I want 4-wheel drive, but….I also want…. And on and on. We don't actually have to reinvent the wheel.
I think "accountability" has to start by separating the different purposes and audiences to which we feel accountable. For example, I'm accountable to my students—they have a right to know what I think of their work and how it can be improved. I'm accountable to my colleagues to share my work in the interest of improving collective practice. I'm accountable to families to bring them into the full picture of what I see happening and what I hope together we can do about it. This can include standardized tests. But since such tests are designed to be statistically indirect "indicators" at best (see psychometrician W. James Popham's piece in Education Week…), it would be odd to ignore the fact that we have access to direct evidence. We have the hard data—the kids and their work.
But while individual schools are the best place for this assessment to originate, there's also the kind of data needed by more distant publics: professional and lay, politicians and academics.
Some years ago we designed a five-year "experimental" project in NYC—with $50 million in Annenberg funds—to explore a large-scale experiment with the above in mind. It involved an institute at Columbia headed by Linda Darling-Hammond, research backup from Michelle Fine at CUNY and others, and about 130 schools with 50,000 students, organized into 15 networks. It gave schools direct access to their full budgets and a great deal of freedom from union, city, and state mandates in return for developing new forms of accountability. It won the support of the then-chancellor, the mayor, the teachers' union, and the state commissioner. Unfortunately, just as we were about to "go," both the chancellor and the commissioner departed, and their replacements said "no way."
An oddly distorted version of this idea emerged ten years later under Bloomberg and Klein. Their version gutted what we believed was the essence of the plan: that it was voluntary; that it was small scale (the size of the average American city); that it invited networks to develop self-designed plans; and that it had the support of some of the best independent research institutions in town to track different aspects of the work as it played out over time. We hoped the work would help us find answers suitable to the various audiences involved. We were genuinely curious and thought it quite likely that we would end up with some shared agreement about "what works"—and many different answers as well!
We lost that chance. So, now maybe you and I can try to imagine what some of these different solutions might have looked like.