Measuring the Efficiency of Schools
Why don't we look at school effectiveness as a question of efficiency - given a school's inputs, do its outputs suggest that it underperforms, or that it exceeds expectations?
Instead of looking at overall achievement, in terms of test scores or graduation rates, what if we sharpened our focus on efficiency, and set out to learn exactly what it takes to educate all students well?
We've known for some time that some students are easier to educate - those with higher incomes, more stable families, fewer learning disabilities, and the like - while other students are more difficult, and thus more expensive, to educate. Weighted student formulas, which allocate funding to schools on the basis of their student populations, are used in many districts around the country. Here's an excellent primer from EdWeek's Christina Samuels.
An efficient school is one that can educate students better with less money, relative to how other schools do with similar students and similar funding. If we can figure out what works - what's "efficient" - it would follow that we can sustainably replicate it. In quasi-mathematical terms:
Efficiency = (Achievement x Student Weight) / Expenditures
Instead, we have widely varying models and approaches to educating students, widely varying funding levels, and virtually no examination of efficiency on a weighted student basis. Per-student expenditures, which are typically reported as simple averages, tell far too little of the story.
For example, a New Jersey district that spends $12,000 per pupil, yet serves upper-middle-class suburban students from two-parent families may in fact be quite inefficient, whereas an urban charter school that receives the same amount of money while serving a high-poverty population could turn out to be several times more efficient.
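The comparison above can be made concrete with the quasi-formula from earlier. Here's a minimal sketch in Python - all of the numbers (the achievement scores, and a hypothetical student weight of 1.6 for the high-poverty population) are invented for illustration, not drawn from any real weighted student formula:

```python
# Sketch of the post's quasi-formula:
#   Efficiency = (Achievement x Student Weight) / Expenditures
# All figures below are hypothetical.

def efficiency(achievement, student_weight, per_pupil_spending):
    """Weighted achievement produced per dollar spent."""
    return achievement * student_weight / per_pupil_spending

# Suburban district: easier-to-educate population (baseline weight),
# high raw achievement, $12,000 per pupil.
suburban = efficiency(achievement=85, student_weight=1.0,
                      per_pupil_spending=12_000)

# Urban charter: high-poverty population (higher weight under a
# weighted student formula), lower raw achievement, same $12,000.
urban = efficiency(achievement=70, student_weight=1.6,
                   per_pupil_spending=12_000)

print(f"suburban: {suburban:.5f}")
print(f"urban:    {urban:.5f}")
```

On these made-up numbers, the urban charter comes out ahead (0.00933 vs. 0.00708 weighted achievement points per dollar) even though its raw scores are lower - which is exactly the reversal that unweighted per-pupil averages hide.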
As a result, we fail to learn transferable lessons from the truly efficient schools, and treat as successful many other schools with high expenditures and easy-to-educate populations. We use weights to allocate funding, but not to judge success.
Innovative models are often hailed as the wave of the future, but in many cases they are simply too expensive to replicate. We don't know how efficient our schools are, because we don't do the math. So why are test scores never reported in efficiency terms?
I think one reason is the inherent "ickiness" of preparing actuarial tables of students. I certainly don't want to be in charge of assigning "difficulty to educate" points to students based on their demographic characteristics. But this is precisely how life insurance works - you're quoted a price based on characteristics such as your age, gender, smoking history, and so forth.
Another reason is that parents of easy-to-educate students will have trouble accepting the idea that their children should receive less funding than other students. This may already be the reality, but being explicit about weighted funding and achievement is likely to rock a lot of boats, particularly in "inefficient" suburban schools with high overall achievement.
There's also natural resistance to the idea that we can use brute force to turn dollars into student achievement. Throwing money at something is not necessarily a good solution, but then, that's why a focus on efficiency might be very illuminating.
Finally, there's of course the important point that the purpose of education isn't to turn dollars into test scores. It's to educate students, and to prepare them for citizenship and adulthood. We should be careful not to confuse our purposes with our metrics.
But I think we should give efficiency measures a try. There are plenty of secondary data analysis experts who could crunch the numbers fairly easily.
More importantly, such an effort would spark a real debate about the differential cost of educating students, and would force us to acknowledge as a society that we need to put our money and effort behind addressing the gaping inequities in our current system.