
Lessons for No Child Left Behind from "No Cardiac Surgery Patient Left Behind"

New AYP numbers are out, folks. In California, only 48% of schools made AYP, and only 34% of middle schools did so. In Missouri, only about 40% of schools made AYP. Pick almost any state, and you'll see that there are soaring numbers of schools designated as "in need of improvement." With numbers like these, it's worth considering whether NCLB's measurement apparatus is accurately identifying "failing schools."

One way to get leverage on this question is to consider how other fields approach the issue of accountability. Doctor and hospital accountability for cardiac surgery - also the topic of an NYT commentary today - is instructive in this regard. Borrowing heavily from previous work, let me outline how state governments have approached doctor and hospital accountability in medicine. In subsequent posts this week, I'll write about the outcomes of medical accountability systems, as well as some of their unintended consequences.

Medicine makes use of what is known as “risk adjustment” to evaluate hospitals’ performance. Since the early 1990s, states have rated hospitals performing cardiac surgery in annual report cards. The idea is essentially the same as using test scores to evaluate schools’ performance. But rather than reporting hospitals’ raw mortality rates, states “risk adjust” these numbers to take patient severity into account. The idea is that hospitals caring for sicker patients should not be penalized because their patients were sicker to begin with.

In practice, what risk adjustment means is that mortality is predicted as a function of dozens of patient characteristics. These include a laundry list of medical conditions out of the hospital’s control that could affect a patient’s outcomes: the patient’s other health conditions, demographic factors, lifestyle choices (such as smoking), and disease severity. This prediction equation yields an “expected mortality rate”: the mortality rate that would be expected given the mix of patients treated at the hospital.

While the statistical methods vary from state to state, the crux of risk adjustment is a comparison of expected and observed mortality rates. In hospitals where the observed mortality rate exceeds the expected rate, patients fared worse than they should have. These “adjusted mortality rates” are then used to make apples-to-apples comparisons of hospital performance.
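To make the arithmetic concrete, here is a minimal sketch of the expected-vs.-observed comparison. The coefficients, patients, and statewide rate are all made up for illustration; actual state models use dozens of risk factors and far more elaborate statistical machinery.

```python
# Illustrative sketch of risk adjustment (hypothetical data and a
# simplified logistic model; not any state's actual methodology).
import numpy as np

# Each row is one patient; columns are risk factors out of the
# hospital's control: age over 75, diabetes, prior heart attack.
# Hypothetical coefficients, as if fit on statewide data.
coefs = np.array([1.2, 0.8, 0.9])
intercept = -4.0

def predicted_risk(patients):
    """Probability of death predicted from pre-surgery characteristics."""
    logits = intercept + patients @ coefs
    return 1.0 / (1.0 + np.exp(-logits))

# One hospital's case mix: five patients with varying risk factors.
patients = np.array([
    [1, 1, 1],  # very sick patient
    [1, 0, 1],
    [0, 1, 0],
    [0, 0, 0],  # healthy patient
    [0, 0, 1],
])
died = np.array([1, 0, 0, 0, 0])  # observed outcomes

expected_rate = predicted_risk(patients).mean()  # driven by case mix
observed_rate = died.mean()                      # raw mortality
statewide_rate = 0.03                            # hypothetical

# One common adjustment: (observed / expected) * statewide rate.
adjusted_rate = (observed_rate / expected_rate) * statewide_rate
print(f"expected={expected_rate:.3f} observed={observed_rate:.3f} "
      f"adjusted={adjusted_rate:.3f}")
```

The point of the observed-to-expected ratio is that a hospital is compared to how *its own* patients were predicted to fare, not to a raw statewide average.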

Accountability systems in medicine go even further to reduce the chance that a good hospital is unfairly labeled. Hospitals vary widely in size, for example, and in small hospitals a few aberrant cases can significantly distort the mortality rate. So, in addition to the adjusted mortality rate, confidence intervals are reported to illustrate the uncertainty that stems from these differences in size. Only when these confidence intervals are taken into account are performance comparisons made between hospitals.
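A quick illustration of why the intervals matter, using hypothetical numbers and a simple normal-approximation interval (state programs typically use more careful methods): two hospitals with the same 4% observed mortality look very different once sample size is taken into account.

```python
# Why confidence intervals matter for small hospitals (hypothetical
# numbers; normal approximation: p +/- 1.96 * sqrt(p(1-p)/n)).
import math

def mortality_ci(deaths, cases, z=1.96):
    """95% confidence interval for a mortality proportion."""
    p = deaths / cases
    half = z * math.sqrt(p * (1 - p) / cases)
    return max(0.0, p - half), min(1.0, p + half)

# Same 4% observed mortality, very different sample sizes.
small = mortality_ci(2, 50)     # small hospital: 2 deaths in 50 cases
large = mortality_ci(40, 1000)  # large hospital: 40 deaths in 1000 cases

print(f"small hospital: {small[0]:.3f} to {small[1]:.3f}")
print(f"large hospital: {large[0]:.3f} to {large[1]:.3f}")
```

The small hospital's interval is several times wider, so a couple of aberrant cases there tell us far less about true quality than the same rate at a high-volume hospital.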

Contrast this approach with that used by the New York City Department of Education's progress reports, where "point estimates" are used to array schools on an A-F continuum with no regard for measurement error. Readers know well that your friendly neighborhood "statistical nut" has no beef with the use of sophisticated statistical methods to compare schools. But I would just ask that we have some humility about what these methods can and cannot do. (Sidenote: The only winners when we ignore these issues are educational researchers, who can then write regression discontinuity papers using these data. Thanks for the publications, Joel and Mike!)

And it's quite eye-opening to compare the language state and federal governments use to explain their accountability systems with the rhetoric we hear in education. Consider this statement from the Department of Health and Human Services to explain the rationale behind risk adjustment:
The characteristics that Medicare patients bring with them when they arrive at a hospital with a heart attack or heart failure are not under the control of the hospital. However, some patient characteristics may make death more likely (increase the ‘risk’ of death), no matter where the patient is treated or how good the care is. … Therefore, when mortality rates are calculated for each hospital for a 12-month period, they are adjusted based on the unique mix of patients that hospital treated.
If you replace the word "hospital" with "school" above, you can imagine the reception this statement would receive in the educational accountability debate. Soft bigotry of low expectations, and you probably kill baby seals for fun, too.

Readers, why is the educational debate so different? Full disclosure: I will shamelessly appropriate your thoughts in my dissertation, which attempts to answer this question, and also establish the effects of each of these systems on race, gender, and socioeconomic inequalities in educational and health outcomes.

You took my post topic for today! (I'll finish writing it anyway, but it will probably be tomorrow since I'm hoping to be able to go see McCaskill this afternoon.)

Discussing percentage of students meeting AYP in a state (like Missouri at 40 percent) is meaningless since the tests differ so dramatically. We need to start by comparing students on the same test.

Hi EduDiva,

Glad to see you're back commenting! You were missed.

So my point was not that we can make between-state comparisons about quality with these data, but rather that there are a whole lot of schools in these states that are being labeled as "in need of improvement." To me, this raises the question of what exactly NCLB is really measuring.

While I understand your point, the medical analogy raises some problems of tone. It necessarily assumes some people will be incurable, candidates for palliative care at best. While out-of-school factors certainly influence student and school performance, what's the analogy for a terminally ill patient? What's the educational equivalent of palliative care?

Fascinating comparison. As a teacher at an at-risk school myself, I've often struggled to come up with appropriate metaphors. But I think the medical comparison is very apt.

There are, of course, many subtle differences. I can't believe I'm really saying this, but teaching certainly involves more variables than open-heart surgery! Everything from the school to the teacher to the pedagogy to the parents to the neighborhood to the state brings its own dynamic network of intersecting variables.

But honestly I think many people don't want to admit that there are actually systemic problems we face in our nation with regard to poverty and inequality. If we simply say "it's the schools & teachers," then we are let off the hook for perpetuating generational poverty, as the problem becomes one of simple competence, not social policy.

Hi CV,

I have to disagree that the medical comparison assumes that patients, and by extension, students, are incurable. It simply says that we want to evaluate organizational performance, rather than conflate the advantages or disadvantages that students have outside of school with a "succeeding" or a "failing" school. The approach used in medicine is consistent with how educational researchers estimate school effects by trying to equalize (i.e. control for) the non-school factors that will result in some students doing better than others.

And the context in which these report cards are used in medicine - coronary artery bypass graft surgery and angioplasty - could not be any more different from palliative care. These procedures are intended to literally rework the internal plumbing of the heart in order to both extend patients' lives and improve the quality of their lives.

"this raises the question of what exactly NCLB is really measuring"

If you just look at the number of districts passing or not passing, NCLB isn't measuring much. (Every St. Louis county district failed--even the Ivy feeder schools. Only small, homogeneous districts can pass.) However, we do have a lot more data now.

Hi Eli,

I couldn't agree more that identifying the "risk factors" is considerably more difficult than in the case of cardiac surgery. Because a lot of out-of-school environments are behavioral and frequently changing, they are difficult to measure. There is a similar problem with cardiac surgery in capturing factors like smoking that undoubtedly take a toll on the body but may not yet be manifest as a specific condition (e.g., emphysema). In addition, patients vary in how long and how heavily they have smoked, which makes capturing these contingencies difficult.

But I think you have put your finger on the biggest measurement challenge. A lot of the relevant variables are not independent of the behavior of the school and teachers. For example, parental involvement can rise or fall depending on what the school does, as can the amount of time that a student watches TV. Other variables that proxy these activities, like the level of parental education, are not affected by the school itself and thus could be used as control variables.

Great analogy, eduwonkette. I never understood the belief that teachers can produce identical outcomes regardless of the type of students, parents, administrators, state mandates and all the other factors that influence an individual child's achievement. Thanks for pointing it out!

I think that the medical analogy as presented here has limited utility, certainly as regards comparing mortality rates from cardiac disease to the achievement of base levels of education. First--everyone dies. A standard based on the survival of every patient, then, is doomed.

Next, hospital care is by its nature crisis oriented, rather than ongoing or developmental in nature. If we switch to something a bit more analogous, we might be looking at public health functions such as ensuring vaccination against disease (and absolute eradication is certainly a goal that has been set and met with some diseases. Know anyone who has had polio lately?) The educational equivalent of cardiac care might be something more like retraining auto workers for jobs in technology, or as language translators. Many more factors are likely to come into play regarding ultimate outcomes.

Certainly the attempt to insert an acknowledgement of baselines and progress through value-added data provides an additional picture of how well schools are doing. Some schools working with disadvantaged kids really are excelling, but still below the mark. But others are in fact providing less annually to kids who walked in the door with less to begin with.

As far as what AYP is measuring, I think that is pretty clear, albeit obscured by all kinds of simplistic journalism ("failing" schools, showing improvement and all that). It is measuring whether schools are on a track to get all of the demographic groups to the state-defined grade level minimum standard in reading and mathematics by 2014. In a lot of states, that standard is pretty low, and the curve has started out as not too steep.

What we know, from the fact that so many schools didn't make AYP, is that some well-defined groups are not getting what it takes to be able to read and cipher at grade level (whether from home, school, the community or Sesame Street), while other kids, in other well-defined groups, have no problem.

If we want to use a medical analogy, we might want to examine the problem from an epidemiological point of view. What are the kids who succeed getting that the others are not? And how do we rectify the situation? Do we put iodine in the salt, or fluoride in the water? Do we launch a public reading campaign? Do we record the multiplication tables and play them on public buses until they are as universally known as "two all beef patties, special sauce, lettuce, cheese, pickles, onions, on a sesame seed bun?"

I don't think the medical analogy is totally useless, but we have to get the correct one. Public education could learn a lot from public health.

Hi Margo,

I agree that education has a lot to learn from public health. But I think you miss the core of the analogy, which is fundamentally about how education and medicine have approached measuring the performance of organizations in very different ways. State departments of health have explicitly attempted to isolate the effects of the hospitals on patient outcomes of cardiac surgery. Hospitals are "held accountable" for the 30-day in-hospital mortality rates of the patients on whom they operate, and the system acknowledges that sicker patients are, on average, more likely to have poor outcomes, no matter how good the care provided is. In education, we have asked schools to achieve the same outcomes irrespective of the advantages or disadvantages their students have outside of schools, and designated schools as "in need of improvement" when they fail to meet an arbitrarily set target.

Value-added models do go a long way towards providing a more accurate picture of which schools are "failing," but not as they are used in the context of NCLB. NCLB growth models are actually projection models, where progress is measured vis-a-vis a slope that gets kids to proficiency by a designated time. Certainly it is the case that some schools are excelling, but the central finding of school effects studies is that there are only a very small number of these exceptionally positive outliers.
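For readers unfamiliar with projection models, here is a stripped-down sketch of the "slope to proficiency" idea. The score scale, cut score, and time horizon are all hypothetical; actual state growth models are considerably more complicated.

```python
# Sketch of an NCLB-style projection/growth model (hypothetical scale
# and cut score). A student is "on track" when actual yearly growth
# meets the slope needed to reach the proficiency cut by the deadline.
def on_track(current_score, baseline_score, years_elapsed,
             proficiency_cut=300, years_to_target=3):
    """Compare actual yearly gain to the gain required by the deadline."""
    required_gain_per_year = (proficiency_cut - baseline_score) / years_to_target
    actual_gain_per_year = (current_score - baseline_score) / years_elapsed
    return actual_gain_per_year >= required_gain_per_year

# A student starting at 240 must gain 20 points a year to reach 300 in 3 years.
print(on_track(262, 240, 1))  # gained 22 in one year -> on track
print(on_track(255, 240, 1))  # gained 15 in one year -> not on track
```

Note what this measures: progress relative to a deadline-driven slope, not progress relative to what similar students typically achieve, which is the distinction between projection models and true value-added models.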

I am intrigued by your point, "Hospital care is by its nature crisis oriented, rather than ongoing or developmental in nature," which hits nicely on how our views of the social goals of education and medicine have influenced the choice of measurement systems in each field. Education is framed as a collective problem, one in which there is a compelling public interest. The age of the clients further drives this sentiment. Health problems, in contrast, are framed as personal problems that are the outcome of personal eating and exercise choices, as well as our genetic fortunes, or lack thereof.


Think I'd have to side with Margo on the education/medical analogy, at least with regard to the expectations of the respective clienteles.

Mortality rates for geriatric cardiac patients are significantly different from test results for eight or nine year olds on a state math or reading test. The former can be construed as a life or death situation. I HOPE - I HOPE the latter is not.

If you're a geriatric cardiac patient, there's a certain (low) level of expectations inherent to your cohort. I would like to believe there are no teachers that have a similar level of expectations for their students. They're kids. If they don't get it by eight years old, hopefully by the time they're eleven or twelve, the teacher(s)/school will be able to "cure" them by providing the appropriate remediation necessary to get them to proficient.

Hi Paul,

I'm glad you echoed this point about what is often called "multi-chance structuring" in explaining why education and medicine have followed different paths. What I'm most interested in, at least from a "scholarly" perspective, is not whether the analogy works or doesn't work, but identifying the reasons why these two professions diverged. So your point is very helpful in this regard!

Andy Abbott, a scholar of professions, explains the relationship between the time sequencing issue and a profession's likelihood of external regulation in the following way:

"Inference is undertaken when the connection between diagnosis and treatment is obscure…Reasoning by exclusion is a luxury available only to those who get a second chance. The impact of reasoning by exclusion on a profession’s vulnerability to outside interference is thus conditional on the effects of time structuring. Multichance time structuring, curiously, is more vulnerable. Other things being equal, an incumbent profession with several chances to work on a problem will have more failures than will a profession that only gets one chance. For multichance professionals may be conservative, taking short-run failure in exchange for a greater chance of long-run success. Since treatment failure is the first target of attacking professions, professions with multiple chances are generally more vulnerable, ceteris paribus.” (p. 49)

Eduwonkette, You are missing the larger picture on the medicine/education analogy. Hospitals can be evaluated with adjusted mortality rates because the standard of care is well defined and rigorously tested.

For a new treatment to become approved, there is an extensive approval process (by the government) that ensures both safety and efficacy. This approval process gives the public great confidence that the treatment that they might receive from a hospital is the best available.

We have no equivalent guarantee in education. We have no equivalent confidence in our schools.

In medicine the public has confidence that the standard of care is the best possible treatment approach. The standard of care has been rigorously tested and doctors are only allowed to prescribe treatments that have been approved.

If a doctor suddenly started making up concoctions in the office and giving them to patients, how long do you think it would take for a lawyer to slap a malpractice lawsuit on him or her?

In medicine, the practitioners (clinicians) do not make up their own treatments. The public expects that the clinicians use well established, well vetted therapies.

Conversely, in education we expect the practitioners (teachers) to both develop their own curricula and teach. But we (the public) have no idea if the teachers' techniques or materials are of good quality or not.

Public schools have no systems in place to determine if there are better curricula or better teaching techniques. Just measuring test scores does not tell us anything about *what* enabled children to learn well.

So if our public schools did have quality systems in place to determine if "best practices" really improve learning, and if schools had a complete analysis of the quality of available curricula, then perhaps an "adjusted mortality rating" might be appropriate for evaluating how our schools are doing. But we don't have that system in place, without which I fear that the "adjusted mortality rating" for schools will become one more excuse about why low SES children can't learn.

Perhaps the better analogy to medicine is in how rigorous science allowed American doctors and medicine to transform itself from "quackery" to the forefront of cutting edge medicine. (The Great Influenza by John Barry did a very good job of outlining the transformation of American medicine.)

Our schools are completely dependent upon the goodwill and experience of their teachers. But this is a system of status quo, not a system of improvement.

To improve takes something quite different. It takes a coordinated effort to evaluate whether a curriculum or teaching technique actually improves student learning.

Medicine was able to transform itself into a high value, respected field by putting checks and balances on practitioners and therapeutic inventions. Perhaps education could learn from their experience.

With some minor exceptions, I think Erin's right. The concept of risk assessment makes sense when outcomes are clearly defined (i.e., life or death) and when there is reasonably reliable background information. Neither is the case with schools.

Hi Erin and Sherman,

Sherman, Aren't outcomes clearly defined under NCLB - i.e. test scores alone? Though I believe this is an incredibly anemic view of the goals of public education, federal law has defined the only relevant outcome of education as test scores.

Erin, Your points about the evidence base in medicine and education are critical to understanding why medicine and education have developed differently, but don't provide support for the idea that we should compare the performance of schools as we currently do.

We need to separate the issue of social goals from the issue of measurement. I'm not suggesting that we should leave well enough alone if schools are not performing as well as we would like them to be but are not identified negative outliers in a risk-adjustment model. If we want to identify which organizations are performing better than expected, and also identify organizations that are not performing as well as they should be given their inputs, we have to address the fact that students are non-randomly sorted to schools.

If we were talking about holding teachers and hospitals accountable for process-based outcomes, you're right that it's a totally different story. I agree that medicine and education are in very different places in terms of the evidence base about what practices work, and under what conditions. But the debate about cardiac surgery report cards was about variability in doctor and hospital outcomes for the same procedure (bypass surgery). Proponents of cardiac report cards argued that some hospitals and doctors were achieving superior outcomes, just as many argue that some schools are achieving much better outcomes.

Sherman, I know you would never let a paper through your journal that called the raw difference in the performance of schools, without any controls for student background, a "school effect." Why do we let policymakers label schools as failing without firmly establishing that the organization is in fact to blame?

I think the reason we don't want to inject the idea that student achievement is based partly on what they come to school with (parent support, poverty rates, etc.) into the NCLB debate is because it comes too close to admitting that our public education system doesn't help everyone equally. And that education does give everyone the same advantages is one of our cherished public ideals, like the ideal that democracy means everyone has an equal voice in our government (when lobbyists for companies/organizations with deep pockets prove the equal voice ideal wrong all the time). We don't want to admit that there are problems that are too big for education as it exists right now to fix.

I've often wished that if I am going to be held so accountable for student performance that we had a boarding school system, so I could make sure my students had a quiet place to do homework, a good dinner and breakfast, etc. So many go home to chaos, and we wonder why their school performance is lacking!
I like the idea of value-added assessments; we get value-added scores for each classroom teacher in my state. I wish that NCLB took those scores into account; if a child grew as much as could be expected, then you escape the 'failing' label, even if the student isn't on grade level yet. I know one year, our value-added scores were great, yet we still didn't make AYP because our students were so far behind to begin with. It's very demoralizing to be labeled in the news as a failing school when you've made so much progress based on where the students started.


We should not be comparing the performance of schools the way we do. It is completely counter to providing a quality education for our students.

And the development of medicine and education was not random. Both were a function of very specific decisions made by key opinion leaders and laws passed both on the state and federal levels.

We take for granted the scientific, evidentiary basis of our medical system, but it was not pre-ordained to be so.

Regarding the evaluation of organizations: any aggregate model that tries to summarize very complex parts will always be of limited use. Certainly, the measurement errors of any one particular aspect of schooling (teaching, learning, background, etc.) become multiplied when aggregated, to the point that many of these analyses are only useful for trending data rather than an accurate view of performance.

Because of the inherent complexity of schooling the current trend is to focus on testing outcomes. A grave mistake. This type of "accountability" is over-reliant upon a very fallible device.

Even worse, many of the tests that we rely upon are standardized (as opposed to specifically related to classroom instruction).

The international evidence strongly suggests that the use of standardized testing is *negatively* correlated with student learning. When school systems fail to make explicit learning goals and connect them with the tests, both teachers and students become confused about what/how to teach/learn and student learning is reduced.

So the current school evaluation schemes will do nothing to improve student learning and may likely hasten the demise of the few quality education traditions we still have left in our schools.

So no, we should not let policymakers label our schools as failing. We need to convince them that it is our lack of systemic support for quality learning that is hampering our schools, not a lack of testing or accountability.

I will shamelessly appropriate your thoughts in my dissertation,

Is this a promise? (wicked cackle)


"Readers, why is the educational debate so different?" I've attempted to compare and contrast these two professions on a number of occasions.

A medical doctor sees their patients individually and diagnoses and then treats them accordingly.

Teachers, on the other hand, spend 70-80% of their day working with whole groups of students, on the same lesson, at the same pace. This to me is the inherent difference in the two professions.

A doctor could never call all of their patients for the morning into their office and say to them as a group, "OK, today we're going to cover influenza." In a group of twenty patients there might be only two or three with the flu and the rest would have other ailments.

So why do teachers think they can employ this delivery system with their students, and have this practice deemed acceptable? Part of it is inertia. This is the way they were taught when they were in school, likewise their parents and grandparents. In addition, very few were trained to teach using alternative strategies when they were in college.

And teachers wonder why they are not viewed in the same light as medical doctors in our society? Even lawyers deal with clients individually. It would be insane of them to think they could employ the "whole group" method. Again, why don't teachers deal with their students as individuals?

This is a bit of a rhetorical question because they can and they should. Their students are all different, regardless of their age. They all show up in September with different strengths and weaknesses and they ALL LEARN AT DIFFERENT RATES because of these preordained differences.

Differentiated, or better yet, individualized/customized instruction is available for those so inclined. It's more work and a bit more difficult to pull off but it's clearly superior to whole group instruction when one considers the differences between the twenty or so students in their classroom.

Paul: Your goal of differentiated instruction could be accomplished much more easily (although not perfectly) with greater use of tracking in the schools. With the current trend, it seems, of putting students with different abilities into the same classroom, there is less chance for the teacher to instruct all students at the level and pace they need.


I wonder about your assertion that international evidence suggests a negative correlation between standardized testing and student achievement. I wonder where this is coming from, as I have had the opposite impression--particularly with regard to standardized exit exams.

DC: Again, coming from the international arena, (early)tracking appears to have a negative correlation with achievement. To the best of my knowledge, high achieving countries have pretty much abandoned the practice, although around age 15 most have divergent paths in preparation for whatever comes after secondary education. Frequently these paths are designed in such a way as to allow some movement from one to the other--or to avoid "dead ends."


The type of test matters greatly to student learning. If the exit exams test to a specifically delineated course of study, then this type of external evaluation greatly supports quality student learning.

But when the exams are designed to test "general" abilities as many standardized tests are, then there is a negative correlation with student learning.

The systems that do exceptionally well with external evaluations (and that includes all the school systems that do statistically better than we do) first define the learning goals and second develop a test to measure whether those goals were met. Not the other way around.

Tests can be a very important part of encouraging quality student learning. But they must come after specifically defining classroom study.


Don't know how things work in your state, but in mine, the standards determine the test. In fact, I thought that state standards were an NCLB requirement. The standards are also intended to be the guiding factor in determining classroom study--although the degree to which this happens is variable (with perhaps the largest variance occurring at the teacher level, according to something that caught my eye the other day).


The degree to which state standards are explicitly reflected in state tests varies greatly from state to state.

The international evidence is very strongly in favor of specific, well-delineated tests that are completely aligned with classroom instruction. More generalized reading and math skill tests whose content/coverage is not specified ahead of time are not conducive to enabling quality learning.

While the type of test is important, a good school system needs more than just good tests.

Great teaching, quality curricula, external evaluations with little/no teacher grading, checks and balances so that the test doesn't drive the educational process, and so on.

We don't have the tests we need now to encourage quality learning because our school system is not set up to produce them. Just copying the format and not the intent of the international school systems will fail to improve our students' learning.

This has been a fascinating conversation for me to read, because I have often related people's ideal "dream" model of education to the medical model. I teach seventh grade math, and most parents come to me wanting individualized lessons, customized to each student's current level of understanding, taking into account all factors affecting performance, including previous teachers' impact. This is what the expectation is when you go to the hospital. And to many people, who are accustomed to getting the best treatment, this seems like a normal expectation.

Unfortunately, the way most schools work more closely resembles the auto industry model of mass production on a rigidly moving assembly line, where first one teacher places a part on the product (child), and then another teacher adds another component. In all actuality, this is almost what has to happen when we see thirty students for a 55-minute class period. We can pay lip service to differentiated instruction by attempting to do differentiated projects and activities, but the reality is that there is very little time to deliver the truly spectacular individualized teaching that each parent expects for his/her child. I DO suspect that this model IS in existence, but I believe it is formally known as home-schooling.

I was raised in an environment of daily striving to improve, and have a great desire to be a good teacher. Sadly, many times I am demoralized by the constant demonization of schools, and of teachers particularly. And having raised five children of my own, who have all gone on to achieve college degrees, and hold down good jobs, I DO know that good parenting will overcome almost any other influence.

Please continue to research. NCLB is not the panacea government leaders believe it to be.

Attorney DC,

I would encourage you to revisit Brown (1954). One of the underlying themes/outcomes of this landmark decision was that tracking was considered both discriminatory and unacceptable in our public schools. Among the many faults of the defendant school districts (inequitable funding, separate but equal, etc.), the plaintiffs were found to be victims of tracking. They were labeled "low" achievers from an early age, a label from which many would never escape.

Over the past three to four decades in this country, tracking has become (thank goodness) a practice both frowned upon and avoided by most reputable public school officials. Homogeneous or ability grouping (code for tracking) has become a thing of the past.

math teacher:

I remember touring an auto assembly plant back in the day when you actually ordered your automobile and selected the interior color and the exterior color and whether the radio was AM or AM/FM and if the windows were cranked or auto and whether or not there was air. It was a fascinating process because each car that came down the line was different, based on the individual order, but the process was sufficiently coordinated that the guy putting on doors always had the right one for the car in front of him and the radial tires showed up for the right car. Fascinating. Each worker performed repetitively a small piece of the work, but they all came together. Like a symphony.

Compare to the American system of education. Same assembly line, same diverse parts. The intended product, and the contribution of each worker, is determined individually--at each work station. So, one guy puts power windows only onto green cars, another guy lets three cars go by while he goes for a break and when he comes back he takes the available tires at random and assigns them to the cars that are coming next. The guy on the driver side is attaching a blue door while the guy on the passenger side is attaching a red one.

At the end of the line, there's some pesky quality control guy with a beef about how the cars are turning out. But what can you do with the kinds of material that you're getting?

Tracking exists almost everywhere in the form of advanced placement classes. If the argument works that teachers need to separate the 'cream' for sound educational reasons, don't those same reasons ring true for the argument that there will still be significant variations (achievement, ability, effort, etc.) among the remaining group? Is it that only the very gifted and talented deserve a differentiated form of instruction? Tracking does not always mean discrimination. Taken to its logical conclusion, tracking (a student-centered rather than curriculum-centered focus) ultimately translates to individualized instruction.

Math Teacher,

I taught in a traditional classroom for one year and came to the conclusion there had to be a better mousetrap.

I remembered back to my days in school and, in most cases, having to spend the majority of the school day waiting for the rest of the class to "get it." I also had rude awakenings in my chemistry and physics classes, being one of those kids who didn't catch on to everything on the first go-around. That, too, was an eye-opener.

Whole group instruction for most subjects, and for most of the day, was simply not fair: not fair to the brighter kids who understood the lesson immediately, not fair to the slower kids who needed more time to grasp the concept or skill, and not even fair to the so-called "average" kids, because some things they got right away while others they needed more time to understand.

For the next thirty-three years I individualized the instruction in each of the four major disciplines, with the focus being on the pace of learning for each student in each subject. Some kids could advance quickly through much of the material while other kids simply needed more time. Some kids were quick to catch on in one subject while challenged in others. My message was that that was OK. The only thing I asked of each student each day was that they do their best; if they did their best each day, then at the end of the marking term or the end of the year it didn't matter who wound up where on the continuum of learning. Their best effort was all that mattered to me, and all that mattered to their parents.

It was a great deal of work establishing the system, but once in place it worked, and it worked great. No one was ever bored by the pace of their instruction and no one was ever overwhelmed. They simply had to demonstrate mastery of a concept before I moved them on to the next skill or level.

Apologies to Erin, Margo, et al., for having to suffer through this explanation again.

Paul: I agree with you that lessons can be structured to allow students to progress at more of an individual pace. There is also a place for small group instruction (in some classes). However, as a former teacher, I can tell you that it is MUCH easier to teach at the appropriate level for the students if students are divided into classes based on their general levels of prior knowledge, skills, and ability to the extent possible.

I once taught an 8th grade reading class in which the students ranged from 2nd grade to 12th grade reading levels. Don't tell me that the kids did better in a class where almost no children were at the same level than they would have in a class with similar students (e.g., 2nd to 6th grade reading levels in one class).

Dear Eduwonkette,
I think the reason that educators as opposed to medical people refuse to take seriously the role of both hereditary and social conditions in the matter of school success is deeply rooted in larger American cultural traditions, traditions we sum up in the concept of “individualism.” Going back to Franklin we have championed the notion of the infinite potential of individuals over against their origins. Franklin posited an unheard of capacity for self-shaping that served as an energizing ideal for educational reformers throughout our history. Interestingly, up until the last few decades only liberal educational reformers fought for opening the schools for longer periods of time to larger and larger segments of the population. Conservative educational figures emphasized the idea that only a small segment of the population, for hereditary and social reasons, was capable of benefitting from extended educational programs.
A few decades ago conservative educators began to get the idea that the American belief in the infinite potential of individuals was too powerful an idea to buck. They latched on to it, but with a twist. If individuals were free to make of themselves what they wished, then they could be held accountable. Educators since Horace Mann had been claiming they were capable of shaping this individual potential, so they too, in an amendment to American individualism, could be held accountable. Nothing beyond the individual's own effort and that of the school was to be seen as implicated in educational success. I am of course greatly oversimplifying here the connection between the larger American cultural belief in individual potential and responsibility and our unwillingness to look at hereditary and social factors in assessing student success. I offer a much lengthier and only somewhat less simplified analysis in High Expectation: The Cultural Roots of Standards Reform in American Education (Teachers College Press). Incidentally, the whole mind-cure tradition in American medicine also reflects our American individualism, and can be, like its educational cousin, both energizing to patients and cruel in holding them responsible for their own illness.

Actually, if the medical system were evaluated by NCLB, it would have a 100% failure rate: all its patients die, eventually. Essentially, this is what schools are being asked to do--make everyone live, regardless of their condition upon entry to the system (by 2014, or whenever the idiocy of NCLB decrees all children shall be adequate). I fail to understand how people can take the premise of this legislation seriously, that all children will achieve at the average level, regardless of disability or any other thing. It's laughable, and by discussing any of it rationally, we let the premise escape refutation.

S F is right - it is difficult to take the premise of NCLB legislation seriously. I imagine legislators elated at finding such a neat solution to the "problem" - children can't read on grade level...hmm...here's what we'll do...we'll pass a law that says they must be proficient by 2014...problem solved! If Congress would just pass a law that says all cardiac surgery patients will survive, that would take care of the patient mortality problem as well!

Kayte: Touche!

Perhaps the NCLB testing is accurate: many, many schools are failing the published standards. Don't kill the messenger (the test), but change the system.

Why can't each student do self-tracking, studying what he's interested in?

What exactly is NCLB really measuring? To me, it is measuring how effective the states are at gaming the system, though the intent is to measure how effective the schools are at remediating. Where is the value-added data on regular-ed teachers remediating students out of sped? Schools know the demographics of their students and establish controls through subgroups, so they must meet these needs. The social goal comes down to whether the student is receiving, or on track to receive, a high school diploma rather than a certificate of completion.

How are the policymakers who label schools as failing to blame, when they are measuring school progress in remediating students they themselves have identified as in need? Students are supposed to take the NCLB tests based on their enrollment grade, but in Maryland, for example, they take them based on curriculum level. Game this "multi-chance structuring" and we see the greater risk of failure in graduation rates.

A school already places controls on what its individual students come to school with, as students get assigned and ability-grouped based on these factors. School politics is brutal, just as office politics can lead one to wear a mask or reveal an identity :) What a failing school is measuring is not the student's life experience but what they are experiencing in school. Please don't tell me Johnny can't read because he is homeless when he has a 90-minute block of reading instruction every day.

I'm sorry if working in a school labeled as "failing" or not making AYP is demoralizing, and that doctors feel the need to lobby for controls to ensure their true ability is being advertised so patients are charged accordingly. It seems stature equals worth, in fees or collective bargaining. I wonder if it is demoralizing for students to be labeled "lifers," unmotivated, or socially disadvantaged?

For an accurate view of school performance, one must also include just how each state and district is gaming the system. Students who are not adding to AYP scores are going to be the students in sped, on the certificate track, or the dropouts of tomorrow. How well does your district do in addressing and meeting struggling students' needs? All data points to: poorly. If you want to use the medical model, you have to look at the individual and not at the whole. A failing-school label does not indicate that the staff and students collectively are failing, but that staff and administration are failing to address individual remediation. If there is no dyslexia program in a school tree, are dyslexics being served in the district forest?

Improvement currently means re-examining practices, materials, and techniques to see how well they align with overall expectations through IEPs and IDEA/NCLB. Unfortunately, it is left to teachers to point out to parents how collective bargaining is designed to maintain the status quo system-wide, through the administration's lack of data or action. The educational establishment knows best.

A failing school is a matter of perspective. Much has been listed from the administration's view of student demographics. Can we see reporting or data from the view of the Department of Education to the states? It is amazing to me just how many states are still not in compliance with NCLB six years later. I'm rather tired of hearing that every child matters, but with an asterisk: school X* or subgroup A* is below proficient in reading (*90% qualify for reduced-price meals), or school X* or subgroup B* is failing math (*70% African-American). Because I know the school staff see it as Timmy* being below proficient in reading or Bobby* failing math, but it's to be expected as he ages out of the system.

Can we focus on remediation data? For that is the true parallel to the medical model: what specifically was done for the individual that helps the collective whole, and what preventative measures were taken. Procedure vs. outcome = data. How states have responded to NCLB/IDEA by adding high-stakes testing, adjusting tests, teaching to the test, etc. is how principals and administrators get data, but it is not the procedure. The procedure would be remediation instruction, programs, or curriculum.

One problem with the medical analogy is that while it is acceptable to be satisfied with a mediocre outcome for a very sick patient, it is not politically correct to be satisfied with a mediocre outcome for a disadvantaged child. The whole point of the standards movement is to fight "low expectations", while the risk adjustment procedure is all about setting realistic expectations.

American education isn't realistic; it's idealistic. It is rarely about what children are REALLY likely to do with their lives; it is usually about pretending they are all going to become president one day. At least that is the attitude we have had ever since the 1960s.

This system would never be adopted, because a good 30% of the population could be identified as early as age 8 as candidates for palliative care. That is, the probability of their ever achieving functional reading proficiency or functional quantitative proficiency within the K-12 system is close to nil. According to the medical analogy, these children shouldn't even be in school.

Nobody wants to face these statistical realities.
