
Reports, Reform, and Hype


Dear Deborah,

I can’t believe that we are debating the message of A Nation at Risk in 2008, a quarter century after it appeared!

We have been agreeing so much lately that it is useful to remember that we do have plenty of differences. That way, we can continue to try to bridge them.

This is one issue where we definitely disagree. The reason that the commission that wrote Nation at Risk focused on schools was that the name of the commission was “The National Commission on Excellence in Education.” Its charge was to “present a report on the quality of education in America,” not to propose needed changes in society and the economic order. Its chair, David P. Gardner, president of the University of California, said in his introductory letter that the goal of the report was to identify problems and to provide solutions, “not to search for scapegoats.”

Nor was the report unduly focused on blaming the schools for productivity decline. What it did say, which makes sense to me, is:

Knowledge, learning, information, and skilled intelligence are the new raw materials of international commerce and are today spreading throughout the world as vigorously as miracle drugs, synthetic fertilizers, and blue jeans did earlier. If only to keep and improve on the slim competitive edge we still retain in world markets, we must dedicate ourselves to the reform of our educational system for the benefit of all—old and young alike, affluent and poor, majority and minority. Learning is the indispensable investment required for success in the ‘information age’ we are entering.
Our concern, however, goes well beyond matters of industry and commerce. It also includes the intellectual, moral, and spiritual strengths of our people which knit together the very fabric of our society. The people of the United States need to know that individuals in our society who do not possess the levels of skill, literacy, and training essential to this new era will be effectively disenfranchised, not simply from the material rewards that accompany competent performance, but also from the chance to participate fully in our national life. A high level of shared education is essential to a free, democratic society and to the fostering of a common culture, especially in a country that prides itself on pluralism and individual freedom.

And more: “What is at risk is the promise first made on this continent: All, regardless of race or class or economic status, are entitled to a fair chance and to the tools for developing their individual powers of mind and spirit to the utmost.”

Nothing in these lines or in many others that I could quote can be construed as teacher-bashing. If the commission failed to ask questions about “other institutional and system failures that need to be addressed,” the same criticism could be directed to scores of other reports about school reform.

It is easy to nitpick a report because the people who wrote it did not know what we know now. They did not know, for example, that the decline in test scores—which began in the mid-1960s—bottomed out in 1980-81. We know that now. But they were writing in 1982 and did not. You know that hindsight is characterized by 20-20 vision.

Meanwhile, back to the ranch or back to the present. On one day, I received conflicting news stories in my email: One from Baltimore, which is about to jump on the small-school bandwagon; the other from Portland, Oregon, where small schools (started much earlier) have lost their luster.

Then this past week, New York State reported phenomenal test score gains, some in double digits, in every district and in almost every grade. These scores are in conflict with the state and city NAEP scores of last fall, which showed that New York’s scores in reading and math (except for 4th grade math) for the past two years were unchanged. Now, here is an interesting puzzle: How did New York State (and New York City) move from flat scores over the past few years to a phenomenal jump in 2008? Should we call it the miracle of 2008? From my experience with large-scale testing, I have learned to be dubious about any one-year changes that are large, whether up or down. One child may have an amazing improvement or loss, but it is unlikely that an entire district or state will see a sudden change of the magnitude reported by New York State.

What do you think is going on?



The phenomenal test score gains in New York, when compared with the state's NAEP results, are suspiciously congruent with state results from Tennessee, Mississippi, Georgia, Florida, Oklahoma, Texas, etc.

There's only one reason that many students could demonstrate that kind of apparent improvement in their academic performance over a one-year period: the tests were different, and they were clearly dumbed down.

The sad part is that these kids were USED by the mayor and school officials in a fraudulent attempt to justify their policies.

Here is the available ELA test score data (courtesy of eduwonkette):


As I was looking it over, I noticed something rather strange. The table gives, for each grade level and each year from 2006 to 2008, the total number of test takers, and then the total number of level 1, 2, 3, and 4 scores (as well as the number of level 3+4 scores).

You'd think the numbers would add up in each row. For instance, if the total number of third-grade test takers in a given year was x, then the numbers of students scoring 1, 2, 3, and 4 in that year would add up to x. There is no other score, so how could it be otherwise?

Everything adds up except for the 2007 figures, which do not add up for any grade level. This is what I mean:

In 2007, 71045 third graders were tested.
9249 scored at level 1.
21735 scored at level 2.
35695 scored at level 3.
4375 scored at level 4.

Now add up 9249+21735+35695+4375. You get 71054, not 71045.

Do this for any of the grade levels for 2007, and you will see a discrepancy. Try it for 2006 or 2008, any grade level, and everything will add up just fine.
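This consistency check is easy to script. A minimal sketch in Python, using only the 2007 third-grade figures quoted above:

```python
# Check whether the four level counts sum to the reported total.
# Figures: the 2007 third-grade ELA results quoted above.
reported_total = 71045
level_counts = {1: 9249, 2: 21735, 3: 35695, 4: 4375}

computed_total = sum(level_counts.values())
print(computed_total)                   # 71054
print(computed_total - reported_total)  # 9
```

The same few lines, pointed at each grade and year in the table, would flag every 2007 row while 2006 and 2008 come out clean.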

The discrepancy has a minimal impact on the overall results, from what I can see. But why are there discrepancies for 2007 only, and why do they exist at every grade level?

This does not explain this year's drastic test score increases. But it does throw things off a little if you're trying to make sense of the figures.

All too often, part of educators' response to a comment on education in this country is to accuse the observer of “teacher bashing” and/or “education bashing”. This tells me simply that educators are too thin-skinned. The habit of viewing statements as attacks even where clearly no attack exists hampers understanding. It makes no sense to bridge differences where none exist.

The fifth- and seventh-grade scores account for a disproportionate share of the rise in level 3.

Again, I got the data from here:


I subtracted the numbers for fifth and seventh grade, and arrived at the following:

2008 total: 277663
Level 1: 20704 7.5%
Level 2: 106564 38.3%
Level 3: 137308 49.5%
Level 4: 13087 4.7%

2007 total: 284178
Level 1: 27195 9.6%
Level 2: 112379 39.5%
Level 3: 130855 46.0%
Level 4: 13775 4.8%

Not only is the level 3 rise less drastic, but the 2007 percentages are very close to the original ones. That is, removing the fifth- and seventh-grade data affected them very little. Not so with the 2008 percentages.

Again, you will see that the 2007 numbers do not add up. This is an error in the DoE data.
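For anyone who wants to recheck the arithmetic, here is a minimal sketch in Python using only the figures quoted above. (One caveat: with ordinary rounding, the 2008 level 2 share comes out as 38.4% rather than the 38.3% listed above, which appears to be truncated.)

```python
# Recheck the grades-5-and-7-removed ELA figures quoted above.
counts_2008 = {1: 20704, 2: 106564, 3: 137308, 4: 13087}
total_2008 = 277663
counts_2007 = {1: 27195, 2: 112379, 3: 130855, 4: 13775}
total_2007 = 284178

# The 2008 level counts sum exactly to the reported total...
assert sum(counts_2008.values()) == total_2008

# ...but the 2007 counts overshoot the reported total.
print(sum(counts_2007.values()) - total_2007)  # 26

# Percentages of the reported 2008 total, rounded to one decimal.
for level, n in sorted(counts_2008.items()):
    print(level, round(100 * n / total_2008, 1))
```

This confirms that the 2007 discrepancy survives even after the fifth- and seventh-grade rows are removed.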

So, the next step would be to take a look at the tests themselves. Would that this were possible.

Actually, the exams for 2006-2008 are available online. My error.

2006: http://www.nysedregents.org/testing/elaei/06exams/home.htm

2007: http://www.nysedregents.org/testing/elaei/07exams/home.htm

2008: http://www.nysedregents.org/testing/elaei/08exams/home.htm

I started comparing the ELA 7th-grade exams of 2007 and 2008. Preliminary observations about Book 1: the 2007 test has fewer but longer reading passages (4 total), and the questions have more difficult vocabulary. The 2008 test has more reading passages (5 total), and the questions seem to have simpler vocabulary.

Someone really should proofread (or even just read) these tests for grammar. Question 5 on the 2007 test (http://www.nysedregents.org/testing/elaei/07exams/gr7bk1.pdf) reads:

The cause of the conflict between Alex and Kendall is due to
(A) the rumors they have heard
(B) the schools they attend
(C) the conversation they have
(D) the mind games they play

If the cause is due to something, then that thing must be the cause of the cause.

The cause of my confusion about this question is its poor grammar. The poor grammar is likely due to lack of grammatical knowledge on the part of a test writer or proofreader. Thus, the cause of my confusion about this question is due to....

Gerald Bracey has opinions and a unique style of expressing them. But when I really want to subject my ideas to the toughest cross-examination, I've found Bracey to be invaluable. I don't have the expertise to evaluate the following, and I haven't quoted his more abrasive comments, but I'm offering Bracey's latest EDDRA post for discussion.

Bracey wrote: "Because like the twisted, spun and selected statistics in that golden treasury of misinformation, A Nation At Risk (the article is headlined, "A Nation Still At Risk"), PISA is the only study suited to attempts to induce panic. We have shown improvements on TIMSS and we look very good indeed on PIRLS. No one talks about that. But if PISA is no damn good, why pay ANY attention to it? Here's a short quote from one of the chapters by British economist, S. J. Prais.

"That the U. S., the world's top economic performing country, was found to have schooling attainments that are only middling casts fundamental doubts on the value, and approach of these surveys [such as PISA]."

Right on, S. J. That's the right attitude. It's the test that has to prove itself against a reality check. We take a "believe the worst" attitude. We assume, very foolishly, that PISA is valid and go oh woe is us. We should be saying PISA is a crock (and I have been saying this; I received this book because my writings came to the attention of a chapter author in Munich).

Twenty-five years after A Nation At Risk, the Institute for Management Development and the World Economic Forum both rank the U. S. the most competitive economy in the world. IMD, which has been at this game since 1989, says the U.S. surpassed Japan as #1 in 1994. You might recall that at the time, Japanese kids were acing tests but Japan's economy was tanking and it would remain in the doldrums for almost 15 years. That alone should have dispelled the myth of the link between test scores and a nation's economic health.

All right. I found the 2006-2008 tests online. They are available for public view on the NYSED website. I took a look at the seventh-grade ELA tests, Book 1, for 2007 and 2008. I see some interesting differences.

The 2007 test has four reading passages, on the longer side. The 2008 test has five, a little shorter. The 2007 test has more questions per passage than the 2008 test, as you'd expect.

The vocabulary on the 2007 test seems a little more difficult than on the 2008 test, both in the passages and in the questions. The 2007 test questions use words like "indicates" and "suggests"; the 2008 test questions do not. The 2007 test questions seem to vary their phrasing and vocabulary; the 2008 test questions use somewhat more repetitive language and phrasing (such as "best supports the main idea").

Each of these differences taken in isolation fails to impress; but taken together, they could account for some of the difference in scores. It seems the 2008 test is well matched to a certain kind of test prep that emphasizes certain kinds of questions. Of course these are first impressions, and a much more detailed analysis is needed (but not by me--ha!).

Now, the scoring rubric would also shed light on the situation. A scoring key is available online, but it doesn't tell much. How are the "constructed response" questions scored? And what is the formula for converting from raw score to final score? We need to know this.

Incidentally, some of the questions are poorly written. On the 2007 test, question 5 reads:

5. The cause of the conflict between Alex and Kendall is due to
(a) the rumors they have heard
(b) the schools they attend
(c) the conversations they have
(d) the mind games they play

Now, if the cause of the conflict is due to something, then that thing is the cause of the cause, no?

I am disappointed that neither the writer nor any of the proofreaders caught this. I miss my own mistakes all the time, but this is not a mistake. This is bad grammar and bad logic. (I pointed this out in an earlier comment that didn't make it through the server--maybe I had included too many links. If it appears later, please excuse the redundancy.)

Final conclusions: (a) the 2007 seventh-grade ELA test seems a little harder than the 2008 one; (b) we need all the information, including scoring rubrics; and (c) teachers and scholars should review tests for errors and ambiguities before they go live!


Were you able to find any of the technical reports on the tests? I believe that they have to be posted somewhere. As I understand it, there is something known as equating that goes on. When items are "field tested," usually by including them as uncounted items in an earlier version, I believe that there is a process by which the number of students getting the question right is compared to the number of students getting various other questions right. This somehow goes into the process of determining/adjusting the questions that go into each test, and I believe is also how the raw scores are converted to scale scores. Are there any psychometricians out there who know or would care to comment?


The international tests (PISA, TIMSS, PIRLS) are remarkably consistent.

That is, at the lower grades, our students are only slightly behind the top school systems. But the more time our students spend in our schools, the less they learn compared with their peers in other nations with better school systems. Thus the gap between our students and others in the top school systems grows larger with time.

The PIRLS measures reading literacy at 4th grade. The TIMSS measures math and science at 4th and 8th grade. The PISA measures reading, math, and science for 15-year-olds. So the PISA should show the greatest differences in student learning.

As for Bracey's argument that schools do not matter with regard to economics: schooling has a highly delayed effect on the economy. That is, the children going through our schools now will not be contributing to economic growth for another 15 to 30 years. Arguing that because we are great now we always will be is uncompelling.

Given all your critiques of our schools, why do you assume that other school systems are not doing a better job?

Erin Johnson


Any number of economists, including some with Nobel prizes, have established that education develops human capital and that education is key to a nation's economic success. Those who say it is not are misinformed, perhaps willfully so. It is foolish to say that the schools caused an economic upturn or an economic downturn; obviously they don't. Making such a point is pure polemics, used for propaganda or purely for argument's sake.

Educators have long argued that an investment in education will benefit the nation as a whole in the long run.

Who can doubt it? Bracey does? He's wrong.

Diane Ravitch

I am not remotely qualified to address the details of PISA vs. TIMSS vs. PIRLS.

I don't think you are being fair to Bracey's arguments which are always as subtle as his rhetoric is inflammatory. He argues that the relationships between education and economics are complex and we don't understand them. I think that holds up.
He also argues, persuasively to me, that even with education statistics we don't know enough to support many of the international comparisons.

He argues that the strength and innovation of the American economy is an argument against blanket indictments of our educational systems. He overstates, in my opinion, his argument against the Johns Hopkins nongraduation rates. But I'm impressed with his "bottom line argument." He asks where, in our everyday experience, do we see those hordes of dropouts?

I have minimal contact with suburban schools. IN MY MINIMAL EXPERIENCE, as well as my layperson's reading of the evidence, I'm impressed with Bracey's arguments that most of our schools are doing pretty well.

FROM MY EXPERIENCE, I want us to concentrate on schools that are demonstrably failing, and in the places like my neighborhood where I see more people on the street who are suffering. I want us to focus on rural poverty, inner city neighborhood schools, and the low income minorities who HAD BEEN invisible in suburban schools. I give NCLB props for raising consciousness of poor Black and Brown students who were hidden by the averages before NCLB. But I don't think that we will get much mileage out of berating suburban schools and the NEA. That is doubly true if we are overstating the case against them.

But getting back to the art of interpreting social science from our various real world perspectives, I don't see international data as being very illuminating. We can't even agree on the definition of poverty levels within the US. Until recently, the discussion involved models that would adjust poverty rates in urban centers up, and lower poverty rates in Oklahoma City. (Again, I'm just being upfront and concentrating on what I know.) But today, with $4.00 gas and the announcement that Oklahoma is #1 with 1/3rd of its people not having health care, we would want models that revise our poverty rate up. So how in the world could we factor in the stigma of being Black or Brown in America vs. being Muslim and Turkish in Germany, and how much the social benefits compensate?

Implicit in Bracey's arguments, and mine, is the fear that NCLB-type accountability threatens the goose that lays the golden eggs. Our prosperity is attributable to our democracy's creativity. We won't help poor kids by undermining the vitality of middle class kids' schools.

Again this is anecdotal, but when I taught freshmen at Rutgers in the mid-70s, I was absolutely amazed by the excellence of regular middle class kids. I had never seen anything remotely like it. Now my neighborhood is gentrifying, and all I have to do to see the excellence that came out of OKC's suburbs is to compliment those kids on their tattoos and get into a conversation.

After all, this blog is Bridging Differences. A little of Bracey's rhetoric goes a long way. His analysis, however, deserves respect.


Taking Bracey's same argument vis-a-vis our social programs and economic growth: it could be argued that because the US is doing great economically, our social programs must be sufficient.

Would you agree with that statement?

Erin Johnson

I don't see the value of going to extremes. A job is the best social program. The Free Market has worked best. BUT, the Free Market needs to be balanced by the government and a social safety net. The old New Deal approach was not perfect, and we need to develop new progressive solutions for the 21st century, but Bracey is saying we should stop throwing the babies out with the bath water. We need to stop worshiping neoliberal and neoconservative theories.

The old-fashioned liberalism that produced the greatest social and educational miracles in history must be updated, but let's respect those basic principles that have worked so well.

My precise disagreement with you, and others who have so much faith in their specific proposals, is that in retrospect I'm glad that many of the liberal social engineers were unsuccessful in patterning our imperfect social safety net after the European model. We helped people get fresh drinking water, sewage systems, basic prenatal care, etc., and we created jobs. The government thus helped create hope. Then the dynamism of American democracy took over.

I'm agreeing with Diane that education is necessary for real prosperity. I'm just saying that Bracey agrees also, and that's one reason why he rejects some pessimistic appraisals of our system as a whole.

Ironically, one purpose of public schools was to take children out of the job market so they wouldn't compete with adult workers. One reason for the GI Bill was to give combat vets a quiet place to heal. The great outpouring of educational excellence, relative equality, and the economic energy it released were unintended consequences.

We need to debate which roadmap to reform is best, but let's not get carried away with the beauty of our blueprints. Paul Tillich is relevant here, calling for a great leap of faith into the unknown. I would think that educators would be especially open to that hopefulness.


Considering that I have not offered any, what specific proposals do you disagree with?

One of the reasons that ed reform has failed in the past has been too many people immediately leaping to "solutions" without even figuring out what the problems were in the first place.

What was NCLB trying to solve anyway? Was the problem that our students were not tested enough? Was the problem that the curricula had been too diverse and needed to be narrowed to test prep strategies? Was the problem that teachers are lazy and that we need to just light a fire under them and they would teach better? Lots of "solutions" to non-existent problems.

My contention has been that many of our underlying difficulties in improving our students' learning are elements of our schools/culture that are implicit. That is, we do not see them.

The power of the international data is to get us to look outside the box and see what elements of our schools might be preventing real improvements in student learning.

Multiple, disparate analyses point to the contribution that a quality school system makes to improving student learning. This is not an aspect of education that will be apparent if we just look at within-US data alone.

Erin Johnson


Thanks for the recommendation. I should look at the report. Also, an anonymous poster on the NYC Public School Parents blog observed that "each required percentage for minimum level 2 and level 3 (raw scores) went down across the grades." (See the first comment on the post "NYS Algebra Regents: A National Joke, A Statewide Embarrassment.") That could explain a lot. This information is up somewhere on the NYSED website as well.

Unfortunately I can't dig into it immediately, as I've got four other projects demanding my attention! I'll return to it when I get a chance. In the meantime, I am enjoying other people's insights. I believe we will learn a lot more about these tests and the scores.

Erin asks: "What was NCLB trying to solve anyway? Was the problem that our students were not tested enough? Was the problem that the curricula had been too diverse and needed to be narrowed to test prep strategies? Was the problem that teachers are lazy and that we need to just light a fire under them and they would teach better? Lots of 'solutions' to non-existent problems."

Erin--I am with you that putting the solution before the problem is a poor way to operate. But, in this case, I believe that there has been a defined problem--although it remains far from most of the dialogue.

Somewhere back in the 60s/70s--I am presuming as a part of the War on Poverty--Title I funds were made available to "level the playing field" between the schools where low-income students went and those where non-low-income students went. Like many of the funding streams at that time, there was not a whole lot of accountability (read: restrictions) regarding how funds were to be spent. I recall that during my student teaching, inner-city schools had lots of projectors and things, purchased with Title I funds. At that time, or later, there was a "supplement not supplant" requirement--so that crafty local districts couldn't just use the funds to lower their local contribution. These things are, of course, hard to track--particularly over decades in which many things change. Over time came perhaps more restrictions--i.e., Title I teachers only for kids who are bona fide "poor" kids. But--and particularly within a Republican administration--somebody was bound to ask the question--has the funding made any difference? Well, how would we know? Well--maybe we should have some tests? Could we use NAEP as a universal indicator? Well, that smells a whole lot like a national curriculum and might get into some Constitutional questions, and there might be some push-back from the states. OK--let's let states make up their own tests; we can require them to take NAEP so we know that they're not cheating by coming up with some test that the lamp-post is capable of passing (would they do that?). Well--what if they don't take it seriously? Let's make them publish the results. What if they just make all their kids LD so that they don't have to take the tests? Make them report their scores too.

Is this an ideal system? Of course not. But did it result from a real problem? Yes, it did. It resulted from the widespread denial of equal access to education based on poverty. Did we already have a go at just giving schools the dollars and letting them decide how to best spend them? Yes, we did.



I agree that there is a real problem with our schools and in particular our very large achievement gap. And I have no problem with the federal government asking whether their dollars were well spent or not.

But did NCLB make a difference in our students' learning? Did it make a difference in closing the achievement gap? And if it did not, why?

While the intentions of NCLB were honorable (who could ever disagree with No Child Left Behind?), the details of the law were poorly designed and failed to improve our students' learning.

The implicit rationale behind NCLB was completely wrong. And it was wrong because NCLB jumped to the "accountability" solution before defining what the problems with our schools were and how the proposed solutions were supposed to address them.

Unless we know what problem we are trying to solve, too often we end up with poorly thought out laws that often make our students' education worse!

NCLB was trying to "solve" problems that did not exist. Consequently, we ended up with untenable constraints and "accountability" measures on our schools without any noticeable improvement in our children's learning.

Our schools do not lack in "accountability" but rather suffer from a poor school system that has no way of improving teaching, curricula, or assessment.

There are school systems that are effective at improving their students' learning.

One of the best cases of school improvements comes from Singapore.

Singapore, as you may know, has been well known for their stellar achievements in math. Less well known are their dramatic, stunning improvements in reading as measured by PIRLS.

In 2001, Singapore did only so-so on their 4th grade PIRLS reading scores. Their MOE decided that this was an area they wished to improve. As you may know, the vast majority of their students learn in English, which is a second language for at least 75% of them. Even so, their MOE wanted to improve their students' reading ability.

By revamping their curricula and giving quality support to their teachers, the Singapore MOE was able to dramatically change the reading achievement of their 4th graders, to the extent that on the 2006 PIRLS, Singapore outscored all native English-speaking countries. (One of the provinces in Canada was higher, but not the country as a whole.)

How is it that Singapore is able to enact improvements in learning and our country cannot? Why is it that Singapore is able to improve their students' learning and we fail time after time?

Our school system fails time after time because it has no way to improve teaching, curricula or assessments. And consequently, we have no system to improve our students' learning.

NCLB failed on all levels. It failed to improve our children's learning, and it made our schools a less positive place to work in. And it encouraged the "gaming of the system" by our states through inflated "pass scores" on the tests. Who benefits from these artificial increases in pass scores? Our children don't, because they really didn't learn anything more than they had in the past. Only the officials who escape the AYP improvement provisions do. Not a quality educational system!

We can make our schools better. We just need to define what the problems are first, before we jump to these so-called "solutions".

Erin Johnson


You say here, as you have said many times before, that "Our schools do not lack in 'accountability' but rather suffer from a poor school system that has no way of improving teaching, curricula or assessment."

I agree with you all the way up to "that has no way of...." This is not true. We do have ways of improving the schools, and we have excellent schools serving some of the poorest children in the country (as well as the not-so-poor). We have schools with excellent curricula and instruction. How did we get those? Through parents, teachers, and administrators recognizing and embracing excellence. So it comes down to the question of how to recognize excellence.

We will not recognize excellence unless we clarify our language. Words like "accountability" and "curriculum" lack meaning in public dialogue until they are carefully defined. We must fight what Demiashkevich calls "the witchery of words":

"In the course of history, philosophers have debated whether the more powerful factors in the life of nations are sentiments or ideas. The final solution of this problem is, naturally, still lacking because the final philosophy of history can be written only by one who sees the last day of our world, and he will not have time to write. In the meantime, it appears sufficiently evident that whenever a sentiment is clothed in a deceptively logical idea, a powerful historic factor, often of negative character, can come into existence. This factor may be described as the witchery of words. The possible injurious effects of the influence of this witchery of words, that is to say, of ideas inflated with emotions, can be checked only by a concerted effort of enlightened minds. Therefore, the witchery of words deserves the special attention of an educator in his or her work relative to intellectual training" (Demiashkevich, 1935, pp. 259-260).

Yes! And the "witchery of words" deserves the special attention of those involved in policymaking. What is curriculum? What is assessment? What is accountability? What are standards? What are years?

The last one is a joke--but I do think of Marianne Moore's poem in response to your statement that our schools have no way of improving under the current system. We have many constraints, yes. But with all those limitations, maddening as they may be, true education takes place quietly in classrooms every day. To deny it would be to deny the bird singing in the cage. The poem ends:

"So he who strongly feels,
behaves. The very bird,
grown taller as he sings, steels
his form straight up. Though he is captive,
his mighty singing
says, satisfaction is a lowly
thing, how pure a thing is joy.
This is mortality,
this is eternity."


A few extraordinary individuals who go against the system to create a quality environment for their students do not make a quality school system.

Why do you think that the only way for our schools to improve is for teachers/admin/parents to buck the system?

Would it not be better (and work for more schools) if the system itself were set up to encourage quality instruction, curricula and self improvement?

As for quality curriculum development: those few places in our country that do develop quality curricula do not reach a wide audience, because their materials run against the system's mandates for test-prep strategies.

Erin Johnson


I agree with you. It would be much better if the system were set up for quality curriculum, instruction, and self-improvement. And yes, it is difficult to deal with the system mandates as they stand.

I do not think that "the only way for our schools to improve is for teachers/admin/parents to buck the system." But I do think that we need to break through the "witchery of words" at many levels.

We need to clarify our language and demand clarity from others. Many believe that the schools have a "curriculum," just because those in charge call it a "curriculum." Many take comfort in the thought that the tests are "standards-based," not knowing how vague the standards are.


The implicit assumption in your characterization of school improvement as "parents, teachers, and administrators recognizing and embracing excellence" is that the school system does not already do that. Those very few schools that have developed a quality educational program have done so despite our school system, not because of it.

On an international scale, our schools are mediocre and display a slightly larger achievement gap than the average country's. Not horrible, but not great.

If we are good with that scenario, then we should stop trying to change our schools altogether.

But if we want a better education for our children, how do we get there?

Not an easy question, but the international evidence can give us indications of what other school systems do well and what they do poorly.

Multiple school systems do things worse than we do and I would hardly wish to adopt their ideas.

There are only a few school systems that are able to reduce the achievement gap while promoting quality learning for their top students as well.

Key elements of these top school systems are a focus on student learning and a structure set up for self-improvement, elements that are completely missing in our school system.

A quality school system is not a "nice to have." It is a "must have" if our children's education is to improve.

Individuals can only fight the system so much. Why are we not talking about changing the system instead?

Erin Johnson


You misunderstood my point. I didn't say or mean that change should only occur at the individual or local level. I mean that we can talk of systemic change all we want, but it will stay at the level of talk until it involves the public. For public involvement to be effective, we have to be able to talk to each other--policymaker to parent, teacher to principal, student to data analyst.

So, the first step (or one of the first steps) toward systemic change is to tear down the jargon and build up meaningful language, so that we can address the problems together.


Thanks for the clarification.

I concur that many people use jargon to obfuscate.

But the issues we face in ed reform are not caused by a lack of discussion or by word choice. Clarifying meaning will not take us down the road to educational improvement.

It is our ideas that need improvement, not our words.

Erin Johnson

"Many take comfort in the thought that the tests are "standards-based," not knowing how vague the standards are."

Diana--yes, one does have to be careful. I was recently trying to explore ways of getting a kid with some special needs through his PE requirement. While the special needs are not particularly physical, there are some real discomforts associated with the district requirement. The district, of course, assures me that "their hands are tied" by the standards. Knowing that the PE standards in my state are, as you say, vague--and optional besides--I requested the school's curriculum. Right there on the front page it says that it is aligned to the state standards. I thought that curious, since it was written before the state standards were adopted. Looking through it, it appears that someone thought it would be a good idea if the phys ed curriculum aligned to the English language arts standards.

So--is this about clarity? I don't know. I am not quite enough of a conspiracy theorist to believe that there are groups of people willfully undermining the whole concept of standards--although I am quite sure that there are individuals actively engaged in the fight. Is this a lack of accountability? Is it plain outright stupidity? Or is there some (systemic) malaise that needs to be rooted out in order to move forward and grow?


The opinions expressed in Bridging Differences are strictly those of the author(s) and do not reflect the opinions or endorsement of Editorial Projects in Education, or any of its publications.
