
Lessons for Education Policy Research from the Market for Lemons

Does the market for research in education policymaking work pretty well? For once, eduwonk, Dean Millot, and I all agree - it doesn't. The “market for lemons,” which Jay Greene makes reference to in his most recent post, gives us insight into why.

A common rationale given by economists for intervention in selected markets – for example, insurance markets – is the problem of asymmetric information: a gap between the information available to sellers and the information available to buyers. Using the example of used car markets, Nobel Prize-winning economist George Akerlof lays out this dilemma in his famous paper, “The Market for ‘Lemons’: Quality Uncertainty and the Market Mechanism.”

Imagine you’re selling a used car. You know the problems with your car, but your potential buyers don’t. You may be trying to swindle unsuspecting buyers because you know it has major defects. But your potential buyers aren’t stupid, and they know that they can’t trust you to provide an honest appraisal of your car’s problems.

If buyers don’t decide to avoid this market altogether, they end up betting on averages. They’ll only pay a price that reflects the average frequency of lemons in the used car market. That’s a price that’s too high for a lemon, but too low for a car of good quality. If you’ve got a good car, you know you’re going to get too low a price in the used car market, so you’re likely not to sell there.

When sellers of good cars refuse to sell, lemons make up a growing share of the market. As a result, the people still trying to sell good cars are in even worse shape: the average-based price falls further, they become even less likely to sell there, and the share of lemons continues to rise.

Left unchecked, the end result is market failure. What this means is that there are people who want to buy good cars and people who have them to sell, but the fear of getting stuck with a lemon keeps that trade from happening.
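
The spiral is easier to see with numbers. The sketch below is not from Akerlof's paper or from this post; it is a toy version of the model with made-up parameters (car quality uniformly distributed between 0 and 2,000, buyers valuing any car at 1.5 times its quality), in which buyers can only bid on the average quality of whatever is actually offered for sale.

```python
def simulate_lemons(max_quality=2000.0, buyer_premium=1.5, rounds=12):
    """Toy version of Akerlof's lemons market (illustration only).

    Car quality is uniform on [0, max_quality]. An owner will sell only if
    the going price at least matches the car's quality; a buyer values a car
    at buyer_premium times its quality but cannot observe quality, so buyers
    offer buyer_premium times the *average* quality of the cars actually for
    sale at the current price.
    """
    # Start with buyers pricing the average car on the road overall.
    price = buyer_premium * (max_quality / 2.0)
    for r in range(rounds):
        best_quality_offered = min(price, max_quality)     # owners of better cars hold out
        avg_quality_for_sale = best_quality_offered / 2.0  # mean of the uniform range [0, price]
        new_price = buyer_premium * avg_quality_for_sale   # buyers re-price on the (worse) average
        print(f"round {r:2d}: price ${price:8.2f}, best car for sale worth "
              f"{best_quality_offered:7.2f}, buyers' next offer ${new_price:8.2f}")
        price = new_price


if __name__ == "__main__":
    simulate_lemons()
```

Run it and the price ratchets down round after round, good cars are withdrawn, and eventually only the worst cars remain - the market failure described above. With a large enough buyer premium (2 or more in this toy setup), good cars stay in the market, which is one reason warranties and reputations matter.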

The situation in education policy is analogous, but a little different. Sellers in the research market know what they are selling, but buyers like policymakers, journalists, and superintendents don’t have the expertise to evaluate what they are buying. They don’t differentiate between a paper in the American Economic Review – the best peer-reviewed journal in economics – and a report issued by a pro-voucher thinktank. Unlike buyers in the used car market, these buyers aren’t always suspicious enough, in some cases because they are constantly changing and don’t have the time to build up knowledge about reputations, which help to regulate markets with asymmetric information. Journalists get moved around from beat to beat, and policymakers come and go.

For some parties, there’s no incentive to be suspicious. Stories need to be written, laws need to be pushed through, and it’s not the editor or reporter or legislator who gets stuck on the side of the road when the car sputters out. It’s the public that gets left holding the bag when we rely on potentially flawed research to shape public policy.

Anyone have ideas on how this market could operate better? Or do ideologically driven policymakers, who can find “research” to support just about anything, simply prefer the status quo?
22 Comments

Of course, bad articles sneak through peer review sometimes as well -- even in the AER. Perhaps if researchers had less incentive to publish as many things as quickly as possible, and more incentive to publish only their best work and to ensure that others do the same, we wouldn't have that problem.

Evaluating the quality of research is extremely time-consuming and requires an awful lot of background knowledge. I'd suggest that a panel of experts evaluate all articles that are published and reports that are released and give them an easy-to-comprehend rating, but that doesn't sound feasible.

Eduwonkette is attempting to change the subject. I've never disputed that peer review can help provide additional assurances to readers about quality. The issue is whether research ought to be available to the public even if it has not been peer reviewed. In attacking the release of my most recent study, Eduwonkette seems to be arguing that it is inappropriate to release research without peer review, at least under conditions that she applies only to research whose findings she does not like. If she were going to be consistent, she would have to criticize anyone who releases working papers of their research, which would be almost everyone doing serious research.

What's more, she is still trapped in a contradiction: she can't say that we should analyze the motives of people who release research directly to the public when assessing whether it is appropriate, while she prevents analysis of her own motives because she blogs anonymously. As I have now said several times, either she drops the suggestion that we analyze motives or she drops her role as an anonymous blogger. If she refuses to resolve this contradiction, Ed Week should stop lending her their reputation by hosting her blog. Let her be inconsistent in blogging at the expense of her own anonymous persona and not drain the respectability of Ed Week.

Lastly, the comparison of the market for education policy information and the market for cars comes from my most recent post in our exchange, but she oddly does not credit me here. (See http://jaypgreene.com/2008/07/12/see-were-in-italy/ ) Her position seems to be that we ought to forbid (or at least shun) the sale of used cars without warranties (translation: research without peer review). My argument is that used cars without warranties come at a risk but there are compensating benefits. Similarly, non-peer-reviewed research has its risks but also its benefits.

I have a good illustration of the points you make. As a member of what New York State calls the Board of Education and what Mayor Bloomberg calls the "Panel for Educational Policy," I am one of the thirteen members responsible for approving educational policy for NYC's 1+ million public school students.

When I explained in March to my colleagues on the Panel that research conducted by the Consortium on Chicago School Research was negative on the type of grade retention policy we were asked to approve, the NYC DOE issued a briefing paper dismissing the Consortium work of Jenny Nagaoka and Melissa Roderick: "The Chicago study is significantly flawed in its methodology, including incorrectly comparing students above the benchmark to those below...."

Instead we were pointed to a Manhattan Institute study by Jay Greene on Florida's test-based promotion policy, which showed the FL grade retention policy was effective.

What's the solution? I don't know, but it certainly helped that public hearings were held and many prominent academics appeared, universally criticized the DOE proposal, and pointed to specific research efforts, all of which were more credible than the Manhattan Institute paper. It has also helped that important subjects are repeatedly analyzed. Again, in the Florida case, the most recent research has since shown that any benefits of grade retention, if they did exist, have not persisted over time.

Patrick Sullivan's example actually illustrates the opposite of what he means to prove. The research that Marcus Winters and I did on the effects of test-based promotion in Florida has actually been peer-reviewed and published three times: once in the Fall 2007 issue of Education Finance and Policy, once in the Spring 2006 issue of Education Next, and again in the current issue of Economics of Education Review.

It's true that the research initially appeared as a non-peer-reviewed Manhattan Institute report in December 2004.

The issue here is whether it would have been better to suppress the information we had available in 2004 until it could be peer-reviewed and published a few years later. My position is that the public is better served by having the information available all along, with different levels of confidence in the findings based on the different types of review it had undergone at that time. The opposing view seems to be that researchers should stay out of the policy debate until they have the full warranty. The problem with waiting is that the public is denied information that may be relevant to decisions they have to make at the moment.

And if people want to be consistent in saying that researchers should stay silent until peer review, then they have to oppose all releases of non-peer-reviewed research, including working papers.

I’m arriving a bit late to this conversation, and I want to be careful not to simply repeat what’s already been said by Dean Millot, Sherman Dorn, Eduwonkette or the other posters here.

In Eduwonkette’s original post, she linked to a published review of an earlier report authored by Greene and Winters: “even when researchers working in the policy advocacy industry make sloppy, indefensible errors - for example, when Greene and Winters used data that the Bureau of Labor Statistics warned against using to show that teachers are overpaid - they're not approached with caution by the press when the next report rolls around.”

That review, written by Professor Sean Corcoran (NYU), was part of the Think Tank Review Project, which I co-direct. Our project has reviewed, over the past three years, four different reports from Greene and Winters, offering some praise but also documenting errors: overstated effects, omission of key information, weaknesses in the data, analyses, and research design, unsubstantiated assumptions, poor use of the existing literature, and (in the instance noted by Eduwonkette) inappropriate use of a database. Comparable mistakes have been found in most other think tank reports.

I’ll briefly note here that I see our Project, as well as a comparable project started recently by the What Works Clearinghouse, as being part of a dialogue rather than as some sort of objective final word. In fact, that’s how I also see the blind peer reviews that I receive from journals regarding my own work.

Here are the urls for the four “Think Tank Reviews” of Greene and Winters reports:
http://epicpolicy.org/thinktank/review-effect-of-special
http://epicpolicy.org/thinktank/review-getting-ahead-staying-behind-an-evaluation-floridas-program-end-social-promotion
http://epicpolicy.org/thinktank/review-how-much-are-public-school-teachers-paid
http://epicpolicy.org/thinktank/review-getting-farther-ahead-staying-behind-a-second-year-evaluation-floridas-policy-end-s

These reviews should not necessarily be taken as a criticism of the authors’ scholarship. If anything, it’s a criticism of the publication process used by think tanks. Most of us who publish our research have been humbled when our mistakes are pointed out in the peer review process, but we’ve also been relieved that those mistakes were identified before actual publication.

I’m also sympathetic to Jay Greene’s timeliness argument; the peer review and pre-publication process for many academic journals can take literally years. But think tanks could set up their own, streamlined peer review process, as I believe has been done at the Hoover Institution’s Education Next. My own policy center created, about three years ago, such a process for the policy briefs that we release, and I’ve never regretted the decision. Even our think tank reviews go through an abbreviated peer-review process.

The other part of this conversation -- concerning the role of the press in reporting on different studies -- is also of great interest to me. But that’ll have to wait for another time. I have a peer review here waiting for my attention…

Sellers in the research market know what they are selling, but buyers like policymakers, journalists, and superintendents don’t have the expertise to evaluate what they are buying.

I don't think the problem is just that the buyers don't have the expertise; journalists may not, but policy makers certainly could. I think the problem is that many of the buyers don't have a great stake in the long-term quality of the car. For journalists, the incentive is to sell papers -- or at least to write attention-grabbing articles -- and the quality of the research isn't actually all that relevant to whether people will find the article interesting.

But policy makers, one would think, would want to get things right, and would have quite a bit of incentive to decide which sources were reliable and which were not.

However, my sense is that for most policy makers -- particularly at the Congressional and State Legislature level -- education is only a very small piece of the picture, and, when push comes to shove, the people making the decisions are ideology driven, not data driven.

So, I think one of the problems for this market is that there are a lot of "buyers" who are quite content with low quality research -- the cute-but-flakey convertible that looks good in the driveway is enough to catch the eye of newspaper readers and voters -- and a smaller (and possibly shrinking) market for higher quality research.

What I see -- from a low-level policy maker's point of view -- is a vicious cycle of over-simplifying. Education reform becomes reduced to sound bites like "value added" and "pay-for-performance" and then "value added" gets reduced yet again to "increased test scores."

And in the end, the narrowing of the curriculum discussion isn't "What is the right balance between teaching the basics and teaching other subjects?" but has gotten reduced to something very close to "Is NCLB a good thing?"


Corey - I'm much more worried about thinktank reports than academic ones because they take up astronomically more media space each year, and are often used to justify public policies by policymakers. But I don't disagree that academic research is imperfect. One idea that I had was that an independent panel of scholars could produce something like "consumer reports" for thinktanks, rating each report on its accuracy in reviewing the literature, the quality of the data, the rigor of the methods, and the extent to which the policy recommendations are supported by its findings, in addition to other dimensions that I'll count on readers to tease out.
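
For what it's worth, here is one way such a "consumer reports" scorecard might be structured. This is a hypothetical sketch, not an existing tool: the four dimensions are the ones named in the paragraph above, while the ReportScorecard class, the 1-5 scale, and the unweighted average are all invented for illustration.

```python
from dataclasses import dataclass, field

# Hypothetical rubric. The four dimensions come from the proposal above; the
# 1-5 scale and the unweighted average are arbitrary choices for illustration.
RUBRIC_DIMENSIONS = (
    "accuracy in reviewing the literature",
    "quality of the data",
    "rigor of the methods",
    "support for the policy recommendations",
)


@dataclass
class ReportScorecard:
    title: str
    publisher: str
    scores: dict = field(default_factory=dict)  # dimension -> score, 1 (weak) to 5 (strong)

    def rate(self, dimension: str, score: int) -> None:
        if dimension not in RUBRIC_DIMENSIONS:
            raise ValueError(f"unknown dimension: {dimension}")
        if not 1 <= score <= 5:
            raise ValueError("scores run from 1 (weak) to 5 (strong)")
        self.scores[dimension] = score

    def overall(self) -> float:
        """Unweighted average over whatever dimensions have been rated so far."""
        return sum(self.scores.values()) / len(self.scores) if self.scores else 0.0
```

A real panel would of course have to settle on weights, reviewer selection, and how to handle disagreement; the point is only that the dimensions above map naturally onto a structured report card.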

But I think Rachel provides a clue as to why that wouldn't work - lots of parties are quite happy with the current setup, where there is readily available “research” to support any ideological whim they may have.

Patrick - Thanks for sharing the PEP angle.

Jay - This post was not a response to your most recent post, as I think that limiting the discussion to a back and forth between us fails to address the larger issue, one that the blogosphere has discussed many times this year. I have provided a manifesto-length discussion about the "influence spectrum" and the issue of anonymity, and it continues to floor me that you believe that blogging and research serve the same functions and have the same impacts. Do you think it is likely that policymakers will start quoting blog posts in legislative reauthorizations, or that Supreme Court decisions will soon be using posts released in the educational blogosphere as evidence?

In your used car example, you describe a single person buying a car without acknowledging the complex market dynamics that Akerlof's lemons model makes so clear. You neglect the long-term impact of those dynamics on the quality of the goods that are bought and sold in the market.

And regarding working papers: working papers and thinktank reports are released for entirely different functions - they could not be more different. Scholars releasing working papers through NBER, IDEAS, SSRN, or their own websites don't have publicized "release dates" for their working papers. Studies are not blasted to the media by a well-funded PR department. Rarely do you see working papers on education featured in the papers, or even in the "research roundup" section of Ed Week, where thinktank reports almost always get a few paragraphs - that is, if they don't get a full-length article. Instead, working papers are passively posted for purposes of dissemination to other scholars. In many cases, the advance posting gives other academics the opportunity to provide peer review before the article is submitted to a journal for publication.

Has Jesse Rothstein's teacher effects paper, “Do Value-Added Models Add Value?,” released in the fall, which has tremendous implications for current public policy, been covered in the New York Sun, the St. Petersburg Times, Education Week, or the Palm Beach Post? As far as I'm aware, it hasn't been covered anywhere, and it's been available since November. In short, the probability of a working paper being covered in the press is substantially less than the probability of a thinktank report being picked up widely.

The comparison between the two is apples to elephants.

Eduwonkette -- Let's make this very concrete. Was it inappropriate for Marcus Winters and me to release our social promotion findings in 2004 without peer review, or should we have waited until they had been peer-reviewed and published (in various forms) in 2006, 2007, and again in 2008? If the appropriate thing is to wait, would interest groups, editorial boards, and bloggers similarly hold their tongues until the additional evidence came in?

Would it have been OK to release in 2004 as long as we tried to make it obscure enough so that people were less likely to find it? What if interest groups, bloggers, etc... found our obscure findings and promoted them (as has happened with Jesse Rothstein's paper)? Would policymakers hold off on decisions that might have come out differently if they had the suppressed information?

And in saying "working papers and thinktank reports are released for entirely different functions" you are repeating your call for an analysis of motives. You've said that think tanks want to influence policy (bad motive) while academics are trying to advance knowledge with each other (good motive). But if academics are serving the public good, shouldn't they ultimately want to influence policy? I am an academic who also releases working papers through a think tank. Does that make my motives good or bad? I think all of this analysis of motives is silly when the real issue is the truth of claims, not why people are making those claims. Calling for an analysis of motives is especially silly for someone who is trying to influence people anonymously. The fact that you are trying to influence people through a blog does not give you a free pass from having to be consistent on this.

I'm not sure what the obsession is with your true identity, or how that impacts the reading of your blog. I agree with your earlier post that blogs serve a very different purpose than research papers, peer reviewed or not. While you provide information to people, it is up to them to read your work with a critical eye - knowing your name would not allow me to do that any differently than I do now. Regardless of who you really are, I can still consider EdWeek's support of your work, as well as the audience you are writing for and the point of view you represent.

In terms of improving the situation with educational policy making...I'm not sure if this will help or hurt, or really address the issue at hand at all. But I am a huge advocate of having a larger range of voices at the table. Not necessarily more people, but more different kinds of people. Especially people in the field who have a hand on the pulse of the problems in school. I think it must be very difficult to read any sort of research critically if you are completely removed from the realities of the situation you are trying to alter.

I want to thank Eduwonkette for adding a mention of my earlier use of the market for cars in her post. My complaint that she had not acknowledged it was accurate at the time I commented, but it is no longer accurate.

My tip on becoming a better consumer of research and research-assisted punditry is to restore two old-school approaches. The thing I miss the most is an old-fashioned introductory paragraph with a clear hypothesis that places the study within the context of the debate and indicates what issues are and are not being tackled. Secondly, even though it drives my wife crazy, I print out the entire studies, then I read them under the oak tree.

The basic problem is the explosion of knowledge.

Isn't it interesting that we haven't mentioned that K-12 education faces the same problem? The last time I read a study on the issue, the estimate was that a public school student could master all of the Standards - if no time was lost to absences, tests, extracurricular activity, disciplinary disruptions, etc. - by the age of 24.

We've got an explosion of information which is great. We need to cultivate more wisdom. To do so, we need to cultivate better conversations.

Bringing this back to research and policy, we need to remember what our focus is. The accountability hawks who try to produce teacher-proof policies don't seem to realize that their real beef is with the central offices. But shifting the blame would do no good either. It drives me crazy when administrators announce a grand new reform without taking the time to put pencil to paper and estimate how many hours in the day it would require, and where those hours would come from. Similarly, we dump promising programs, like SES tutoring, on administrators in the middle of the fall, and don't ask when they would have the time to design an implementation program. We wouldn't tell the people who are building a skyscraper that we don't have time to consult an engineer, but here is the money and we need you to build some new floors. Figure out the architecture in your spare time.

Weirdly, or maybe predictably, this reminds me of the New Yorker article on Obama. Count the minutes, hours, days, and years of his career, and he clearly hasn't had time to solve problems. But I'm impressed that he "went to school" on politics in a place where politics is so personal. Just as we need to remember that schooling is a people process, we need to remember that education politics is also.

This is an important discussion. That said, I'm not sure why Jay Greene is devoting so much time and scarce mental energy to this issue.

My (cynical) explanation is that he depends so heavily on this process (big media splashes where rigor and peer review are of only second-order importance) that he's willing to do everything to defend it. Even howl for an anonymous blogger to be kicked off the web.

If he is confident that his research can stand up to intense scrutiny, the timing issue should not be that big a deal. Sooner is always better than later when it comes to timely questions, but getting it right is arguably more important.

Greene also appears to think that eduwonkette's posts have been mostly about motives. That's not my reading of her at all. She really appears to be more concerned with the process, which in part is shaped by incentives.

Of course scholars in this field want to influence public policy. But they also care at least as much about getting it right (in part because they have strong long-run incentives to do so).

Imagine if, in another field, scholars rushed their results to the media, bypassing all peer review. We'd have pharmaceutical think tanks splashing their new cures for cancer to the papers. Astronomical think tanks would release glossy reports about their latest evidence for water on Pluto. And so on.

The "education sciences" have a long way to go.

Eduwonkette: "Anyone have ideas on how this market could operate better? Or do ideologically driven policymakers, who can find 'research' to support just about anything, simply prefer the status quo?"

1) "Ideological" is an uncomplimentary way to say "systematic". The antonym is "scatterbrained".

2) Which process?
a) The process by which any individual decides whom to trust,
b) The process by which government actors (legislators), decide what resources (money and student time) to dedicate to which institutions, or
c) The process by which decision-makers within those institutions decide between curricula and methods of instruction?

a) For individuals, I suggest you discount the words of people with a record of failure. Since Professors of Education foisted Whole Language methods of reading instruction, "discovery" methods of math instruction, and numerous other lunatic fads on the State (that is, government generally) school system, I discount their voice wherever it appears (blogs, think tank, peer-reviewed journals) unless I am familiar with that particular professor's work.

b) It makes no more sense for people who are not in government to argue about what State (government, generally) actors should do than it makes for the swimming survivors of a mid-ocean shipwreck to argue about what sharks should eat.

You want a peer reviewed journal? Here...

Eduardo Zambrano
"Formal Models of Authority: Introduction and Political Economy Applications"
Rationality and Society, May 1999; 11: 115 - 138.

"Aside from the important issue of how it is that a ruler may economize on communication, contracting and coercion costs, this leads to an interpretation of the state that cannot be contractarian in nature: citizens would not empower a ruler to solve collective action problems in any of the models discussed, for the ruler would always be redundant and costly. The results support a view of the state that is eminently predatory, (the ? MK.) case in which whether the collective actions problems are solved by the state or not depends on upon whether this is consistent with the objectives and opportunities of those with the (natural) monopoly of violence in society. This conclusion is also reached in a model of a predatory state by Moselle and Polak (1997). How the theory of economic policy changes in light of this interpretation is an important question left for further work."

c) If a dispute over education policy reflects a difference in taste, numerous small school districts or a competitive market in education services allows consumers (parents, taxpayers) and employees to find a supplier who accommodates their taste, while a single State-wide school district must create unhappy losers. If a dispute over education policy turns on an empirical question, where "What works?" is a matter of fact, numerous small school districts or a competitive market in education services will supply more information than a single State-wide school district. A State-monopoly school system is like an experiment with one treatment and no controls, a retarded experimental design.

http://econ-www.mit.edu/files/24

Mimi:

"I am a huge advocate of having a larger range of voices at the table. Not necessarily more people, but more different kinds of people. Especially people in the field who have a hand on the pulse of the problems in school. I think it must be very difficult to read any sort of research critically if you are completely removed from the realities of the situation you are trying to alter."

I just thought it was worth reprinting that very cogent thought. While I find this snarling competition for intellectual credibility fascinating, the usefulness of research in shaping policy-making would be greatly enhanced by asking those to whom policy is applied to be part of the process.

Greene says, for example--several times--that Florida's policy of giving Fs (threats and shame) produces gains in reading and math, a fairly breathtaking assertion, and one with which Florida teachers who have taken the challenge of improving instruction or curriculum in high-needs schools might take exception.

But there are few teachers in this discussion, aside from Mimi and Elton. Why is that?

More on this over at the Education Policy Blog:

http://tinyurl.com/6cy5l5

We'd have pharmaceutical think tanks splashing their new cures for cancer to the papers. Astronomical think tanks would release glossy reports about their latest evidence for water on Pluto. And so on.

Some people might argue that pharmaceutical companies do something close to that in their marketing... But the astronomy analogy points toward another piece of the puzzle... Astronomers don't send out press releases on non-peer-reviewed work because there isn't much incentive for it -- and funding agencies aren't particularly interested in funding research that's published in non-peer-reviewed journals. And it's an interesting question why the situation is different for educational research.

There's peer review and then there's peer review. The quality of research in education is so low in general, it becomes hard to distinguish the difference, except perhaps for screening out the worst of the worst.

Research (and peer review) in pharmacology must be at a different level.

And those in the sciences who occupy the same range of the ideological spectrum as the Manhattan Institute forgo peer review altogether - peer review in biology does what it's supposed to do, and they'd never clear it.

Unfortunately, were peer review that rigorous in educational research, it would probably reduce the amount published by an order of magnitude.

Nancy,

Thanks for recognizing that teachers should have an important voice in these matters, unlike Malcolm, who arrogantly thinks otherwise. If his second point were the rule, then no one in the public sector would be allowed to think about, much less participate in, the discussions on anything the government does.

Malcolm spends so much time with his dictionary and voucher-study tomes that his arguments come across with little or no common sense.

If Malcolm is into quotes, why not look into some quotes from Jay Greene about teachers? It's questionable and ill-conceived notions such as these that have teachers wary of the so-called expertise he proclaims:

1) "A typical teacher, unfortunately, will do the same thing for the rest of his or her career that he or she did the first year of teaching. The same lessons, the same approaches forever. No matter what kind of school they teach in, no matter what kinds of kids. They're comfortable with that. It works for them as they see it."

2) “One reason for the prominence of the underpaid-teacher belief is that people often fail to account for the relatively low number of hours that teachers work.”

3) “Apparently the end of the course occurs 6 weeks before school breaks for the summer. After the tests are done academic work grinds to a halt. Instead, academic content is increasingly replaced with field days, watching movies in school, parties, etc… as the end of the year approaches."

To those of us who have actually taught in the classroom, the ridiculous nature of his statements is obvious. You might say that they are just “myths.”

The myths are based on something though.

1) "A typical teacher, unfortunately, will do the same thing for the rest of his or her career that he or she did the first year of teaching. The same lessons, the same approaches forever. No matter what kind of school they teach in, no matter what kinds of kids. They're comfortable with that. It works for them as they see it."

True, if the "thing" works.

2) “One reason for the prominence of the underpaid-teacher belief is that people often fail to account for the relatively low number of hours that teachers work.”

If you look at the summer vacation, it does seem that way. What he neglects to see is that teachers work long hours after school and on weekends, and often take expensive, frequently useless courses during the summer to fulfill credential requirements.

3) “Apparently the end of the course occurs 6 weeks before school breaks for the summer. After the tests are done academic work grinds to a halt. Instead, academic content is increasingly replaced with field days, watching movies in school, parties, etc… as the end of the year approaches."

When else are you going to have a celebration of completion? Kids need joy, and diversity of input, and parties, and field trips!

Teachers should not be afraid to counter these myths--reality-based myths--with the reality that these myths use as their seed.

Great input "tft" but I wish to add some more to the last point. In middle schools and high schools, there is practically no drop off after a test since the final and semester exams still loom over the horizon. State departments of education are vigilant about checking schools to see that they adhere to the curriculuum. Also, local school districts may require further data regarding accomplishments and skills for their own needs and records.

I'm willing to bet that just about any teacher reading Greene's quotes above will know immediately that he has no experience in teaching at the public school level. All education researchers, in my humble opinion, should spend a year or two teaching in public schools. It might be the best education they ever got and they might even learn how to make a real difference in the lives of students.

I agree completely, elton! How about principals learn some business skills so they can spend less time hiring experts to help them designate what little discretionary spending they have.

Oh, and howsabout we fund some resources that will help kids?

I think, as usual, carts are being put before horses (to torture an old saying).

Eduwonkette,

I realize I am coming to this conversation late (better late than never, right?), but I felt that this was very relevant.

Your idea of having an "independent panel of scholars" create a "consumer reports" for thinktanks already exists. I'm surprised no one has posted it yet.

The Education Policy Research Unit (http://epsl.asu.edu/epru/thinktankreview.htm)

The Think Tank Review Project provides the public, policy makers, and the press with timely, academically sound reviews of selected think-tank publications. The project is a collaborative effort of the Education Policy Research Unit (EPRU) at Arizona State University and the Education and the Public Interest Center (EPIC) at the University of Colorado (from their website).

Comments are now closed for this post.
