
The Influence Spectrum: From Blogging to Academic Research

Let me use the occasion of Jay Greene's response to my earlier post to explain the differences between blogging and research, as I see them. Greene makes no distinction between the two activities, and is, as a result, skeptical about my anonymity. As he explained to me off-blog, "The same basic principles apply. They are both part of the spectrum of how people communicate ideas that may be related to policy decisions."

Blogs provide opinions, commentary, and analysis. Blogs are a place to discuss ideas, consider other points of view, and hear what a community of readers has to say. Blogging is great for testing out ideas, reflecting on the news of the day, and discussing and disseminating existing research. But bloggers don't do academic research. Academic research, in contrast, is subject to norms about method. The central norm in academic research is subjecting your work to the scrutiny of a critical community of scholars.

Undoubtedly, blogs, thinktank research, and academic research are "part of the spectrum of how people communicate ideas that may be related to policy decisions." But different levels of confidence should be assigned to different parts of that spectrum in educational policymaking. Below, from least to most credible:

1) Blogs: Blogging is free-form exchange, and the blogger is judged by the quality of his or her arguments and content by readers who seek out the blogger. Blogs are grassroots online communities where everyone, irrespective of their identity, is entitled to an opinion.

2) Thinktank research: Thinktank research is generally released without external review. The questions that are asked and the policy recommendations that are put forth are usually – but not always – tied to the stated objectives of the organization, which are sometimes ideological in nature. Thinktanks are well-funded and endowed with PR departments that publicize studies to policymakers and the media. As a result, thinktank research on education receives more attention than blogs and academic research in the media.

It is important to note that thinktanks vary significantly in the extent to which they internally and externally review work before releasing it. They also vary in the extent to which they make their methods transparent enough that their analyses can be evaluated and replicated. Some thinktanks are more judicious than others in describing the policy implications of their work and in resisting the urge to spin their findings. And some thinktanks don't appear to sanction researchers whose studies are consistently discovered to be wrong. I imagine that other thinktanks would treat such a violation differently, because at the end of the day, these mistakes reflect poorly on the institution.

3) Academic research: Academic research is intended to contribute to a body of scholarly knowledge, and is subject to thorough peer review and to norms of scholarly inquiry. Though it is often policy relevant, the primary audience for this research is a community of scholars, who judge the research not for its policy contributions but its innovativeness, rigor, and contributions to a body of literature.

But peer reviewers are human, too, and they come with their own set of biases; the idea of a search for truth immune to ideology is a fantasy. Academic research that is imperfect does get published. And people do make mistakes in their papers, both innocent and intentional. That's why one of the norms of scholarly inquiry is to replicate studies and to take caution in declaring that the case is closed on any issue. This can be thorny, because academic research communities are small and dense. Everyone knows everyone else, and the scholars that take on prominent colleagues, even when they are clearly wrong, can pay a handsome price. People also have personal relationships with mentors and colleagues, and sometimes we don’t challenge each other as much as we should.

For all of these reasons, peer review is double-blind. In practice, papers are often presented at conferences before they are submitted to journals; on more than one occasion, I have reviewed papers by scholars who have sat on the same conference panels that I have. But the academics whose work is under review do not know the identity of their reviewers (except when reviewers cry foul that their work wasn't cited, and suggest references that give away their identity!), and this provides a countervailing force against the social dynamics that sometimes cloud our judgment. Nor is any single study treated as a "killer study" - Jeff Henig has advocated for the same standard in the policymaking arena. Rather, individual studies are put in the context of a larger evidence base.

To be sure, I, and some other bloggers, will occasionally present and analyze data in our postings, with the goal of persuading readers of a point of view. When I do so, I provide links to the data, which are generally in the public domain. When these data are not publicly available, I have always extended an offer to my readers to request data from me, which they have often done. When these posts involve more than making figures using publicly available numbers, I also provide detail about what I've done, which is simple descriptive analysis that a competent Excel user can replicate.

But there's no pretense that this is peer-reviewed academic work. And let's be realistic: an anonymous blogger isn't shaping public policy. In equating the two, Greene either overstates the influence of this blog on education policy, or diminishes the contributions of his own work. Of course, if my postings lead readers to think differently about research and policy matters, then those readers may have an influence. I see this as a very different dynamic than with thinktank research, where, because the objective is to influence public policy directly through research, the researchers have a greater obligation to their audience to vet what they've done before taking it public.

Finally, to Greene's point that my anonymity makes it impossible to "consider the source" - is it likely that Education Week would host an anonymous blog by someone working for or funded by "special interests" in education? Or that they would allow me to critique policymakers with whom I have some conflict of interest? The editors at Education Week know who I am, and decided to host this blog with full knowledge of my professional biography. I'm quite proud of - and grateful for - the community that we've built here, which has challenged and refined my own thinking on a wide variety of topics. At the end of the day, potential readers can decide for themselves whether this blog is worth reading, can tell me when they think I'm wrong (and you often do), and can expect me to listen, and even modify my positions in response.

Update: Be sure to check out Dean Millot's exceptional post related to this issue, The Letter From: "In short, I see no problem with research becoming public with little or no review” (I) , as well as Sherman Dorn's from earlier in the week, Can reporters raise their game in writing about education research? eduwonk also weighs in here: Politics of Information.
11 Comments

Double-blind reviewing? Really? You state "But the academics whose work is under review do not know the identity of their reviewers." How often do you think this really happens? Sure, there are no names on the papers or the reviews, but I bet a decent survey would reveal that most peer review isn't blind at all... I think it may be blind for about the first 10 minutes, or for graduate students only, but after a year in academia we all know each other...

Hi Sara,

You are a better sleuth than I am - I'm never confident of my ability to guess all of the reviewers correctly. Surely you know who's part of the possible set of reviewers, and you know who is more or less likely to be sympathetic to a paper. But unless colleagues have a distinctive writing style or self-promote their own work as I describe above (e.g. "please cite me (2002)"), it's hard to be totally sure.

Of course, I've had people tell me after the fact that they reviewed a paper of mine. I did tell someone once that I reviewed his paper and will never do that again. It was a pretty tense conversation about how much extra analysis he had to do because of that review (even though I thought it was a really good paper and recommended that the editor give it a positive R&R).

I found a local example of eduwonk's admonishment that newspaper editors and reporters are poorly equipped to understand, much less challenge, the statistical lingo of research.

I e-mailed him your earlier entry on Greene in which Anonymous Peer Review wrote a mini-review of Greene's work. Although he is acutely interested in following educational topics for the community and is quite conversant about educational affairs, his reaction to Anonymous Peer Review's breakdown of concerns about Greene's methodology was "A lot of it is over my head."
To me, Greene lost this argument the moment he tried to discredit you.

Hi Elton,

Thanks for weighing in. Dean Millot does a great job in his post of explaining that quantitative research has gotten technical enough that it is difficult for anyone but the small set of people using these methods to evaluate; I thought eduwonk did a nice job of making this point as well.

As I wrote in the other post, "The more complex the methods are, the more there is a need for peer review because it becomes more difficult to eyeball the problems from the sidelines." I initially had argued that if the methods were transparent, reported in full, and simple (i.e. reporting mean scores), this could be less of a problem. But Dean reminds us that the average consumer of this work is not in a position to evaluate even simple technical work, which makes the peer review process all the more important.

Let me clarify a statement I made. The person I e-mailed Anonymous Peer Review's entry to was a local news editor.
As a retired public school teacher, I am very concerned about the quality of research being done in the field of education. The battle lines seem to be public vs. private, public vs. charter, public vs. voucher, etc.
What appears to be lost is the real answers schools need: more effective classroom techniques and how to increase parental/community involvement.
Greene's work is a classic example of Churchill's famous line about someone using facts like a drunk uses a lamp post: "More for support than for illumination."
Kudos for helping us to separate the wheat from the chaff.

J.P. Greene makes a decent point about hiding behind anonymity as destructive of credibility. He puts his name to his work, unlike some bloggers I can (not) name.

Whether individuals, think tanks, and academic journals are credible depends on the integrity of the people who do the work. The three-tier ranking of sources doesn't work. For example, "Social Text", the journal which published Alan Sokal's spoof "Transgressing the Boundaries: Towards a Transformative Hermeneutics of Quantum Gravity", is a "peer-reviewed" journal whose editors claim academic affiliations. There are peer-reviewed journals of Marxist economics and of Austrian economics. They can't both be "credible".

What academic journal of Education compares in quality to the Brookings publications "Politics, Markets, and America's Schools" and "Vouchers and the Provision of Public Services"?

Eduwonkette writes: "Academic research is intended to contribute to a body of scholarly knowledge, and is subject to thorough peer review and to norms of scholarly inquiry."

Gulp. "Credulous" and "credible" share a root. People do not become better-informed or more altruistic when they get tenure. What academics "intend" with their research is a matter of inference. I recommend Charles Sykes' "Profscam", William Broad's "Betrayers of the Truth", and Martin Anderson's "Impostors in the Temple".

Eduwonkette's tri-partite partition of information sources doesn't work: many bloggers and think tank scholars are academics and some think tanks are university-affiliated (e.g. Hoover). Further, significant sources lie outside these channels, unless you want to call the World Bank, OECD, and NCES "think tanks".

Malcolm -
You're showing your lack of familiarity with academic journals (and in particular, journals in education). First of all, "Social Text" is a really poor example to use here. As its webpage states, it is "at the forefront of cultural theory," "breaking new ground in the debates about postcolonialism, postmodernism, and popular culture." Noone is looking to this journal for empirical evidence on anything. Nor is it used to inform policy. Even its journal description sounds made up.

Second, there is a hierarchy of journals in any field. Anyone can start a peer-reviewed publication. Thus we shouldn't be surprised if spoof or fraudulent articles get through somewhere. In education, as in most fields (and perhaps disproportionately so), there are dozens of bad journals. But take a look at most any issue of Educational Evaluation and Policy Analysis (EEPA) or the Journal of Educational and Behavioral Statistics (JEBS)--the quality is generally very high.

Doug, it sounds like what you are saying is that "peer review" alone is not an accurate safeguard on what qualifies as reliable research.

It's people who assert "academic research is intended to contribute to a body of scholarly knowledge, and is subject to thorough peer review and to norms of scholarly inquiry" who show a "lack of familiarity with academic journals (and in particular, journals in education)." Even in the hardest of hard sciences, physics, a lot of research is trivial pounding of sand, adding tenth-decimal-point accuracy to something that only matters to the third decimal point. For years, physics texts gave false values of the speed of light (false in the sense that they claimed an accuracy far beyond the actual value; that is, they gave values to n places and went awry after n-5 places). Once some respected lab published a value, all the other researchers who obtained values significantly different questioned their own results and omitted extreme (more correct) results. Paleontologists recently (last 20 years) had to revise their time series on the Indian subcontinent because many reports in peer-reviewed journals were fabricated by one "scholar". A mathematician friend of mine told me that the American Mathematical Society once tried double-blind reviewing (reviewers didn't know the author or his institutional affiliation). They had to stop. The problem wasn't that bad work was getting in but that the fingernail clippings of Mr. Big at Prestige U. weren't getting their due respect.

I like some academic journals: the American Economic Review, the Journal of Political Economy, Comparative Education, etc. How could I not like Educational Review, after this: "...It is almost certainly more damaging for children to be in school than to be out of it. Children whose days are spent herding animals rather than sitting in a classroom at least develop skills of problem solving and independence while the supposedly luckier ones in school are stunted in their mental, physical, and emotional development by being rendered passive, and by having to spend hours each day in a crowded classroom under the control of an adult who punishes them for any normal level of activity such as moving or speaking." Quoted in Clive Harber, "Schooling as Violence", p. 10, Educational Review, V. 54, #1.

Anonymity and credibility is a moot point. As the poet said, "A rose by any other name would smell just the same."
Obtuse and jumbled references from the encyclopedia of unusual research points do not make a coherent argument.
What does stand out since this issue first arose is that not once has Greene or anyone addressed the concerns raised by Anonymous Peer Review about the quality or credibility of Greene's claims in his recent paper.
APR dissects with precision the flaws in Greene's paper. He points out questionable assumptions, improper data manipulation, and missing information pertinent to the report.
Eduwonkette has taken the time to answer her critics and provide a point by point response to her critics. No one has done the same with Anonymous Peer Review's critique.
The only thing accomplished by those disagreeing with Eduwonkette is to muddy the waters and distract us from seeing real answers to real questions about Greene's own paper.

If one takes the time to look into the Clive Harber paper and put Malcolm's selected quote into context, then one would understand that Harber is discussing third-world schooling under oppressive, totalitarian, or terrorist regimes.
Does the image provoke nostalgic memories on Malcolm's part?
