
Carnegie Unveils Plans for 'Problem-Based' Research Networks


Anthony J. Bryk, the president of the Carnegie Foundation for the Advancement of Teaching, has drawn a lot of attention in Washington policy circles over the past year with his call for a "design-educational engineering-development" approach to research and development. The basic idea, one that a number of experts echo, is to design educational solutions, test them, tinker, test them again more widely and in different contexts, scale up, and continue to test. A good analogy might be the process that software engineers use to solve problems and develop new versions of their products. (To read more about the growing interest in this approach to research, see this EdWeek article I wrote in January. This article by my colleague Catherine Gewertz describes some of the counterpoint to that movement.)

This idea differs from the approach to education research that has been in vogue among policymakers over the past eight years or so. The push until now has been to transform education research into an "evidence-based" field much like medicine, testing specific interventions through rigorous experiments designed to answer a single question: Does it work? If it didn't, and most didn't seem to, researchers moved on to the next project or program.

But, beyond the fundamental engineering orientation, the details of Bryk's ideas were sketchy. The foundation helped to fill in some of the blanks yesterday when it unveiled plans for the first of its projects to reflect the new approach, which it calls D-EE-D for short.

Along with the Bill & Melinda Gates Foundation and the William & Flora Hewlett Foundation, Carnegie is investing $2.5 million to form a research network focused around a single educational problem. The problem the foundation wants to solve is how to improve the success rates of community college students in remedial, or developmental, math courses. Taken by 60 percent of community college students, the noncredit courses are designed for students whose academic skills are not up to par. Students have to take and pass them before they can enroll in the courses that count toward their degrees. The problem is that many students get stuck there, with some taking as many as four or five courses before giving up on college altogether.

For the alpha phase of the project, which will take place over the next year and a half, Carnegie has enlisted some big research guns from around the country to map out the terrain, develop some promising practices, and begin to test them in one or two community colleges. (To find out who's involved, check out this press release posted online yesterday.)

The researchers will work from the beginning with designers, practitioners, and institutional leaders, and the design teams will use technology developed through the open educational resources movement to share data among themselves and eventually make their products available to the general public for free.

By the third year of the project, Mr. Bryk hopes to be testing and refining promising innovations in 20 to 30 community colleges in two states. The goal is to eventually extract theories from all the data on how the innovations work, when, and in what contexts, so that they can be reliably used, or adapted for use, in a varying array of settings.

Don't rely on my description, though. Mr. Bryk outlines the basic principles for his approach in a new "Message from the President" on the foundation's Web site.

Over time, Mr. Bryk said, the foundation hopes to seed other research networks focused around single, "high-leverage" educational problems. I have a suggestion for one: How about designing a name that rolls off the tongue a little more smoothly than "design-educational engineering-development"?

Comments

The past eight years of pushing by education policy makers for results have shown that educational interventions are extremely similar to medicines: whether they work or not depends first on whether the patient takes the medicine as prescribed and, finally, on whether the prescription will work for the individual malady in question. Suffice it to say that policy makers are not educators and educators are not doctors; but when student success is the issue at hand, educators are the only professionals qualified to diagnose and prescribe or proscribe.

VT--I do see some rather glaring differences, as well as some overlooked similarities. While we tend to see medicine as primarily a responsive model (diagnose, prescribe, evaluate), the field of public health and preventive medicine is a much larger environment that must be included. Inoculations against disease are regularly provided--not on the individual advice of individual physicians, but based on agreed-upon best practices, sometimes codified into law. For those whose health care is government sponsored (for instance those on Medicaid), there may be additional required practices. Pediatricians treating children on Medicaid, for instance, are required to screen patients for lead exposure.

Further, the realm of public health is responsible for such things as rapid dissemination of responsive/preventive measures in the event of threats to the water supply, or the possibility of a pandemic. In these cases, health-related criteria determine the priorities for any supplies. When there is a shortage of flu vaccine, particular groups are targeted based on their health risk (whereas in education, we systemically ensure that those with the most solid backgrounds get the best of available education).

Education--most of which is "public" and publicly funded--has a ways to go to emulate a medical/public health model. I always find it ironic that teachers want to compare themselves with doctors--much more highly and rigidly educated as a group--when their level of education might better place them in comparison to nurses, or a good many other positions (X-ray technicians, lab specialists, etc.). This is important because decision-making in medicine is highly structured. An RN does not prescribe medicine, and only exercises decision-making in certain situations, more typically acting on the direction of a physician. An RN does not diagnose.

Yet teachers, in comparing themselves to doctors, seem to be making the case that 1) doctors are totally independent decision-makers (not the case) and 2) teachers should be afforded the same perceived level of autonomy. Teachers rail against the supposition that there should be any higher level of authority or agreement on anything such as best practice. There is also resistance to accepting responsibility for the "public" outcomes of public education. In particular (and this is always surprising to me--given that the field is education) there is resistance to the idea that there might be groups having higher levels of education who, by virtue of that higher level of education, might appropriately make higher level decisions to guide the practice of individual teachers.

So--I think that the comparison is apt, but education still has a ways to go.

