There Is No FDA For Education. Maybe There Should Be

LA Johnson/NPR

Has American education research mostly languished in an echo chamber for much of the last half century?

Harvard's Thomas Kane thinks so.

Why have the medical and pharmaceutical industries and Silicon Valley all created clear paths to turn top research into game-changing innovations, he asks, while education research mostly remains trapped in glossy journals?

Kane, a professor of education at Harvard's Graduate School of Education, points out that there is no effective educational equivalent of the Food and Drug Administration, where medical research is rigorously vetted and translated into solutions. Maybe, he says, there should be.

It's been 50 years since the publication of the highly influential "Equality of Educational Opportunity" study — better known as the Coleman Report, after its author, James Coleman. And after a half-century, Kane writes in a new article, we should have made much more progress toward closing the achievement gap: the educational equivalent of the fight against cancer.

Failure to do more, Kane argues, underscores the deep shortcomings of education research.

The Coleman Report drew national attention to chronic educational inequality and achievement gaps by race. And while scholarship and research since 1966 have challenged some of his conclusions as misleading, even wrong, many of the core problems he highlighted remain.

James Samuel Coleman, 1958. (JHU Sheridan Libraries/Gado/Getty)

I spoke with Kane about this recently. Here's a version of our conversation, edited for length:

Give us a snapshot of how important the Coleman Report was in terms of looking at the achievement gap.

The Coleman Report was extremely important. It was authorized as part of the Civil Rights Act of 1964, and Coleman and his team were given two years to produce a nationally representative study documenting the magnitude of the achievement gap and differences in access to quality education.

As you can imagine, the technology for collecting and analyzing data back in 1964-65 was very different from what we have today, so it was a remarkable feat, and it has a lasting legacy to this day.

The tools he had were limited; his methods and conclusions were flawed. But he was nonetheless on to something. Is that a fair characterization?

I would say my main complaint is not with what Coleman did, although, as you say, there are some weaknesses to it. My main complaint is what we've done since then.

We have spent the last 50 years essentially recapitulating the same descriptive work that Coleman and his colleagues did, rather than finding solutions and spreading information about them.

The point of education research is to identify effective interventions for closing the achievement gaps that Coleman observed and ensuring that that information is usable.

And by that metric — by our ability to build consensus around a set of interventions that work for closing the achievement gap — I would have to say that the last 50 years have been a near complete failure.

Let's drill down on that. That's a pretty scathing and strong indictment.

Yes. I don't point fingers at the school officials out there. We just have not organized ourselves, or the research function, in a way that actually gives decision-makers the type of evidence they need, on the timeline they need it, to make decisions.

Have education experts been writing and researching for these glossy journals in a kind of echo chamber, producing work mostly for each other and not for the practitioners who would implement it?

I would say for the first 35 of those 50 years, we were primarily writing for academic colleagues.

Over the last 15 years, we've done a better job of studying interventions. Under the George W. Bush administration, the Education Department made a big push to study interventions with random-assignment experiments.

The problem is that we don't have any kind of mechanism for connecting that central knowledge base to the decisions that school superintendents and chief academic officers inside school districts make.

Medical science and technology companies, with their research and development arms, have a pretty good system in which research gets translated into action. You're saying that in education this just does not exist.

It's almost as if we have replicated the medical model and the pharmaceutical model for conducting research, but we've not replicated the parts of those models which actually translate that research into action.

The FDA offers the expertise of a central panel of folks who review the research and weigh the evidence on efficacy and side effects. They make decisions about which drugs we're all, you know, going to be exposed to. We have no such thing in education.

In medicine, often you'll have panels recommending clear standards of care. We have no such thing in education.

We have an approach to funding education research that seems geared toward building a central knowledge base or building the expertise among a small group of experts, but there's just no way to translate that into decisions out in the field. They're completely separate worlds.

You write that we've learned very little about the most basic educational questions, like how best to train or develop teachers.

People often refer to a revolution in the training of medical doctors back in the early 1900s. Around 1910, there was a report by Abraham Flexner on the way we educate and train doctors, and people have said, "Well, guys, we need the same thing in education." The fact is that, when Flexner was writing, there was already a model that at least the experts of the day believed was a much more effective way of training doctors. The Flexner report was just putting some muscle behind that pre-existing expert opinion. Again, we have no such thing in education.

You know, we've realized that teachers are the most important in-school factor driving student learning. Yet we don't have models of teacher preparation backed by actual evidence that they produce better teachers.

There is the Education Resources Information Center, as you write, and they even have something called the "What Works Clearinghouse." You're saying that actually doesn't work very well.

The "What Works Clearinghouse" is great for people like me who want to look up what are all the studies that have been conducted on a particular topic to try to sort out the persuasive studies from the not-so-persuasive studies. But it's not proven very effective in forming the decisions of state and local leaders.

It's not just about making it more searchable or doing a better job translating the technical findings into everyday language. I would argue it is a matter of human nature that people are going to be less likely to be persuaded by something that they read in the What Works Clearinghouse than they are to be persuaded by their own data on whether or not something is working.

So there's a need for districts to stay closer to home, get more data-driven about their own schools, make it more localized and adaptable?

Right. But also to make it easy to do. Right now, if I were a school superintendent, I'd likely have to go out and hire a contract research firm for a few hundred thousand dollars to come in and analyze my district's data, to tell me whether a given group of classrooms was outperforming comparison classrooms elsewhere in my district or in the state. We could automate that and make it much easier.

You do write that there are some positive signs out there. But there's still a lot more work that needs to be done. What would you argue needs to be done to close this gap? And if you can't build an FDA for education, what can you do practically for a country as large and populous as ours to translate this research into on-the-ground help for teachers and administrators?

We've just started to measure achievement on these new Common Core assessments in different ways, by requiring students to do more writing and problem-solving and so forth.

The problem is that those data are currently being used to issue school report cards, to calculate adequate yearly progress — and not to evaluate programs and policies.

What I would argue is that a much larger share of the federal education research dollars ... should be devoted to helping states begin to use the data that they've been accumulating to start evaluating their own programs and policies.

For instance, if there is a set of classrooms using some new piece of educational software, we should be able to find other classrooms with very similar students, similar prior achievement, similar demographics and so forth, track both groups over time, and automatically report the differences in outcomes between the students receiving the software and the comparison students who aren't.

It wouldn't be the same as a randomized, controlled experiment, but it would also not take nearly as long and would be a lot less expensive.

That is the thing that I think we are on the verge of actually being able to have in education.
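
To make the idea concrete, here is a minimal sketch in Python of the kind of automated comparison Kane describes. The classroom records, the column names (uses_software, prior_score, pct_low_income, post_score) and the simple nearest-neighbor matching rule are all illustrative assumptions, not features of any existing state data system.

```python
# A minimal sketch of the automated comparison Kane describes: for each
# classroom using the new software, find the most similar non-using
# classroom (by prior achievement and demographics) and report the
# average difference in outcomes. All data and column names are hypothetical.
import pandas as pd

# Hypothetical classroom-level records a state data system might hold.
classrooms = pd.DataFrame({
    "uses_software":  [True, True, True, False, False, False, False],
    "prior_score":    [62.0, 70.5, 55.0, 61.5, 71.0, 54.0, 66.0],
    "pct_low_income": [0.40, 0.25, 0.60, 0.42, 0.22, 0.65, 0.35],
    "post_score":     [68.0, 74.0, 60.0, 64.0, 72.5, 56.5, 69.0],
})

treated = classrooms[classrooms["uses_software"]]
controls = classrooms[~classrooms["uses_software"]]

def nearest_control(row, controls):
    """Pick the comparison classroom most similar on prior achievement
    and demographics (a toy Euclidean distance; a real system would
    match on many more covariates)."""
    dist = ((controls["prior_score"] - row["prior_score"]) ** 2
            + (100 * (controls["pct_low_income"] - row["pct_low_income"])) ** 2)
    return controls.loc[dist.idxmin()]

# Compare each treated classroom's outcome to its matched comparison classroom.
gaps = [row["post_score"] - nearest_control(row, controls)["post_score"]
        for _, row in treated.iterrows()]

print(f"Average outcome difference (treated - matched): {sum(gaps)/len(gaps):+.1f} points")
```

A production version would match on many more covariates and report uncertainty, but the core operation, matching classrooms on observables and differencing their outcomes, is mechanical enough to automate on data states already collect, which is Kane's point.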

Try it. See what works. Iterate.

Anybody who has ever done a home-improvement project knows that their first idea about how to solve a problem is almost always wrong.

I'm sure the same is true in education — that most of the things we're trying, even though they sound like good ideas, are not working and may actually be doing harm.

The problem is, we don't know which of the things we're trying are working and which are not. It's that kind of fact — knowing that most of the things we're trying probably are not working — that gives you a sense of urgency around increasing the number of trials, rather than following the federal government's current strategy: funding a few very expensive, very high-quality studies of a small number of interventions.

Copyright 2020 NPR. To see more, visit https://www.npr.org.

Eric Westervelt is a San Francisco-based correspondent for NPR's National Desk. He has reported on major events for the network from wars and revolutions in the Middle East and North Africa to historic wildfires and terrorist attacks in the U.S.