"No Effects" Studies

April 1, 2009 | Blog

Education Week’s got this article out about randomized trials producing “no effects.” According to the article, these null findings are raising eyebrows and “prompting researchers, product developers, and other experts to question the design of the studies, whether the methodology they use is suited to the messy real world of education, and whether the projects are worth the cost, which has run as high as $14.4 million in the case of one such study.”

Wow, is that ever a disappointing reaction. Here’s why:

1. We should be psyched, not upset, that studies with null effects are being released. That is not always the case. Publication bias, anyone? I’ve often thought that studies demonstrating null effects need to be publicized even more widely than those that find positive or negative impacts. Too many outfits out there are beholden to their funders and can’t release null findings. Too many assistant professors don’t get tenure because they “didn’t find anything.” Are you kidding me? If a current practice turns out to produce no effects, that finding needs to be out there either way. We should learn as much from null findings and “worst practices” as we do from “statistically significant” impacts and “best practices.” (A quick sketch after this list shows how burying null results skews what we think we know.)

2. Saying that experimentation isn’t suited to the “messy real world” is a cop-out. It lumps many different kinds of experiments into one category: the good, the bad, and the ugly. Field experiments, lab settings, cluster-randomized trials with volunteer districts, and student-level randomized experiments with participants selected via administrative data are very different animals. Each differs in its potential for generalizable results (external validity) and faces its own threats to internal validity. I’ll grant you, experiments that rely on volunteer samples probably can’t help us much in education, since in real life programs aren’t applied only to students, families, or schools who volunteer; they apply to everyone. This is especially a problem when we try interventions to close achievement gaps: African-Americans who volunteer for studies are very, very different from those who do not (Tuskegee, anyone?).

3. Doing experiments well costs a LOT of money. Putting trials on tight budgets helps ensure they aren’t run well: PIs cannot build the kinds of relationships that promote treatment fidelity, cannot collect high-quality data, and cannot get inside the black box of mechanisms, and instead are stuck simply estimating average treatment effects. No drug works for everyone, and no drug works in exactly the same way for everyone. The medical community knows this, and uses larger samples to make identifying differential and heterogeneous effects possible (a second sketch after this list shows just how much larger those samples need to be). When is education going to catch up?

4. One thing I do agree with this article on: the model IES is using needs some revisions. I heard William T. Grant Foundation president Bob Granger give a great talk at SREE recently, where he made the point that the usual ‘try small things then scale them up’ model isn’t going anywhere fast. We need to know how current policies work as currently implemented, at scale. Go after that, spend what’s necessary to conduct experiments with higher internal AND external validity, and support researchers who reject old models and try new things. I promise you, we’ll get somewhere.
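
First, the publication-bias sketch promised in point 1. This is a minimal simulation with illustrative numbers of my own choosing (a true effect of 0.10 standard deviations, 100 students per arm, a 0.05 significance cutoff), not figures from any real study. It shows what happens when only "significant" trials see daylight: the published record ends up claiming an effect several times larger than the truth.

```python
# Sketch: publication bias. Simulate many small trials of an intervention
# with a modest true effect; "publish" only the significant ones and
# compare the published average estimate to the truth. All numbers here
# are illustrative assumptions.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

true_effect = 0.10   # assumed true effect, in standard-deviation units
n_per_arm = 100      # assumed students per arm in each small trial
n_studies = 5000

published, all_estimates = [], []
for _ in range(n_studies):
    treat = rng.normal(true_effect, 1.0, n_per_arm)
    control = rng.normal(0.0, 1.0, n_per_arm)
    est = treat.mean() - control.mean()
    all_estimates.append(est)
    if stats.ttest_ind(treat, control).pvalue < 0.05:  # only "significant" trials get out
        published.append(est)

print(f"true effect:            {true_effect:.2f}")
print(f"mean of all estimates:  {np.mean(all_estimates):.2f}")
print(f"mean of published only: {np.mean(published):.2f}")  # badly inflated
```

Run it and the full set of estimates averages out to the truth, while the "published" subset averages out to roughly triple it. That is what suppressing null findings buys us.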
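Second, the sample-size sketch promised in point 3, again with made-up numbers: suppose a program raises scores 0.3 SD for one subgroup and 0.1 SD for another. The average effect (0.2 SD) and the subgroup gap (also 0.2 SD) are the same size, yet detecting the gap takes roughly four times the sample, because the gap's standard error is twice as large. That is the arithmetic behind "bigger samples for heterogeneous effects."

```python
# Sketch: detecting differential effects needs bigger samples. Group A
# gains 0.3 SD from treatment, group B gains 0.1 SD; both the average
# effect and the A-vs-B gap equal 0.2 SD. Effect sizes and sample sizes
# are illustrative assumptions.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

def effect_and_se(t, c):
    """Difference in means and its standard error."""
    diff = t.mean() - c.mean()
    se = np.sqrt(t.var(ddof=1) / len(t) + c.var(ddof=1) / len(c))
    return diff, se

def power(n_per_arm, reps=2000, alpha=0.05):
    """Simulated power to detect the average effect and the subgroup gap."""
    z_crit = stats.norm.ppf(1 - alpha / 2)
    hits_avg = hits_gap = 0
    for _ in range(reps):
        half = n_per_arm // 2
        t_a, t_b = rng.normal(0.3, 1, half), rng.normal(0.1, 1, half)
        c_a, c_b = rng.normal(0.0, 1, half), rng.normal(0.0, 1, half)
        # (a) average treatment effect across everyone
        eff, se = effect_and_se(np.concatenate([t_a, t_b]),
                                np.concatenate([c_a, c_b]))
        hits_avg += abs(eff / se) > z_crit
        # (b) differential effect: group A's gain minus group B's gain
        eff_a, se_a = effect_and_se(t_a, c_a)
        eff_b, se_b = effect_and_se(t_b, c_b)
        hits_gap += abs((eff_a - eff_b) / np.hypot(se_a, se_b)) > z_crit
    return hits_avg / reps, hits_gap / reps

for n in (200, 800):
    avg, gap = power(n)
    print(f"n per arm = {n}: power for average effect = {avg:.2f}, "
          f"for subgroup gap = {gap:.2f}")
```

At 200 students per arm you have about a coin flip's chance of detecting the average effect and barely one-in-six odds of detecting the gap; quadrupling the sample gets the gap only to where the average effect started. Hence the money.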

3 Comments

  1. Nate

    April 1, 2009

    I've been reading your blog for a while now and I've been impressed with the good sense and intentions I see. As a HS English teacher, I've always been interested in research, at least research that can offer suggestions for improvement. Still, I have to say, the real impact of research on the classroom is so, so small. Even if there were a magical sugar daddy who could fund these major research studies, how valid would they be? With a drug intervention, at least you can be sure that you gave the same drug to different people, but with a teaching intervention? If I, for instance, increase my direct written commentary on writing, how exactly would you be able to tease out the result from the mélange of other forces, including student background, who the kid has the period before, the parents, how much Halo gets played, and even the ineffable effect of relationships? It seems to make sense to me that everyone should enter into this with a great deal of humility and recognize that research can advise but rarely nail down something you could bet the house on.

  2. Dr. Sara Goldrick-Rab

    April 1, 2009

    Hi Nate,

    Thanks for your note. I think you're talking about two things here. First, whether an experiment can really identify the causal impact of an intervention. Second, even with knowledge of a causal effect, what relevance does research have in educational settings? To these, I'd add a third: how much can we expect educational interventions to achieve?

    So in brief, here are some responses:

    1. Experiments conducted with fidelity are very good at disentangling the multiple factors affecting outcomes and identifying causal effects. That said, this depends on treatment fidelity, and that depends on the teacher. The best we can say is: if you deliver the intervention in this way, it has this effect.

    2. I'm distressed but not surprised that research isn't taken up by practitioners. To the extent that this stems from a lack of information about what research can and cannot accomplish, it needs to be addressed.

    3. That said, I never expect educational interventions to have BIG effects, simply because they target only one setting. Life chances are affected to a much greater degree by social circumstances and structural constraints than by educational experiences-- attainment, yes, but that too is driven by the former. Only when we start recognizing social and health interventions as educational interventions and linking them in comprehensive ways will we see major impacts. IMO, of course.

  3. Dr Kevin Cooper

    April 5, 2010

    Saying that experimentation isn't suited to the "messy real world" is a cop out.

    The claim is absurd, though empirical research might not suit every discipline, and sometimes you may have to use simulation instead.


