The field of financial aid research is growing rapidly, which is a very good thing, since the reliability and validity of the evidence on aid's effects pale in comparison to the magnitude of the national investment in aid. Policymakers shoot me emails almost daily, asking "how can this be?"
Well, it's expensive and difficult to rigorously examine the impacts of expensive, complicated programs. Financial support for aid research is often hard to come by, seemingly because some foundations and other funders believe we already "know it all" about aid and should move on.
Expert researchers like Sue Dynarski and Judith Scott-Clayton know better than this, and bother to continue studying financial aid and write comprehensive reviews of existing studies on the topic for the rest of us. I’ve relied on Dynarski’s work continuously since my career began, and continue to be amazed at her ability to conduct incisive, beautifully executed work year after year. This morning, she issued not one but two new papers from NBER, both on financial aid. For that, we owe her and her co-authors quite a big thanks.
With that sincere respect for her work in mind, I want to submit one point of disagreement with one of the new papers. It regards a particularly difficult and controversial issue: whether financial aid ought to be reformed to include academic incentives tied to college persistence, to increase its effectiveness. The abstract for Dynarski and Scott-Clayton’s paper reads “for students who have already decided to enroll, grants that link financial aid to academic achievement appear to boost college outcomes more than do grants with no strings attached.” This is not a new statement from these researchers, but the paper reiterates it, reviving the debate in the midst of the Gates Foundation’s efforts to rethink aid.
A close look at the evidence presented in this new paper leads me to believe that while this is a reasonable hypothesis, it has no more empirical support today, and perhaps even less, than it did a few years ago when the debate over this issue was especially hot. Let's review.
On the question of the impacts of strictly need-based grants on college persistence, the authors point to two quasi-experimental studies (one by Eric Bettinger, one by Dynarski) with "suggestive but inconclusive evidence that pure grant aid improves college persistence and completion," as well as one study of a program targeting very high-achieving students (the Gates Millennium Scholars, studied by Steve DesJardins) that found no effects, which is unsurprising given that those students' already-high outcomes were hard to improve upon.
In addition, they point to an early working paper (issued in 2011) from my ongoing experimental study in Wisconsin, which at the time, using one cohort of students, found null effects for a private grant program. This is the extent of the evidence they present on the impacts of grants with no strings attached. So it's important to note that our paper was updated and re-released in the fall of 2012 (not uncommon for working papers, and it did receive press coverage) to incorporate findings from four cohorts of students and to account for the fact that some students saw real increases in financial aid from the grant program while others did not. The results suggest that grants with no strings attached increased college persistence by about 3 percentage points per $1,000, consistent with Dynarski's and Bettinger's estimates of aid's impacts in other programs. Sure, those estimates are derived from a quasi-experimental analysis within our experimental study (not dissimilar to the approaches highlighted in the other studies cited), but if you want to be a purist about it, look at the experimental evidence only. That evidence also suggests positive impacts, and raises the possibility that program complexity is moderating them (another key theme in Dynarski and Scott-Clayton's research from 2006).
On the question of the impacts of grants with academic incentives, the authors highlight several studies, with two figuring most prominently. First, the MDRC performance-based scholarship demonstration. They point to evidence from the first, small experiment in New Orleans, which showed positive impacts. Then, they suggest that the ongoing replication studies of those scholarships "appear to reinforce the findings of the initial study." Unfortunately, that comment is outdated. The latest reports on effects are showing null results. The What Works Clearinghouse issued a Quick Review last week on the New York City results (published in December 2012), indicating that the experimental test of performance-based scholarships in that city produced no detectable effects on college retention. (The title of the study is "Can Scholarships Alone Help Students Succeed?" but please note that the study is of performance-based scholarships, not scholarships alone.) (Full disclosure: I am Project Director on that WWC contract.) Recent conference presentations from the project reveal similar trends: very little improvement, if any, resulting from the addition of performance-based scholarships. Project director LaShawn Richburg-Hayes has been commendably careful to point out that these scholarships are delivered on top of aid without strings, and to note that it's unclear why Louisiana's results haven't been reproduced elsewhere.
Next, Dynarski and Scott-Clayton highlight Scott-Clayton's rigorous dissertation study of a West Virginia program. Her quasi-experimental analysis suggests that in West Virginia, tying aid to performance had positive effects. However, the most important claim for the argument here, that effects were stronger when academic incentives operated than when they did not, does not tell us that incentives outperformed "no strings attached" for these students. The two treatments occurred at different points in college for these West Virginia students, with academic performance required early in college and no performance required later. In other words, there is a key confound (year in college) compromising the results.
The truth is that the experimental work needed to test the hypothesis that grant aid tied to academic incentives outperforms grant aid without strings attached hasn't been conducted. It's clear that the effects of aid are very likely heterogeneous, and there are numerous variations in how aid is designed and delivered. For this reason, across-study comparisons are very hard to make. We need to set up a horse race between aid and aid-plus-incentives for a sample of students much like those for whom we'd hope to reform aid: Pell Grant recipients, most likely. Only then will we know whether academic incentives really add value. And even then, we won't know why, absent rigorous mixed-methods research.
For now, the jury is out, and policymakers who pair academic incentives with need-based aid are flying blind. They may have other rationales for wanting to do this (some people feel better about distributing money when it comes with strings), but they shouldn't pretend it's an evidence-based decision. And like merit aid, it emphasizes the "reward" rather than the "compensatory" rationale for distributing financial aid, a political, norm-laden shift that probably isn't without consequence.