Some members of the media are covering the release of a new Canadian study, associated with the Educational Policy Institute, that examines the effects of a financial aid program on college-going and completion among low-income students. Researchers at the Measuring the Effectiveness of Student Aid Project tried to isolate those effects by examining what happened following a change in student aid policy in Quebec that increased aid eligibility and decreased reliance on loans. By comparing student outcomes before and after the policy change, and comparing the outcomes of similar students in Quebec to those in other provinces (where such reforms did not occur), the analysts attempted to establish a causal effect of aid.
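The comparison strategy described above is a standard difference-in-differences design. A minimal sketch of the logic, using entirely made-up enrollment figures (none of these numbers come from the study):

```python
# Hypothetical illustration of difference-in-differences; all figures invented.
# Low-income enrollment rates before and after the Quebec reform, in Quebec
# (the "treated" group) and in other provinces (the comparison group).
quebec_before, quebec_after = 0.30, 0.36
other_before, other_after = 0.32, 0.33

# Change within each group over time.
quebec_change = quebec_after - quebec_before
other_change = other_after - other_before

# The difference of those two differences is attributed to the policy,
# on the assumption that both groups would otherwise have followed
# parallel trends.
policy_effect = quebec_change - other_change
print(f"Estimated effect: {policy_effect:.2f}")
```

The whole design rests on that parallel-trends assumption, which is exactly what the comparability concerns raised below call into question.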
They conclude that the policy affected access (increasing overall enrollment among students from families making less than $20K per year by 4-6 percentage points) and persistence (increasing retention rates by 6 percentage points), but did not affect graduation rates, at least within the four-year window during which graduation was measured.
While noting that the null findings may stem from that short observation period, the researcher still goes on record with this conclusion: “These results therefore cast doubt on the efficacy of this reform in particular, and of needs-based grants in general, to improve graduation rates.” The headline over at Inside Higher Ed reads “More Money Doesn’t Equal More Graduates.”
This is a distinctly premature and irresponsible conclusion. First, as one of my graduate assistants James Benson pointed out, “if the percentage of college-eligible students that enrolled in college increased by 5 percent, and the persistence and graduation rates remained entirely static, then the program produced a net gain in the proportion of young adults completing semesters and degrees.”
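Benson’s point is easy to verify with back-of-the-envelope arithmetic. A sketch with hypothetical numbers (the cohort size and rates below are assumptions for illustration, not figures from the study):

```python
# If enrollment rises by roughly 5 percentage points while the graduation
# *rate* among enrollees stays flat, the *number* of graduates still rises.
eligible_students = 10_000       # hypothetical cohort of college-eligible youth
graduation_rate = 0.40           # assumed constant before and after the reform

enroll_rate_before = 0.50
enroll_rate_after = 0.55         # ~5 percentage-point gain in access

grads_before = eligible_students * enroll_rate_before * graduation_rate
grads_after = eligible_students * enroll_rate_after * graduation_rate

print(int(grads_before), int(grads_after))  # 2000 2200
```

A flat graduation rate conditional on enrollment is thus fully compatible with the program raising the share of young adults who complete degrees.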
Furthermore, there are many reasons why an effect might not be estimated properly in this study. As my colleagues Doug Harris, Phil Trostel, and I explained in a recent paper, a simple correlation between aid receipt and college success is likely to be negative, because students from low-income families are, for a variety of reasons, less likely to succeed in the absence of aid. Unless researchers can convincingly account for all of those reasons – and we argue that very few do – the estimated effects of aid are likely to look smaller than they really are. This study is not very convincing and doesn’t move far beyond a correlation, for many reasons. For example, as another graduate assistant, Robert Kelchen, points out:
1. The comparison groups (Quebec vs. other provinces) had very different rates of financial aid take-up prior to the reform. This calls the validity of the comparison into question. It’s also too bad the researcher didn’t see fit to post his tables on the website, since we cannot see whether the post-treatment differences are statistically significant.
2. Quebec saw increases in the enrollment rates of high-income students following the reform, in addition to increases in the enrollment rates of low-income students. If financial aid were the real driver, it shouldn’t have affected the (ineligible) high-income students.
These are but a few examples – if a full research paper (such as would be submitted for academic review) were available, I bet we’d have more concerns.
This is a case of the press jumping the gun and running with a story, and a headline, not supported by the empirical work the researchers did. We’re in a recession, and aid programs cost a lot of money. We do need to know whether they work, and in particular whether they are cost-effective. But the estimation of impacts should be done more carefully, and the results discussed far more responsibly. Sexy but uninformed headlines will do little good – they may even cast a shadow on an effective program, reducing its ability to maintain funding. All of us studying financial aid have an obligation to do much, much better.