Is Our Students Learning?

May 15, 2011 | Blog


Remarkably, one of the topics of yesterday’s blog post (and another I wrote two years ago)– the limited learning taking place on many college campuses– is the subject of a New York Times op-ed today. Titled “Your So-Called Education,” the piece argues that while 90% of graduates report being happy with their college experience, the data suggest there’s little to celebrate. I urge you to read it, along with its companion op-ed “Major Delusions,” which describes why college grads are delusional in their optimism about their future.

We don’t regularly administer the Collegiate Learning Assessment (CLA) at UW-Madison, the test the authors of the first op-ed used to track changes in student learning over undergraduate careers. From talking with our vice provost for teaching and learning, Aaron Brower, I understand there are many good reasons for this. Among them are concerns that the test doesn’t measure the learning we intend to transmit (for what it does measure, and how it measures it, see here), as well as concerns about the costs and heroics required to administer it well. In the meantime, Aaron is working on ways to introduce more high-impact learning practices, including freshman interest groups and learning communities, and together with colleagues has written an assessment of students’ self-reports of their learning (the Essential Learning Outcomes Questionnaire). We all have good reason to wish him well, for it’s clear from what we do know about undergraduate learning on campus that we have work to do.

The reports contained in our most recent student engagement survey (the NSSE, administered in 2008) indicate the following:

1. Only 60% of seniors report that the quality of instruction in their lower division courses was good or excellent.

This is possibly linked to class size, since only 37% say those classes are “ok” in size– but (a) that isn’t clear, since the percentages who say the classes are too large or too small are not reported, and (b) the question doesn’t link class size to quality of instruction. As I’ve noted in prior posts, class size is a popular proxy for quality, but it is also one promoted by institutions, since smaller classes equate with more resources (though high-quality instruction does not appear to require either smaller classes or more resources). There are other plausible explanations for the assessment of quality that the survey does not shed light on.

2. A substantial fraction of our students are not being asked to do the kind of challenging academic tasks associated with learning gains.

For example, 31% of seniors (and 40% of freshmen) report that they are not frequently asked to make “judgments about information, arguments, or methods, e.g., examining how others gathered/interpreted data and assessing the soundness of their conclusions.” (Sidebar– it’s interesting to think about how this has affected the debate over the NBP.) 28% of seniors say they are not frequently asked to synthesize and organize “ideas, information, or experiences into new, more complex interpretations and relationships.” On the other hand, 63% of seniors and 76% of freshmen indicate that they are frequently asked to memorize facts and repeat them. And while there are some real positives– such as the higher-than-average percentage of students who feel the university emphasizes the need to spend time on academic work– fully 45% of seniors surveyed did not agree that “most of the time, I have been challenged to do the very best I can.”

3. As students get ready to graduate from Madison, many do not experience a rigorous senior year.

In their senior year, 55% of students did not write a paper or report of 20 pages or more, 75% read fewer than 5 books, 57% didn’t make a class presentation, 51% didn’t discuss their assignments or grades with their instructor, and 66% didn’t discuss career plans with a faculty member or adviser. Nearly one-third admitted to often coming to class unprepared. Fewer than one-third had a culminating experience such as a capstone course or thesis project.

4. The main benefit of being an undergraduate at a research university– getting to work on a professor’s research project– does not happen for the majority of students.

While 45% of freshmen say it is something they plan to do, only 32% of seniors say they’ve done it.

Yet overall, just as the Times reports, 91% of UW-Madison seniors say their “entire educational experience” was good or excellent.

Well done. Now, let’s do more.

Postscript: Since I’ve heard directly from readers seeking more resources on the topic of student learning, here are a few to get you started.

A new report just out indicates that college presidents are loath to measure learning as a metric of college quality! Instead, they prefer to focus on labor market outcomes.

Measuring College Learning Responsibly: Accountability in a New Era by Richard J. Shavelson is a great companion to Academically Adrift. Shavelson was among the designers of the CLA, and he responds to critics concerned with its value.

The Voluntary System of Accountability has been embraced by public universities that hope to provide their own data rather than have a framework imposed on them. Here is Madison’s report.

On the topic of students’ own reports of their learning gains, Nick Bowman’s research is particularly helpful. For example, in 2009 in the American Educational Research Journal, Bowman reported that in a longitudinal study of 3,000 first-year students, “across several cognitive and noncognitive outcomes, the correlations between self-reported and longitudinal gains are small or virtually zero, and regression analyses using these two forms of assessment yield divergent results.” In 2011, he reported in Educational Researcher that “although some significant differences by institutional type were identified, the findings do not support the use of self-reported gains as a proxy for longitudinal growth at any institution.”

As for the NSSE data, such as what I cited above from UW-Madison, Ernie Pascarella and his colleagues report that these are decent at predicting educational outcomes. Specifically, “institution-level NSSE benchmark scores had a significant overall positive association with the seven liberal arts outcomes at the end of the first year of college, independent of differences across the 19 institutions in the average score of their entering student population on each outcome. The mean value of all the partial correlations…was .34, which had a very low probability (.001) of being due to chance.”
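(A quick aside on the statistics: a partial correlation is simply the correlation between two variables after a third variable has been regressed out of both. For readers who want to see the mechanics, here is a minimal sketch in Python. All of the numbers and variable names below are invented for illustration; none of this is data from the Pascarella study.)

    # Sketch of a partial correlation: the association between institution-level
    # NSSE benchmark scores and an end-of-first-year outcome, controlling for
    # the average score of each institution's entering students.
    # All numbers are invented for illustration.
    import numpy as np

    rng = np.random.default_rng(0)
    n = 19  # the study covered 19 institutions

    entering = rng.normal(50, 10, n)  # entering-student average on the outcome
    nsse = rng.normal(50, 10, n)      # institution-level NSSE benchmark score
    outcome = 0.6 * entering + 0.3 * nsse + rng.normal(0, 5, n)

    def residuals(y, x):
        """Residuals of y after a simple linear regression on x."""
        slope, intercept = np.polyfit(x, y, 1)
        return y - (slope * x + intercept)

    # Correlate what's left of each variable once entering scores are removed.
    r = np.corrcoef(residuals(nsse, entering),
                    residuals(outcome, entering))[0, 1]
    print(f"partial correlation: {r:.2f}")

The .34 Pascarella and colleagues report is the mean of many such coefficients, one for each of the seven outcomes.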

Finally, you should also check out results from the Wabash study.

7 Comments

  1. frank.rojas | May 16, 2011

    It would be nice if you provided some context for the NSSE data-- for example, that the UW numbers were usually as good as or better than those of the peer groups contained in the NSSE report.
    91% gave UW an overall rating of good or excellent, compared with 86% to 88% for the peer groups. 90% said the quality of upper division classes is good-excellent versus 87% at peers. And so on down the list. Even the rating for lower division classes, with 60% saying good-excellent, was far better than the 55% at peers.

    And your metaphor likening kids' dessert permission to a large national sample of peer per-student costs was just juvenile and beneath your station.

  2. Dr. Sara Goldrick-Rab | May 16, 2011

    Peer comparisons are only worth citing when they are meaningful. Madison chose to include only "AAUDE" and "Carnegie" peers, which are very large groups containing institutions I suspect most of us would not contend are true peers to Madison. For example, AAUDE includes Nebraska, Texas A&M, SUNY-Buffalo and Stony Brook, Rutgers, Pittsburgh, UC-San Diego, etc. Carnegie is no better-- there are many institutions classified as public research institutions that aren't anything like UW-Madison. In particular, their student bodies are incredibly different. So yes, we look better than them-- but what does that tell us? Not much.

    As for my dessert analogy-- it fits. It's the "Cookie Monster Principle."
    http://offshoreinn.com/investing/debt-for-diploma-schemes-and-the-cookie-monster-principle/

  3. Jason Pickart | May 16, 2011

    I've always felt that op-eds like "Your So-Called Education" that harp on declining study time among students who still earn good grades miss a couple of important things.

    The first deals with the decrease in study time, and this has to do with the internet. It's obvious, but it needs to be said: there was no such thing as Google Scholar in the past. Services like Google Scholar and JSTOR are recent innovations and make comparisons of study time with the 1960s somewhat moot in my mind. When I go to research something, I (usually) don't spend an hour in Memorial Library looking through the stacks to find what I'm looking for, never mind the time spent walking to the library. I can simply go online and find something in 30 seconds if I'm lucky, even primary sources like historical newspapers. Most professors post their readings online as well.

    The second is the fact that there are more students at very average or poor universities than ever before, which skews discussions about academic rigor and grade inflation. My old school, UW-Oshkosh (which I attended before transferring to Madison), fits the definition of a very average university. It has a good business school but not much else, and it's ranked #76 in USNWR's regional university rankings. From 1999 to 2009 its enrollment increased by 2,000 students (http://www.uwosh.edu/today/2223/uwo-sees-largest-enrollment-in-its-139-year-history/). Getting an A at UW-Oshkosh is much easier than getting an A at UW-Madison, and the trend of increasing enrollment at schools like UW-O is seen across the country. With that in mind, it's not surprising that average grades would be higher.

    That isn't to say there aren't problems at UW-Madison, but I find it hard to take op-ed pieces or articles on the subject seriously when they ignore these two trends.

  4. Jason Pickart | May 16, 2011

    This comment has been removed by a blog administrator.

  5. frank.rojas | May 16, 2011

    If more of the so-called elite schools were actually willing to participate in the NSSE, we might have more data. But they don't. Why do you think that is?
    Of course you chose not to include Maryland, Florida, Ohio State, Purdue, UNC, Texas-Austin and Virginia among the comparison schools. What a questionable decision. Why was that, anyway?

  6. Dr. Sara Goldrick-Rab | May 16, 2011

    Jason,

    Your points are valid ones, but they are in fact addressed in the research upon which those op-eds are based. Both Arum and Roksa and Babcock and Marks take shifts in student composition and institutional distribution, as well as technology, into account when assessing changes over time in college students' habits. There is a very clear downward trend in time spent on all forms of educational activities (e.g., time in class and time studying).

    That said, my own research with Doug Harris and Chris Taber indicates that part of the reason some students aren't studying is that they are working. I know-- that sounds obvious, except it's very hard to demonstrate empirically that what students SAY is true is, in fact, true. We observe this in an experimental study in which students are randomly assigned an increased amount of financial aid-- as a result, they work a little less and study a little more. Nice. Except their grades don't appear to improve from that added study time-- nor do their year-to-year retention rates.

    My point is not that Madison is alone in facing these challenges, but rather that it is part of a national story in which students pay more and more to attend college but appear to receive relatively little in return. What's swelled is the "higher education industry"--in particular, the size of administrations--and what hasn't changed is how education is delivered. Despite all of those new technologies you refer to, we teach today much the way we always have-- and are rewarded the same way too-- for "seat time" rather than completions, based on student evaluations rather than on demonstrable learning gains. We should certainly argue for continued public support of higher education, but that case is weak as long as it rests solely on the claim that we deserve more simply because other schools get more-- we need to make the case on demonstrable productivity and outcomes.

    Thanks for writing,
    Sara

  7. Dr. Sara Goldrick-Rab | May 17, 2011

    Frank,

    The answer is quite simple-- you wanted to know why I didn't provide peer comparisons, and I responded with a long list of institutions that could "drag down" the means for the "peer group," making Madison look "good" without actually doing very well. Yes, the institutions you noted were in the comparison group, but there's no reason to think they would've substantially raised the mean of the "peer group," thus affecting the comparison.

    Your accusation that I'm manipulating data is offensive and unfounded. I replied to your question on the terms in which you asked it-- why do we appear "good or better" than our peers? My point stands-- the peer group includes numerous peers that would be expected to perform below us, and none that would be expected to perform substantially higher.

