
Thursday, January 13, 2011

A Little Information Could Go A Long Way


THIS GUEST POST COMES FROM ROBERT KELCHEN, DOCTORAL CANDIDATE IN EDUCATIONAL POLICY STUDIES AT UW-MADISON.

In a new report, Filling in the Blanks: How Information Can Affect Choice in Higher Education, Andrew Kelly and Mark Schneider of the American Enterprise Institute examine the role that information can play in the college choice process. One thousand parents in five states were asked which of two similar colleges they would recommend to their high-school-age child. Half of the parents were given information about the colleges’ six-year graduation rates, while half were not. The researchers found that parents who received graduation-rate information were fifteen percentage points more likely to recommend the college with the higher graduation rate, with larger effects among parents who reported knowing less about colleges and among parents with lower levels of education.

The intervention shows the importance of providing salient information to the parents of high school students. However, because parents in the study were making a hypothetical decision rather than an actual one affecting their child, they had less incentive to weigh the choice carefully. This could inflate the effects relative to real life, where parents would have even more information about the two colleges being compared. A logical next step would be to repeat the experiment with high school students to see whether the results differ significantly. Encouraging or requiring colleges to publicize their graduation rates may lead parents and students to take this information into account and choose colleges at which the student is more likely to graduate. In any case, even a small effect of additional information can make this low-cost intervention sound public policy.

Tuesday, February 2, 2010

Spin Cycle

Education Next has apparently provided a platform for school choice advocate George Mitchell to shill for voucher schools outside the state of Wisconsin. Here is his latest spin on a study that finds the high school graduation rate to be 12 percentage points higher at seven Milwaukee voucher schools than at 23 Milwaukee public high schools.

The Milwaukee Journal Sentinel story by Erin Richards provides the crucial quote regarding causation from the study's author, John Robert Warren, a sociology professor at the University of Minnesota:

"We still don't know whether it's going to the voucher school that causes you to be more likely to graduate, or if it's something about the kinds of families that send their kids to voucher schools would make them more likely to graduate," he said.

Then there's the whole question of which voucher and public high schools were chosen for comparison, and how. More questions than answers. Unlike Mitchell, I see this report neither as providing "another piece of evidence suggesting that urban students benefit when afforded more educational options," nor as "new data" that should encourage President Obama and Education Secretary Duncan to take "a second look at the power of parent choice."

The study was funded by the voucher-advocacy group School Choice Wisconsin, run by Mitchell's wife, Susan. The Mitchells have split from national school choice leader Howard Fuller who is devoting his current efforts to furthering accountability and quality in the Milwaukee Parental Choice Program.

After the spin cycle, be sure to rinse.

For past perspective on Voucher Inc, please visit here.

Monday, September 14, 2009

Premature Conclusions: More Money, No More Grads?

Some members of the media are covering the release of a new Canadian study, associated with the Educational Policy Institute, that examines the effects of a financial aid program on college-going and completion among low-income students. Researchers at the Measuring the Effectiveness of Student Aid Project tried to isolate those effects by examining what happened following a change in student aid policy in Quebec that increased aid eligibility and decreased reliance on loans. By comparing student outcomes before and after the policy change, and comparing the outcomes of similar students in Quebec to those in other provinces (where no such reforms occurred), the analysts attempted to establish a causal effect of aid.
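For readers who want the mechanics, here is a minimal sketch of that difference-in-differences logic. The numbers are made up for illustration; they are not the study's data.

```python
# Difference-in-differences sketch of the Quebec aid-reform comparison.
# All figures are hypothetical; only the logic matters here.

# Enrollment rates for low-income students, before and after the reform.
quebec = {"before": 0.30, "after": 0.36}           # treated: reform applied
other_provinces = {"before": 0.31, "after": 0.32}  # comparison: no reform

change_treated = quebec["after"] - quebec["before"]                       # +0.06
change_comparison = other_provinces["after"] - other_provinces["before"]  # +0.01

# Netting out the trend common to all provinces attributes only the
# *extra* change in Quebec to the reform itself.
did_estimate = change_treated - change_comparison
print(f"Estimated effect of the reform: {did_estimate:+.2f}")  # +0.05
```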

They conclude that the policy affected access (increasing overall enrollment among students from families making less than $20K per year by 4-6 percentage points) and persistence (increasing retention rates by 6 percentage points), but did not affect graduation rates--at least within the 4-year window during which graduation was measured.

While noting that the null findings may stem from that short period of observation, the researcher still goes on record with this conclusion: "These results therefore cast doubt on the efficacy of this reform in particular, and of needs-based grants in general, to improve graduation rates." The headline over at Inside Higher Ed reads "More Money Doesn't Equal More Graduates."

This is a distinctly premature and irresponsible conclusion. First, as one of my graduate assistants, James Benson, pointed out, "if the percentage of college-eligible students that enrolled in college increased by 5 percent, and the persistence and graduation rates remained entirely static, then the program produced a net gain in the proportion of young adults completing semesters and degrees."
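A quick back-of-the-envelope sketch shows why. The numbers below are invented, chosen only to be roughly in line with the study's reported magnitudes:

```python
# Why static graduation *rates* can still mean more graduates.
cohort = 1000          # hypothetical cohort of college-eligible young adults
grad_rate = 0.50       # share of enrollees who graduate, held constant

enrolled_before = 0.40 * cohort  # 400 students enroll pre-reform
enrolled_after = 0.45 * cohort   # enrollment rises 5 points post-reform

grads_before = enrolled_before * grad_rate  # 200 graduates
grads_after = enrolled_after * grad_rate    # 225 graduates

# The graduation rate among enrollees is unchanged, but the share of the
# whole cohort earning degrees rises from 20% to 22.5%.
print(grads_before / cohort, grads_after / cohort)  # 0.2 0.225
```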

Furthermore, there are many reasons why an effect might not be estimated properly in this study. As my colleagues Doug Harris, Phil Trostel, and I explained in a recent paper, a simple correlation between aid receipt and college success is likely to be negative because students from low-income families, in the absence of aid, are for a variety of reasons less likely to succeed. Unless researchers can convincingly account for all of those reasons – and we argue that very few do – the estimated effects of aid are likely to look smaller than they really are. This study is not very convincing and really doesn't move far beyond a correlation, for many reasons. For example, as another graduate assistant, Robert Kelchen, indicates:

1. The comparison groups (Quebec vs. other provinces) have very different rates of financial aid take-up prior to the reform. This calls the validity of the comparison into question. It's also too bad the researcher didn't see fit to post his tables on the website, since we cannot see whether the differences post-treatment are significant.

2. Quebec saw increases in the enrollment rates of high-income students following the reform, in addition to increases in the enrollment rates of low-income students. If financial aid was the real driver, it shouldn't have affected the (ineligible) high-income students.

These are but a few examples-- if a full research paper (such as would be submitted for academic review) were available, I bet we'd have more concerns.
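To make the selection problem concrete, here is a minimal simulation of the point above about negative correlations. Every parameter is invented; the sketch simply shows how targeting aid at low-income students can make a genuinely positive aid effect look negative in a naive comparison:

```python
import numpy as np

# Need-based aid goes disproportionately to low-income students, and low
# income independently depresses graduation, so a raw aided-vs-unaided
# comparison understates (here, even reverses) the true aid effect.
rng = np.random.default_rng(0)
n = 100_000

low_income = rng.random(n) < 0.4          # 40% of students are low-income
aid = low_income & (rng.random(n) < 0.8)  # aid targeted at low-income students

true_aid_effect = 0.10   # aid truly raises P(graduate) by 10 points
income_penalty = 0.20    # low income lowers P(graduate) by 20 points

p_grad = 0.60 - income_penalty * low_income + true_aid_effect * aid
graduated = rng.random(n) < p_grad

naive = graduated[aid].mean() - graduated[~aid].mean()
print(f"True effect: {true_aid_effect:+.2f}, naive estimate: {naive:+.2f}")
# Prints a *negative* naive estimate despite the positive true effect.
```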

This is a case of the press jumping the gun and running with a story, and a headline, not supported by the empirical work done by the researchers. We're in a recession, and aid programs cost a lot of money. We do need to know if they work, and in particular whether they are cost-effective. But the estimation of impacts should be done more carefully, and results discussed in a much more responsible manner. Sexy but uninformed headlines will do little good-- they may even cast a shadow on an effective program, reducing its ability to maintain funding. All of us studying financial aid have an obligation to do much, much better.

Thursday, September 10, 2009

Where Have You Been?

A spate of recent articles, including those covering Bill Bowen and Mike McPherson's new book (which I promise to review just as soon as my copy arrives), has left me a bit perplexed-- wondering aloud, "where have you all been?" The punchline each time is that a fair proportion of adults starting college are not finishing. Yes, and duh. This is not new, and if it's news, well, I guess it's only because we've deliberately kept our heads in the sand.

But there's no way that folks like New York Times reporter David Leonhardt have been deliberately oblivious, and yet he's writing about low college completion rates as if they've just been unearthed. In a recent blog post, Kevin Carey implied the same-- just as he did in a recent American Enterprise Institute report. But this has been a prominent topic of discussion for years--maybe a decade plus! Just look at Kevin's own 2004 report A Matter of Degrees (which received plenty of media coverage), or the Spellings Commission report, or Claudia Goldin and Larry Katz's book. I know I could go back several more years and find plenty more evidence.

I think it's one thing to imply something is new when it isn't (because again, maybe you just didn't know, or you feel the issue still isn't widely enough known and want to beat the drum more), and it's another thing entirely to claim that policymakers still aren't paying attention. In Leonhardt's case, he's simply wrong when he says the current Administration isn't focused on college completion. Um, how about that $2.5 billion Access and Completion Fund, part of Obama's original budget proposal? What about the performance (outcomes)-based components of the new community college monies contained in HR 3221? Foundations like Lumina and Gates have been beating this drum for years, and those in the Administration are well aware. No one in DC is saying institutions should continue to be judged solely on enrollment (even enrollment of disadvantaged groups). There is plenty of ado about completion rates. The question now is: what exactly are the best solutions? That's a debate that needs to be richer and more visible, since the answers are far from clear-- and we'd be terribly wrong to simply resort to NCLB-style responses that remind me of my toddler: "Institutions bad. Do wrong. I punish you and you do better. Now." Let's direct our energies toward really identifying the sources of the problems, and toward developing a sense of how reforms can be most effective. When I get a chance to read the new Bowen and McPherson book, I'm hoping I come away with new ideas on how to do that.

Thursday, June 4, 2009

Sorting, Selection, and Success

Cross-posted from Brainstorm, over at the Chronicle of Higher Education.

The latest report from the American Enterprise Institute, Diplomas and Dropouts, hits one nail on the head: plenty of students starting college do not finish a degree. Access does not equate with success, and partly as a result, U.S. higher education is perpetuating a lot of inequality.

What do we do about this? The authors identify a key fact: “analysis of graduation rates reveals wide variance among institutions that have similar admissions standards and admit students with similar track records and test scores.” They interpret this to mean that “while student motivation, intent, and ability matter greatly when it comes to college completion, our analysis suggests that the practices of higher education institutions matter, too.”

This is a pretty common argument made by many policy institutes and advocacy organizations, including but not limited to the Education Trust and the Education Sector. I understand their goal--to make sure that colleges and universities can’t hide behind the excuse of “student deficits” in explaining low graduation rates, and instead get focused on the things they can do something about. In some ways that mirrors efforts over the last fifty years to focus on “school effects” in k-12 education--witness the continuing discussion of class size and teacher quality despite evidence that overall variation in student achievement is much more attributable to within-school differences in student characteristics than to between-school differences (school characteristics). Like many others, I read those findings to say that if we really want to make progress in educational outcomes, we must address significant social problems (e.g., poverty, segregation) as well as educational practices. Don’t misinterpret me--it’s not that I think teachers don’t matter. It’s simply a matter of degree: where and how can we make the biggest difference for kids, and under what circumstances?
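For the statistically inclined, here is a toy decomposition of that within/between distinction. The data are simulated with invented parameters, not real achievement scores:

```python
import numpy as np

# Simulate test scores for 200 schools x 100 students, where schools differ
# modestly (sd = 1) but students within a school differ a lot (sd = 3).
rng = np.random.default_rng(1)
n_schools, n_students = 200, 100
school_effect = rng.normal(0, 1, n_schools)
scores = school_effect[:, None] + rng.normal(0, 3, (n_schools, n_students))

school_means = scores.mean(axis=1)
between = ((school_means - scores.mean()) ** 2).mean()
within = ((scores - school_means[:, None]) ** 2).mean()

# With these parameters roughly 90% of score variance lies within schools.
print(f"within-school share: {within / (between + within):.2f}")
```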

Unlike k-12, access in higher education isn’t universal, and competitive admissions processes and pricing structures result in lots of sorting of kids into colleges and universities. As a result, colleges differ tremendously in the students they serve. In turn, as the AEI report admits, this necessarily shapes their outcomes.

The problem is, all this sorting (selection bias) has to be properly accounted for if you want to isolate the contributions that colleges make to graduation rates. (I’ll qualify that briefly to add that the role college enrollment management--tuition setting, financial aid, and admissions--plays in the sorting process is quite important, and is under colleges’ control.) But if you want to isolate institutional practices that ought to be adopted, you first have to get your statistical models right.

Unfortunately, I don’t think the AEI authors have done that. To be sure, they try to be cautious, pointing out colleges that look “similar” but have extremely different graduation rates (rather than modestly different ones). But how they determined “similarity” leaves a lot to be desired: it seems to rest entirely on level of selectivity and geographic region. Their methods don’t begin to approach the gold-standard tools needed to figure out what works (say, a good quasi-experimental design). Important student-level characteristics (socioeconomic background, high school preparation, need for remediation, etc.) aren’t taken into account. Nor are many key school-level characteristics (e.g., resource levels and allocations). In sum, we are left with no empirical evidence that the numerous other plausible explanations for the findings have even been explored.
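A tiny example (invented data, in the spirit of the critique above) shows how two colleges can look very different on raw graduation rates while being identical once student composition is held fixed:

```python
# (students, graduates) by preparation level at each college -- invented data.
college_a = {"well_prepared": (800, 560), "needs_remediation": (200, 80)}
college_b = {"well_prepared": (300, 210), "needs_remediation": (700, 280)}

def overall_rate(college):
    students = sum(n for n, _ in college.values())
    grads = sum(g for _, g in college.values())
    return grads / students

# Raw comparison: A looks far better than B (64% vs. 49%)...
print(overall_rate(college_a), overall_rate(college_b))

# ...yet within each preparation level the two are identical (70% and 40%),
# so the raw gap reflects who enrolls, not institutional practice.
for group in college_a:
    (n_a, g_a), (n_b, g_b) = college_a[group], college_b[group]
    print(group, g_a / n_a, g_b / n_b)
```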

I’m not surprised by this, but I have to admit that I’m a bit bummed. Yes, I “get” that AEI and places like it aren’t research universities. Folks don’t want to spend long periods of time conducting highly involved quantitative research before starting to talk policy and practice. But I don’t see how this approach moves the ball forward--sure, it gets people’s attention, but it’s not compelling to the educated reader, the one who votes and takes action to change the system. Moreover, it doesn’t get us any closer to the right answers, or provide any confidence that if we follow the recommendations we can expect real change.

There have been solid academic studies of the causes of variation in college graduation rates (here’s one example). They struggle with how hard it is to deal with the many differences among students and colleges that are not recorded--and thus not detectable--in national datasets. If we want better answers, we need to start by investing in better data and better studies. In the meantime, I think skipping the step of sifting and winnowing for the most accurate answers is inadvisable. Though, sadly, unsurprising…