Friday, August 28, 2009

Strengthening Student Support: A Sensible Proposal with What Results?

Cross-posted from Brainstorm
Anyone who's taken a hard look at the reasons why so many community college students drop out realizes it's got to have at least something to do with their need for more frequent, higher-quality advising. After all, in many cases these are students who are juggling multiple responsibilities, only one of which is attending college, and they need to figure out a lot of details-- how to take the right courses for their particular program (especially if they hope to later transfer credits), how to get the best financial aid package, how to work out a daily schedule that can maximize their learning, etc. It's easy to see that community college students would likely stand to benefit more from good advising than their counterparts at many 4-year institutions.
Except high-quality advising isn't what they get. Student-to-counselor ratios average 1,000:1. That's right-- one counselor for a population the size of a decent high school. In elementary and secondary schools the ratio is 479:1. There's a pay disparity as well-- in K-12, the Bureau of Labor Statistics reports that median annual earnings for a counselor in 2006 were nearly $54,000. For counselors at community colleges the figure was $48,000 (and for those at other colleges, $42,000). Now, perhaps the salary differentials reflect different workloads, and assumptions that it's easier to counsel adults. But I tend to think this is off base-- these are outdated notions of who community college students are and what they need.
So what would happen if we reduced the student-to-counselor ratio at community colleges to a standard even better than the national K-12 average? And at the same time ramped up the intensity of the counseling? Theory would suggest we should see some meaningful results. Many studies, including my own, point toward a persistent relationship between parental education and college outcomes that's indicative of the importance of information-- and information (plus motivation) is what counseling provides. So, putting more counselors into a community college and increasing the quality of what they provide should work-- if students actually go and see them.
To test these hypotheses, MDRC (a terrific NYC-based evaluation firm) recently conducted a randomized program evaluation in two Ohio community colleges. In a nutshell, at college A students in the treatment group were assigned (at random) to receive services from a full-time counselor serving only 81 students, while at college B students in the treatment group had a counselor serving 157 students. In both cases, the control group students saw counselors serving more than 1,000 students each. In addition to serving far fewer students than is typical, these counselors were instructed to provide services that were "more intensive, comprehensive, and personalized." In practice, students in the treatment group did see their counselors more often. The "treatment" lasted two semesters.
The students in this study are Midwesterners: predominantly women (75%), majority white (54%), with an average age of 24, half living below the poverty line and half working while in school. I think it's also worth pointing out that while all applied for financial aid, these were not folks overwhelmingly facing circumstances of deprivation-- 88% had access to a working car, and 64% had a working computer in their home. And 98% were U.S. citizens.
The results were modest. The biggest effects occurred after one semester of program implementation-- students in the treatment group were 7 percentage points more likely to register for another semester (65 vs. 58%). But those differences quickly disappeared, and no notable differences emerged in the number of credits taken or other academic outcomes. Moreover, the researchers didn't find other kinds of effects you might expect-- such as changes in students' educational goals, feelings of connection to the college, or measured ability to cope with struggles.
So what's going on? The folks at MDRC suggest 3 possibilities: (1) the program didn't last long enough to generate impacts, (2) the services weren't comprehensive enough, (3) advising may need to be linked to other supports--including more substantial financial aid--in order to generate effects. I think these are reasonable hypotheses, but I'd like to add some more to this list.
First and foremost, there's a selection problem. MDRC tested the effect of enhanced advising on a population of students already more likely to seek advice-- those who signed up for a study and for more services. Now, of course this is a common problem in research and it doesn't compromise the internal validity of the results (e.g. I'm not saying that they mis-estimated the size of the effect). And MDRC did better than usual in using a list of qualified students (all of whom, by the way, had to have completed a FAFSA) and actively recruiting them into the study-- rather than simply selecting participants from folks who showed up to a sign-up table and agreed to enter a study. But, in the end, they are testing the effects of advising on a group that was responsive to the study intake efforts of college staff. And we're not provided with any data on how that group differed from the group who weren't responsive to those efforts-- not even on the measures included on the FAFSA (which it seems the researchers had access to). Assuming participants are different from non-participants (and they almost always are), I'm betting the participants have characteristics that make them more likely to seek help-- and therefore are perhaps less likely to accrue the biggest benefits from enhanced advising. I wish we had survey measures to test this hypothesis-- for example, we could look at the expectations of participants at baseline and compare them to those of more typical students-- but the first survey wasn't administered until a full year after the treatment began. To sum up, this issue doesn't compromise the internal validity of the results, but it may help explain why such small effects were observed-- there are often heterogeneous effects of programs, and those students for whom you might anticipate the bigger effects weren't in the study at all.
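To make that heterogeneous-effects point concrete, here's a minimal simulation sketch in Python. Every number in it is an assumption I've made up for illustration-- the share of natural help-seekers, how likely each group is to join the study, and the size of the gains-- none of it comes from the MDRC report.

import random

random.seed(1)

def average_effect_among_participants(n=100_000):
    # Hypothetical setup: half the population are natural help-seekers; advising helps
    # them less (they'd find help anyway), but they're far more likely to join a study
    # that offers extra services.
    effects = []
    for _ in range(n):
        help_seeker = random.random() < 0.5
        joins_study = random.random() < (0.8 if help_seeker else 0.2)
        if joins_study:
            # assumed gain in re-enrollment, in percentage points
            effects.append(3.0 if help_seeker else 10.0)
    return sum(effects) / len(effects)

print(round(average_effect_among_participants(), 1))  # roughly 4.4 points among participants
print((3.0 + 10.0) / 2)                               # 6.5 points for the full population

Under these made-up numbers, the average effect measured among study participants comes out well below the average effect in the full population, simply because the people who would benefit most are the least likely to end up in the sample.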
A second issue: we just don't know nearly enough about the counterfactual in this case-- specifically, what services students in the control group received. (We know a bit more about differences in what they were offered, e.g. from Table 3.3, but not in terms of what they received.) We are provided comparisons of services received by treatment status for only one measure-- services received 3+ times during the first year of the study (Appendix Table C.3)-- but not for the full range of services, such as those shown in Appendix Table C.1. For example, we can't rule out that students in the control group contacted a counselor once or twice at rates similar to the treatment group; we only see the incidence of 3+ contacts. If that bar was set rather high, it may have been tougher to clear (e.g. the treatment would've needed a bigger impact to register as significant).
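A quick back-of-the-envelope sketch shows why the control group's services matter so much. Again, every number here-- the per-contact benefit and the contact counts-- is an assumption of mine, not a figure from the report.

# Suppose, hypothetically, each additional counselor contact raises the chance of
# re-enrolling by 2 percentage points.
effect_per_contact = 2.0

treatment_contacts = 3                  # assumed average contacts in the program group
control_contacts_if_no_services = 0     # the counterfactual we tend to picture
control_contacts_if_some_services = 2   # control students may still have seen a counselor once or twice

print(effect_per_contact * (treatment_contacts - control_contacts_if_no_services))    # 6.0-point impact
print(effect_per_contact * (treatment_contacts - control_contacts_if_some_services))  # 2.0-point impact

Same program, same per-contact benefit, but the measured treatment-control difference shrinks to a third of its size once the control group receives some of the same services.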
Having raised those issues, I want to note that these are fairly common problems in evaluation research (not knowing much about either study non-participants or about services received by the control group), and they don't affect MDRC's interpretations of findings. But these problems may help us understand a little bit more about why more substantial effects weren't observed.
Before wrapping up, I want to give MDRC credit for paying attention to more than simply academic outcomes in this study-- they tested for social and health effects as well, including effects on stress (but didn't find any). As I've written here before, we need to bring the study of student health and stress into educational research in a more systematic way, and I'm very glad to see MDRC doing that.
So, in the end, what have we learned? I have no doubt that the costs of changing these advising ratios are substantial, and the impacts in this case were clearly small. Right now, that doesn't make a strong case for increasing spending on student services. But it doesn't mean that more targeted advising couldn't be effective. Perhaps it can really help men of color (who are largely absent from this study). Clearly, (drumroll/eye-rolling please), more research is needed.