Friday, January 30, 2009

'Gaming' university rankings

Read this from the Duke alumni magazine. I think it's a good reminder for our VCs: don't try to 'game' the ranking system; concentrate on improving your university in ways you see fit.

Under the Gargoyle
The Rankings Game: Who's Playing Whom?
By John F. Burness

U.S. News & World Report published its first annual ranking of the nation's best colleges in 1983. In the years since, the publication has spawned a cottage industry, transformed how the public thinks about higher education, and in the process made a lot of money.

Over the past three decades, I've had ample opportunity to dissect the various rankings or discuss the validity of their methodologies in an effort to explain to a wide range of university constituencies, including the news media, why the universities where I worked—the University of Illinois at Urbana-Champaign, Cornell, and for the last seventeen years, Duke—were rated where they were. It's fun as I retire from university administration to ruminate on the absurdity of it all.

Ours is a competitive culture, and it should be no surprise that many people are interested in such external assessments of the quality of American higher education. After all, students and families spend as much as $50,000 a year to go to college, and it is reasonable for them to want a credible, independent assessment to help guide their thinking about where to make that significant investment.

That said, since the ratings game began I have not talked to anyone in higher education who believes that the magazine rankings can capture what makes the experience offered by an individual institution unique or effective. The precision that U.S. News claims for its methodologies is, on its face, rather silly. If you look at the top ten institutions, you will see that some of them are separated by small fractions of a percent. In the Olympics, those fractions make a difference, but it's hard to understand how, in the real-life breadth of activities of a university, they make any difference at all to a student.

The rankings give considerable weight to perception and tend to be based on annual assessments, as if undergraduate-program innovations or tweakings manifest significant change in two semesters. U.S. News has artfully—in the guise of improving the veracity of its rankings—made one or more changes in its methodology every few years, which enables it to argue that there is some shift in the quality of institutions that the new methodology has captured. The cynic in me says that the changing of the methodology is more a strategy for getting different results in the rankings, which helps the publication sell more copies.

During my years at Duke, the university ranked as high as tied for third and as low as tied for eighth. The year we tied for third was my favorite. Folks at Duke were understandably elated. I recall telling university leaders, including our trustees, not to crow too much about this jump to third place because inevitably the methodology would change, and we would drop a few places—which, of course, is what happened.

My favorite magazine ranking experience wasn't with U.S. News but with Money magazine, which, in the 1990s, had a "Best Buys in Higher Education" issue. In that one, the public universities, almost by definition, ended up having a built-in advantage, although fifteen private institutions were listed among the top 100. Duke was not among the fifteen, much to the consternation of some of our trustees and others. So I met with the editors of Money and asked how we could be ranked in the top ten in the country in other ratings (as skeptical as I was about them) and not make the top-ten private institutions in Money's listing. They mumbled something about our library resources, and I was able to document that their numbers were wrong. The next year, Money came out with a new category: "Costly Schools That Are Worth the Price." Duke was ranked highly in that, and people at Duke were pleased. Alas, I didn't keep the pressure on the magazine, and one year later, it dropped the category.

I remember well a wonderful speech by a distinguished faculty member at my son's freshman convocation several years ago. The scholar compared the founding of that institution to Odysseus' journey, noting that both had decided not to let others define who they were. He urged the freshmen to create their own identity through the choices they made during their college years. Within a moment or two of the faculty member taking his seat, the chancellor of the university—a person I admire enormously—told the assembled freshmen and their parents that while the information was embargoed publicly until 11:59 that night, he felt comfortable telling them in confidence that the university for the first time had cracked the top ten of U.S. News rankings. The response was predictable, with students jumping up and down, and parents smiling at the thought that their investment clearly was going to be worth it. The faculty member sat there, his head bowed.

I always said when reporters and others sought my reaction to Duke's being ranked somewhere in the top ten: "It's nice to have confirmed what we know about the quality of our students and faculty. But magazine ratings are really designed to help sell magazines. Students should visit a campus, spend real time learning about the academic programs, and determine whether or not they have the right fit with a particular institution." I still think that's very sound advice.
