College rankings are problematic
On Jan. 19, Mount St. Mary’s University’s student newspaper, The Mountain Echo, published an article reporting evidence that the university’s president, Simon Newman, was attempting to improve the university’s retention rates by dismissing approximately 20 to 25 first-year students deemed at risk of dropping out. The dismissals would take place early in the academic year, before the students could affect reported retention rates.
Newman’s plan to improve retention rates reportedly involved a survey given to all first-years during orientation. Despite being presented to the students as a leadership tool with “no wrong answers,” developed to help students “discover more about [themselves],” the results of the surveys were actually used to assess whether a student was at risk of dropping out. Newman reportedly asked Greg Murry, the director of Mount St. Mary’s orientation, to compile a list of students whose surveys indicated that they were unlikely to finish their freshman year. The president reportedly told the faculty, “This is hard for you because you think of the students as cuddly bunnies, but you can’t. You just have to drown the bunnies … put a Glock to their heads.” The Mountain Echo also reported an email exchange in which Newman said, “My short term goal is to have 20 to 25 people leave by the 25th [of Sep.]. This one thing will boost our retention 4 to 5 percent.”
The school’s board stands behind Newman, claiming that while he used an “unfortunate metaphor,” his plan was taken out of context. The proposal was met with marked concern, however, from some of the university’s faculty, who held that it was not possible to accurately determine who would drop out, and that even if it were, using the surveys this way without notifying the students would be unethical. Several professors and faculty members actively opposed Newman’s plan, spurring Newman to fire several of them, including David Rehm, the former provost of Mount St. Mary’s, and two university professors, one of whom was The Mountain Echo’s adviser.
This controversy has raised concerns over a variety of issues, including freedom of the press, faculty rights, and the influence of business on academics. The Mountain Echo has been severely criticized by the board for using leaked emails, which the board claims goes against the school’s code of conduct. Many have also noted that Newman’s prior experience lies in business, not higher education, and suspect that his business principles shaped his retention plan. While this controversy has sparked a number of important debates, The Tartan is interested in what this story has to say about the dangers of college rankings.
The events at Mount St. Mary’s University are rooted in Newman’s desire to raise the college’s rankings; by encouraging students to leave early in the semester, the school would report lower dropout numbers, which would improve its overall ranking as an institution. Unfortunately, this manipulation of data is far from uncommon. In recent years, many colleges have been caught in similar scandals, ranging from outright falsification to mild manipulation.
Iona College, for example, admitted in 2011 that it had falsified a variety of statistics, including test scores, freshman retention, and acceptance rates. It has since been estimated that this falsification led to a significant increase in the college’s overall ranking, putting Iona at 30th on U.S. News’ ranking list, one of the most prominent college ranking lists in the country; correct data would likely have put the university at 50th. Other universities have also turned to data manipulation to improve their rankings. Baylor University was the subject of a similar scandal in 2008, when it attempted to raise its average SAT scores by reimbursing admitted students who retook the SAT.
This manipulation and falsification speaks to a larger issue: an over-dependence on college rankings. While college rankings are intended to be useful resources for students and universities, they are often used beyond their intended purpose. Prospective students, parents, and universities who wish to make side-by-side comparisons between schools often treat college rankings as all-inclusive, concrete sources of information instead of the partial, and often subjective, sources they really are.
Even without falsification and manipulation, these rankings would still carry an inherent subjectivity, given the diverse range of schools the lists encompass. A school’s multidimensional character cannot be accurately captured by a single number. Every prospective student is looking for something different in a university, so the very idea of a “number one” university is illogical.
Despite the inherent bias of these lists and the brazen manipulation of data that occurs, reliance on college rankings remains heavy. When institutions gain higher scores, they also reap other benefits: better rankings lead to an increase in prospective students and in alumni donations. Rankings can also influence a school’s funding and its ability to attract prestigious faculty.
This dependence puts pressure on schools to boost their rankings in order to stay competitive, attract students, and fund their programs. While some motivation can be healthy and push schools to improve themselves, an overwhelming reliance on college rankings pressures many schools to go to extreme measures to keep their rankings high in order to retain students and funds. This focus on rankings draws attention away from the real goal of higher education: providing students with the best education possible.
College rankings can be helpful, but they need to be used correctly and with caution. Prospective students, universities, and the general public need to understand the limitations of these resources and realize that they are not all-inclusive, definitive lists. A decrease in dependence on rankings would allow universities to shift their focus back to students and away from the numbers game that rankings promote.