In the quest to move up in the popular rankings, a number of institutions have tried to game the system
Every year, millions of students from different countries make their way to universities across the world in pursuit of a college education. India too is a massive contributor to this student pool—more than 250,000 students left Indian shores in the last year alone (data from the Unesco Institute for Statistics).
At the other end of the education tunnel, China, India and South Korea together accounted for more than 50% of the international student pool in the US. For universities, the benefits of an international student pool are obvious: higher quality standards, short- and long-term boosts to the economy, and increased income (from fees and subsistence spending). The key to the global tertiary education market, therefore, is capturing the internationally mobile student.
But how does a student go about making his or her choice come admissions time?
Typically, students draw on inputs ranging from parents, well-wishers and seniors to cut-off percentages, peer pressure and financial factors. They are also likely to weigh college reputation, the perceived after-sale value of their qualifications and future opportunities. Academic research has shown that they are likely to use institute rankings in a wide variety of scenarios, especially when their choices lie outside their own country. Unsurprisingly, students are the primary target audience for the various rankings.
Last week, the Times Higher Education (THE) World University Rankings ranked Bengaluru’s Indian Institute of Science (IISc) as the only Indian university among the top 300. In June, three Indian institutions made it to the top 200 of the Quacquarelli Symonds (QS) World University Rankings 2018 (a first since QS began compiling global rankings in 2004).
For universities, a good global ranking has become an important metaphorical funnel into which to channel prospective students from across the world. However, there is a palpable difference between the haves and the have-nots.
As reported in Boston Magazine in 2014, the fates of several US universities hinged on their standing in the influential U.S. News & World Report “Best Colleges" rankings.
The article described how Richard Freeland, the new president of Northeastern University, had “observed how schools ranked highly received increased visibility and prestige, stronger applicants, more alumni giving, and, most important, greater revenue potential. A low rank left a university scrambling for money."
Therefore, the rankings had the power to shape the destiny of a university, since students often use them to choose potential colleges. This effect has only been magnified in recent times, with global university rankings entering the fray.
Faced with such repercussions, what do universities do to get a leg up over their competition?
The ranking of universities began in 1870 in the US, when a commission of the US Bureau of Education started publishing an annual report of statistical data classifying institutions. Later, in the 20th century, US institutions were ranked in different exercises undertaken by James Cattell, Raymond Hughes, Chesley Manly, Hayward Keniston, Allan Cartter and others.
Then came the seminal moment: in 1983, U.S. News & World Report published its first-ever “America’s Best Colleges" rankings, based on survey responses from educational institutions as well as government and third-party data. These have been published annually since 1985.
International rankings began in 2003, with the Academic Ranking of World Universities (ARWU), also known as the Shanghai Ranking, which was established to benchmark Chinese universities against their Western counterparts—the eventual aim being the establishment of world-class universities in China. This exercise gained a lot of attention, and one year later, Times Higher Education (THE) and Quacquarelli Symonds (QS) jointly published their own THE-QS World University Rankings (the two would end their collaboration in 2009, and have published separate rankings since).
Since then, there has been no looking back for these global university rankings, and many other such rankings with varying mandates, target audiences and methodologies have mushroomed.
The subject of higher education has universal appeal, and therefore garners a lot of attention from a wide audience. At the beginning, the rankings were used as tools for university administrators to introspect.
But now they are increasingly used by governments as a geopolitical indicator to ascertain the quality of higher education in the global education game.
It was around 2005 that Richard Holmes, then an associate professor at Universiti Teknologi MARA in Malaysia, first came across global rankings. He noticed several inconsistencies in the rankings and sought clarifications, which were largely ignored by the ranking agencies.
While he was still an academic, he proceeded to start a popular blog, where he continues to cast a critical eye on university rankings even today.
“The Malaysian university where I worked was having a campaign to achieve world class status with lots of transformation seminars etc.," he writes in response to an email. “One day we were sent a copy of Times Higher Education Supplement and advised to emulate Universiti Malaya (UM) which was in the top 100 of the inaugural THES-QS rankings.
“It took 10 minutes to work out that UM’s position was the result of an amateurish error. I fired off emails which were ignored, and sent letters, which were also ignored. So eventually I had to start University Ranking Watch since there seemed to be little interest in any critical views of the rankings.
“The error in the THES-QS 2004 rankings was that ethnic Indian and Chinese minority students and faculty were counted as international, thus giving a huge boost to the international students and international faculty scores. It was corrected the following year."
In 2013, he moved to the other side of the fence, working with the International Ranking Expert Group (IREG).
The three prominent global rankings (ARWU, THE and QS) have used different methodologies to come up with their rankings. ARWU mainly uses research-based parameters such as number of Nobel Prize-winning alumni and faculty members, citation-based inputs and per-capita performance.
THE and QS use factors such as reputation surveys, staff-to-student ratio and international outlook. Many times, when parameters like “teaching quality" cannot be measured, they resort to proxy parameters such as the aforementioned staff-to-student ratio. The weightage given to an individual parameter can vary from ranking to ranking as well.
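The mechanics of such a composite score can be sketched in a few lines. The sketch below is purely illustrative: the indicator names echo QS-style parameters, but the weights and scores are hypothetical, not any agency's actual data or methodology.

```python
# Purely illustrative: indicator names echo QS-style parameters, but the
# weights and scores below are hypothetical, not any agency's actual data.
def composite_score(indicators, weights):
    """Weighted sum of indicator scores, each scaled 0-100."""
    assert set(indicators) == set(weights), "indicators and weights must match"
    return sum(indicators[name] * weights[name] for name in indicators)

weights = {                       # hypothetical weightings, summing to 1.0
    "academic_reputation": 0.40,
    "employer_reputation": 0.10,
    "staff_student_ratio": 0.20,  # proxy for teaching quality
    "citations_per_faculty": 0.20,
    "international_outlook": 0.10,
}
scores = {                        # a hypothetical university's indicator scores
    "academic_reputation": 62.0,
    "employer_reputation": 55.0,
    "staff_student_ratio": 71.0,
    "citations_per_faculty": 48.0,
    "international_outlook": 80.0,
}
print(round(composite_score(scores, weights), 1))  # -> 62.1
```

Because each agency picks its own indicators and weightings, the same institution can land in very different places in the ARWU, THE and QS tables.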
But what do these rankings measure?
“Different rankings measure somewhat different things," Holmes writes. “The Shanghai rankings measure age (the time it takes to win Nobel Prizes), size and an emphasis on medical research. THE measures income, and QS gives universities with largely local reputations a chance to excel through reputation-management campaigns.
“Over a period of at least five years, provided there are no methodological changes, rankings can provide a broad idea of changes in the quantity and quality of research and the academic level of students and graduates."
The sources of data can vary depending on the parameter. Many research-based rankings use citations. At a simplistic level, citations could be compared to the incoming links to a website. Therefore, the number of citations that a research publication has amassed is seen as a proxy for its influence in that field.
Nowadays, bibliometric databases from Scopus, Thomson Reuters and other agencies are used to obtain citation and publication information. But for the other parameters such as academic reputation, data is obtained from surveys. In many other cases, the universities upload the data.
“We give universities an opportunity to submit data themselves," says Selina Griffin, the rankings manager of QS. “This data is validated by checking it with previous submissions or other sources of data and any discrepancies are queried. If the institution is not forthcoming with the information, this can often be obtained from a publicly available third party source, e.g. HESA (Higher Education Statistics Agency) in the UK."
Colleges uploading their own information can lead to undesirable consequences. Last year, Trinity College, Dublin, accidentally submitted €355 as its income (as opposed to €355 million), and dropped out of the top 200, to much embarrassment.
In 2013, the errors weren’t as innocent—some US colleges admitted to deliberately submitting false data for the sake of rankings, prompting ranking agencies to ask senior university officials to sign off on the data. Some universities have been more ingenious and have tried to game the system instead.
Similarly, acceptance rates (the percentage of applicants a university admits) have been used as a measure of how difficult it is to get into a particular university, and workarounds have appeared there as well.
Northeastern University started accepting online common applications to drive up its number of applicants, according to the Boston Magazine report cited above; it went further, inviting students with lower grades to begin their first semester abroad and start on campus in spring, so that their scores fell outside the purview of the rankings. Other measures universities have taken include limiting class sizes and tinkering with graduation rates, full-time faculty numbers and alumni donations.
Over time, these attempts have only become more sophisticated. In the early days of the rankings, no Saudi Arabian university was ranked among the top 500 in the world. That has changed in recent years, with many of the nation's universities (King Abdulaziz University being the most prominent) making impressive strides in various rankings.
One research study attributed this to their stockpiling of highly cited researchers; another paper showed that the universities sought to attract the world’s leading researchers from different disciplines with “part-time employment". In return for a handsome contract, researchers listed King Abdulaziz University as their institute of secondary affiliation, and the university jumped to second place in the highly cited researcher standings, or the “top scientist" arms race (an AAAS ID is needed to browse the paper, but it can be viewed here).
This was short-lived though, as the prominent global rankings removed the factor of secondary affiliation from their bibliometric parameters in subsequent editions.
On a similar note, Chennai’s VEL Tech University was recently ranked the top university in Asia in terms of citations. Both Holmes and Ben Sowter (the head of the QS Intelligence unit) dug deeper into this surprising finding. On the question-and-answer site Quora, Sowter dissected the ranking and attributed it to one researcher citing himself excessively over the past two years. (This researcher, it must be kept in mind, hasn’t been cited much by anyone outside his charmed circle.)
Holmes examined this further, and found that many of this researcher’s publications were in a scientific journal where he served as associate editor—a circular ecosystem which might evoke memories of a certain ponytailed, best-selling author, leading management guru and economist. Academics publishing papers in journals where they hold editorial positions is something of a grey area, as it can affect perceptions of the journal’s bias and objectivity; journals usually have explicit policies excluding editors from the editorial process for their own submissions. In this case, the researcher himself has denied any deliberate wrongdoing.
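The kind of check Sowter and Holmes performed can be approximated by measuring what share of a researcher's incoming citations come from the researcher's own papers. The sketch below is a hypothetical illustration: the paper records and the author name are invented, and in practice such checks run against bibliometric databases rather than hand-built records.

```python
# Hypothetical illustration: paper records and author name are invented.
def self_citation_share(papers, author):
    """Fraction of incoming citations that are authored by the cited researcher."""
    total = self_cites = 0
    for paper in papers:
        for citing in paper["cited_by"]:
            total += 1
            if author in citing["authors"]:
                self_cites += 1
    return self_cites / total if total else 0.0

papers = [
    {"cited_by": [{"authors": ["A. Researcher"]},
                  {"authors": ["A. Researcher"]},
                  {"authors": ["B. Other"]}]},
    {"cited_by": [{"authors": ["A. Researcher", "C. Coauthor"]}]},
]
print(self_citation_share(papers, "A. Researcher"))  # -> 0.75
```

A share far above a field's norm is exactly the kind of anomaly that prompted the rankings to exclude self-citations from later editions.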
With universities becoming wise to various strategies, pursuing a top ranking has become a moving target. Recent research has shown that small changes in rankings are short-lived and just “noise", and that any sort of sustained, upward push would require difficult and expensive efforts.
Given the undesirable consequences, what can be done to improve the rankings system going forward?
“I don’t think deliberate submission of false information is much of a problem because it is not really necessary. Universities can get a lot of mileage out of interpreting ambiguous instructions to their advantage," said Holmes.
He added his wish list of reforms: “No counting of self-citations or secondary affiliations, collecting data over five- or 10-year periods to avoid short-term fluctuations, not putting too much emphasis on specific indicators would be a short list of the most desirable ones. I would also like to see universities being more critical of the rankings, especially when they produce obvious absurdities."
From one global university ranking in 2003 to several in the present day, rankings exercises are here to stay and exert tremendous influence. The oft-repeated refrains of “love them or hate them, you can’t ignore them", and “when you can’t beat them, join them" ring loud and clear.
Vyasa Shastry is a materials engineer and a consultant, who aspires to be a polymath in the future. In his spare time, he writes about science, technology, sport and society. He has contributed to The Hindu (thREAD), Scroll, Firstpost and The Wire.