
Distorted and distorting

World University Rankings
Last Updated 25 June 2021, 21:02 IST

There is much jubilation at the Indian Institute of Science (IISc) being ranked the top research university in the world in the Quacquarelli Symonds (QS) rankings this year. I am delighted that IISc is ranked at the top, as one can now examine the rankings critically without my criticism being dismissed as a case of sour grapes.

So, we should take a look at the ranking process and ask ourselves how justified IISc's rank is vis-à-vis the universities ranked at par with or below it in research. These include MIT, Harvard University, Caltech, Stanford University, the University of California, Berkeley, and Cambridge University. It is therefore vital to analyse the validity of these rankings and not place so much importance on IISc's rank that we become complacent about our research performance.

It is quite ludicrous that the performance of an entire university can be quantified and reduced to a single number, much as an individual's abilities are reduced to an IQ score. Education and research involve the interplay of creativity, innovation and mentoring, and are far too complex to be captured by a single performance metric. It is human nature to compare and quantify, and as long as people are willing to value rankings, there will be companies to do the task. But it is for people in decision-making positions to view these rankings critically and not base major decisions on them alone.

The QS ranking system works by considering six criteria — 1) academic reputation; 2) reputation as perceived by employers; 3) faculty/student ratio; 4) citations per faculty; 5) the ratio of international faculty to national faculty; and 6) the ratio of international students to national students.

Academic reputation is decided by a survey of about 100,000 experts in teaching and research. The subjective perception of these experts governs the score for academic standing, yet this criterion carries the most weight, 40%, in determining rankings. A similar survey, drawing about 50,000 responses from employers for the second criterion, contributes 10% of the weight. Thus, these two intangible criteria together decide half the ranking points.

The remaining four criteria can be quantified from the data provided by universities. The faculty-to-student ratio of a university carries a weight of 20%. However, a larger faculty-to-student ratio does not always translate into better instruction for students. In many research institutes, support faculty such as scientific and technical officers are counted as teaching faculty, although they are only marginally involved in teaching. In many universities and institutes, well-established senior faculty are less accessible to students; they are often less involved in the institution's academic activities as they tend to be busy serving on various committees, nationally and internationally.

Another criterion is citations per faculty, which constitutes 20% of the ranking points. Citations are the number of times other publications cite a published paper. However, in scientific circles, boosting the citations of one's papers requires circulating among scientists: attending conferences, inviting other scientists for seminars and personally promoting one's ideas.

So, the number of citations may scale more with the visibility of a paper than with the true scientific impact of the work. It is also not uncommon for a paper to be cited only to discredit it; that still counts as a citation. Notwithstanding these well-known drawbacks, the number of citations remains one of the leading metrics used by the research community and funding agencies to judge research performance. For the QS rankings, the citations received per faculty over a five-year window that opens seven years before the assessment are considered. The citation count is normalised against the total citations in the field, as not all fields have the same number of researchers or the same volume of publications.
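To make the mechanics concrete, here is a minimal sketch, in Python, of how a citations-per-faculty indicator of this kind might be computed: sum the citations received during a five-year window that opens seven years before the assessment, divide by the faculty strength, and scale against the field's total citations. The figures, function names and the exact normalisation formula are illustrative assumptions, not QS's actual implementation.

```python
# Minimal sketch of a citations-per-faculty style indicator.
# All figures and the normalisation formula are illustrative assumptions,
# not QS's actual implementation.

def citations_in_window(papers, assessment_year, start_offset=7, window=5):
    """Sum citations received inside a window that opens `start_offset`
    years before `assessment_year` and spans `window` years."""
    start = assessment_year - start_offset        # e.g. 2014 for a 2021 assessment
    end = start + window                          # exclusive upper bound
    return sum(n for paper in papers
               for year, n in paper["citations_by_year"].items()
               if start <= year < end)

def citations_per_faculty(papers, faculty_count, field_total_citations,
                          assessment_year):
    """Citations per faculty, scaled against the field's citation volume
    (the scaling used here is an assumption; QS's normalisation differs in detail)."""
    raw = citations_in_window(papers, assessment_year) / faculty_count
    return raw / field_total_citations * 1_000_000   # arbitrary scale factor

# Hypothetical institute: two papers, 50 faculty, a field with 2 million citations.
papers = [
    {"citations_by_year": {2013: 40, 2015: 120, 2018: 90}},
    {"citations_by_year": {2016: 60, 2019: 30}},
]
print(citations_per_faculty(papers, faculty_count=50,
                            field_total_citations=2_000_000,
                            assessment_year=2021))
```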

Citations can sometimes lead to incorrect conclusions. An example is Panjab University, which was ranked the top Indian university in the Times Higher Education ranking in 2013, upstaging all other institutions that had continuously outperformed Panjab University on almost all metrics. The anomaly was attributed to many citations received by a few faculty members and their students who were joint authors on papers on the discovery of the Higgs boson at the Large Hadron Collider (LHC) run by CERN in Switzerland. A few thousand authors from a few hundred universities and institutions authored these papers.

Besides such anomalies, the impact of a paper is usually not realised within a few years of its publication; it is judged better by the longevity of its citations. Thus, restricting the citation count to publications from just a few years is flawed. Instead, a better metric may be the total citations received over the previous five years by all the papers an institute or university has published since its inception. Although I am generally opposed to the idea of ranking itself, this way of reckoning the citation score would give due weight to pioneering publications that have remained important over long periods.
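The difference between the two ways of counting can be seen in a small, hypothetical example. The sketch below contrasts one plausible reading of the windowed scheme criticised here with the alternative suggested above, namely every citation received in the previous five years by any paper the institution has ever published; all numbers are invented purely to show how an old but still-cited pioneering paper counts under the second metric and not under the first.

```python
# Hypothetical comparison of two citation counts (all figures invented).
# Paper A: a pioneering 1980 paper that is still cited every year.
# Paper B: a recent paper whose citations are tapering off.

papers = [
    {"published": 1980,
     "citations_by_year": {2016: 180, 2017: 200, 2018: 190, 2019: 210, 2020: 220}},
    {"published": 2015,
     "citations_by_year": {2016: 60, 2017: 40, 2018: 30, 2019: 10, 2020: 5}},
]

def windowed_count(papers, assessment_year, start_offset=7, window=5):
    """One plausible reading of the windowed scheme criticised above: only
    recent papers, and the citations they receive inside the window, count.
    (The precise rule is QS's; this restriction is an assumption.)"""
    start = assessment_year - start_offset
    end = start + window
    return sum(n for p in papers if p["published"] >= start
               for year, n in p["citations_by_year"].items()
               if start <= year < end)

def all_papers_recent_citations(papers, assessment_year, window=5):
    """The alternative proposed above: every citation received in the previous
    five years, by any paper the institution has ever published."""
    start = assessment_year - window
    return sum(n for p in papers
               for year, n in p["citations_by_year"].items()
               if start <= year < assessment_year)

print(windowed_count(papers, 2021))               # ignores the 1980 paper entirely
print(all_papers_recent_citations(papers, 2021))  # credits its continuing citations
```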

In another ranking system, the number of papers published by an institute was used as a parameter in deciding the rankings. This led to an anomaly in 2010, when the University of Alexandria was placed surprisingly high. The anomaly was traced to a single professor who had misused his position as an editor of a journal to publish a large number of articles in it.

The remaining two metrics, the proportions of international faculty and international students, are driven by the sociological, financial and geographical circumstances of where an institute is located. Although these two criteria carry the least weight, they can affect rankings for largely non-academic reasons.
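Putting the six criteria together, the overall score amounts to a weighted sum of the individual indicators. The sketch below uses the 40/10/20/20 weights cited above and, since only the fact that the two international ratios carry the least weight is stated here, assumes the remaining 10% is split equally between them; the indicator scores for the example institution are likewise invented.

```python
# Minimal sketch of a weighted composite score.
# The weights for the first four criteria are the ones cited above; the equal
# 5%/5% split for the two international ratios is an assumption, as are the
# indicator scores (each taken to be already normalised to a 0-100 scale).

WEIGHTS = {
    "academic_reputation":    0.40,
    "employer_reputation":    0.10,
    "faculty_student_ratio":  0.20,
    "citations_per_faculty":  0.20,
    "international_faculty":  0.05,   # assumed split of the remaining weight
    "international_students": 0.05,   # assumed split of the remaining weight
}

def composite_score(indicator_scores):
    """Weighted sum of indicator scores, each on a 0-100 scale."""
    return sum(WEIGHTS[name] * score for name, score in indicator_scores.items())

# Hypothetical institution: very strong on citations, weak on the international ratios.
example = {
    "academic_reputation":    90.0,
    "employer_reputation":    70.0,
    "faculty_student_ratio":  65.0,
    "citations_per_faculty":  100.0,
    "international_faculty":  5.0,
    "international_students": 3.0,
}
print(composite_score(example))   # 76.4 under these assumptions
```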

Whether it is necessary and desirable to rank universities and institutes at all is itself a moot point. Many universities resort to window-dressing their data to improve their rankings. Often, they have personnel entrusted with finding ways to improve the rankings and embellish the achievements of students and faculty. Many a time, scarce resources are frittered away in showcasing the institute.

Critical evaluation of the ranking system has become important because funding agencies find it an easy metric on which to base their decisions. The ranking process is further encouraged by the scientific publishing machinery, which is already a money-spinning business. It stands to profit even more when researchers push up their publication numbers to improve their institute's ranking, besides inflating their own individual metrics.

I believe that just as we should do away with marks and ranks for students in exams, we should also do away with the ranking of universities. Instead, it is sufficient to divide universities into tiers and let those who need to decide delve deeper to find out what suits them. Such a tier-based division can also be done for specific categories such as best faculty, best campus, best peer group and so forth. It would help prospective students and their parents make decisions based on the categories they care most about, instead of bestowing bragging rights on institutes and universities to advertise themselves.

(The writer is an Emeritus Professor and an INSA Senior Scientist, IISc)

(Published 25 June 2021, 19:44 IST)
