NIRF Rankings Are Ludicrous

On November 30, 2020, the National Institutional Ranking Framework (NIRF) invited applications for India Rankings 2021, the sixth edition of this annual exercise. NIRF was launched in 2015 to rank higher educational institutions in the country. NIRF loudly claims that its purpose is “promoting competitive excellence in the higher educational institutions” and that its process is “based on objective criteria”; it is approved, endorsed and supported by the Ministry of Education of the Government of India.

Rankings of educational institutions are accorded great significance by institutional staff and leadership teams, and when awarded by the Government itself, the outcomes have significant material consequences. Year after year, NIRF has been publishing its annual rankings, inciting excitement across academic social media. There is nothing wrong in the celebratory and congratulatory banter that follows; what is unsettling is that academic scholars take something like a ranking as confirmation, or evidence, of how good, or bad for that matter, they are doing as compared to everyone else.

Let me make it clear from the start that my intention here is not to criticise rankings. This is not a story about flawed methodologies or their adverse effects, about how some rankings, other than the NIRF, are produced for profit, or about how opaque or poorly governed they are. The intent here is to draw attention to a highly problematic assumption: that there is, or that there could be, a meaningful relationship between a ranking, on the one hand, and what an educational institution is and does in comparison to others, on the other.

To avoid any embarrassment to the Indian ranking systems, let us take examples from three of the most popular rankings – the Academic Ranking of World Universities 2020 (Shanghai Ranking), The Times Higher Education World University Rankings 2021 and the QS World University Rankings 2021. Furthermore, to avoid any embarrassment to Indian educational institutions, the cases taken are of institutions from our neighbouring country.

Quaid-e-Azam University, Islamabad does not have a mechanical engineering department; in fact, it does not offer engineering of any kind. Yet the department of mechanical engineering at Quaid-e-Azam University was rated 76-100 in 2017.

This placed it just below Tokyo University and just above Manchester University. Wow! Thereafter, every year QAU improved its score, and in 2020 it jumped into the 51-75 range, putting it under McGill University but higher than Oxford University.

The Times Higher Education World University Rankings 2021 declared the Abdul Wali Khan University in Mardan to be Pakistan’s top university.

The Times Higher Education World University Rankings 2020 did not even list the Abdul Wali Khan University Mardan.

The QS World University Rankings 2021, released soon after The Times Higher Education World University Rankings 2021, put the National University of Sciences and Technology (NUST), Islamabad, at number one in Pakistan, while the Abdul Wali Khan University Mardan was not even on the list.

There is nothing exceptional about these examples, beyond their being striking illustrations of how arbitrary rankings are.

Most ranking organisations, including the NIRF, never send assessors to the thousands of educational institutions they rank. Instead, they simply design forms for the officials of the institutions to fill in and submit. The ranking criteria are periodically adjusted (for whose benefit?). Everyone (except the student) gets something out of the rankings.

Across the world, ranking organisations have been exposed as inconsistent, changing metrics from year to year and omitting critical pieces of information. Smart academics and administrators have also learned to game the system. This speeds up their promotions and brings in recognition and rewards.

Rankings are artificial zero-sum games. Artificial because they force a strict hierarchy upon educational institutions; artificial also because, in reality, an educational institution does not improve its reputation for performance exclusively at the expense of the reputations of other institutions. The most ludicrous aspect of it all is the belief, which may seem like a rational explanation, that when an institution goes “up,” it must actually have improved, and that if it goes “down,” it is being punished for underperforming. Such linear-causal reasoning is absurd.

One of the hallmarks of any ranking is the count of research publications and citations.

Hundreds of Indian scientists and academics (1,494 to be precise) were chosen from nearly 160 thousand (1,59,683 to be precise) scientists in universities across the world, ranked by their number of research publications and how often they were cited. Stanford University reportedly declared these hundreds of Indian luminaries to be in the world’s top two per cent of scientists.

THAT IS A TOTAL LIE! Stanford University has not sanctioned any such report. This doctored news wrongly draws upon the enormous prestige of Stanford. Only one of the four authors, John P.A. Ioannidis, has a Stanford affiliation. He is a professor of medical statistics, while the other three authors are from the private sector. Their published work feeds numbers from an existing database into a computer that crunches them into a list. That list is meaningless for India. It does not represent scientific acumen or achievement.

Generating scientific research papers without knowing any science or doing actual research has been honed into a fine art by academic smarties at home and abroad. The stuff produced has to be published, for which smart professors have developed many tricks, including membership in the cartel of international referees. The next and most difficult stage is to generate citations after the paper is published.

At this point, the smart professor relies upon smart friends to cite him and boost his ratings. Those friends have their own friends in India, China, or elsewhere. This international web of connections is known as a citation cartel. Cartel members generate reams of scientific gibberish that the world of mainstream science refuses even to notice. It would surprise people if some of the individuals who made it to the exalted ‘Stanford scientist list’ could pass a tough high-school-level exam for entering undergraduate studies at a decent university like Stanford. Others could certainly be genuine. No one would be able to tell.

Yet in India, the rewards are handsome, and the smart professor soon becomes chairperson, dean, vice-chancellor, or an influence peddler. One can expect nothing from the present gatekeepers of academia, because fraud is a way of life for most. These gatekeepers shunt out all genuine academics lest they be challenged from below. This is creating a downward vortex of mediocrity and an upward spiral of favouritism. Many ‘category A’ NAAC accreditations of educational institutions are mere self-congratulation, reflecting official policies that encourage academic dishonesty; together, these have inflicted massive damage upon the Indian higher education system.

Rankings are released and presented with much fanfare. Numbers, calculations, tables and other visual devices, “carefully calibrated” methodologies, and all that, are there to convince us that rankings are rooted in logic and quasi-scientific reasoning. Rankings are made to appear as if they were works of science; they most definitely are not. Maintaining the appearance of being factual is, however, crucial for rankings.

The policy regime in India places a lot of importance on the rankings. That creates a problem, as more than a few educational institutions have started hiring consultants to help them raise their rankings. When a measure becomes a target, it ceases to be a good measure. This is the generalised Goodhart’s law, which comes from Strathern’s paper, not from any of Goodhart’s own writings [Strathern, Marilyn (1997). “‘Improving Ratings’: Audit in the British University System”. European Review. John Wiley & Sons. 5 (3): 305–321].

To assume that a rank, in any ranking, could possibly say anything meaningful about the quality of an educational institution relative to other institutions is downright irrational. It is, however, precisely this assumption that makes rankings highly consequential, especially when it is not only left unchallenged but also openly and publicly embraced by scholars themselves.


First published 24 Mar 2021

