<p>The latest National Institutional Ranking Framework (NIRF) results were, as always, launched with fanfare, live telecasts, and the familiar parade of IISc Bengaluru and a few IITs at the top.</p><p>However, for all the rhetoric of transparency and rigour, the rankings have exposed a deeply flawed system riddled with inequities. Only a small metropolitan elite is consistently spotlighted, while hundreds of regional institutions remain invisible.</p><p>The NIRF claims to measure academic excellence, but in practice, its formula rewards reputation, networks, and financial muscle more than genuine educational work. The Union Education Minister’s repeated calls for objectivity and transparency acknowledge that the system lacks both.</p><p>The fundamental problem lies in the integrity of the data. The rankings rely entirely on information supplied by the institutions themselves: student numbers, faculty, publications, budgets, placements, and outreach. With little independent cross-verification, the process has become less of a survey and more of a competitive sport in data manipulation. Colleges embellish figures and get away with it because no meaningful third-party scrutiny exists.</p><p>The Madras High Court issued an interim stay on the 2025 rankings for precisely this reason: no transparent audit was in place. Meanwhile, private agencies now coach institutions on how to ‘game’ the system, reducing the exercise to clever number-dressing.</p><p>Nothing illustrates the malaise better than the farcical case of Banaras Hindu University’s dental college in the NIRF 2025 rankings. The Faculty of Dental Sciences appeared twice, once under BHU at rank 15, and again independently at rank 18.
Both entries were based on separate submissions, with matching data in some areas but discrepancies in others, particularly placement figures.</p><p>The real scandal, however, lay in the outcome: the two entries received vastly different scores, with more than 17 points’ difference in ‘perception’ for what is essentially the same institution. That surveyors could “perceive” two versions of one college makes a mockery of credibility.</p><p>This was not just a clerical error but a damning indictment of NIRF’s weak vetting procedures. Such lapses are not isolated. Institutions routinely complain that the submission process is confusing, with unclear guidelines about names, affiliations, or multiple campuses. Reorganisations or ambiguous categories often lead to mismatched scores or outright exclusions. State universities serving rural or first-generation learners are forced into the same mould as IITs or JNU, even though the contexts and resources are vastly different. A small rural university with a social mission is judged by the same yardstick as metropolitan powerhouses, leaving its transformative impact invisible.</p><p>The metrics themselves privilege older, urban institutions. The ‘Peer Perception’ category, with a ten per cent weightage, rewards reputation and legacy rather than real impact. It ensures the elite keep winning, while newer or regionally focused universities barely register, even if they are doing vital work in leadership, community engagement, or access to higher education.</p><p>Research indicators are equally problematic. The obsession with counting papers, citations, and impact factors has turned scholarship into a box-ticking game, pushing academics towards self-citation, paper mills, and superficial collaborations. Though NIRF 2025 has introduced negative marking for retracted papers, the deeper malaise remains: research judged by numbers rather than substance.
Innovation and rigorous scholarship are overshadowed by the chase for bibliometric points.</p><p>When it comes to teaching quality, the picture is no better. The assessment focuses narrowly on ratios, budgets, and infrastructure, while classroom experience—mentoring, pedagogy, and student growth—is sidelined. No spreadsheet can capture whether an institution has truly uplifted its students or empowered its region, yet that is precisely what matters most.</p><p>The larger consequence is obvious: NIRF entrenches privilege. Elite institutions accumulate more prestige, funding, and applicants, while hardworking regional colleges remain anonymous. Instead of widening recognition and rewarding diversity, the rankings narrow the field further, worsening existing inequalities.</p><p>If the framework is to have any credibility, radical reform is unavoidable. Data verification must be made mandatory, transparent, and open to public audit. NIRF should publish not just the final ranks but the entire dataset, along with clear explanations and corrections, following an independent review. Perception metrics must be drastically reduced, if not scrapped altogether, and replaced by measures that reward inclusion, social mission, and regional impact.</p><p>Research assessment should move beyond raw numbers to engagement, interdisciplinarity, and contributions to real-world problems. For teaching, alumni reviews, classroom audits, and examples of innovation must become central. Most importantly, institutions found manipulating data must face real penalties: public disclosure, temporary exclusion from rankings, and reinstatement only after scrutiny.</p><p>Until such changes happen, NIRF will remain less a reformer and more an impediment—misleading students, parents, and policymakers, and encouraging institutions to chase metrics instead of improving education.
Unless it is rebuilt on integrity, it will only be a glittering annual show that props up a few elites while masking a troubled reality.</p><p><span class="italic"><em>(The writer is a former professor and former dean of a Bengaluru-based university)</em></span></p>