Hiring in the AI era

The AI surge has helped introduce greater objectivity into the hiring process, as AI-driven tools shift the focus away from resumes, networking, and other peripheral factors that invite human bias, and towards verifiable skills and measurable performance.
Kanad Basu
Gourab Mukherjee

Representative image of job applicants

Credit: iStock Photo

AI is not only changing the way companies hire people, but also transforming how students prepare to enter the workforce and how working professionals train to qualify for new opportunities. From resume-screening algorithms to video interview analysis tools, AI is reshaping the recruitment process at every stage. At the same time, students and job-seekers are increasingly using AI tools to tailor their skills, profiles, and interview strategies.


The adoption of AI has not only made large-scale hiring much more efficient for companies but has also helped level the playing field for job-seekers. On the employer side, AI-powered tools make it easier to reach wide talent pools through targeted digital advertisements and postings on professional platforms and community forums. On the job-seeker side, the market is now filled with AI-driven tools ranging from resume and cover letter editing services to online certification programmes for professional development and career guidance.

The AI surge has helped introduce greater objectivity into the hiring process. AI-driven tools shift the focus away from resumes, networking, and other peripheral factors that can invite human bias, and towards verifiable skills and measurable performance. This makes hiring more skill-based, data-driven, and potentially more equitable. While data-driven recruitment practices have been evolving for decades, AI integration has significantly expanded their reach by making it easier and cheaper to assess skills at scale through automated evaluation of interview performance. Michael Lewis, in his book The Undoing Project, illustrates how translating subjective judgements into structured, quantifiable criteria can lead to better hiring decisions. By encouraging evaluators to rely on objective data rather than intuition or vague impressions, organisations can achieve more consistent and fair outcomes.

Despite its benefits, AI-based hiring presents significant risks, particularly regarding algorithmic fairness and data privacy. AI systems rely on training data, and if certain demographic groups are underrepresented, the resulting algorithms may produce biased or unfair outcomes – sometimes in ways humans might not anticipate. Additionally, the collection of extensive personal and biometric data raises serious concerns about data security and the potential misuse of sensitive information.

The human element

The growing reliance on AI-powered platforms to quantify interviewee skills, especially in technical interviews for entry-level positions, raises important questions. Many students feel overwhelmed by the pressure to prepare through online portals offering coding exercises tailored to specific companies. This risks encouraging a narrow form of “over-fitting”, where candidates invest a disproportionate amount of time and effort in a limited set of skills that may quickly become outdated.

Unfortunately, unlike standardised tests such as the GRE or TOEFL, these AI-driven assessments and online certification platforms have received little scrutiny through formal educational evaluation studies. Recently, the founders of Interview Coder reportedly used AI tools to complete coding-style technical interviews without detection, successfully securing top tech jobs. This raises concerns that an overreliance on AI in hiring is eroding the human element and reducing interviews to a race against machines – a race that AI itself may soon be better equipped to win. For fresh graduates entering the job market, these challenges are compounded by the rapid changes in hiring systems, an overwhelming abundance of tools and advice, and a lack of stability or consistency in what employers truly value.

In our discussions with experts, Prof. Ambar Sengupta, Head of the Department of Mathematics at the University of Connecticut, and Don Phin, Executive Coach and Employment Law Expert based in San Diego, both cautioned that without thoughtful design and governance, AI risks dehumanising hiring by reducing individuals to mere data points. They emphasised the importance of keeping humans involved to ensure that technology supports and strengthens human judgement rather than replacing it.

The hiring landscape is rapidly evolving. Over the past century, as we advanced from the mechanical age to the electrical, from the electrical to the digital, and now into the era of AI, both talent acquisition systems and the gatekeepers who shape them have undergone continual transformation. We are witnessing unprecedented changes in hiring systems, driven by AI’s growing influence across nearly every stage of the recruitment process. Yet, this progress demands careful reflection.

Building hiring systems that are fair, sustainable, and human-centred requires empathetic design, robust oversight, and alignment with the long-term welfare of the workforce. The real risk is not merely occasional misjudgements by AI, but the more subtle danger that AI could systematically redefine what organisations value, who they recognise, and how they measure potential. Ultimately, AI in hiring is not just a technical challenge; it is a deeply human one, requiring ongoing oversight and ethical reflection.

(The writers are professors at the Department of Data Sciences and Operations, Marshall School of Business, University of Southern California)

(Published 04 September 2025, 02:57 IST)