<p>“I have all but stopped using chalk and the blackboard!” I reflected as the term came to an end. The chalkboard has been replaced by the smartboard, and the chalk by the stylus. At first glance, this appears to be a simple technological upgrade. But beneath this visible shift lies a deeper transformation, one that challenges not just our tools but our identity as educators.</p>
<p>We were once educators in the traditional sense, custodians of content and authority in the classroom. Over time, we became facilitators, guiding discussion rather than delivering monologues. Today, we stand at a crossroads. Our roles have undergone a quiet but complete shift. Faculty are no longer preoccupied with what to teach, but with how to evaluate the output of a young mind working alongside a machine.</p>
<p>Earlier, mistakes, incorrect assumptions, flawed methodology, and faulty hypotheses were simply corrected, on the understanding that a human mind was at work. Now, can we still assess student work with the same neutrality? When an assignment is shaped, partially or substantially, by artificial intelligence, what exactly are we assessing? The final answer? The prompt engineering? Or the student’s judgment in selecting, modifying, or rejecting the machine’s output?</p>
<p>In this situation, the educator’s role shifts from correcting errors to diagnosing thinking. For instance, when a student submits a perfectly structured answer but struggles to explain why a particular approach was chosen, the gap becomes visible. The issue is no longer correctness, but ownership of thought.</p>
<p>As Dr T Hongray, Dean of Student Affairs, R V University, succinctly puts it, the right way for students to use AI is not as a substitute for thinking but as a scaffold for it. He often reminds students that AI should help them build on what they already have: to corroborate or substantiate their ideas and gain clarity. He encourages students to follow up by prompting AI tools for references and citations wherever possible.</p>
<p>What AI should never be used for, he cautions, is creating or building something entirely new from scratch. It was never designed for original idea generation and, when pushed beyond its limits, it merely hallucinates. At its core, AI remains a brute-force pattern-recognition machine operating at scale, constrained by the data on which it has been trained.</p>
<p>Echoing a complementary perspective, Dr Deepthi Das, Professor and Associate Dean, School of Sciences, CHRIST (Deemed to be University), argues that teachers must view students’ use of AI as an opportunity to enhance learning rather than as a threat. Students, she emphasises, should be guided to use AI tools to understand concepts, clarify doubts, and expand their knowledge, not to copy answers.</p>
<p>At some universities, students are trained in the use of various AI tools alongside the ethics of responsible use, ensuring they recognise both the benefits and the limitations of these technologies. As faculty, we deliberately design assessments that demand genuine thinking and creativity so that students engage meaningfully with the subject. After all, AI itself is a creation of human intelligence, intended to support us, not replace us.</p>
<p>Offering a more philosophical lens, Amit Vikram, a senior IT professional, observes that the rise of AI reopens an old question rooted in Lamarck’s theory of use and disuse. If certain cognitive skills are no longer exercised because machines perform them faster and more efficiently, do those skills weaken over time?</p>
<p>He suggests that intelligence in the age of AI may be measured less by the ability to generate answers and more by the ability to recognise when something is wrong. If a student or user can determine whether an output is biased, misleading, or contextually inappropriate, that person is well equipped to work alongside AI. Let’s not debate whether AI should be accepted; it’s too late for that. But as educators, we must ensure that the core values of education, namely curiosity, responsibility, and sound judgment, are not compromised by the use of AI.</p>
<p>In redefining how we assess learning, we may also be redefining what it means to think, together with machines.</p>
<p><em>(The author is an academic)</em></p>