
AI is quietly rewriting the rules of learning. From grading essays to generating lesson plans, artificial intelligence is moving from the sidelines of education into the heart of classrooms. It’s changing how students learn, how teachers teach, and even how schools measure progress. But unlike past tech shifts that focused on access or efficiency, AI is reshaping the learning process itself—making it more adaptive, data-driven, and deeply personal.
In an exclusive interaction with Deccan Herald in Education (DHIE), Aravind Putrevu, Director of Developer Marketing, CodeRabbit, talks about how AI is transforming pedagogy, the risks of over-reliance among young learners, and why teachers remain central to this new era of intelligent classrooms.
Teachers often worry AI might replace them. Based on your experience with AI-assisted coding tools, how should we frame AI in education — as a co-pilot, a tutor, or something else?
AI works best in three roles. First, a co-teacher for planning and differentiation: it drafts lesson outlines, adapts reading levels, and suggests remedial or enrichment tasks. Second, a tutor that offers after-class practice with hints, examples, and error analysis so students can try again without embarrassment. Third, a lab assistant that checks steps in maths, highlights weak verbs in writing, or helps debug a small program. None of these roles replaces classroom leadership or care. Make AI a co-pilot, not an autopilot: the teacher keeps the final say on goals, resources, and evaluation.
What risks do you see in over-reliance on AI in classrooms, especially for younger students?
Young learners need to build memory, language, and self-regulation through struggle and play. Over-reliance can short-circuit those stages. Key risks include shallow understanding driven by answer-seeking, hallucinated facts presented with confidence, cultural or linguistic bias against non-English users, and privacy loss from logging children's data. Mitigation is practical. Keep humans in the loop for any consequential action. Require offline activities that use pencil, paper, and conversation. Display provenance so students can see when something is AI-generated. Ask models to show their steps and sources, and train children to question both. Schools should log prompts, decisions, and system confidence, and have a safety committee audit those logs periodically. Families need opt-in controls and explicit data-retention limits. Finally, throttle convenience. Cap daily AI time for younger grades and rotate through centres that include reading, physical play, and hands-on experiments. AI should support curiosity, not replace it. The goal is independent thinkers who apply tools wisely, not tool-dependent learners who freeze without them.
How can teachers be trained to use AI effectively without feeling intimidated by it?
Start with jobs to be done, not features. A practical programme has four parts. First, a template library for common tasks such as unit plans, rubrics, reading passages at multiple levels, and remediation pathways. Second, short hands-on labs, ninety minutes each, where teachers bring a real lesson and leave with better materials. Third, peer champions in every department who run weekly clinics and share prompts that worked, with a simple repository inside the school's LMS. Fourth, measurement that respects outcomes. Track prep time saved, student growth on formative checks, and engagement, not raw tool usage. Provide a safe sandbox, clear do-not-use zones, and escalation paths for when answers look wrong. Finally, reward adoption. Give time credits, showcase good lessons at staff meetings, and let successful teachers mentor others. A positive first-mile experience and someone to lean on build confidence. The aim is comfort and craft, not compliance.
Do you believe AI can genuinely personalise education for students in diverse classrooms like those in India, where learning levels vary widely?
It can and should be personalised. Indian classrooms span several languages, with uneven pre-school exposure and support at home. AI can level the playing field by setting the reading level, alternating between Hindi and a regional language, and pacing practice according to mastery rather than time spent. To work in India, models must handle code-mixing, dialects, and curriculum alignment. Teachers need a dashboard that shows who is stuck, why, and what to try next. Personalisation should always keep teacher override as the primary control. Keep the footprint light. Start with high-impact areas: early reading fluency, foundational numeracy, and vocabulary building. Combine AI with peer discussion and brief oral checks so learning transfers off the screen. When a child gains confidence at the right level, behaviour in the classroom improves for everyone.
How should schools tackle the ethical concerns of bias and misinformation in AI-driven learning platforms?
Treat safety like procurement and process, not slogans. Any student‑facing tool should ship with a model card, privacy policy, language coverage, and results from red‑teaming in at least two Indian languages. Schools should require content provenance tags wherever AI generates text, images, or audio. Set error budgets for accuracy and safety, and trigger a human review when confidence is low. Maintain an incident log just like a lab record: what failed, how it was detected, who fixed it, and what changed. Create a small, rotating review board of teachers and parents that audits prompts, outputs, and data retention each term. Teach critical reading. Build short lessons on how to question sources, spot manipulations, and report issues. Finally, keep governance local. Districts can agree on a common baseline while allowing schools to add controls that fit their community. Bias and misinformation are not abstract ideas, they are classroom realities. Clear standards and steady practice keep them in check.
You’ve been deeply involved in open-source. Do you see open-source AI models playing a role in democratising access to quality education technology?
Open‑source AI is a practical lever for equity. It gives schools control over cost, privacy, and language coverage, and invites community scrutiny. Lightweight models can run on‑premises or on shared district servers, which reduces bandwidth dependence and protects student data. They can be fine‑tuned for NCERT objectives and local examples, from agriculture to regional history. Openness also enables rapid improvement, since teachers and researchers can submit evaluation sets and prompts that reflect real classrooms, not only English test beds. One caution is warranted: a pile of repos will not teach anyone. Pair open models with a maintained distribution that includes safety filters, a prompt library, and documentation written for educators. Encourage state universities and teacher training institutes to host hubs that package updates and run regular audits. With that scaffold, open‑source becomes more than a cost play. It becomes a national capability that reflects India's languages and values.
How can Indian schools, many of which lack resources, benefit from open-source AI in ways that big commercial products cannot offer?
Open‑source lets schools build "enough tech" rather than chase glossy platforms. A workable stack includes a local language model, a content server that caches lessons and exercises, and a teacher copilot that runs on a mid‑range laptop. Use it to generate worksheets, reading passages, and quizzes aligned to the textbook. Share everything in a district repository so successes travel fast. Prioritise offline and low‑bandwidth modes, printable outputs, and simple mobile interfaces for homework. Launch three ninety‑day pilots. First, a reading tutor that listens, scores fluency, and suggests phonics practice in two languages. Second, a maths practice app that shows step‑by‑step hints and logs misconceptions for the teacher's review. Third, a source‑checking sidebar for social studies that explains sources and flags weak claims. Measure attendance, fluency, and error reduction, then scale. When budgets are thin, sovereignty over data and adaptability beat feature lists. Open‑source gives that leverage, while keeping teachers at the centre.
You’ve worked across multiple technology shifts — from cloud security to open-source to AI. How do you see AI reshaping classrooms compared to those earlier disruptions?
Cloud reduced friction in deployment and cost. Open source widened access and let schools adapt tools to local needs. AI changes the learning loop itself. Instead of tools that sit beside the lesson, AI participates in it, generating explanations, practice items, and feedback on demand. That shifts teacher time from production work to higher-judgment work. It also enables formative assessment inside everyday activities rather than at the end of a unit. The risk is confusing speed with depth, which is why AI must be paired with tasks that require reasoning, discussion, and analog work. When used well, AI becomes the scaffolding that helps each learner attempt harder problems with confidence. The teacher remains the architect of the learning experience, curating resources, setting expectations, and deciding where to slow down. In short, cloud and open source changed the plumbing; AI touches pedagogy itself. It pulls us toward mastery learning, frequent feedback, and flexible pacing while freeing teachers from repetitive preparation. That is a qualitatively different shift.