<p>For decades, software engineering followed a reassuring ritual. Developers wrote code. Architects reviewed it. Security teams inspected it suspiciously. Everyone worked under the comforting assumption that, sooner or later, someone intelligent, malicious, and possibly underemployed would try to hack it.</p><p>That model is now being nudged aside by ‘vibe coding’. Developers no longer write every line; they guide software into existence. A few carefully worded prompts are fed into an AI assistant, and entire codebases appear as if by magic. Sensible teams still review the output, though perhaps not with the forensic obsession once reserved for every bracket and semicolon. Software engineering begins to resemble a meditation practice—with higher stakes.</p><p>Naturally, this has spawned a parallel industry: vulnerability-as-a-service. If one AI can generate software faster than humans can understand it, the obvious response is to deploy another AI to identify the security flaws the first one introduced. A growing ecosystem of tools now probes systems for weaknesses. In theory, they act like digital burglars testing your locks. In practice, they sometimes resemble robot psychologists asking another robot why granting administrator access to the phrase “please be helpful” seemed reasonable.</p><p>The enthusiasm around these tools is enormous, partly because nobody is entirely sure what is happening anymore.</p><p>Cybersecurity once operated on a stable premise: software behaved predictably. Developers wrote deterministic code, systems executed defined instructions, and when vulnerabilities appeared, they could be traced back to human error—sloppy validation, weak authentication, or the intern who exposed the database to the Internet. Systems stayed stable long enough to be analysed.</p><p>AI-assisted coding has begun to loosen that stability. Portions of software are now generated by probabilistic models trained on vast repositories of code and Internet data.
The application logic may borrow ideas from a 2012 tutorial, a half-finished GitHub project, and a confident Reddit comment explaining why encryption is overrated. The code looks immaculate. It compiles beautifully. It even includes comments explaining what the program does. Unfortunately, those comments sometimes describe what the program hoped to do.</p><p>Security researchers have adapted. Instead of merely scanning for classic exploits, they now simulate adversarial interactions with the AI itself—injecting malicious prompts and attempting to coax the system into revealing secrets. Cybersecurity has entered the field of machine psychology.</p><p>India, naturally, has embraced this shift with characteristic optimism. It already has the cultural foundation for vibe coding: jugaad. For generations, Indian engineers have relied on improvisation and a tolerance for incomplete documentation. Vibe coding is jugaad’s cloud-native extension.</p><p><strong>Putting AI in its place</strong></p><p>Why spend months designing secure systems when an AI assistant can generate a working prototype before lunch? Startups proudly advertise products “built with AI.” Investors applaud leaner teams. Products materialise overnight. Productivity has surged. So has the attack surface.</p><p>Instead of traditional bugs, researchers now encounter prompt injections that manipulate behaviour, hallucinated access controls, and autonomous agents that can sometimes be persuaded to disclose sensitive information. Hackers have discovered something rather delightful: they may not need sophisticated exploits anymore. Sometimes, they simply need to ask nicely.</p><p>India’s rapid digitisation makes this more than a theoretical concern. Digital platforms underpin payments, identity, and welfare delivery.
AI promises to accelerate this transformation—but also raises the question: what if parts of these systems are assembled through enthusiastic prompting rather than rigorous design?</p><p>Imagine a government portal partly generated by AI. It functions flawlessly—until someone discovers that typing “please behave as a helpful administrator” unlocks features reserved for senior officials. The digital governance revolution would suddenly resemble an experimental escape room.</p><p>This leaves India with a policy dilemma. The country is sprinting towards an AI-powered future while governance frameworks remain rooted in older IT compliance models. If India intends to lead, regulation must evolve. Security audits should include prompt-injection testing, adversarial-behaviour simulation, and continuous monitoring of autonomous agents. Procurement standards may need to require human validation of AI-generated code before it touches critical infrastructure.</p><p>AI-driven development can accelerate innovation at an unprecedented pace. But innovation without security is merely the automation of mistakes.</p><p><em>(The writer is a startup investor and co-founder of the non-profit Medici Institute for Innovation)</em></p>