
The proposition that politicians might one day replace artificial intelligence (AI) sounds like the opening line of a cosmic joke—an idea so structurally improbable that even Douglas Adams, master of intergalactic absurdity, would raise an eyebrow and ask, “Have you considered using a towel instead?” It is rooted less in real political or technical reasoning than in science fiction—the cheerful kind that imagines humans becoming far more efficient than they actually are.
At first glance, politicians and AI share a few superficial similarities. Both communicate in full sentences, often confidently. Both generate long speeches. Both claim to have the answers. And both occasionally produce content that feels suspiciously automated. But the resemblance stops there. The gulf between AI and the political class is not just wide; it is ontological.
To begin with, AI is trained, while politicians are processed. AI systems accumulate data, integrate it, and try—however imperfectly—to update their models. Politicians accumulate experiences, talking points, and public-relations instincts, and then rapidly prune away anything that might later be used against them. One evolves by learning; the other evolves by strategic forgetting. If AI forgot as frequently as politicians do, every chatbot would begin answers with “I don’t recall the specifics of that situation,” and every search query would return, “That depends on the committee report.”
Error correction is another point of divergence. When AI makes a mistake, it usually apologises—immediately and unprompted. When a politician errs, an entire communications team materialises like a Greek chorus, generating euphemisms, context, subtext, counter-text, and “what I actually meant was…” narratives. If AI misclassifies an image of a banana as a helicopter, it acknowledges the failure. If a politician misclassifies a national crisis as a photo opportunity, we get a press release titled “A Bold New Vision for Strategic Aerial Yellow Initiatives”.
AI systems also maintain consistency. A model cannot say one thing today, the opposite tomorrow, and claim nuanced interpretation on the third day. A politician, however, considers this flexibility not a bug but a badge of honour: the ability to face both north and south without turning one’s head. Where AI seeks coherence, politicians seek coalitions; the truth is what survives the parliamentary vote.
Importantly, AI is not driven by electoral compulsions. It does not adjust its answers based on opinion polls, party whips, or the sudden realisation that its most ardent supporters live in swing constituencies. AI does not perform political acrobatics—promising one thing to urban voters, another to rural voters, and a third to itself in private. It has no constituency except logic, no ideology except probability, and no manifesto except its training corpus.
Some might argue that politicians possess something AI does not: emotional intuition. But this is precisely where that argument collapses. AI is criticised for being too mechanical, too literal, and too rational. Politicians, conversely, are criticised for being insufficiently literal, excessively emotional, and allergic to rational constraints—budgetary or otherwise.
Consider a useful literary parallel. In George Orwell’s Animal Farm, the pigs master the art of adjusting commandments while insisting that nothing has changed. If AI behaved like this, it would push software updates that rewrote their own patch notes: “Version 3.1—No major modifications except the removal, revision, and redefinition of all previous rules.” Fortunately, AI lacks the instinct to quietly edit its own constitution at midnight.
Another reference is Joseph Conrad’s Heart of Darkness. Conrad’s narrator journeys into the unsettling depths of human ambition, discovering that the real threat lies not in the wilderness but in the human appetite for power. If an AI were dispatched into this metaphorical forest, it would return with a neat taxonomy of vegetation types and a probability distribution of risks.
The bottom line is this: AI is built for pattern recognition; politicians are built for narrative construction. AI aims for precision; politicians aim for persuasion. AI optimises outputs; politicians optimise optics. You can ask AI the same question a thousand times and receive stable answers. Ask a politician the same question twice, and you may discover parallel universes.
AI cannot replicate the political instinct for contingency, charisma, double-speak, negotiation, or the fine art of shaking hands while calculating vote-share differentials. And politicians cannot replicate the computational rigour, consistency, and empirical neutrality of AI.
In short: AI will not replace politicians, and politicians cannot replace AI. One operates on algorithms; the other on alliances. One calculates; the other narrates. One seeks accuracy; the other seeks advantage.
And somewhere in that asymmetric dance lies the reason both continue to exist—and why, for the foreseeable future, neither is going out of business.
(The writer is a Delhi-based journalist)
Disclaimer: The views expressed above are the author's own. They do not necessarily reflect the views of DH.