<p>Early last month, Shruthi, a close friend, was excited when she matched with someone promising on Hinge. In the wastelands of modern dating, this man, let’s call him Joe, seemed to hit every mark of curiosity, humility and sensitivity. It felt almost too good to be true.</p>.<p>“I think he’s using AI,” she joked frequently in the days leading up to their first date.</p>.<p>Sitting in a booth at a busy biryani joint in Bengaluru a few weeks later, Shruthi realised her hunch had been right. Joe turned out to be a ‘chatfisher’ — someone who relies on artificial intelligence to start and sustain conversations. In real life, Joe was reserved and showed little interest in engaging with their personal histories, shared interests, events of the day, or, for that matter, much of anything at all.</p>.<p>“He was very passive in real life and was responding with single-word answers,” Shruthi told me on a call afterwards.</p>.<p>In the race to appear quick-witted and articulate, many like Joe have turned to AI to do the heavy lifting. About 65% of Indians on dating apps admit to using AI tools, particularly ChatGPT, to generate content for their profiles and conversations, according to a McAfee survey of 7,000 participants.</p>.<p>Romantic life has been mediated by technology for years now, so this trend is not a shock. What is less clear is how AI use settles into the crevices of the rest of our relationships.</p>.<p><strong>The elephant in every chat</strong></p>.<p>Generative AI has become impossible to escape — an elephant in every room. Beyond its original purpose of making work more efficient, all-purpose artificial intelligence tools have begun to seep into our most personal and intimate spaces. Large Language Models (LLMs) are crafting congratulatory wishes, breakup texts and even condolence messages. 
Messages that were once misspelt and clumsy are now perfect, polished, and replete with contextual emojis.</p>.<p>But what is this doing to how we see ourselves, and to how we learn, write, express ourselves and think?</p>.<p>For Pallavi N B, a content lead at a venture capital firm, ChatGPT has become a sounding board. Sharing the details of emotionally turbulent situations with the chatbot has made her feel more confident in her decisions. Particularly in conflict, the tool offers the unheated, neutral perspective she is looking for.</p>.<p>“I do ask it questions like, if I said this, would it hurt them? What if I said this instead?” she says.</p>.<p>In the past, she would have turned to friends with these queries, though she was never entirely comfortable speaking about one friend to another. The model has taken away that guilt.</p>.<p>“It gave me confidence and validation for a few responses, and I felt like I could trust what I was thinking and the way I was saying it,” she adds.</p>.<p>For Abhi, a social media manager based in Bengaluru, ChatGPT has provided a steadying hand too.</p>.<p>“I used it with friends mostly, but yeah, also for my dates when I want to end a conversation with them. I used to take AI assistance to craft a well-tailored message without hurting their sentiments while maintaining my boundary,” he says.</p>.<p>A message or task that would once take hours of fretting — hours of considering the other perspective — now takes less than a minute. Humans are hard-wired to take the path of least resistance, after all. 
So what is the problem?</p>.<p><strong>The laws of friction</strong></p>.<p>When Abhi and his colleagues sat down to discuss artificial intelligence and the instant outcomes it provides, the conversation turned to what we might be losing: the process itself, and the difficulty that makes learning possible.</p>.<p>“We were talking about AI’s impact and how raw and flawed messages should be appreciated as human because that’s how we evolve our communication and fine-tune it with practice and time,” he says.</p>.<p>The Bengaluru-based social media executive recognises that his dependence on AI has increased tenfold. “At that moment I realised, let me have an authentic take on how I message, and that it does not have to be polished all the time,” he adds.</p>.<p>Indeed, many who have used AI to assist them over the long term acknowledge that personal evolution can stall with prolonged dependence on these systems. By outsourcing our ability to think through conflict, navigate difficult situations or sit with discomfort, we risk allowing these skills to atrophy.</p>.<p>A classic example of this played out in the 2010s, when people began to rely heavily on GPS. There is ample scientific evidence to suggest that habitual dependence on turn-by-turn navigation leads to hippocampal atrophy, as this brain region — crucial for spatial memory — shrinks when underutilised.</p>.<p>Similarly, a study from researchers at MIT’s Media Lab suggests that outsourcing thinking comes at the cost of learning. The researchers tested whether tools like ChatGPT might encourage metacognitive laziness — the tendency to offload cognitive effort to AI, sidestepping deeper engagement with the task at hand.</p>.<p>They divided 54 participants into three groups. One group wrote essays using OpenAI’s ChatGPT, another relied on Google Search, and a third wrote without any assistance. 
As they worked, researchers monitored their brain activity using 32-channel EEG scans.</p>.<p>Of the three groups, those using ChatGPT showed the lowest levels of brain engagement and consistently underperformed on neural, linguistic and behavioural markers.</p>.<p>Over several months, this pattern appeared to become more entrenched. With ChatGPT, users exerted less effort, and some eventually resorted to copy-and-paste responses by the end of the study.</p>.<p>Ultimately, AI’s greatest strength — removing friction from work and relationships — might also be its biggest liability.</p>.<p>“By subtracting effort from life, AI risks removing the struggles that teach us, the loneliness that connects us, and the labour that makes life meaningful,” a 2026 article published in Communications Psychology reads.</p>.<p>While techno-optimists have long touted the virtues of AI, its overuse threatens to erode the very skills through which we navigate the world.</p>.<p>Curious about its performance in interpersonal conflict, Sweekruthi shared a few scenarios with ChatGPT.</p>.<p>“While initially I felt like the advice it gave me had value, over time I noticed that its response depended heavily on how I presented the information,” she explains.</p>.<p>In this way, if one approaches the machine with a certain bias, it is unlikely to challenge it and may even reinforce it. Rather than spurring self-reflection or corrective behaviour, AI can entrench existing beliefs.</p>.<p>A study by Cornell University found that across 11 AI models, chatbots were highly sycophantic — about 50% more likely than humans to endorse user decisions, even in cases where users reported engaging in manipulation, deception or other socially harmful behaviour.</p>.<p><strong>Telltale signs of AI</strong></p>.<p>If the goal is to represent ourselves at our best, or at AI’s best, even that end may be undermined.</p>.<p>There is something about AI-generated writing that sets off subtle alarms. 
The unnecessary verbosity, the formulaic sentence structures, the overuse of certain words like “delve” and the frequent reliance on em dashes all contribute to an uncanny valley effect: text that sounds human but lacks the inflexions that make it so.</p>.<p>When Sal, a content strategist based in Germany, received a message from an old classmate, she noticed these telltale markers immediately.</p>.<p>“The overly generic corporate small-talk flavour and the em dashes in every message were making me suspicious,” she says.</p>.<p>She hadn’t spoken to this acquaintance in a decade. Now, he was attempting to reconnect, but seemingly with the help of AI. Confused and curious, Sal decided to confront him.</p>.<p>“Can I ask, are you using ChatGPT to write your messages?” she asked.</p>.<p>“No response since I asked the ChatGPT question,” she says.</p>.<p>The sense of deception proved difficult to move past for Shruthi as well. The openness with which ‘AI Joe’ approached their conversations was what persuaded her to invest time and vulnerability.</p>.<p>“Those moments of connection ultimately turned out to be hollow, and the artifice of it was too much. I stopped responding to him after that, even though we had already met in person,” she says.</p>.<p>When asked if she could foster genuine connections with people who rely heavily on AI to communicate, Sal is sceptical. “Even if I did develop a connection with someone who uses AI to heavily edit their correspondence, would that connection really be with them? I don’t think so.”</p>.<p>A 2023 study by Ohio State University supports this sentiment. The 208 adults who participated received a congratulatory, thoughtful or condolence message from a friend. 
Those who later learned that the message had been crafted with AI reacted negatively.</p>.<p>The study concludes: “Using AI assistance led participants to perceive that the friend expended less effort, reducing relationship satisfaction and increasing uncertainty.”</p>.<p>The idea that a friend might feed private conversations into AI to generate responses also makes Sal hesitant to share anything personal.</p>.<p><strong>Pushing us into silos?</strong></p>.<p>Rather than helping us navigate relationships more thoughtfully, AI assistance could push us into silos, reinforcing our own perspectives and narrowing our view of others. In an age already marked by social alienation, this is particularly concerning.</p>.<p>People between the ages of 16 and 24 report higher levels of loneliness than any other age group. About 73% of Gen Z say they feel alone either sometimes or all the time. Add to this the explosion of content on social media and streaming platforms, which has reduced shared experiences, and the problem deepens.</p>.<p>The intrusion of AI into the most intimate parts of our lives — the spaces where we find rest, purpose and connection — may only worsen this crisis.</p>.<p>A world of convenience can also be a world of isolation. Even at work, the integration of AI into workflows has led to a net decrease in interpersonal interaction.</p>.<p>“If we had to design a brand campaign in the past, we would discuss our ideas and brainstorm together. With AI, now we are able to come up with many ideas ourselves. Only the bigger clients demand AI use,” says Manogna Murari, a social media marketing professional.</p>.<p><strong>Friction-maxxing</strong></p>.<p>Many, like Abhi and Manogna, are already grappling with how AI is reshaping their social and professional lives. In response, they have begun setting personal boundaries around its use.</p>.<p>Online, too, a movement that embraces process over outcome has begun to take shape. 
Dubbed “friction-maxxing” by humanities professor Kathryn Jezer-Morton in an article for The Cut, the idea has gained global traction.</p>.<p>In an age of convenience, friction-maxxing encourages people to rebuild their tolerance for discomfort by working through obstacles instead of relying on one-tap solutions.</p>.<p>Drawing boundaries with AI and recognising that not every facet of life needs to be optimised for speed or efficiency may be key to navigating this new world. Even though friction can be uncomfortable, the rewards of working through difficulty are significant.</p>.<p>Writer Shefali Mathew, recipient of the Sir Terry Pratchett Memorial Scholarship, points to the creative process as an example.</p>.<p>“Although writing is a frustrating process and sometimes takes a long time, the breaks that I take in between usually lead me to think in different, unique ways,” she says. “The experiences that I gain in the time that I have focused on different tasks add something to the text.”</p>.<p>Venting frustrations, sitting with difficulty, and engaging fully with our surroundings may lead to learning that we cherish or to friendships that last a lifetime.</p>.<p>Is the lure of convenience enough to pull us away from the very things that make us human?</p>