<p>In September 2025, a 57-year-old woman from Bengaluru was browsing YouTube when she came across a message from spiritual leader Sadhguru Jaggi Vasudev. In the video, he appeared to endorse a trading platform promising large financial gains for a US$250 investment. Believing the message to be genuine, the woman engaged with two representatives from the firm over Zoom for nearly two months, ultimately transferring a total of Rs 3.75 crore (about US$450,000). It was only five months later, when she tried to withdraw her supposed profits, that she realised she had been scammed.</p><p>While this may seem like a particularly egregious case of deepfake fraud, it is far from unique. Earlier that year, another victim lost Rs 1.43 crore to a similar deepfake investment scam. Together, these cases highlight the growing threat posed by deepfakes--not only in extracting large sums of money but also in harassing public figures and manipulating voter perceptions during elections.</p><p>Polling from last April hints at the scale of the problem. As many as three-quarters of Indians who engage with online content reported exposure to deepfakes over the previous 12 months, with over one-third (38%) encountering a deepfake scam in that period. The public is understandably anxious: 86% of respondents to a July 2024 survey said they feared misinformation and deepfakes could shape future elections.</p><p>Of course, deepfakes are not a problem confined to India or South Asia. In early 2024, a Hong Kong employee of the British engineering firm Arup fell victim to a deepfake scam in which fraudsters impersonated senior executives on a video call, resulting in a loss of US$25 million.</p><p>The financial impact of such scams is staggering. Losses linked to deepfake fraud in the US financial sector alone were estimated at around US$158 billion in 2023. Even more concerning is the rapid escalation in frequency. Deepfake content on social media surged by 550% between 2019 and 2023, and the number of deepfakes detected globally quadrupled in 2024 alone.</p><p>While deepfakes are a truly global problem, South Asia is particularly vulnerable. India has 650-700 million smartphone users--second only to China--while Pakistan has around 73 million. Despite a young and tech-savvy population, AI literacy remains low, leaving millions ill-equipped to identify or comprehend deepfakes.</p><p>As the Hong Kong scam shows, existing consumer protection mechanisms are outdated. They are largely built to tackle traditional forms of deception such as forgery, misleading claims, or phone and email impersonation--not hyper-realistic, AI-generated impersonations. The absence of an effective protective framework is compounded by fragmented regulation and a lack of dedicated deepfake laws or commonly accepted global standards. In countries like India, law enforcement agencies simply do not have the tools, knowledge or capacity to investigate large-scale, often cross-border, deepfake crimes.</p><p>Fortunately, technology can also be part of the solution. Decentralised AI detection tools make it harder for fraudsters to pass off convincing fakes. For example, a biometric credential (such as an iris scan) can be recorded on a blockchain to confirm that a person is human rather than an AI bot. Similarly, self-sovereign identity (SSI) systems--digital IDs built on blockchain--can make tampering nearly impossible.
Under such a system, anyone joining a phone or video call would need to “check in” using a verified SSI signature before proceeding.</p><p>However, even the best technological solutions cannot succeed in isolation. Users should play their part--verifying videos against official sources, remaining sceptical of unsolicited financial offers, confirming identities through offline channels, using strong passwords, and watching for subtle visual or audio inconsistencies.</p><p>More broadly, we must redouble efforts to improve financial and AI literacy at both national and local levels to reduce societal vulnerability to deepfake scams, particularly in India, Pakistan and Nepal, where digital literacy remains low. This will require closer coordination between the public and private sectors, including tech firms, regulators and civil society groups. Together, these actors can develop robust real-time detection systems and standardised protocols to take the fight to deepfake perpetrators--and win.</p><p>Deepfakes have already caused serious harm to individuals and society. With the threat growing rapidly, the situation could soon spiral out of control. A world where deepfakes erode trust in public figures and cultural icons, fracture and polarise our communities, and destabilise economies and national security is not hypothetical--it is imminent. Confronting this reality with clarity, cooperation and resolve, and leveraging the tools at our disposal, is essential if society is to thrive in the age of deepfakes.</p><p><em>(The writer is the founder of a company that builds decentralised systems for AI-powered deepfake detection)</em></p><p><em>Disclaimer: The views expressed above are the author's own. They do not necessarily reflect the views of DH.</em></p>