Artificial intelligence has emerged as a powerful force shaping modern life. Powered by machine learning algorithms and predictive analytics, AI offers wide-ranging benefits to businesses and individuals alike. While AI holds promise because of the speed, accuracy and scale at which it can accomplish tasks, its flip side is the growing sophistication and proliferation of fraud. One of the most concerning developments is the rise of deepfake-driven financial fraud. Pi-Labs’ report, Digital Deception Epidemic: 2024 Report on Deepfake Fraud’s Toll on India, estimates that deepfake fraud could cause losses of Rs 70,000 crore in 2025.

Deepfakes are a product of deep learning applied to the creation of synthetic media. Deep learning uses powerful, multilayered neural networks to analyse vast amounts of data, and this capability is harnessed to create highly realistic images, audio and videos. According to a global survey by online security firm McAfee, 70% of people cannot confidently tell the difference between a real voice and a cloned one.

Such hyper-realistic videos or voice recordings carry serious risks, including crippling financial losses, erosion of public trust, and intensified regulatory scrutiny.

Deepfake fraud relies on impersonation, leveraging AI’s remarkable ability to mimic human voices and create realistic videos. For example, fraudsters can produce a deepfake video of a senior executive authorising a transaction or clone the voice of a loved one asking for financial help.

Financial institutions have long relied on traditional KYC processes to onboard customers. The realism achieved by deepfake fraudsters has challenged KYC checks that rely on facial recognition. Phone verification is equally vulnerable: as little as 15 seconds of a person’s voice is enough to create a deepfake.

In response, financial institutions have adopted video verification to prevent deepfake-based KYC fraud. Yet this too is under threat, as deepfake technology has evolved to simulate blinking, subtle head movements and even micro-expressions. Deepfake use is currently led by video (46%) and images (32%), followed by audio (22%).

The most immediate threat facing financial institutions and individuals is direct financial loss. Deepfake fraud hit businesses across industries globally in 2024, causing an average loss of almost $450,000, according to a report by identity verification company Regula.

For businesses, especially financial institutions, the consequences of deepfake fraud go far beyond immediate financial losses: reputational damage and the erosion of customer trust are difficult to quantify, precisely because of their long-term impact. A KPMG survey in India found that 72% of organisations consider reputational damage the severest impact of fraud.

With fraudsters deploying ever more advanced technologies to find new ways to attack, governments and regulatory bodies are framing more stringent regulations.

Detecting deepfakes requires a powerful, multi-faceted approach, far more advanced than traditional security paradigms.

Tech to the rescue

Artificial intelligence: Advanced AI and machine learning algorithms can play an important role in detecting anomalies in facial features, cross-checking digital footprints, identifying irregular head movements, facial expressions and changes in voice timbre, and spotting lip-sync errors that may escape the human eye.
Biometric inputs may also be analysed in real time for signs of synthetic manipulation.

Blockchain-based identity verification: Blockchain technology offers a robust solution for identity verification by creating immutable, verifiable digital identities. By decentralising and encrypting identity data, blockchain can make it harder for fraudsters to fabricate or clone personas with deepfakes.

Training and awareness: Training programmes are essential to equip employees with the knowledge to recognise the red flags of deepfake scams. Banking customers must be advised to be wary of unknown callers, to pause before reacting to a supposed emergency, to agree on a code word with loved ones to quickly confirm their identity, and to check the source of videos or photos before taking any action.

The rise of deepfake fraud signals a pivot in digital risk management. Financial institutions need to embrace defences built on the latest technologies and foster a culture of vigilance to safeguard not just themselves but also the stability of the economy and the global financial system.

(The writer is the chief of operations and customer success of a financial platform)