<p>In 2019, two legal scholars, Danielle Citron and Robert Chesney, authored an influential essay introducing a new social construct: the ‘Liar’s Dividend’. Their concern was how deepfakes could create political and social disruption. <a href="https://www.deccanherald.com/tags/technology">Technology</a> to produce fake video and audio that appear real is becoming cheaper and commonplace. Anyone who wants to malign a politician or celebrity can easily do so with a veneer of realism. While reports of the discussions, in the Karnataka legislature no less, of the alleged ‘honey trapping’ of politicians might make for entertaining, even salacious, reading, the most common uses of deepfakes aren’t political. They are mostly created for two purposes: to humiliate and demean women celebrities, and, more often, for scams, as when scammers deepfake a family member’s voice.</p>.<p>The more interesting part of the essay comes later. The authors predict that deepfakes will cause a second-order problem. If deepfakes become widespread, the common citizen may grow cynical and doubt the veracity of news reportage in the media. This gives politicians and celebrities an escape route: if an actual video of their wrongdoing or misconduct were released, they could easily claim that it is a deepfake or an attempted honey trap. Deepfakes thus pose an accountability threat: unscrupulous public figures can routinely and falsely claim that legitimate audio or video content is artificially generated, attributing malice to their accusers when, in fact, they themselves are culpable. The authors call this dynamic the ‘Liar’s Dividend’: liars aiming to avoid accountability will, in the public eye, become more believable as public awareness of the threats posed by deepfakes grows.</p>.<p>The concept is straightforward: as individuals become aware that deepfakes can appear very realistic, they may find false allegations that authentic content was created by AI just as convincing. Certain politicians may leverage the fear of deepfakes to dodge responsibility for their actual behaviour; however, this situation does not have to upend the epistemic foundations of our democracy. Establishing norms against these lies and developing technology to determine the provenance of audiovisual content can help citizens discern the truth and blunt the benefits of lying in public life. Morphing audiovisual media is not new. Three techniques are commonplace: face swap, lip sync, and puppet master. In a face swap, a person’s face in a video is replaced with another’s. In a lip sync, a person’s lip movements are altered to match a chosen audio recording. In the more advanced puppet-master approach, a performer’s movements and expressions drive an animated likeness of the target person.</p>.<p>It is important to remember that deepfakes exacerbate uncertainty, making citizens feel that it is epistemically irresponsible simply to believe that what a video depicts is true. This can undermine their ability to consider true information objectively. The mere suspicion that AI-generated content is in circulation can lead people to dismiss genuine information. The implications are enormous because this reinforces a post-truth society in which people are more likely to accept an argument that appeals to emotions and beliefs rather than one grounded in objective facts.
The thin lines dividing truth from lies, honesty from dishonesty, and fiction from non-fiction are blurring. Deceiving others becomes a lucrative profession. In a world where such uncertainty is widespread, public figures and private citizens alike can exploit it to reap the Liar’s Dividend. In the courts, lawyers are already mounting the ‘deepfake defence’, asserting that real audiovisual evidence against a defendant is fake.</p>.<p>Countering the Liar’s Dividend poses a challenge. Since it allows bad actors to dismiss real evidence as fake, countering it requires a coherent policy response and coordinated action by the state and civil society. Courts, media, and institutions should adopt established digital forensic verification methods, including detection algorithms, digital watermarking, and provenance checks. Countries such as the US have started regulating deepfakes. Legislation should aim to protect national security against the threats posed by deepfake technology and provide legal recourse to victims of harmful deepfakes.</p>.<p>Deepfakes and the Liar’s Dividend are two sides of the same coin in our digital information landscape, and both are dangerous in different ways. A summing up should help. First, what are we dealing with? Deepfakes are AI-generated fake images, videos, or audio that mimic real people. The Liar’s Dividend is the idea that someone caught doing or saying something incriminating can simply claim ‘it is fake’ – and people might believe them, thanks to deepfakes.</p>.<p><strong>What can a citizen do?</strong></p>.<p>Be a sceptical viewer. Don’t accept sensational videos at face value – pause and ask: Who posted this? Is it reported elsewhere by credible sources? Does it seem too outrageous to be true? Use fact-checking websites.</p>.<p>Recognise the tell-tale signs of deepfakes: lip-sync errors, unnatural blinking or facial expressions, strange lighting or shadows, and unusual audio artefacts.</p>.<p>Protecting your digital identity is key. Be cautious about posting personal images and videos online – they can be used to create deepfakes. Use privacy settings on social media. If you’re targeted, report it to cybercrime units; in India, for instance, at cybercrime.gov.in.</p>.<p>The Liar’s Dividend is eroding our shared trust in the truth. Since there has been a brouhaha about honey traps, a good place to start is to shift the burden of proof: instead of allowing bad actors to merely claim ‘it is a deepfake’, they should be compelled to provide counter-evidence proving that claim. It is time to speak truth to power. It is not AI but the eternal human quest for truth that can counter the Liar’s Dividend.</p>.<p><em>(The writer is Director, School of Social Sciences and the School of Law, Ramaiah University of Applied Sciences)</em></p>