<p><em>By F D Flam</em></p><p>One of the more subtle and insidious threats posed by artificial intelligence and related technology is its ability to tamper with memories.</p><p>Psychologist Elizabeth Loftus has spent the last 50 years demonstrating how easily humans can be manipulated into remembering things that never happened — especially in the hands of prosecutors and police questioning witnesses.</p><p>Now Loftus, a professor at the University of California, Irvine, has teamed up with researchers at the Massachusetts Institute of Technology to explore how AI can manipulate what we think we remember. The manipulation works even when subjects know they’re looking at AI-generated text and images. The findings suggest that artificial intelligence could amplify humans’ ability to implant false memories.</p><p>In a famous series of experiments starting in the 1970s, Loftus showed that with the right suggestions, psychologists could convince people that they had been lost in a shopping center as children, or that they’d been sickened by eggs or strawberry ice cream at a picnic. The latter suggestion actually put people off those foods. Despite the evidence, we still can’t shake the idea that memory is like a tape recording of events — and that misperception of our own minds makes us vulnerable.</p><p>“People who adhere to this tape recorder model of memory don’t seem to appreciate that memory is a constructive process,” Loftus said, explaining that our brains build memories from bits and pieces acquired at different times. We intuitively understand forgetting as memories being lost or fading, but not as the addition of false details.</p><p>Loftus has also been studying the mind-scrambling potential of “push polls,” in which pollsters embed misinformation in a question, such as: “What would you think of Joe Biden if you knew he’d been convicted of tax evasion?” She said it’s chilling to consider how effectively AI might conduct this kind of deception at scale.</p><p>Memory manipulation, notes Pat Pataranutaporn, a researcher at the MIT Media Lab, is a very different process from fooling people with deepfakes. You don’t need to create an elaborate fake of, say, the <em>New York Times</em> website — you just have to convince people they read something there in the past. “People don’t usually question their own memory,” he said.</p><p>Pataranutaporn was the lead author of three memory experiments, the first of which showed how chatbot interrogators can alter witness testimony simply by embedding suggestions into their questions — an AI extension of Loftus’ earlier work on human interrogations.</p><p>In that study, participants watched video footage of an armed robbery. Some were then asked misleading questions, such as: “Was there a security camera near the place the robbers parked the car?” About a third of those participants later recalled seeing the robbers arrive by car. There was no car. The false memory persisted even a week later.</p><p>Subjects were divided into three groups: one received no misleading questions, another received them in written form, and the third received them from an AI chatbot. The chatbot group formed 1.7 times as many false memories as those who received the misleading information in writing.</p><p>Another study demonstrated that dishonest AI summaries or chatbots can easily insert false memories into a story people read. 
What was even more concerning, Pataranutaporn said, was that participants who received misleading AI summaries or chats also retained less of the real information from their reading — and reported less confidence in the true information they did recall.</p><p>The third study demonstrated how AI can implant false memories using images and video. Researchers divided 200 volunteers into four groups. Participants in each group looked at a set of 24 images — some were typical images found on news websites, while others were personal, such as wedding photos someone might post on social media.</p><p>During a second viewing a few minutes later, each group was shown a different version of the images. One group saw the same unaltered images. A second group saw AI-altered versions. A third group saw AI-altered images converted into short AI-generated videos. The final group viewed entirely AI-generated images that had been transformed into AI-generated videos.</p><p>Even the group that saw the original images retained a few false memories — not surprising, given how difficult it is to recall 24 distinct pictures. But participants exposed to any level of AI manipulation reported significantly more false memories. The group with the highest rate of memory distortion was the one that viewed AI-generated videos based on AI-generated images.</p><p>Younger people were somewhat more likely to incorporate false memories than older people, and education level didn’t seem to affect susceptibility. Notably, the false memories didn’t rely on fooling participants into thinking the AI-generated content was real — they were told at the outset that they’d be seeing AI-created content.</p><p>In the image experiment, some of the alterations entailed changes in the background — adding a military presence to a public gathering, or changing the weather. The images retained most of the features of the originals. As I’ve learned from experts, to have a real impact, disinformation must incorporate a story that’s at least 60 per cent true.</p><p>This latest research should spur more discussion of the effects of technology on our grasp of reality, which can go beyond merely spreading misinformation. Social media algorithms also encourage people to embrace fringe ideas and conspiracy theories by creating a false impression of their popularity and influence.</p><p>AI chatbots will have even more subtle and unexpected effects on us. We should all be open to having our minds changed by new facts and a strong argument — and wary of attempts to change our minds by distorting what we see, feel or remember.</p>