
Researchers build tools to counter AI’s privacy threat

As the use of AI becomes more widespread and concerns over privacy grow, researchers are finding ways to combat the threat
Last Updated 10 May 2021, 09:07 IST

Creating and training artificial intelligence (AI) has become easier than ever, raising questions about how the technology should be regulated and what it could mean for online privacy. As AI recognition systems become more commonplace, some researchers fear the technology could lead us down a dystopian, Big Brother-like rabbit hole.

Cheaper hardware, shorter training times courtesy of research advances, and easy access to photos on social media have made it possible for anyone with a decent computer and coding knowledge to develop AI-based facial recognition software. Although AI could make our daily lives more efficient and convenient, it comes at the cost of privacy.

According to a New York Times article, the New York-based startup Clearview AI has trained a tool on billions of images that it claims to have aggregated from Facebook, YouTube and millions of other websites, and counts hundreds of law enforcement agencies in the United States among its customers.

Some AI researchers and developers have taken notice of rising privacy concerns and are taking steps to build software to counter the threat of increased data collection using the technology.

A group of researchers from the University of Chicago invented a tool called Fawkes to prevent AI facial recognition technology from gleaning insights into users’ personal data. Fawkes works by making tiny changes to an image that are almost unobservable to the human eye but are capable of fooling AI into misidentifying who or what it sees in a photo.
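The principle behind such cloaking can be illustrated with a toy example. The sketch below (Python with NumPy) uses a made-up linear "face matcher" and a simple gradient-sign perturbation as illustrative stand-ins; Fawkes' actual cloaking optimisation is far more sophisticated, but the core idea is the same: a tiny, targeted change to every pixel flips the model's decision while remaining nearly invisible.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in for a face matcher: a fixed linear scorer over a 64x64 image.
# score > 0 -> "matches identity A"; score <= 0 -> "no match".
w = rng.normal(size=(64, 64))

def matches(image):
    return float((w * image).sum()) > 0.0

image = rng.uniform(0.4, 0.6, size=(64, 64))   # a flat grey "photo"

# Gradient-sign "cloak": nudge every pixel by the same tiny amount, in the
# direction that pushes the score across the decision boundary.
# (This is NOT Fawkes' algorithm, only a demonstration of the principle.)
score = (w * image).sum()
eps = 1.5 * abs(score) / np.abs(w).sum()       # just enough to flip the decision
cloaked = np.clip(image - eps * np.sign(w) * np.sign(score), 0.0, 1.0)

print(matches(image), matches(cloaked))        # the decision flips
print(np.abs(cloaked - image).max())           # yet each pixel barely changes
```

For a real photo the perturbation budget would be chosen for imperceptibility rather than computed from the model's score, and the target model is unknown, which is why tools like Fawkes optimise against feature extractors rather than a single classifier.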

“This technology can be used as a key by an individual to lock their data,” Daniel Ma from Deakin University in Australia told the MIT Technology Review. “It’s a new frontline defense for protecting people’s digital rights in the age of AI.”

Researcher Emily Wenger from the University of Chicago and her team found that their tool was 100% effective against several widely used commercial facial recognition systems, including Amazon’s AWS Rekognition, Microsoft Azure, and Face++.

Fawkes has already been downloaded almost half a million times from the project website and one user has also built a third-party online version, though there is no phone app as yet.

Another online tool called LowKey, developed by researchers at the University of Maryland, expands on Fawkes’ capabilities by using a stronger protection system that can fool even pretrained commercial models.

LowKey turns images into what the researchers term “unlearnable examples”. Rather than fooling an already-trained AI into making a mistake, it perturbs a photo so that a model effectively ignores it during training, reducing whatever the model would have learned from the image to random guesswork.
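A greatly simplified sketch of this idea is shown below (Python with NumPy). The "model", the synthetic data, and the poisoning step are all illustrative assumptions, not LowKey's actual method: here the perturbation simply cancels the label-predictive component of each training image, so a model trained on the poisoned set performs near chance on clean images.

```python
import numpy as np

rng = np.random.default_rng(1)
d = 400
signal = np.full(d, 0.1)                 # the label-predictive direction

def make_data(n):
    y = rng.choice([-1.0, 1.0], size=n)
    x = rng.normal(size=(n, d)) + np.outer(y, signal)
    return x, y

def train_centroid(x, y):
    # A minimal "model": difference of class means; predict by its sign.
    return x[y > 0].mean(axis=0) - x[y < 0].mean(axis=0)

def accuracy(w, x, y):
    return float((np.sign(x @ w) == y).mean())

x_train, y_train = make_data(1000)
x_test, y_test = make_data(1000)

# "Unlearnable" poisoning (simplified): perturb each training image just
# enough to cancel its label-predictive component. Each image changes only
# slightly, yet the training set no longer carries usable label information.
x_poisoned = x_train - np.outer(y_train, signal)

clean_model = train_centroid(x_train, y_train)
poisoned_model = train_centroid(x_poisoned, y_train)

print(accuracy(clean_model, x_test, y_test))     # well above chance
print(accuracy(poisoned_model, x_test, y_test))  # close to chance (~0.5)
```

The real LowKey perturbations are crafted against pretrained feature extractors and must survive image compression and resizing, which this linear toy does not attempt to model.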

While safety-conscious web surfers wait for regulation to prevent wider misuse of AI and facial recognition, software that counters the technology is likely to remain the only way to allay privacy concerns at an individual level.

(Published 10 May 2021, 09:05 IST)
