Amazon's AI stores seemed too magical, and they were

Last Updated 04 April 2024, 03:07 IST

By Parmy Olson

There’s a grey area in artificial intelligence filled with millions of humans who work in secret: they’re often hired to train algorithms but end up doing much of the work those algorithms are supposed to do. These crucial workers took the spotlight this week when The Information reported that Amazon’s Just Walk Out technology, which allowed customers to grab grocery items from a shelf and walk out of the store, was being phased out of its grocery stores. It partly relied on more than 1,000 people in India who watched and labeled videos to make sure the checkouts were accurate.

Amazon says on its website that Just Walk Out uses “computer vision, sensor fusion, and deep learning” but doesn’t mention contractors. The company told Gizmodo that the workers were annotating videos to help improve the system, and that they validated only a “small minority” of shopping visits, stepping in when its AI couldn’t determine a purchase.

Even so, the Amazon story is a stark reminder that “artificial intelligence” still often requires armies of human babysitters to work properly. Amazon even has an entire business unit, known as Amazon Mechanical Turk, devoted to helping other companies do just that: train and operate AI systems.

Thousands of freelancers around the world count themselves as “MTurkers,” and the unit is named after the story of the Mechanical Turk, an 18th-century chess-playing contraption that was secretly controlled by a man hiding inside.

Far from an incident consigned to history, there are plenty more examples of companies that have failed to mention humans pulling the levers behind supposedly cutting-edge AI technology. To name just a few:

* Facebook famously shut down its text-based virtual assistant M in 2018 after more than two years, during which the company used human workers to train (and operate) its underlying artificial intelligence system.

* A startup called x.ai, which marketed an “AI personal assistant” that scheduled meetings, had humans doing that work instead and shut down in 2021 after it struggled to get to a point where the algorithms could work independently.

* A British startup called Builder.ai sold AI software that could build apps even though it partly relied on software developers in India and elsewhere to do that work, according to a Wall Street Journal report.

There’s a fine line between faking it till you make it — justifying the use of humans behind the scenes on the premise they will eventually be replaced by algorithms — and exploiting the hype and fuzzy definitions around AI to exaggerate the capabilities of your technology. This pseudo AI or “AI washing” was widespread even before the recent generative AI boom.

West Monroe Partners, for instance, which does due diligence for private-equity firms, examined marketing materials provided to prospective investors by 40 US firms that were up for sale in 2019 and analyzed their use of machine learning and AI models. Using a scoring system, it found that the companies’ marketing claims overstated their technology’s AI and machine-learning capabilities by more than 30 per cent, on average.

That same year, a London-based venture capital firm called MMC found that out of 2,830 startups in Europe that were classified as being AI companies, only 1,580 accurately fit that description.

One of the obvious problems with putting humans behind the scenes of AI is that they might end up having to snoop on people’s communications. So-called “supervised learning” is why Amazon had thousands of contractors listening in on commands to Alexa, for instance. But there’s also the broader proliferation of snake oil.

The good news for investors is that regulators are on the case. Last month Ismail Ramsey, the US attorney for the Northern District of California (aka Silicon Valley), said he would target startups that mislead investors about their use of AI before they go public.

In February, Securities and Exchange Commission Chair Gary Gensler warned that AI washing could break securities law. “We want to make sure that these folks are telling the truth,” Gensler said in a video message. He meant it: A month later, two investment firms agreed to pay a combined $400,000 to settle SEC charges that they had exaggerated their use of AI.

Even when AI systems aren’t exaggerated, it’s worth remembering that a vast industry of hidden workers is still propping up many high-tech AI systems, often for low wages. (This has been well documented by academics like Kate Crawford and in books like Code Dependent by Madhumita Murgia.) In other words, when AI seems too magical, sometimes it is.
