
OpenAI unveils new ChatGPT that listens, looks and talks

The San Francisco artificial intelligence startup unveiled a new version of its ChatGPT chatbot that can receive and respond to voice commands, images and videos.
Last Updated : 14 May 2024, 17:01 IST


San Francisco: As Apple and Google transform their voice assistants into chatbots, OpenAI is transforming its chatbot into a voice assistant.

On Monday, the San Francisco artificial intelligence startup unveiled a new version of its ChatGPT chatbot that can receive and respond to voice commands, images and videos.

The company said the new app — based on an AI system called GPT-4o — juggles audio, images and video significantly faster than previous versions of the technology. The app is available free of charge, for both smartphones and desktop computers.

“We are looking at the future of the interaction between ourselves and machines,” said Mira Murati, the company’s chief technology officer.

The new app is part of a wider effort to combine conversational chatbots such as ChatGPT with voice assistants like the Google Assistant and Apple’s Siri. As Google merges its Gemini chatbot with the Google Assistant, Apple is preparing a new version of Siri that is more conversational.

OpenAI said it would gradually share the technology with users “over the coming weeks.” This is the first time it has offered ChatGPT as a desktop application.

The company previously offered similar technologies from inside various free and paid products. Now, it has rolled them into a single system that is available across all its products.

During an event streamed on the internet, Murati and her colleagues showed off the new app as it responded to conversational voice commands, used a live video feed to analyze math problems written on a sheet of paper and read aloud playful stories that it had written on the fly.

The new app cannot generate video. But it can generate still images that represent frames of a video.

With the debut of ChatGPT in late 2022, OpenAI showed that machines can handle requests more like people. In response to conversational text prompts, it could answer questions, write term papers and even generate computer code.

ChatGPT was not driven by a set of rules. It learned its skills by analyzing enormous amounts of text culled from across the internet, including Wikipedia articles, books and chat logs. Experts hailed the technology as a possible alternative to search engines like Google and voice assistants like Siri.

Newer versions of the technology have also learned from sounds, images and video. Researchers call this “multimodal AI.” Essentially, companies like OpenAI began to combine chatbots with AI image, audio and video generators.

(The New York Times sued OpenAI and its partner, Microsoft, in December, claiming copyright infringement of news content related to AI systems.)

As companies combine chatbots with voice assistants, many hurdles remain. Because chatbots learn their skills from internet data, they are prone to mistakes. Sometimes, they make up information entirely — a phenomenon that AI researchers call “hallucination.” Those flaws are migrating into voice assistants.

While chatbots can generate convincing language, they are less adept at taking actions like scheduling a meeting or booking a plane flight. But companies like OpenAI are working to transform them into “AI agents” that can reliably handle such tasks.

OpenAI previously offered a version of ChatGPT that could accept voice commands and respond with voice. But it was a patchwork of three different AI technologies: one that converted voice to text, one that generated a text response and one that converted this text into a synthetic voice.

The new app is based on a single AI technology — GPT-4o — that can accept and generate text, sounds and images. This means that the technology is more efficient, and the company can afford to offer it to users for free, Murati said.
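The difference between the two designs can be sketched in code. The following is a purely illustrative sketch, not OpenAI's implementation: every function name and return value here is hypothetical, standing in for real models such as a transcription system, a text-only language model and a speech synthesizer.

```python
# Hypothetical sketch of the two voice-assistant architectures described above.
# None of these functions reflect real OpenAI code; they are placeholders.

def speech_to_text(audio):
    # Placeholder for a transcription model.
    return "what's the weather?"

def generate_reply(text):
    # Placeholder for a text-only language model.
    return "It looks sunny today."

def text_to_speech(text):
    # Placeholder for a speech-synthesis model.
    return f"<audio: {text}>"

def cascaded_voice_assistant(audio):
    """Old approach: three separate models run in sequence,
    so the user waits for the sum of all three latencies."""
    transcript = speech_to_text(audio)
    reply = generate_reply(transcript)
    return text_to_speech(reply)

def multimodal_voice_assistant(audio):
    """New approach: a single model accepts audio and produces
    audio directly, removing the hand-offs between models."""
    # Placeholder for one end-to-end multimodal model.
    return "<audio: It looks sunny today.>"
```

In the cascaded design, information such as tone of voice is also lost at each hand-off, since only plain text passes between the stages; a single multimodal model avoids both the added latency and that loss.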

“Before, you had all this latency that was the result of three models working together,” Murati said in an interview with the Times. “You want to have the experience we’re having — where we can have this very natural dialogue.”

Published 14 May 2024, 17:01 IST
