KEY POINTS
OpenAI’s ChatGPT can now “see, hear and speak,” the company said.
The update to the chatbot will roll out to paying users in the next two weeks, OpenAI said.
OpenAI’s big feature push comes alongside ever-rising stakes of the AI arms race among chatbot leaders such as OpenAI, Microsoft, Google and Anthropic.
OpenAI’s ChatGPT can now “see, hear and speak,” or, at least, understand spoken words, respond with a synthetic voice and process images, the company announced Monday.
The update to the chatbot — OpenAI’s biggest since the introduction of GPT-4 — allows users to opt into voice conversations on ChatGPT’s mobile app and choose from five different synthetic voices for the bot to respond with. Users will also be able to share images with ChatGPT and highlight areas of focus or analysis (think: “What kinds of clouds are these?”).
The changes will be rolling out to paying users in the next two weeks, OpenAI said. While voice functionality will be limited to the iOS and Android apps, the image processing capabilities will be available on all platforms.
The big feature push comes alongside ever-rising stakes of the artificial intelligence arms race among chatbot leaders such as OpenAI, Microsoft, Google and Anthropic. In an effort to encourage consumers to adopt generative AI into their daily lives, tech giants are racing to launch not only new chatbot apps, but also new features, especially this summer. Google has announced a slew of updates to its Bard chatbot, and Microsoft added visual search to Bing.