ChatGPT is seeing competition from Microsoft's Bing and Google's Bard. Here's a breakdown of how all three chatbots fare.
When ChatGPT launched late last year, it earned instant and widespread attention for bringing an AI engine to the masses, free of charge. Suddenly, anyone could type in a query and ChatGPT would give a novel, humanlike answer in seconds. From writing an essay on the First Crusade to composing a short poem about Al Gore's love of Toyota Prii (the plural of Prius), ChatGPT would spit out answers in a way Google or Bing never could.
Where traditional search engines return a list of links to the websites that most closely match a person's query, ChatGPT gives people an answer directly, drawing on large sets of data and using a large language model (LLM) to produce sentences that mimic a human response. It's been described as autocomplete on steroids.
Still, not all AI chatbots are built the same. In the tests below, we compared responses from the paid version of ChatGPT, which uses GPT-4 (the free version uses GPT-3.5), with responses from both the chatbot built into Microsoft's Bing search engine and Google's own Bard AI system. (GPT, by the way, stands for "generative pretrained transformer.") Bard is currently in an invite-only beta, and Bing's chatbot is free but requires people to use Microsoft's Edge web browser.