Last updated on December 25th, 2022 at 04:51 pm
According to The New York Times, the launch of ChatGPT, OpenAI's popular artificial-intelligence chatbot that can hold natural conversations, has prompted Google management to declare a "code red."
This has raised fears about the future of Google’s search engine.
The Times reviewed an internal memo and an audio recording indicating that Sundar Pichai, CEO of Google and Alphabet, has attended several meetings on Google's AI strategy and directed numerous teams within the company to refocus their efforts on countering the threat ChatGPT poses to its search-engine business.
Shifting Focus
The Times reported that teams from Google's research, trust, and safety divisions, among others, have been told to shift their focus toward developing and releasing AI prototypes and products.
According to The Times, some personnel have been tasked with developing AI products that generate art and images, comparable to OpenAI's DALL-E, which millions of people use.
A Google spokesperson did not immediately respond to a request for comment.
Google’s decision to expand its AI product line comes as experts and Google employees disagree over whether ChatGPT, built by OpenAI under former Y Combinator president Sam Altman, could displace the search engine and undermine Google’s ad-revenue business model.
The Real Alarm
Sridhar Ramaswamy, who ran Google's advertising division from 2013 to 2018, told Insider that ChatGPT could keep consumers from clicking on Google links with ads, which brought in $208 billion in 2021, or 81% of Alphabet's total revenue.
ChatGPT surpassed a million users within five days of its public launch in November. It answers questions in a natural, conversational way, drawing on information gathered from millions of websites.
Users have asked the chatbot to write college essays, help with programming, and even be a therapist.
However, some people have already pointed out that the bot frequently makes mistakes.
AI experts told Insider that ChatGPT cannot fact-check what it says and cannot distinguish between a verified fact and misinformation. AI researchers refer to this tendency to invent answers as "hallucination."
How far is this tech?
The Times reports that Google has been hesitant to release its AI chatbot LaMDA (Language Model for Dialogue Applications) to the public because of its large margin of error and potential for harm.
A recent CNBC article reported that Google executives were worried about the "reputational risk" of making it widely available in its current form.
Before ChatGPT's public release, Zoubin Ghahramani, who leads Google's AI lab Google Brain, told The Times that chatbots are "not something that people can use reliably on a daily basis."
Rather than replacing its search engine, Google is likely to concentrate on improving it incrementally, experts told The Times.