- Google has reportedly warned employees to be wary of using chatbots like Google Bard due to privacy concerns.
- The company is worried employees will feed LLM chatbots confidential information and thus cause leaks.
- Google has also reportedly told engineers to avoid directly using code generated by LLM chatbots.
Late last year, we heard that Google CEO Sundar Pichai had called a metaphorical “code red” for the company. The problem? Large language model (LLM) chatbots like ChatGPT are the first significant threat to Google’s cash cow, Search. Google then fast-tracked the launch of its own chatbot, Google Bard, which is available today as an “experimental” product.
Now, we are learning that, despite Google’s frantic push to inject AI into everything it does, the company is not nearly as gung-ho behind the scenes. Per Reuters, Google has reportedly warned employees to be wary of using LLM chatbots — including its own Google Bard — over privacy and company security concerns.