Google's new Bard chatbot told an AI expert it was trained using Gmail data. The company says that's inaccurate and Bard 'will make mistakes.'
Google Bard started rolling out this week, and it's off to a bit of a rocky start.
The AI chatbot told one user that it was trained on data from Gmail, among other sources.
Google later said this was inaccurate, noting that Bard is an "early experiment" that "will make mistakes."
Google Bard began rolling out to some users this week, and it's already hit a few snags.
AI expert Kate Crawford posted an exchange she had with the new AI chatbot in which she asked where Bard's training dataset comes from.
In her screenshot of the conversation, Bard responds that its dataset "comes from a variety of sources," one of which is "Google's internal data," including data from Gmail.
"Anyone a little concerned that Bard is saying its training dataset includes... Gmail? I'm assuming that's flat out wrong, otherwise Google is crossing some serious legal boundaries," Crawford wrote.
A few hours later, Google tried to set the record straight.
"Bard is an early experiment based on Large Language Models and will make mistakes. It is not trained on Gmail data," the company said in a tweet.
In a separate response that has since been deleted, Google also said, "No private data will be used during Barbs [sic] training process."
In Bard's initial response to Crawford, the chatbot said it was also trained using "datasets of text and code from the web, such as Wikipedia, GitHub, and Stack Overflow," as well as data from companies that "partnered with Google to provide data for Bard's training."
Google CEO Sundar Pichai has instructed employees to anticipate errors as people begin using Bard.
"As more people start to use Bard and test its capabilities, they'll surprise us. Things will go wrong," he wrote in an email to staff on Tuesday, published by CNBC.