"Godfather of AI" leaves Google to talk about potential dangers

The man known as the "godfather of artificial intelligence" quit his job at Google so he could freely speak about the dangers of AI, the New York Times reported Monday.

Geoffrey Hinton, who worked at Google and has mentored AI's rising stars, began studying artificial intelligence more than 40 years ago, he told CBS Mornings in late March. He joined the company in 2013, according to his Google Research profile, and designed machine learning algorithms during his time there.

"I left so that I could talk about the dangers of AI without considering how this impacts Google," Hinton tweeted Monday. "Google has acted very responsibly."

Many developers are working toward creating artificial general intelligence. Until recently, Hinton said, he thought the world was 20 to 50 years away from it, but he now believes developers "might be" close to computers that can come up with ideas to improve themselves.

"That's an issue, right? We have to think hard about how you control that," he said in March.

Artificial intelligence pioneer Geoffrey Hinton / Credit: MARK BLINCH / REUTERS

Hinton has called for people to figure out how to manage technology that could greatly empower a handful of governments or companies.

"I think it's very reasonable for people to be worrying about these issues now, even though it's not going to happen in the next year or two," Hinton said.

Hinton also told CBS that he did not think it was inconceivable that AI could try to wipe out humanity.

When asked about Hinton's decision to leave, Google's chief scientist, Jeff Dean, told BBC News in a statement that the company remains committed to a responsible approach to AI.

"We're continually learning to understand emerging risks while also innovating boldly," he said.

Google CEO Sundar Pichai has called for AI advancements to be released in a responsible way. In an April interview with 60 Minutes, he said society needed to quickly adapt and come up with regulations for AI in the economy, along with laws to punish abuse.

"This is why I think the development of this needs to include not just engineers, but social scientists, ethicists, philosophers and so on," Pichai told 60 Minutes. "And I think we have to be very thoughtful. And I think these are all things society needs to figure out as we move along. It's not for a company to decide."
