Fact check: Yes, MIT did train an AI to think like a psychopath, to show how AI bias works

The claim: Researchers at MIT created a psychopathic AI

The age of artificial intelligences with their own unique personalities is not yet upon us, but some social media users are claiming one has already been made. And it may be a little bit terrifying.

"(Massachusetts Institute of Technology) scientists created a 'psychopath' AI," reads a screenshot of a tweet shared to Facebook on Aug. 26.

The tweet, posted in 2018 by entertainment website The A.V. Club, asserts scientists were able to create this psychologically questionable AI by feeding it violent content from Reddit, an online forum and social news website.

The Facebook post and other similar ones shared to Instagram have gained around 10,000 likes, shares and comments in recent months, according to CrowdTangle, a social media insights tool.

A psychopathic artificial intelligence may sound like the start of a dystopian future. But it's actually true, although humans can rest assured this particular bot had a very limited scope.


USA TODAY reached out to the Facebook users who shared the claim for comment.

AI bias comes from data source

In June 2018, researchers at MIT unveiled an AI nicknamed Norman, after Norman Bates from Alfred Hitchcock's cult classic film "Psycho."

The researchers trained Norman to perform image captioning, a deep learning task in which an AI generates text descriptions of an image. Captions from a graphic subreddit dedicated to images of gore and death were fed to the fledgling AI. No actual images of people dying were used, due to ethical concerns, the researchers said on the project website.
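The MIT team did not release Norman's code, but image captioning itself is a standard task. As a rough illustration only, here is a minimal sketch using the open-source Hugging Face transformers library, which is not affiliated with the project; the model name below is simply a publicly available captioner standing in for whatever the researchers used.

```python
# A minimal image-captioning sketch, NOT the MIT project's actual code.
# Assumes the open-source Hugging Face "transformers" library and a
# publicly available pretrained captioning model; Norman's own model
# and training data were never released.
from transformers import pipeline

# Load a generic pretrained image-captioning model.
captioner = pipeline("image-to-text", model="nlpconnect/vit-gpt2-image-captioning")

# Generate a caption for an image file or URL ("photo.jpg" is hypothetical).
result = captioner("photo.jpg")
print(result[0]["generated_text"])  # e.g. "a baseball glove on a table"
```

Norman differed from a model like this only in what text it learned from: the captions it trained on came from the violent subreddit rather than a standard dataset.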

Norman was then put through a Rorschach inkblot test, a type of psychological assessment that has the test taker describe what the inkblots look like to them. The final captions generated by Norman and an AI trained through more typical methods were compared.

The difference was stark.

For an inkblot that looked like "a group of birds sitting on top of a tree branch" to the standard AI, Norman saw "a man is electrocuted and catches to death." For one the standard AI captioned, "A black and white photo of a baseball glove," Norman came up with, "Man is murdered by machine gun in broad daylight."

Norman's responses – although demonstrating an AI can be as dark and macabre as any human – illustrated the researchers' larger point: AI bias is rooted in the data an AI is fed, not in its algorithms.


"The data that is used to teach a machine-learning algorithm can significantly influence its behavior," the team said on the project website. "So when people talk about AI algorithms being biased and unfair, the culprit is often not the algorithm itself, but the biased data that was fed to it."

Our rating: True

Based on our research, we rate TRUE the claim that researchers at MIT created a psychopathic AI. In 2018, MIT researchers created a caption-generating AI called Norman that was trained on captions from a graphic Reddit forum dedicated to disturbing content. Norman's purpose was to demonstrate that AI bias is rooted in the data an AI is given, not necessarily in its algorithms.



