Signal president calls ChatGPT "a bulls*** engine"

During Web Summit Rio 2023 last week, Yahoo News Chief Investigative Correspondent Michael Isikoff interviewed Signal President Meredith Whittaker. When Isikoff asked a ChatGPT-generated question about a Signal report that didn't exist, Whittaker called the chatbot "a bulls*** engine."

Video Transcript

MICHAEL ISIKOFF: Speaking of the predictive and perhaps, in some cases, inaccurate-- inaccuracies of AI, I have an example here, which I'd like to ask you about. Because I went on ChatGPT the other day and asked, what should I ask Meredith Whittaker of Signal about AI?

And the first couple of questions were so prosaic, I'm not even going to-- what inspired you to work in the field of AI, and how did you get started, blah, blah, blah. And then ChatGPT said I should ask, Signal recently published a report on the role of AI in content moderation. Can you tell us a bit more about the key findings from that report?

MEREDITH WHITTAKER: That's a lie.

MICHAEL ISIKOFF: Where is your-- what did your report say?

MEREDITH WHITTAKER: There was no report.

MICHAEL ISIKOFF: No report at all.

MEREDITH WHITTAKER: This is bull [MUTED]. It's a bull [MUTED] engine. And I'm quoting Princeton scholar Arvind Narayanan when I say that. So I'm allowed to use that language on stage. But what ChatGPT is, is it takes a huge amount of surveillance data-- data that is scraped from the darkest holes of the internet. It's Reddit, it's Wikipedia, it's message board comments, it's probably 4chan.

Nitasha Tiku at the "Washington Post" actually did a really brilliant exposition of what is in the data sets that train OpenAI-- or train ChatGPT via Microsoft's OpenAI-- and it's some pretty disturbing content. But it has all of that.

And then, based on having been trained on that-- so having been fed massive amounts of data and trained with huge amounts of computational power-- it predicts what is likely to be the next word in the sentence. So it's a statistical predictive engine.
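To make Whittaker's "statistical predictive engine" point concrete, here is a toy next-word predictor in Python. It is a deliberately simplified sketch: the tiny corpus and the bigram (word-pair counting) approach are assumptions for illustration only-- ChatGPT itself uses a large neural network trained on vastly more data-- but it shows the core idea she describes: the model samples a statistically likely next word, with no notion of whether the result is true.

```python
import random
from collections import Counter, defaultdict

# Toy corpus standing in for web-scale training data (an assumption
# for illustration; real systems train on far larger datasets).
corpus = (
    "signal is a messaging app . signal is private . "
    "chatgpt is a language model . a language model predicts the next word ."
).split()

# Count how often each word follows each preceding word (a bigram model).
follows = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1

def next_word(prev: str) -> str:
    """Sample the next word in proportion to how often it followed `prev`."""
    counts = follows[prev]
    return random.choices(list(counts), weights=list(counts.values()))[0]

# Generate a fluent-sounding continuation. Nothing here checks facts:
# the output only reflects the statistics of the training text.
word, out = "signal", ["signal"]
for _ in range(8):
    word = next_word(word)
    out.append(word)
print(" ".join(out))
```

Run a few times, the sketch produces grammatical-looking but arbitrary sentences, which is the gap Whittaker highlights between sounding plausible and being correct.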

And OK, that's a likely answer. But it's bull [MUTED], which is really, really important. Why are we using a bull [MUTED] engine for anything serious-- for real, for real? Because when you asked me this morning, you didn't know that was untrue.

MICHAEL ISIKOFF: I did not. I was-- I was madly googling to try to find this content moderation report--

MEREDITH WHITTAKER: Yeah. So it wastes your time--

MICHAEL ISIKOFF: --which I couldn't find.

MEREDITH WHITTAKER: --besmirches my name. And it gets much more serious than that, because we are in an information ecosystem that is overrun by falsehoods, by half-truths, by misinformation. And as a society, we need access to some form of shared reality if we're going to have healthy democratic deliberation that helps us govern and ensure social benefit.

That shouldn't be controversial. And yet we see these companies-- Microsoft, in particular-- releasing a ChatGPT demo that behaves kind of like that uncle who shows up to holiday gatherings, has a few drinks, and then just talks confidently about [MUTED] he does not know about.

And that's funny at the holidays, but that's not something we should be injecting into our information ecosystem, into our core infrastructures. We should not be championing this as something you add on to a search engine where people go to find information they don't already know and check facts.

It's stunning to me that any of this is controversial. But given the margins that are set to be made on generative AI, I understand that a lot of people are provoked to delusion by the prospect of profits.