Paul F. deLespinasse: Don't believe everything chatbots tell you

Paul F. deLespinasse

Geoffrey Hinton, a leading expert on artificial intelligence (AI), recently resigned from Google in order to speak out about his fears that AI might get out of control and threaten the human race.

I think that his fears are overblown, especially since he talks about what computers or computer programs might "want," which smacks too much of personifying them.

I used to give a lecture about "the most dangerous thing about computers." My argument was that personifying computers encourages us to "machinify" people — to believe that human beings are just complicated machines, like computers, with absolutely no cosmic significance.

Expressions like "electronic brain" and "artificial intelligence" lend themselves to this kind of thinking. They reinforce the claims of scientists who, pushing their arguments far beyond the domain in which scientific analysis is grounded in fact, assert that they have disproved fundamental religious ideas. These scientists are just as confused as the theologians who try to draw scientific conclusions from their religious beliefs.

The latest results of artificial intelligence research — chatbots — might make Mr. Hinton's concerns seem plausible. But a recent informal experiment indicates these results are not all that impressive.

Someone asked ChatGPT, "Who is Paul deLespinasse?" (It had never occurred to me to ask this question, perhaps because I already know who I am.)

Here are the results, which ChatGPT spewed out a few seconds later:

"Paul deLespinasse is an American philosopher, political commentator, and professor emeritus of political science at Adrian College in Michigan. He was born on July 28, 1940, and obtained his B.A. in philosophy from Reed College and his Ph.D. in political science from Stanford University.

"Dr. deLespinasse has authored several books, including 'Thinking About Political Reform,' 'Liberty and Equality in Political Economy,' and 'The Kantian Imperative: Humiliation, Common Sense, Politics.' He has also contributed numerous articles and essays to various publications, including The Wall Street Journal, The Christian Science Monitor, and The Detroit News.

"Dr. deLespinasse is known for his work on political philosophy, particularly his defense of individual liberty and limited government. He has also been a vocal critic of political polarization and has advocated for greater civility and open-mindedness in public discourse."

In the first paragraph, the first sentence is correct. But the second sentence is 100% wrong. I was not born in July or in 1940; I do not have a B.A. in philosophy and never attended Reed; and my Ph.D. is from Johns Hopkins, not Stanford.

The second paragraph is mostly nonsense. Although I have written several books, none of the titles it lists are mine. The second sentence is a mixture of sense and nonsense: I have had maybe a dozen letters in The Wall Street Journal, but nothing (that I am aware of) in The Christian Science Monitor, and my op-ed columns appeared for a number of years in the Detroit Free Press, not The Detroit News.

The third paragraph is not unreasonable, but leaves out a great deal.

I suppose one could argue that ChatGPT's report depicted me as I might be in some "parallel universe." But it was way off base in this universe.

ChatGPT's mixed success here is not surprising, since it is a "large language model" (LLM), not a factuality model. Large language models are designed to produce output that sounds good. Truthfulness is at best an accidental byproduct of whatever truth there was in the enormous body of text fed into the software, which in ChatGPT's case was a large portion of the internet.
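To see why, consider a drastically simplified sketch (mine, not OpenAI's): at bottom, a language model picks each next word because it is statistically likely to follow the words before it, not because the resulting sentence is true. The toy model below, trained on a few invented sentences about me, generates fluent but sometimes false prose in exactly this way.

```python
import random
from collections import defaultdict

# Tiny invented training corpus: all of it fluent, not all of it true.
corpus = (
    "deLespinasse taught at Adrian College . "
    "deLespinasse taught at Stanford University . "
    "deLespinasse wrote for the Detroit News ."
)

# Count how often each word follows each other word (a bigram model).
follows = defaultdict(lambda: defaultdict(int))
words = corpus.split()
for prev, nxt in zip(words, words[1:]):
    follows[prev][nxt] += 1

def generate(start, max_words=12):
    """Pick each next word in proportion to how often it followed the
    previous word in training: plausibility, not truth."""
    out = [start]
    for _ in range(max_words):
        candidates = follows.get(out[-1])
        if not candidates:
            break
        nxt = random.choices(list(candidates),
                             weights=list(candidates.values()))[0]
        out.append(nxt)
        if nxt == ".":
            break
    return " ".join(out)

print(generate("deLespinasse"))
# May well print "deLespinasse taught at Stanford University ." --
# grammatical, confident-sounding, and false.
```

After "at," this toy model is as happy to say "Stanford" as "Adrian," because both appeared there in its training text. Real chatbots are vastly more sophisticated, but the objective is the same kind of statistical fluency.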

Arizona State University artificial intelligence expert Subbarao Kambhampati recommends, "If you don't know an answer to a question already, I would not give the question to one of these (computer) systems."

And if you do know, why bother?

We have always known that we can't believe everything we read. This holds true with a vengeance when we are reading the output of a chatbot.

Paul F. deLespinasse is a retired professor of political science and computer science at Adrian College. He can be reached at pdeles@proaxis.com.
