Q&A: Sen. Mike Rounds addresses concerns about AI

Sen. Mike Rounds, R-S.D., listens during a Senate Armed Services Committee hearing to examine the posture of U.S. Central Command and U.S. Africa Command in review of the Defense Authorization Request for Fiscal Year 2024 and the Future Years, Thursday, March 16, 2023, on Capitol Hill in Washington.

It’s obvious that Sen. Mike Rounds, R-S.D., wants to talk about artificial intelligence. The second-term U.S. senator met Friday with the Argus Leader, and when asked about AI, his eyes brightened and he leaned forward in his seat.

Rounds is currently the chairman of the Senate Armed Services Subcommittee on Cybersecurity, and on May 3 the subcommittee held a hearing on the use of AI in cybersecurity and cyberspace. He has also met with OpenAI founder Sam Altman, whose company launched ChatGPT, an AI chatbot that can answer questions and generate articles, social media posts, essays and more, in November 2022.

News reports have found that ChatGPT invented a sexual harassment scandal and falsely claimed an Australian politician had served time in prison on a bribery charge.

Rounds said that while AI is already being used, people have to be aware of its limitations and fact-check the information being produced by the machine they’re using.

More: Gov. Kristi Noem bans Tencent Holdings, which owns WeChat and holds a stake in Discord, from state devices

Below, in an exclusive Q&A, Rounds shares his takes on AI, its ethics and what Congress is doing now to address artificial intelligence.

Editor's note: The Argus Leader used Otter.ai, an artificial intelligence transcription service, to record and transcribe this interview in real time. Otter.ai isn’t perfect, and responses have been further edited for clarity and conciseness.

What's the most fascinating thing you've found so far about AI? What should everyday folks know about it?

Rounds: It's real. It's in our everyday lives right now. It has been subtle in the past. It’s going to be more on the forefront. The algorithms that go with the creation of artificial intelligence are numerous. It's not one size fits all, and it doesn't work in secret.

There are lots of different algorithms that create the ability for a machine to learn. You need really high computing capability. You need supercomputers, and lots of them. The next thing you need is the algorithms themselves that allow and give direction for how the machines process. And then finally, the most important piece in all of this, and the part that's going to continue to be upgraded, is the actual data itself. You need to be able to label the data. You've got to be able to identify the data. You have to be able to trust the data. Otherwise, you can't trust the outcomes. That is going to be the key in artificial intelligence.

Our adversaries are using artificial intelligence today. We can't do this at the government level without exploiting the private sector’s AI capabilities. We also have to limit what our near-peer adversaries can get from our private businesses. And we've got to limit our adversaries' ability to gather data on the American public and our businesses.

More: How did China's lost balloon end up floating by South Dakota?

They're doing everything they can. I mean, you wonder why China wants TikTok and why they have an interest in Zoom. It's because they've already done the hard work of learning how to collect huge amounts of data on their own people. But they don't have lots of data on the American public. They want it. They want to know how we think, they want to know what we react to, they want to know how to influence us. They can't do it based upon the cultures in their own country. So they're trying to have their systems learn our culture, and they want to influence our culture. And they're doing it today. TikTok is a really good example. If you do Zoom calls, and lots of people do Zoom calls − we've excluded it in our office, because the Chinese Communist Party actually sees what's on the Zoom call before the other party sees you on a Zoom call. That's how in-depth they are. And it's all based on a 2017 law that requires businesses in China to deliver at their request any of their business data. That's by law. Can you imagine a law like that in the United States? But they’re a police state.

Editor's note: The Department of Justice announced a case in April involving Chinese security officials allegedly spying on Chinese dissidents on Zoom calls and later harassing them, according to ABC News. One Zoom employee and 10 Chinese officials were charged with conspiracy. In a statement, Zoom said it has not and will not provide any government with access to Zoom meetings or webinars. The American tech company has continued to cooperate with the DOJ.

You said we have to have the infrastructure for supercomputers. Where do we find that infrastructure?

Rounds: They exist today. The supercomputers are for real. And there's more of them coming on board. Google has theirs, Microsoft has theirs. But there's a number of major organizations that have supercomputers. And they not only talk to each other, they talk among themselves − the computers do today. But the vast majority of what you see today, you don't know where that computational activity is going on. It's somewhere within one of those supercomputers.

How do you educate people about the ethics of using AI? It feels like it’s a bit of a Wild West out there now.

Rounds: Right and wrong have not changed. It's a matter of making sure that you apply our existing norms to the use of AI. So if it's wrong to take money out of a checking account that doesn't belong to you and [you're] using AI to do it, it's still wrong. If it's wrong to have someone else write a book report for you, it's still wrong if you're not writing it, unless your instructor or the other people understand that you have simply asked a computer system to write something on your behalf that you're monitoring. If you're going to write a story for the Argus Leader, and if you're going to use AI to do it, then does that mean that you should attribute that you have used a computer system to look things up? On the other hand, if you want to know the definition of something, you can look it up right now on an automated thesaurus. Some people think well, that's fine. You used to look it up in a book, you’re just using a new mechanical item. Those same norms apply I think when it comes to AI. It's OK as long as appropriately disclosed.

More: Professors are using ChatGPT detector tools to accuse students of cheating. But what if the software is wrong?

The challenge is going to be that we're going to want this to work in such a fashion that it does not infringe on people's individual rights to privacy. As an example, we're not going to want people to have access to other people's patents and copyrighted information without reimbursing them for that access. But with AI, how do you show that it was ever done? Because you may very well have found it someplace, but it's been incorporated into somebody else's data set.

Now, the one that I think some people have thought about, and it really scares them, is how do you know whether what you've just been told by a computer is true? And you don't, so you're going to have to have systems that can check and double check.

I'll give you an example. I went in and had breakfast with Sam Altman from ChatGPT, and he had his new fourth edition. He let us play with it for a little bit. When I left, I came back and had some of the folks in my office pull up the third edition of ChatGPT. I had to do a recognition for one of my former staffers who had passed away. I said, let's look at this, see what it can find and whether it would write 200 words or less, just for the heck of it. It couldn't find my staffer’s work. They had nothing on him.

Sam Altman, CEO of OpenAI, maker of ChatGPT

Then I said, OK, I have to talk about the Sanford Underground Research Lab out in the [Black] Hills. The South Dakota Legislature had asked us to comment, because they were looking at $13 million in funding for the lab. They wanted to know what’s going on at the federal level. I said, let’s try ChatGPT on that. This was 15 minutes after I had asked for the staff recognition. In seconds, it wrote out a beautiful layout of how [the Sanford lab] came to be put in place, the work we did on it, what was in it now, scientists reporting on it, the whole bit. And it ended with a quote on the Sanford lab and its importance by my former staffer. It had taught itself to go find the quote, and the quote was correct.

Today, one of the other guys in the office put together the ChatGPT program, ran it, and it put in quotes. The quotes were all made up. Same ChatGPT. It had figured out that I liked quotes, and it was now going to give me quotes. That's the anomaly you have to watch out for. You still have to have the data checked to make sure that all the labeled data that went into it was accurate. That's going to be the issue.

How do you teach AI to separate fact from fiction?

Rounds: I think artificial intelligence can be really good about keeping track of the facts in a particular situation. What date were you born on? That's data, and it's accurate. It's accurate today. It's accurate tomorrow. It's accurate the next day. How big is the sun? How far is the sun from the Earth? How many millimeters are there in a centimeter? How fast do you have to travel in miles an hour to do one mile per second? OK, those are facts. They won’t change.

The other items are the things we think about or believe, like the origin of the universe. How did it really happen? The best theory we have right now is dated as of today. Tomorrow that can change, but the actual fact of how far a star is from the Earth does not change. We have to be able to separate out fact from belief or best guess. And that's where AI is going to have a difficult time. Once you go past the facts and into the assumptions, that's where I think we're going to have the biggest challenge with AI, because it’s going to want to make assumptions.

You have to recognize that labeled data has got to be a part of it, and it's got to be verifiable. We're learning this. The ethics of it is one thing. The reliability of it is another. But it all comes back down to this: Is the data that you're relying on to make decisions and give out information verifiable, identifiable data? Can you label it?

What’s happening with AI on the federal level?

Rounds: There's a group of us in different working groups in the Senate right now. I'm involved because of my interest on behalf of the Armed Services Committee, but I can tell you there was an AI commission put together more than two years ago. Eric Schmidt, the former Google CEO, was on it, and others. As a matter of fact, President José-Marie Griffiths from Dakota State University was on it. They published an unclassified discussion more than a year ago. If you haven't had a chance to look at it, it will be worth your time.

More: Dakota State president to join White House Cyber Workforce and Education Summit

There was also a very, very highly classified report as well. I've read it, I've got it. I've been able to go back in and ask questions on it. Because it is so highly classified, we have lost a year in acting on it with regard to quality-of-life issues here in the United States. That is our downfall. We are so afraid of this that we have not been willing to share how powerful this really could be for health care. Containing disease, extending life. It has huge opportunities for us, but it also has some real implications for national defense as well.

This article originally appeared on Sioux Falls Argus Leader: US Sen. Mike Rounds weighs in on ChatGPT and artificial intelligence