Artificial intelligence can be used for good – and bad

I’ve written columns about cryptocurrency, a technology I don’t thoroughly understand that has been exploited by scammers to steal victims’ money and personal information. This column is about another technology that is even further over my head and may be even more of a boon to crooks – artificial intelligence, or AI.

The Federal Trade Commission says “artificial intelligence” is an ambiguous term with many possible definitions depending on whether you see it as a discipline (e.g., a branch of computer science), a concept (e.g., computers performing tasks in a way that simulates human cognition), a set of infrastructures (e.g., the data and computational power needed to train AI systems), or the resulting applications and tools.

It notes that Congress has defined AI as “a machine-based system that can, for a given set of human-defined objectives, make predictions, recommendations or decisions influencing real or virtual environments.”

How AI can be used for nefarious purposes

Advances in AI could do a lot of good by revolutionizing medicine, finance, business operations, media and other aspects of our lives. But the FTC and the Better Business Bureau are concerned about the potential for AI to be exploited for nefarious purposes.

A digital graphic that says Chat GPT floats above a laptop computer.

The FTC says evidence already exists that crooks are using AI to generate realistic but fake content, including fake websites and consumer reviews; to create malware; and to use voice clones to facilitate impostor scams, extortion and financial fraud.

I cited an early example of voice cloning in a column in 2020. The head of a British firm received a call from the CEO of his parent company in Germany instructing him to wire $243,000 to the bank account of a supplier in Hungary. He was certainly familiar with his boss’s voice and didn’t for a second think it wasn’t the boss on the other end of the line. But the call actually came from crooks using AI voice technology to mimic the boss’s voice, and they moved the money from Hungary to Mexico and on to other locations.

Heed the FTC’s cautionary advice

The FTC cites human factors that, combined with the technology, help make AI-related scams successful:

  • Jargon such as “machine learning” and “neural networks” makes AI seem almost magical.

  • “Automation bias,” whereby people tend to be unduly trusting of answers from machines that seem neutral or impartial.

  • “Anthropomorphism,” in which the use of personal pronouns and emojis may lead people to think they’re conversing with something that understands them and is on their side.

Much of the FTC’s cautionary advice thus far is directed at businesses that develop or use AI applications. It starts with asking whether the foreseeable risk that an AI tool or product could be used for fraud or other harm is so high that it shouldn’t be offered in the first place. If a business decides to offer the product anyway, it should take all reasonable precautions before the product hits the market. The burden shouldn’t be on consumers to figure out whether an AI tool is being used to scam them.

It also warns businesses against:

  • Exaggerating what an AI product can do, falsely claiming it does something better than a non-AI product, or labeling a product as AI-powered simply because an AI tool was used in its development.

  • Making people think they’re communicating with a real person when it’s actually a machine.

  • Misleading people about what they’re seeing, hearing or reading.

Randy Hutchinson is the president of the Better Business Bureau of the Mid-South. Reach the BBB at 800-222-8754.

This article originally appeared on Nashville Tennessean: Better Business Bureau: Artificial intelligence used for good and bad