No law specifically against artificial images, but not new to courts either

Sep. 30—Artificial images falsely portraying individuals, known as deepfakes, are becoming increasingly common with the rapid advancement of artificial intelligence. While the law has yet to catch up to the new technology, some longstanding principles apply.

Deepfakes are scattered across social media today, and AI tools make them accessible even to those with limited technical knowledge, experts said. This application of AI raises many questions, perhaps the most prominent of which is, "what can I do if this happens to me?"

According to Dr. Shomir Wilson, assistant professor in the College of Information Sciences and Technology at Penn State University, there are no federal regulations regarding deepfakes.

"My impression from some research is there is not a national law against deepfakes, but there are some states trying to regulate it," Wilson said. "The law does tend to be a step behind. Often this is a natural thing as laws that reach ahead might interfere."

While the law remains a little behind in terms of targeted statutory remedies, victims can still take legal action against a deepfake's creator, said Eric (Ric) Cohen, partner at the law firm of Cohen and Silver LLC in Philadelphia.

Cohen practices in the specialized field of entertainment law. Although he is not a litigator, Cohen said he follows technology very closely because of its impact on his clients, who include YouTubers, television and movie writers and producers, and individuals in the music industry.

In terms of AI technology and deepfakes, Cohen said longstanding legal principles can be applied.

"People are using arguments relating to longstanding principles like invasion of privacy or defamation and libel," Cohen said. "Defamation is when someone makes a false statement or creates something that is false and does so with malicious intention."

Though these principles can be applied, Cohen said it may be difficult to find an attorney willing to take on this sort of case unless the bad actor is known to have assets worth pursuing. In such instances, individuals who cannot afford an attorney's hourly fee can find themselves with little recourse.

Finances may be a barrier to legal action, but reporting the deepfake to the social media platform where it was posted is free, and it is something these platforms take seriously, according to Cohen.

"Those who run them don't want something offensive and wrong on their platforms," he said.

For celebrities and politicians, legal action can be more difficult to pursue. Cohen referenced New York Times Co. v. Sullivan, which established that a public official's ability to pursue a defamation claim is limited.

In 1960, The New York Times ran an advertisement funded by civil rights activists that criticized the police department in Montgomery, Alabama. Though much of the ad was true, some of the statements were false, and Montgomery city commissioner L.B. Sullivan, who supervised the police department, sued the Times in an Alabama court, claiming the ad had harmed his reputation and that he had been a victim of libel, according to the United States Courts website.

The Alabama court ruled in Sullivan's favor, but the Times took the case all the way to the U.S. Supreme Court, where it argued the ad was protected under the First Amendment. The Supreme Court unanimously ruled in favor of the New York Times and decided a public official must prove a statement was published with actual malice, that is, with knowledge that it was false or with reckless disregard for the truth, the site says.

"It's more difficult for someone who puts their persona out into the ether to pursue a defamation case," Cohen said. "It could be considered a joke or a parody by the defense."

AI-generated songs are another way the technology can affect artists. In these cases, people cannot use copyright law to sue for infringement, but they can rely on the legal standard of misappropriation of name and likeness to sue for damages or to have a song removed from a platform like Spotify, according to Cohen.

Though their application to AI-generated content is new, these legal principles have been used for years in similar instances, experts said.

"We've had social media clients end up in a post where they didn't use AI, but maybe used Photoshop," Cohen said. "It's kind of the same principles used to try and get it taken down."

In the eyes of the law, this application of technology is really nothing new, according to Cohen, who added that editing tools have enabled the same sort of manipulation for years.

Cohen said he thinks most people are on the same page in terms of AI regulation.

"I think everyone agrees there are problems with AI that need to be solved and I expect that as time goes on, you are going to see a lot more targeted legislation," Cohen said.

Dr. Thiago Serra, assistant professor of Analytics and Operations Management at Bucknell University, warned against broad regulations.

"If we start fearing this technology, it is going to be allowed for certain companies and not everyone else," Serra said. "I see hints at this kind of idea in some discussions and I think it would do more damage than good if we treat it as something to be taken away from people."