Does Artificial Intelligence Need a General Counsel? The Unintended Consequences of AI

In this three-part series, Alan Brill, a Senior Managing Director in Kroll’s Cyber Risk unit and an Adjunct Professor at Texas A&M Law School, and Elaine Wood, a Managing Director at Duff & Phelps specializing in compliance and regulatory consulting and a former federal prosecutor, look at the evolution of artificial intelligence, machine learning and autonomous decision-making, and at how the skills of the general counsel are likely to be critical in protecting the organization from avoidable risks.

Part 1 examined how the Law of Unintended Consequences affects general counsel dealing with the evolution of AI, machine learning and autonomous decision-making. Part 2 is below.

Well-Intended Actions Can Still Be Illegal or Unwise

Imagine that a well-known hospital in the United States has a server compromised by criminals located in Asia. This can be done in a way that the hospital might not notice. The hackers have almost certainly committed U.S. federal crimes in gaining unauthorized access to the hospital’s computer (e.g., 18 U.S.C. § 1030, fraud and related activity in connection with computers). Let’s say the criminals then use the hijacked hospital computer to carry out an attack on their ultimate target, Company X, and they succeed in extracting files, which they store on the hospital’s system. An AI-based cyber defense software system at Company X detects the attack and identifies that it originated at a specific IP address: the hospital’s. The system has evolved to take action to identify the files that its logs show were transmitted to the criminals. It reaches out and runs software designed to let it enter the system it associates with the breach.

The hospital’s cybersecurity system detects the hack-back activity. It notifies the hospital that it is under attack, and the hospital begins to execute its cyber breach response plan. The hospital follows the protocols in the plan, taking steps that include notifying local police and FBI cyber units, notifying the broker and carrier of its cyber insurance policy, and engaging a pre-arranged computer forensics firm and outside law firm. Significant internal technology resources within the hospital are committed to defending the hospital’s network and identifying any information removed in connection with the attack. The hospital, after all, is subject to the HIPAA and HITECH laws, which require prompt notification of federal regulators should there be evidence that 500 or more patient records have been compromised. Within hours, the hospital’s in-house experts and forensic consultants track the attack back to Company X. All of this happens quickly; in the first 24 hours, the hospital has probably spent or committed to spend $50,000 to $100,000 responding to the attack.

Several bills have been introduced in Congress to provide immunity from U.S. criminal laws in connection with hacking back. But these bills, to date, leave two outstanding issues. First, while they deal with U.S. criminal law, they do not appear to address civil liability. In our example, the hospital has expended as much as $100,000 implementing its breach-response program and hiring experts. Should the hospital or its insurer have to cover such charges? Or should they be able to bring a lawsuit for damages against the source of the hack-back, Company X?

Second, there is the problem of foreign law. Let’s now assume that the hospital is located in France and that Company X is in the U.S. Let’s also assume that a law is in place immunizing an organization like Company X from criminal sanctions under U.S. law for hacking back. But in hacking back, Company X violates Article 323 of the French Penal Code (France’s computer crimes law). The United States government cannot pass a law exempting a person or entity from liability for violating the laws of another country.

So if an AI system’s activities violate the criminal laws of one or more nations, who gets prosecuted? While corporations can be defendants in criminal cases in the U.S., our laws, and those of every other country, were not designed to cover the criminal actions of a piece of software where those actions occurred as a result of decisions made through machine learning or AI.

It would be easy to suggest that the crime is the responsibility of the person or persons who programmed the AI software that “committed” the crime. And that certainly is appropriate for more conventional software, like malware, where we can find a direct link between the intent of the person planting the software and the destructive act itself. But is the same true in the case of self-evolving artificial intelligence software?

As AI software evolves, it learns, and sometimes that evolution is not what the developers expect. AI software can certainly be developed specifically to support criminal activity, a case in which the developers should be responsible for the crimes “committed” by their software. But what about developers of software that evolves in a way that results in an unintended violation of national laws?

The problem is that developers of AI don’t necessarily think in terms of imposing boundaries on AI action that are defined by laws and regulations in the real world. They may not think in terms of specific limitations imposed by laws of their home country, let alone laws of foreign countries. Of course, while we don’t expect our AI system architects, designers and programmers to be experts on international law, those laws (including cyber laws) exist and can be enforced whether or not a company’s AI development team is aware of them. As Thomas Jefferson wrote in a letter from Paris in 1787, “… ignorance of the law is no excuse in any country. If it were, the laws would lose their effect, because it can always be pretended.”

Consider another hypothetical. A bank hires a “big data” analytics firm to develop a system for using AI to approve or reject personal loan applications. The analysts are provided with 10 years of loan applications, loan decisions, and payment records for all loans made. The analyst team determines that one of the objectives to be set for the new system is to minimize bad loans—those that are made, but never repaid.

The deep learning component of the AI essentially cross-tabulates all of the available data against the payment history of the loans to determine which elements could be used to identify the loans most likely to default. The system finds a correlation between loan defaults and the postal code of a loan applicant’s residence. While not a very strong correlation, the system determines that it is one of the strongest available. As a result, the system recommends against offering loans to people based on where they live. This denies loans to all of the residents of several inner-city areas and almost immediately produces reputation-damaging headlines: “Bank Invents E-Redlining” and “Bank’s Artificial Stupidity Denies Loans to Customers in Minority Neighborhoods.” While this outcome was never intended, the failure to understand and control the actions of the system has resulted in a serious crisis for the bank.
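To make the mechanism concrete, here is a minimal, hypothetical Python sketch of how a naive, correlation-driven feature ranking can surface a geographic proxy such as a postal code, and how a simple compliance guardrail could keep that proxy out of the model. The synthetic data, the field names (income, debt_ratio, postal_code_group) and the PROXY_FEATURES exclusion list are illustrative assumptions, not the actual system described above.

```python
# Hypothetical sketch: naive correlation-based feature selection picking up a
# geographic proxy, plus a compliance guardrail that excludes it.
import numpy as np
import pandas as pd

rng = np.random.default_rng(0)
n = 5000

# Synthetic applications: income and debt ratio drive default risk;
# postal_code_group merely tracks income (a proxy), it does not cause default.
income = rng.normal(55000, 15000, n)
postal_code_group = (income < 45000).astype(int)  # crude geographic proxy flag
debt_ratio = rng.uniform(0.1, 0.6, n)
default = (rng.random(n) < (0.05 + 0.25 * debt_ratio + 0.10 * postal_code_group)).astype(int)

df = pd.DataFrame({
    "income": income,
    "debt_ratio": debt_ratio,
    "postal_code_group": postal_code_group,
    "default": default,
})

# Naive feature ranking: absolute correlation of each field with the default flag.
ranking = df.drop(columns="default").corrwith(df["default"]).abs().sort_values(ascending=False)
print("Unconstrained feature ranking:\n", ranking)

# Guardrail: drop attributes flagged by legal/compliance review as proxies for
# protected characteristics before the model ever sees them.
PROXY_FEATURES = {"postal_code_group"}
allowed = [col for col in ranking.index if col not in PROXY_FEATURES]
print("Features allowed into the model:", allowed)
```

In a run of this sketch, the unconstrained ranking can place postal_code_group near the top even though it merely tracks income. The guardrail reflects the kind of boundary-setting discussed in this series: legal and compliance review, not the learning algorithm, decides which attributes a lending model may consider.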

How could this have been avoided? We’ll try to answer that question in Part 3 of this series, to be published in February.


Alan Brill is a Senior Managing Director in Kroll’s Cyber Risk unit and an Adjunct Professor at Texas A&M Law School. Elaine Wood is a Managing Director at Duff & Phelps specializing in compliance and regulatory consulting and a former federal prosecutor.
