State creates task force to consider merits, needed regulations for AI

Joining a host of states across the country, Illinois is taking a hard look at the emerging presence of artificial intelligence, a computer technology that can perform problem-solving and decision-making tasks typically handled by humans, by considering its possibilities while weighing its risks.

The state has started this process by creating a task force that combines the brains of legislators and technology experts to craft AI policy recommendations for the legislature.

Rep. Abdelnasser Rashid, D-Bridgeview, is co-chairing the Generative AI and Natural Language Processing Task Force, chosen by House Speaker Emanuel "Chris" Welch for the freshman legislator's computer science background. Rashid's bill, House Bill 3563, received unanimous support before being signed into law by Gov. JB Pritzker in early August.

The dome of the Illinois State Capitol on Monday, Aug. 28, 2023.

The task force will consist of 20 members and hold at least five public meetings, in Chicago, Springfield, the Metro East, the Quad Cities and Southern Illinois. The findings from those meetings will be compiled into a report shared with the governor's office and the General Assembly by Dec. 31, 2024.

Rashid told The State Journal-Register in an interview last month that the task force will balance the need for AI regulations against the risk of stifling innovation.

Crafting any specific legislation will take time, he said, but the work will focus on tackling transparency issues and whether the technology may discriminate or carry biases.

"We have to be careful about the simple accuracy of what AI does," he said. "I think sometimes people take for granted that what AI thinks is right and that's not necessarily the case."

Increased conversations about how governments can and should use the technology have occurred across the country, and Illinois is no different. According to the National Conference of State Legislatures, at least 25 states, along with Puerto Rico and the District of Columbia, have introduced AI bills so far in 2023.

Several AI-related bills were introduced earlier in the Illinois General Assembly, including one that would prevent data collection by remote gambling platforms attempting to predict how a participant may gamble in particular situations. That measure, House Bill 2570, did not advance out of committee.

Back in 2020, the state's Artificial Intelligence Video Interview Act went into effect, placing new requirements on employers that use AI in their hiring processes. Under the law, companies using the technology must inform prospective employees that AI will be used in their interviews and obtain their consent before proceeding. However, the law does not require employers to provide alternative hiring methods.

Illinois was the first state to regulate AI's use in video job interviews, and other states such as New York have followed, amid concerns that racial bias was causing more Black candidates to be passed over for openings.

To address these concerns, the act requires an employer using artificial intelligence analysis of video interviews to report to the Illinois Department of Commerce and Economic Opportunity the race and ethnicity of applicants who do not advance to in-person interviews, as well as of those who are hired.

According to DCEO, no such data had been reported to the department as of Nov. 30, 2022.

Concerns about racial bias in computer programs have also extended to the courts. A 2016 ProPublica study investigated how courts in a slew of states used a computer algorithm to estimate the likelihood that a defendant would commit future crimes.

The study found these risk assessments falsely labeled Black defendants as future criminals at nearly twice the rate of their white counterparts. At the same time, white defendants were more likely to be labeled low-risk.

The technology still has merits when used under the right conditions, said University of Illinois-Springfield computer science associate professor Elham Buxton. In courtrooms and in other arenas where AI has been and will be used, she believes a hybrid approach that pairs people who measure the technology's social impact with those who possess technical knowledge is essential.

"When we consider our AI model, we need to compare it with the status quo," said Buxton, teaching at UIS for 10 years. "For example, if you want to use an AI model to predict recidivism, then you compare it, you audit that system, find those statistical metrics for fairness, and compare it with the status quo right now ... for the judges. Is our AI model, improving the status quo as far as fairness or not?"

Federal action

State conversations on the matter were complemented earlier this year by a bipartisan group of attorneys general requesting federal guidance.

Illinois Attorney General Kwame Raoul joined a group of 23 chief legal officers calling on the National Telecommunications and Information Administration to adopt transparency standards and to recognize risks associated with AI.

“Consumers should be informed if companies are using AI in their products and services, and the potential impacts on people should be considered in shaping regulations,” Raoul said in a June 13 press release.

Specifically, Raoul and the coalition urged the NTIA to require companies to perform impact assessments of their AI systems and to have those using higher-risk AI seek external audits of their systems.

Contact Patrick Keck: 312-549-9340, pkeck@gannett.com, twitter.com/@pkeckreporter.

This article originally appeared on State Journal-Register: Task force to determine how Illinois can use, regulate artificial intelligence