This is why New Jersey's minimalist AI regulations are problematic

New Jersey has multiple bills under consideration that aim to regulate artificial intelligence. Compared with its neighbor to the north, where New York City seems keen to deter AI use in screening job applicants, New Jersey has taken a more minimalist approach that may create confusion for businesses and do little to prevent consumer harm.

New Jersey seeks to prevent discrimination in automotive insurance pricing; in banking, insurance and medicine; and in employment decisions. The most complex, though, is a bill that addresses AI within the broader context of personal information protection.

Senate Bill 3741 would require businesses to notify consumers when automated decision-making is used and to inform them of the reasoning behind automated decisions and their potential outcomes. It would also prohibit decisions about a consumer from being made autonomously by AI unless the automation “is necessary for entering into, or performance of, a contract,” it is authorized by another law with safeguards, or the individual gives consent. Mistakes can result in hefty fines: $10,000 for the first offense and $20,000 for each subsequent violation, with a single error potentially counting as more than one offense. Given that consumers can be enticed, or effectively required, to agree to AI use, the opt-out provision seems meaningless.


Assembly Bill 537, on the other hand, would require automotive insurers to show that there “is no discriminatory outcome” in pricing based on protected classes; however, the bill does not explain what would constitute a discriminatory outcome or how an insurer could demonstrate that one does not exist. This leaves open the significant question of what is actually being proscribed.

At the other extreme is Senate Bill 1402, which deems discriminatory any "disproportionate" outcome between a protected class and others for banks, insurers and medical providers. While this definition is precise, it is inherently problematic: some variation may simply be noise in the data. Moreover, the bill appears to prevent affirmative action programs altogether.

Finally, Assembly Bill 4909 would require that AI used in hiring decisions be subject to a bias audit, which the software provider must supply for free each year. Yet the bill does not require the company to use the audit in any way or make it available to anyone, nor does it specify what the audit must include, leaving the free audits potentially meaningless and destined for a virtual drawer. Consumer notification is required, but only after 30 days of processing a consumer's data. Failure to provide notice can result in steep penalties: $500 for the first violation and for additional violations on the same day, and $1,500 for each subsequent violation. Each instance of a failure to notify, and each additional 30 days without notification, counts as a separate violation.


Although each bill serves a laudable purpose, much must be corrected during the legislative process. First, whatever parts of the four bills move forward should be combined, as their subject matter overlaps and could conflict. The bills, whatever their final form, should clearly define what is proscribed, including a threshold for what constitutes a concerning level of disparate impact, to avoid confusion and enforcement actions triggered by normal fluctuations in data. Finally, consumer notification should be given before or at the point of data collection. There is little point in telling consumers that their data was processed automatically after the fact, particularly when they are given no recourse.

Similarly, requiring a bias audit without clear instructions on how it must be used, and providing an opt-out that can easily be circumvented, create confusion for businesses with no significant gain.

Finally, requiring that providers supply a free annual audit, possibly in perpetuity, may degrade the quality of those audits to the point of meaninglessness. It may also prevent outright sales of software licenses, instead forcing companies to subscribe to a service in order to pay for the included "free" annual audits.

The New Jersey Legislature should be lauded for focusing on specific areas of concern where AI is applied and regulating only those, rather than attempting to regulate AI development or broader topics. However, work remains to ensure that the bills achieve their intended purpose without harming the people and businesses they will affect.

Jeremy Straub is an assistant professor of computer science at North Dakota State University, where he directs the NDSU Institute for Cyber Security Education and Research and is a faculty fellow at the Challey Institute.

This article originally appeared on Asbury Park Press: NJ AI regulations are minimalist — and problematic