Automation in Hiring Could Mean 'Significant Legal Risks'

An Amazon delivery truck. Photo credit: Diego M. Radzinschi / ALM

Amazon.com Inc. developed an experimental hiring tool one employee called the “holy grail” in recruiting: a data analytics program that could sift through thousands of applications and rate candidates.

The e-commerce giant scrapped the tool after discovering a bias against résumés from women, according to a report from Reuters. The algorithm picked up patterns from the company’s male-dominated workforce and, according to the report, devalued résumés containing the word “women” and other gender-specific terms.

Companies and their lawyers often talk about how the infusion of technology will create new efficiencies, but the Amazon debacle shows there are pitfalls. The apparent shortcomings in Amazon's tool point to ways automation can still incorporate unconscious biases. Amazon told Reuters that its tool "was never used by Amazon recruiters to evaluate candidates."

“There are significant legal risks,” said Mark Girouard, an employment attorney at Minneapolis-based Nilan Johnson Lewis. “These tools find patterns in the data and look for correlations in whatever measure of success you are looking at. They can find correlations that are statistically significant. Just because something has a statistical correlation doesn’t mean it’s a good or lawful way to select talent.”

Still, such programs are attractive to large companies. A recent survey by the management-side firm Littler Mendelson found that the most common use of data analytics and artificial intelligence is in hiring and recruiting. Nearly half of those employers surveyed said they use some kind of advanced data techniques to grow their workforce.

Girouard said employers seek these programs because they can ease the workload for hiring managers and they can be cheaper to develop than traditional assessments—such as a written or online test. He said employers also believe there is potential for less implicit or explicit bias since computers theoretically are neutral.

“When I am advising clients considering using these tools, I make sure that their vendor will let them look under the hood, or inside the black box, to monitor what the tools are finding,” he said. He added, “It’s such a new area. I think as we see more employers head in this direction, it will likely lead to litigation.”

Arran Stewart, co-founder of the online portal Job.com, predicted the Reuters story that revealed the issues with Amazon’s experimental tool will “open a can of worms” and make employers and workers aware of the potential for mistakes with automation.

Artificial intelligence is “like a child,” he said. “Anything you teach it, it will inherit. It will inherit bias. Developers and coders create AI and create the rules, set the dictionaries and taxonomies and the tools will inherit their biases, sometimes unknowingly.”

One Former EEOC Official's Perspective



Civil rights advocates and federal regulators said there could be unintended consequences from automation. While the area may be ripe for discrimination claims, it’s not obvious to job seekers that employers are using these tools.

In 2016, the U.S. Equal Employment Opportunity Commission held a meeting to discuss the implications of the rise of big data in the workplace. Kelly Trindel, then the EEOC’s chief analyst in the Office of Research, Information and Planning, outlined potential pitfalls for protected classes as companies increasingly use these programs to recruit and hire.

“The primary concern is that employers may not be thinking about big data algorithms in the same way that they've thought about more traditional selection devices and employment decision strategies in the past,” Trindel said at the EEOC meeting. “Many well-meaning employers wish to minimize the effect of individual decision-maker bias, and as such might feel better served by an algorithm that seems to maintain no such human imperfections. Employers must bear in mind that these algorithms are built on previous worker characteristics and outcomes.”

Algorithms focused on hiring can replicate "past behavior at the firm or firms used to create the dataset," said Trindel, who has since left the agency. "If past decisions were discriminatory or otherwise biased, or even just limited to particular types of workers, then the algorithm will recommend replicating that discriminatory or biased behavior.”
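Trindel's point can be illustrated with a toy sketch. The example below is entirely hypothetical (the data, tokens, and scoring method are invented for illustration, not drawn from Amazon's actual system): a naive résumé scorer trained on biased historical decisions ends up assigning a negative weight to a gender-specific term, simply because past rejections correlated with it.

```python
# Hypothetical sketch: a keyword-based résumé scorer trained on biased
# historical hiring outcomes reproduces that bias in its learned weights.
from collections import defaultdict

# Invented historical data from a male-dominated workforce: résumés
# mentioning a gender-specific term ("womens") were mostly rejected.
history = [
    ({"python", "leadership"}, 1),
    ({"python", "womens", "chess"}, 0),
    ({"java", "leadership"}, 1),
    ({"java", "womens", "soccer"}, 0),
    ({"python", "chess"}, 1),
]

def train(history):
    """Weight each token by how its hiring rate differs from the base rate."""
    base_rate = sum(outcome for _, outcome in history) / len(history)
    seen, hired = defaultdict(int), defaultdict(int)
    for tokens, outcome in history:
        for t in tokens:
            seen[t] += 1
            hired[t] += outcome
    return {t: hired[t] / seen[t] - base_rate for t in seen}

def score(weights, tokens):
    """Sum the learned weights of a résumé's tokens."""
    return sum(weights.get(t, 0.0) for t in tokens)

weights = train(history)
# The model has learned the historical bias: "womens" carries a negative
# weight, so two otherwise-identical résumés diverge on that token alone.
print(weights["womens"] < 0)   # prints True
print(score(weights, {"python", "chess"})
      > score(weights, {"python", "chess", "womens"}))   # prints True
```

Nothing in the scoring rule mentions gender; the bias enters solely through the training data, which is exactly the failure mode Trindel describes.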

At that same 2016 meeting, Littler Mendelson shareholder Marko Mrkonich said the challenge for employers “is to find a way to embrace the strengths of big data without losing sight of their own business goals and culture amidst potential legal risks.”

“The challenge for the legal system is to permit those engaged in the responsible development of big data methodologies in the employment sector to move forward and explore their possibilities without interference from guidelines and standards based on assumptions that no longer apply or that become obsolete the next year,” Mrkonich said.

An American Bar Association report from 2017 by Darrell Gay and Abigail Lowin of Arent Fox said there is “great liability” in allowing algorithms to take control without human oversight. Yet they noted that case law offering big-data guidance is scarce, and warned that lawsuits are likely to follow.

“On its face, this is good news for employers,” the attorneys wrote of the dearth of cases. They added, “In any case, it is difficult to build arguments or learn from past errors if those historical lessons are hidden behind obscure verbiage.”

The report said computers and algorithms are not easily trained to make nuanced judgments about job applicants. To defend against discrimination allegations, employers have to explain why one applicant was hired over another.

“An algorithm can learn the population makeup of various protected groups; and it can learn the traits that an employer seeks in new employees; but it cannot adequately balance those potentially competing factors,” the attorneys wrote. They continued, “When employers rely too heavily on algorithms that do not receive the proper 'instruction' and oversight, this can create potential exposure.”