Artificial Intelligence is barely out of its infancy in terms of its ability to mimic the intricacies of human intelligence, but the technology is making huge advances, powering everything from factory automation to bank-loan approvals. Experts agree that now is the time for companies to develop AI with deliberate care or risk further cementing bias in our economies.
For its third season, the #ChamberBreakers podcast series is unpacking capitalism to see what needs fixing, and what we can do as businesses to pave a more equitable future for all.
In this episode, Lianna Brinded, director at Yahoo, and Xavier White, CSR and innovation marketing manager for Verizon Business, talk to acclaimed roboticist Dr Ayanna Howard, dean of the Ohio State University College of Engineering and founder of Zyrobotics, a non-profit making therapy and educational products for special-needs children.
Howard, who explored bias in AI in her book “Sex, Race, and Robots: How to Be Human in the Age of AI,” says she started to become concerned some years ago, as companies began adopting AI without tackling issues like bias and over-trust.
If developed and implemented with awareness, AI can be a tool to level the playing field for all within the capitalist system, for example by lowering barriers to entry or promotion faced by certain groups.
One of the positive aspects of AI, Howard says, is that it “allows us to integrate our values.” However, since AI is built by humans and trained on human-generated data, it can also absorb our biases whenever bias is already present in that data.
Howard cites studies that found Black women were not offered follow-up healthcare services because of historical data used to train the AI. Likewise, there have been instances where algorithms factored gender into loan applications and loan rates.
On the positive side, she believes that AI could act as an anti-bias trainer, detecting nuance in sexist, racist, or homophobic language or practices that may not be explicit enough to allow for precise coding.
“I think AI can do this, but it has to be adaptive,” she says. “Imagine if you were typing in something and it knew your identity, and would say, ‘that word, it's showing you're a little biased against this certain group.’”
There are multiple ways in which companies can act to remove AI bias in data or implement good AI practices, and one is by offering “bias bounties.”
In the same way that companies pay bounties to people who find security bugs in their systems, Howard says they need to “really start committing to what I would call third-party auditors with respect to bias.”
“If your company does not look like the world, how can you expect to have a competitive advantage if you're creating products for people you don't understand?” Howard says.
Companies should also consider diversity in experience, meaning getting people such as ethicists and social scientists on board.
For Howard, today’s debate on AI mirrors conversations had in the past about technology creating a digital divide. “That did not happen, but people were intentional, there were a lot of efforts, and a lot of understanding,” she says. “Today, this connectedness... has really enabled the world to expand.”
The six-part video series is also a podcast and is out every Monday. Next week’s episode features David Kenny, CEO of Nielsen.