Tory Burch Foundation
In early March, the Tory Burch Foundation hosted its 2020 Embrace Ambition Summit, which saw leaders and entrepreneurs from around the world gather to talk about gender parity and inclusion in the workforce.
At the Summit, Bank of America COO Thomas Montag spoke with former AOL CEO Steve Case, Goldman Sachs Partner Dina Powell, and Tory Burch about how their companies aimed to create a more inclusive industry.
Yasmin Green, director of research and development for Jigsaw, an internal tech incubator within Google parent company Alphabet, spoke about the role human bias plays when programming artificial intelligence.
The Tory Burch Foundation Summit in early March was a gathering of some of the most prominent executives and entrepreneurs in the world.
Bank of America COO Thomas Montag, former AOL CEO Steve Case, and Dina Powell McCormick, partner and member of the Management Committee at Goldman Sachs, were a few of the execs who spoke about how they sought to make their companies more inclusive.
A prominent theme throughout the conference was gender parity in the workplace.
"The word 'ambition' takes on a completely different meaning when applied to a woman than when applied to a man," Burch told Business Insider. "Women are criticized for exhibiting the exact same quality men are praised for. This has to change. We do that by shining a light on unconscious gender bias, which was the focus of our Summit."
Yasmin Green, director of research and development at Jigsaw, a unit of Google parent company Alphabet, spoke about one particularly complex hurdle in modern society: the difficulty of programming artificial intelligence without bias.
The problem with training AI on humans, Green said, is that humans are biased, and when the data that feeds AI is biased, then the AI becomes biased itself.
"Are we content with algorithms that reflect back to us the way the world works?"
Green detailed an experiment that demonstrated this unconscious bias in AI. She and her team created identical fake professional profiles for a woman and a man and browsed online job sites as each of these imaginary people. In the end, they found that the man's profile was five times as likely as the woman's to be shown ads for higher-paying jobs.
This, she said, was because women believe they must fulfill 100% of a job's requirements before applying, whereas men believe they need to meet only about 60% of the requirements before applying.
"So at the same skill level, we [women] are clicking on jobs that are less senior and less well paid," Green said. "But if we click that way, then the internet is going to learn and that's what we're going to see."
Green cited another example, in which she and her team had trained an AI model to pick up on hate speech on social media. After a few trials, their model began to flag the sentence "I am a proud gay man" as hate speech.
Green said this was because they trained the model on millions of example sentences that humans wrote on the internet, and most of the sentences and comments containing the word "gay" were negative: 80% of them, in fact.
The model therefore took this data and learned to associate the term "gay" with something negative and hateful.
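The failure mode Green describes can be reproduced with a deliberately tiny word-counting classifier. This is not Jigsaw's model — the training sentences below are invented, and the classifier is a minimal log-odds sketch — but it shows how a model trained on data where "gay" appears mostly in hateful sentences ends up flagging a benign one.

```python
import math
from collections import Counter

# Invented toy training set standing in for the "millions of example
# sentences": most sentences containing "gay" are labeled hateful,
# mirroring the roughly-80%-negative skew Green cited.
training = [
    ("you are gay and stupid", "hate"),
    ("gay people are awful", "hate"),
    ("that is so gay", "hate"),
    ("ugh gay", "hate"),
    ("have a nice day", "ok"),
    ("what a lovely morning", "ok"),
]

# Count how often each word appears under each label.
counts = {"hate": Counter(), "ok": Counter()}
for sentence, label in training:
    counts[label].update(sentence.lower().split())

def classify(sentence):
    # Sum per-word log-odds with add-one smoothing. Unseen words
    # contribute nothing; "gay" carries a large hateful weight purely
    # because of the skewed training data.
    score = sum(
        math.log((counts["hate"][w] + 1) / (counts["ok"][w] + 1))
        for w in sentence.lower().split()
    )
    return "hate" if score > 0 else "ok"

print(classify("I am a proud gay man"))
```

The benign sentence gets flagged as "hate": nothing in the classifier targets the speaker, but the statistics it absorbed do.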
"I ask myself how can I raise my daughters to make good decisions in life, [and] to be more compassionate and less prejudiced than the world around them," Green said at the conference. "The question for us — are we content with algorithms that reflect back to us the way the world works?"
To help prevent situations like this and lessen the bias in AI, Green said social justice activism needs to be expanded to include algorithms. She also noted the importance of having diverse representation in AI programming.
"It's not enough just to automate human behavior," she said. "We need to make sure that what's reflected back to us in algorithms is something that's better than we are."
Read the original article on Business Insider