Technology

Can Artificial Intelligence Be Unbiased?

Short answer: Yes. But machine learning is only as good as the data it's based on. And that's where it gets challenging.

The “Diversity in Tech” session at the third annual Women in Innovation Forum in New York City (W.IN. Forum NY), held on May 21, posed a not-so-simple question to some of the most prominent women in technology: Is it possible to build unbiased artificial intelligence?

It’s problematic, said Rashida Richardson, director of policy research at the AI Now Institute at New York University and a session panelist. “A lot of the existing AI machine learning systems are using what data is available, and often they’re not using data that they themselves are creating, so it’s often data that’s free. You’re either getting it from government information — public open data — or other data sets that maybe are just easily accessible,” said Richardson, who was joined on the panel by Y-Lan Boureau, a researcher at Facebook AI Research; Aurélie Jean, cofounder and CTO/CIO at MixR; and Marie-Eve Piche, CFO at Pymetrics. The problem, Richardson said, is that easily accessible data sets are often incomplete, which leads to predictive analytics that fail to represent a diverse group of people.

“Let’s say you have a tool that’s predicting who is most likely to purchase an item. [If] the data set only has consumers from a metropolitan area, ages 18 to 30, but it’s predicting for the general population, then it may produce outcomes that [aren’t] necessarily going to capture what a baby boomer may be interested in,” Richardson said. If a model isn’t properly representing everyone, then opportunities may be closed off for a certain group, she said.
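
The panel didn’t walk through code, but Richardson’s scenario is easy to simulate. In the toy sketch below, every number is invented: younger shoppers are assumed to buy far more often, a naive model is fit only on 18-to-30-year-olds, and its accuracy collapses once it is asked about the general population.

```python
import random

random.seed(0)

def shopper(age):
    """Simulate one shopper: in this toy world, younger people buy far more often."""
    buy_rate = 0.8 if age < 35 else 0.2  # invented, illustrative rates
    return {"age": age, "bought": random.random() < buy_rate}

# Biased training sample: only consumers ages 18 to 30.
train = [shopper(random.randint(18, 30)) for _ in range(1000)]

# A deliberately naive "model": always predict the majority outcome it saw.
predict_buy = sum(s["bought"] for s in train) / len(train) > 0.5

# The general population includes the baby boomers the training data never saw.
population = [shopper(random.randint(18, 75)) for _ in range(1000)]
correct = sum(s["bought"] == predict_buy for s in population)
print(f"Model always predicts buy={predict_buy}")
print(f"Accuracy on the general population: {correct / len(population):.0%}")
```

On this synthetic data, the model looks strong on its own sample yet is wrong for most of the population, which is exactly the gap Richardson describes.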

Piche, whose company, Pymetrics, uses AI to match talent with employers (and claims to do so without bias), stressed that while obtaining complete data sets can be difficult, she believes it can be done in a way that produces unbiased outcomes. “The one thing that we say is you’ve got to check your tech. Your technology is only as good as the input,” Piche said. “For example, if you want to predict who’s going to be the next CEO, and you start putting a bunch of data in there including first name, what you’re going to find out is that ‘John’ is actually the best predictor of who’s going to be the next best CEO. I think we can all agree that when you start checking this, you’ve got a big skew [toward] men, and you’re just like, ‘This doesn’t make any sense.’”

But, Piche said, when Pymetrics took out elements that increase bias toward a gender, ethnicity, or socioeconomic background and instead focused on job performance and a person’s aptitudes, “the prediction model became much stronger.”

Of course, those working with AI will always introduce their own biases into the data, but Piche believes that with constant vetting, those biases can be exorcised from the technology over time.
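
Pymetrics hasn’t published its pipeline, but the step Piche describes, removing proxy features before a model sees a candidate, can be sketched in a few lines. The field names and values below are hypothetical stand-ins:

```python
# Assumed proxy fields that can leak gender, ethnicity, or socioeconomic background.
PROXY_FIELDS = {"first_name", "zip_code", "college"}

def debias(candidate: dict) -> dict:
    """Return a copy of the record with proxy fields removed,
    keeping only aptitude- and performance-related features."""
    return {k: v for k, v in candidate.items() if k not in PROXY_FIELDS}

candidate = {
    "first_name": "John",          # the skewed predictor in Piche's example
    "zip_code": "10012",
    "college": "Example University",
    "attention_score": 0.82,       # aptitude measures the model should rely on
    "risk_tolerance": 0.57,
    "past_performance": 0.91,
}
print(debias(candidate))
# {'attention_score': 0.82, 'risk_tolerance': 0.57, 'past_performance': 0.91}
```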

“Teaching a computer an algorithm to do it is not easy, but it’s feasible. I want to point out that, from a business point of view, I’ve been to many unbiased-training sessions, and training the human to be unbiased might actually be harder than training an algorithm,” Piche said. “As much as we are scared sometimes of the technology introducing bias, we can control it, we can educate it, we need to audit it, and we need to make sure the data in there is accurate. But we have to realize as well that we need to train the human, because the human is also introducing lots of unconscious bias when they actually do things, and sometimes it’s easier to have the computer and train [it] to do the right thing than to train the humans.”
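
One concrete form the auditing Piche calls for could take is an adverse-impact check: compare how often the model recommends candidates from different groups. The audit log below is invented; the 0.80 threshold echoes the familiar four-fifths rule of thumb used in U.S. hiring audits.

```python
from collections import Counter

# Hypothetical audit log of (group, model_recommended) outcomes.
results = ([("men", True)] * 60 + [("men", False)] * 40
           + [("women", True)] * 45 + [("women", False)] * 55)

selected = Counter(group for group, recommended in results if recommended)
total = Counter(group for group, _ in results)
rates = {group: selected[group] / total[group] for group in total}
impact_ratio = min(rates.values()) / max(rates.values())

print(rates)                                # {'men': 0.6, 'women': 0.45}
print(f"impact ratio: {impact_ratio:.2f}")  # 0.75 falls below the 0.80 threshold
```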

Casey Gale

Casey Gale is associate editor of Convene.