Artificial intelligence still contains biases, but progress continues


A Google researcher acknowledges that AI is still highly biased, even as the technology moves forward fast.

Artificial intelligence has been rocketing forward both technologically and in popularity, but according to a researcher at Google, it continues to contain biases, and fixing fairness in the technology “isn’t a simple thing”.

The researcher says there is still a chance to get it right as the world adopts the technology at a rapid pace.

“It’s as though we’ve been given a second chance on how we use this technology to dismantle some of the biases we see in society,” said Google senior product manager Komal Singh at a Toronto media briefing.


Virtually every industry has been getting to know AI this year, including both its advantages and disadvantages. While some view artificial intelligence as a disruptive game changer for everyday life that will boost efficiency and speed while providing solutions to some of the world’s largest challenges, others view it differently.

Apple co-founder Steve Wozniak and Tesla CEO Elon Musk take a more cautious view than the most optimistic observers. They have warned about the way the technology is advancing, saying that it is currently moving too fast and that it’s important to implement guardrails before it is truly appropriate for mainstream use.

There remains a great deal of concern over the way artificial intelligence is developing and being used.

The “godfather of AI”, British-Canadian computer scientist Geoffrey Hinton, has expressed concern over the technology. In fact, he left Google to be able to discuss more openly the risks, which he said include bias and discrimination, echo chambers, joblessness, fake news, battle robots and existential risk.

Singh, despite her optimism, did acknowledge that risks are inherent to the technology. In fact, she said she had witnessed them herself. For instance, if an artificial intelligence model is asked to generate an image of a nurse, it will typically produce an image of a woman. Similarly, a request for an image of a CEO will usually return an image of a white man, while a request for a software engineer tends to return images of racialized men.

“In a nutshell, I think the takeaway is a lot needs to happen to fix the fairness problem,” said Singh.
