Bias. You’ve heard about it, witnessed it, and let’s be honest, you’ve probably even been it. Whether we think so or not, we’re all biased. But what’s a bias and how does it come about? 

The conventional view of bias is favouring one thing over another without real or fair justification. In other words, it’s the human brain taking a shortcut in its thinking by grabbing information from a previous experience and using it for future decision-making. 

It’s your brain’s natural way of saving energy so it can make the right survival decision when the time comes. Without this essential skill, we probably wouldn’t be here today. 

If this all seems a bit fuzzy, here’s an example of bias in action. Imagine you’re walking along and you see a lion. At that moment, you don’t need to consciously analyse the situation to know that the lion could eat you. No, you immediately, and without thinking, react in a way that would hopefully save your life. You take a mental shortcut, perceiving the situation just well enough to respond quickly. 

However, biases have evolved into more than just a survival mechanism. In fact, biases come in many different forms and are easy to overlook unless you understand them: 

  • Have you ever favoured information that confirms your existing beliefs, and disregarded information that doesn’t? That’s called confirmation bias. With this bias, we tend to accept ideas that support our views because being right feels good, and we don’t want anything to challenge that feeling. 
  • How about crediting yourself when things go right and blaming external factors when things go wrong? That’s self-serving bias at work. In this situation, we tend to think we’re the cause of our triumphs and deny responsibility when mistakes happen.

These examples represent only two of the many forms biases can take in daily life. So, why does recognising and understanding bias matter in our world today?

Bias in AI

As we start to translate our intelligence into artificial “beings,” we see the potential to use technology to create something that talks, moves and even thinks like us. But as we create this ideal being, we need to teach it using the experiences and knowledge of real humans. 

Now, here’s where things get interesting. 

AI learns by taking in data from humans and processing that information to find patterns. Just as a child uses patterns to make sense of the world, AI finds connections in the data to interpret its environment and make future decisions. But because the data AI receives comes from real human experiences, that information likely carries some level of bias within it. 
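To make this concrete, here’s a deliberately tiny sketch of pattern-learning (the data and the “model” are hypothetical, far simpler than any real AI system). The learner’s only knowledge is whatever correlations happen to appear in its examples, so a skew in the data becomes a skew in its predictions:

```python
from collections import Counter

def train(examples):
    """'Learn' by counting which label each feature value co-occurs with.

    The model has no knowledge beyond the correlations in its examples.
    """
    counts = {}
    for feature, label in examples:
        counts.setdefault(feature, Counter())[label] += 1
    return counts

def predict(model, feature):
    # Predict the most frequent label ever seen for this feature value.
    return model[feature].most_common(1)[0][0]

# Hypothetical training data with a built-in skew: the dominant label
# reflects the examples we happened to collect, not the world itself.
data = [("cat", "pet")] * 9 + [("cat", "wild")] * 1
model = train(data)
print(predict(model, "cat"))  # "pet" — the majority pattern wins
```

Nothing here is wrong with the algorithm; it faithfully reproduced its data. That is exactly why biased data is the problem.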

The reality is the knowledge humans feed into AI isn’t “clean data,” or unbiased knowledge. In fact, there’s a high likelihood that AI is gathering biased information and mimicking negative behaviour on a greater scale. This issue is not a problem for the future but a real one happening today. 

Take Tay, the “Thinking About You” bot released by Microsoft in 2016.

Based on its analysis of and interactions on Twitter, Tay learned the ins and outs of the platform and began to send out tweets. After 96,000 tweets and 16 hours, the bot was taken offline. Why? Because Tay had started posting racist and sexually charged messages. The bot learned from politically incorrect phrases – the ones getting the most attention and engagement – to create inflammatory tweets, including through its “repeat after me” capability. It wasn’t the bot’s fault (remember, it’s not conscious); it merely reflected the conversations people were having online.

Another example of bias in AI is the recruiting tool Amazon was quietly using to filter job applications. The algorithm behind the tool scored résumés on a scale, but ended up systematically scoring men higher than women for technical jobs. The algorithm learned this bias by reviewing historical data on applicants the company had hired, which showed that many technical roles were filled by men. That’s human bias at work again.
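A toy sketch shows how this kind of thing happens (hypothetical numbers and a naive scorer invented for illustration – this is not Amazon’s actual algorithm). If past hiring decisions skewed towards one group, any scorer that leans on that history will rank otherwise-identical candidates differently by group alone:

```python
def hire_rate(history, group):
    """Fraction of past applicants in `group` who were hired."""
    hired = sum(1 for g, outcome in history if g == group and outcome == "hired")
    total = sum(1 for g, _ in history if g == group)
    return hired / total

# Hypothetical historical hiring data for technical roles: the skew
# reflects who applied and who was chosen in the past, nothing else.
history = ([("men", "hired")] * 8 + [("men", "rejected")] * 2 +
           [("women", "hired")] * 2 + [("women", "rejected")] * 3)

# A naive scorer that uses the past hire rate as a feature simply
# inherits the skew: group membership alone changes the score.
score_men = hire_rate(history, "men")      # 0.8
score_women = hire_rate(history, "women")  # 0.4
print(score_men > score_women)  # True — historical bias, replicated
```

The scorer never “decides” to discriminate; it just extrapolates the pattern it was given, which is the core of the problem.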

From these examples, we can see how transferring our own unseen biases into AI algorithms is detrimental. But what can we do to improve the learning process and curb the biases that seem so ingrained in us?

How do we stop AI from becoming a terrible artificial human?

Let’s think back to why biases form in the first place. Biases occur because they’re an energy-saving method for the human brain. But AI doesn’t have a brain, or at least not one we’d call human. With a constant source of electricity, AI never tires as we do. Looked at this way, bias really has no place in AI at all. 

Now, that’s a nice theory, but it tends to overlook the fact that as long as humans are in charge, biases will continue to sneak back into the algorithms. While there are many ideas floating around about the best approach to overcoming bias in AI, we believe a good first step is to introduce more transparency.

Think about it: if the average person learned more about what AI is, how it works and where it’s used, they’d be more likely to join the conversation and add their insight, making the data more comprehensive and quelling the bias. 

Imagine how transparency could have influenced the Amazon recruitment tool. If Amazon had been open about the tool and how it worked, applicants could have sent in input or feedback, or at least better understood how the algorithm judged an application. 

In this case, reducing bias becomes a community-driven effort where people collectively offer their perspectives and, in the process, counterbalance the opinion of any single data scientist. 

The future of bias in AI may be uncertain, but through increased awareness and transparency, we have an opportunity to control this great technology and use it to bring humankind a step closer to a more ideal society.