
Creating Fair and Unbiased Artificial Intelligence

Dr. Alexander Tuzhilin, Dean, Computer Science

Updated: February 13, 2024 | Published: February 7, 2021


Ample research has shown that humans are biased in many ways, from confirmation bias to gender bias and anchoring bias. Bias of any sort can be problematic, even when people don’t believe they fall prey to this way of thinking. Understanding and acknowledging various types of biases can help to root them out of science; however, a substantial amount of work still needs to be done in this area.

Bias in Artificial Intelligence

What does all this have to do with AI?

In artificial intelligence, biases can play just as large a role as they do in human psychology, because it is humans, after all, who build the AI. In recent years, people have demanded greater transparency in how AI systems make decisions and take actions.

Understanding how an AI system was created, including the input data and the machine learning used in its development, makes it easier to see where certain biases may lie. One common method of teaching AI systems is supervised machine learning. In this approach, the system is given a large volume of problems together with their solutions. It learns from the solutions provided, determining which elements led to them, and then applies what it learned to new problems.

The system is then tested, and the results are reviewed to determine how accurate the AI is when making its own decisions. Further training on more data then helps to make it more accurate.
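To make this concrete, here is a minimal sketch of that train-and-test loop using Python and scikit-learn. The synthetic dataset and the choice of a logistic regression model are illustrative assumptions, not a description of any particular system.

    # A minimal supervised-learning loop: train on labeled examples,
    # then test on held-out data to measure accuracy.
    # The synthetic data and model choice are illustrative only.
    from sklearn.datasets import make_classification
    from sklearn.model_selection import train_test_split
    from sklearn.linear_model import LogisticRegression
    from sklearn.metrics import accuracy_score

    # "Problems" (features X) paired with "solutions" (labels y)
    X, y = make_classification(n_samples=1000, n_features=10, random_state=0)

    # Hold out part of the data to test how well the model generalizes
    X_train, X_test, y_train, y_test = train_test_split(
        X, y, test_size=0.25, random_state=0
    )

    model = LogisticRegression(max_iter=1000)
    model.fit(X_train, y_train)           # learn from the provided solutions
    predictions = model.predict(X_test)   # decide on new, unseen problems

    print(f"Accuracy on unseen problems: {accuracy_score(y_test, predictions):.2f}")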

Yet there’s a chance the AI is learning biases from the data it’s given. If the information used for training is not representative, inclusive, and fairly balanced, the AI may be biased as a result.

For example, if the training set contained records in which a large number of women were rejected for car loans, the AI could form a gender bias. It could “learn” that this pattern is acceptable and base its decisions on gender rather than on facts relevant to the loan. Once such a system is put into use, women would be far less likely to get a car loan.
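A simple first check for this kind of skew, assuming the historical decisions and applicant genders are available in a table, is to compare approval rates across groups. The data and column names below are hypothetical.

    # A quick check for outcome disparity: compare loan approval rates
    # by gender. The data and column names here are hypothetical.
    import pandas as pd

    decisions = pd.DataFrame({
        "gender":   ["F", "F", "F", "F", "M", "M", "M", "M"],
        "approved": [0,   0,   1,   0,   1,   1,   0,   1],
    })

    # Approval rate per group; a large gap suggests the model (or the
    # historical data it was trained on) treats the groups differently.
    rates = decisions.groupby("gender")["approved"].mean()
    print(rates)
    print("Approval-rate gap:", abs(rates["M"] - rates["F"]))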

However, it’s not only a person’s gender that could cause a bias in AI. If the training data include more examples for one group than for another, the result can be a discrepancy in accuracy: with fewer examples to learn from, the AI will be less accurate for the under-represented groups. In general, the more data included the better the results, as long as the data are balanced and fair.
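The sketch below illustrates this effect with synthetic data: a model trained where one group supplies far fewer examples can report reasonable overall accuracy while performing much worse for the under-represented group. The groups and data here are invented for illustration.

    # Overall accuracy can look fine while one group is served poorly.
    # Synthetic illustration: group "B" supplies far fewer training
    # examples than group "A", and its decision rule differs slightly.
    import numpy as np
    from sklearn.linear_model import LogisticRegression
    from sklearn.metrics import accuracy_score

    rng = np.random.default_rng(0)

    def make_group(n, shift):
        """Generate n examples whose features center on `shift`."""
        X = rng.normal(shift, 1.0, size=(n, 3))
        y = (X.sum(axis=1) + rng.normal(0, 1, n) > shift * 3).astype(int)
        return X, y

    X_a, y_a = make_group(2000, 0.0)  # well-represented group
    X_b, y_b = make_group(100, 2.0)   # under-represented group

    model = LogisticRegression(max_iter=1000)
    model.fit(np.vstack([X_a, X_b]), np.concatenate([y_a, y_b]))

    # Report accuracy per group rather than one pooled number
    for name, (X, y) in {"A": (X_a, y_a), "B": (X_b, y_b)}.items():
        print(f"Group {name} accuracy: {accuracy_score(y, model.predict(X)):.2f}")

Reporting a single pooled accuracy would hide the gap that the per-group numbers reveal.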

Real-World Examples of Biased Artificial Intelligence

It’s no longer just a theory that AI could discriminate against people. Many real-life examples of AI bias have been discovered in recent years. Prejudice embedded in the data used to build these algorithms can have a range of consequences, including discrimination.

One hypothetical example often used comes from education: an AI algorithm helps decide whether an applicant is accepted into a school program, but one of its inputs is the applicant’s geographic location. Because location can be tied to ethnicity, the AI may end up favoring certain ethnicities over others.

Below are two examples of AI bias in the real world.

1. Healthcare

Most agree that healthcare and positive health outcomes should be equally accessible to all. In 2019, however, researchers discovered problems with an algorithm used in United States hospitals on more than 200 million people. The algorithm favored white patients over black patients when determining who needed extra medical care.

The algorithm did not use race as an input, so it may not be immediately apparent why it made these decisions. When researchers looked deeper, they discovered it was relying on a variable that was correlated with race: healthcare cost history.

The reasoning was that a patient’s past costs would help predict their healthcare needs. On average, however, black patients had incurred lower healthcare costs than white patients with the same conditions. The algorithm therefore concluded that black patients needed less extra care, based entirely on cost.
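One way such proxies can be caught before deployment, sketched below with hypothetical data and column names, is to measure how strongly each candidate input correlates with the protected attribute even though that attribute is excluded from the model. This is a generic illustration, not the hospitals’ actual system.

    # Detecting proxy variables: even if race is never fed to the model,
    # a feature strongly correlated with it (here, prior healthcare cost)
    # can reintroduce the bias. Data and column names are hypothetical.
    import pandas as pd

    patients = pd.DataFrame({
        "race_white": [1, 1, 1, 1, 0, 0, 0, 0],  # held out of the model
        "prior_cost": [9200, 8700, 9900, 8100, 4100, 3800, 4600, 3500],
        "num_visits": [4, 5, 3, 6, 5, 4, 6, 3],
    })

    # Correlate each candidate model input with the protected attribute
    for col in ["prior_cost", "num_visits"]:
        r = patients[col].corr(patients["race_white"])
        print(f"{col}: correlation with protected attribute = {r:.2f}")
    # A correlation near 1.0 flags prior_cost as a likely proxy for race.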

Fortunately, researchers were able to alter the algorithm to reduce the level of bias. Had they not examined its behavior in the first place, though, the bias would have continued.

2. Amazon

Amazon has become a prominent tech company, and it uses artificial intelligence in a host of ways, including an algorithm for screening job candidates. In 2015, the company discovered that this algorithm was biased against women. The reason was that it had been trained on the resumes the company had received over the previous decade, most of which came from men. The AI therefore “learned” that it should favor male candidates over female ones.

How Artificial Intelligence Can Be Made Fair and Unbiased

Although there are certainly dangers when AI is biased and unfair, it doesn’t have to be this way. There are methods for creating fairer AI that can be implemented. However, it’s important to realize that AI is essentially a machine: the concept of fairness as humans understand it is beyond its current scope, so fairness must be built in deliberately.

Start with unbiased data. Before providing data for the algorithm to learn from, scientists and researchers need to be sure the data they are using is fair, balanced, and unbiased. Machines don’t question the data provided as humans do. If the data being used is biased from the start, a machine has no way of recognizing the problem.
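As a starting point, a simple audit of group representation and outcomes in the training data can surface imbalances before any model is trained. The sketch below assumes the data sits in a pandas DataFrame with a group column; the names and values are hypothetical.

    # Audit training data before fitting anything: check how well each
    # group is represented and whether positive outcomes are balanced.
    # Column names and values are hypothetical.
    import pandas as pd

    train = pd.DataFrame({
        "group":   ["A"] * 6 + ["B"] * 2,
        "outcome": [1, 1, 0, 1, 0, 1, 0, 0],
    })

    print(train["group"].value_counts(normalize=True))  # representation
    print(train.groupby("group")["outcome"].mean())     # outcome rate per group
    # Large imbalances here are a warning sign before training begins.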

Even with all of the right inputs and datasets, there could still be problems with perceived fairness. Creating an algorithm that is as fair and unbiased as possible requires humans who essentially act as devil’s advocates: they examine the results of the AI’s decisions and look for what it’s doing right and wrong. The designers can then take the feedback from these reviewers and adjust the algorithm.

Regular testing should be part of the process. Setting up an algorithm should never be a one-time affair. AI needs to be tested regularly and kept transparent, and reviews should look for any bias that may have inadvertently crept into the system.
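Such reviews can be partly automated as a recurring check: recompute a fairness metric, for instance the ratio of positive-outcome rates between groups, each time the model is retrained, and flag the model if the ratio drifts too low. The sketch below uses the common “four-fifths” rule of thumb as its threshold; it is a generic illustration, not a universal standard.

    # A bias regression test that can run on every retrain: compare the
    # positive-outcome rate of each group against the most favored group
    # and flag the model if the ratio falls below a threshold.
    # The 0.8 cutoff follows the common "four-fifths" rule of thumb.

    def disparate_impact_check(rates_by_group: dict[str, float],
                               threshold: float = 0.8) -> bool:
        """rates_by_group maps group name -> positive-outcome rate."""
        best = max(rates_by_group.values())
        return all(rate / best >= threshold
                   for rate in rates_by_group.values())

    # Hypothetical positive-outcome rates from the latest model version
    rates = {"group_a": 0.61, "group_b": 0.44}
    if not disparate_impact_check(rates):
        print("Bias check failed: review data and model before deployment.")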

Creating fairer AI is an ongoing and difficult process, but it’s one that needs to continue.

Can Bias Be Entirely Eliminated?

Whenever humans are involved, even when they play the role of devil’s advocate as described above, there’s a risk that hidden bias will slip through. Since machines learn from the data humans give them, there is always a risk of bias in AI. However, the methods discussed above can help reduce instances of bias as much as possible.

IBM, for example, has been using a three-level ranking system to help determine whether the data being used is bias-free. Its goal, as should be the goal of everyone working in this field, is to reduce bias in AI. IBM has created an ethics board, developed policies around AI, and works with trusted partners.

Additionally, IBM contributes to open-source toolkits geared toward helping to build trust in AI.

With awareness of the problem, commitment to this goal, and transparency, it should be possible to vastly reduce the amount of bias in algorithms going forward.
