What is Artificial Intelligence Bias and How to Remove It?
Have you ever experienced bias in your life? Bias is prejudice for or against a person or group in a way that is unfair. If you are Indian, you might have experienced bias for being dark-skinned. If you are American, you might have experienced bias for being African-American. And the list goes on.
Humans are unfortunately biased against other humans for a variety of illogical reasons. This may happen consciously, where humans are biased against racial minorities, religions, genders, or nationalities. For example, a UN report found that at least 90% of men and women in the world held some sort of bias against women, with no country having zero gender bias. Bias may also develop unconsciously, as a result of society, family, and social conditioning since birth. Whatever the reason, biases exist in humans, and now they are also being passed into the artificial intelligence systems that humans create. This is Artificial Intelligence Bias, but the real question is: how does it occur? How is human bias passed into artificial intelligence systems even when measures are taken to prevent it? This article aims to answer that question, and to look at how different companies are working to remove bias entirely from their systems.
What is Artificial Intelligence Bias?
Artificial Intelligence Bias is the cumulative set of human biases passed into the artificial intelligence systems created by humans. These biases can enter AI systems when the systems are trained on data that includes human biases, historical inequalities, or different standards of judgment based on gender, race, nationality, sexual orientation, etc.
For example, Amazon found that its AI recruiting algorithm was biased against women. This likely occurred because the algorithm was trained to analyze candidates’ resumes by studying Amazon’s responses to the resumes submitted over the previous 10 years. However, the human recruiters who evaluated those resumes were mostly men, and their inherent bias against women candidates was passed on to the AI algorithm. When Amazon studied the algorithm, it found that the system automatically penalized resumes that contained the word “women” and downgraded the graduates of two all-women colleges. Amazon ultimately discarded the algorithm and did not use it to evaluate candidates for recruitment.
As this example shows, bias in Artificial Intelligence can cause a lot of damage. It hurts the affected group’s chances of participating fully in the world and contributing equally to the economy. In the worst case, the affected group may be discriminated against and even lose the ability to live freely in society. This was demonstrated by COMPAS, an artificial intelligence algorithm used in the USA to predict which criminals were more likely to re-offend in the future. Based on these forecasts, judges made decisions about defendants’ futures, ranging from jail sentences to bail amounts for release. However, COMPAS was found to be biased: black defendants were judged much more likely to re-offend than they actually were, while white defendants were judged less risky than they actually were. A biased Artificial Intelligence thus caused black defendants to be treated far more harshly than their white counterparts in the legal system.
But this is not the full extent of the harm caused by Artificial Intelligence Bias. In the long term, it erodes human trust in technology. While it directly hurts the groups an algorithm is biased against, it also undermines people’s trust that artificial intelligence algorithms will work without bias. This reduces the chances of Artificial Intelligence being adopted across business and industry, as it produces mistrust and the fear that people may be discriminated against by AI. So the tech companies that produce these artificial intelligence algorithms need to ensure their algorithms are bias-free before releasing them to the market, and they can do this by encouraging research on Artificial Intelligence Bias so that bias can eventually be eradicated. On that note, let’s see what leading tech companies are doing to remove Artificial Intelligence Bias from their algorithms.
How to Remove Artificial Intelligence Bias?
Artificial Intelligence systems are only as good as the data put into them, so if the data is biased, the AI system will obviously be biased as well. Bad data can contain racial, sexual, gender, or ideological biases, which makes AI systems trained on that data problematic too. Fortunately, there are several methods to enforce fairness in the data so that the resulting Artificial Intelligence systems are fair.
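Before bias can be removed, it first has to be measured in the training data. As a minimal sketch (the records, group names, and numbers here are all hypothetical), one common check is to compare the historical selection rate across groups. A ratio far below 1.0 flags biased labels that a model trained on this data would learn to reproduce:

```python
# Hypothetical historical hiring records: (group, was_hired) pairs.
records = [
    ("men", True), ("men", True), ("men", False), ("men", True),
    ("women", False), ("women", True), ("women", False), ("women", False),
]

def selection_rate(records, group):
    """Fraction of applicants in `group` who received a positive outcome."""
    outcomes = [hired for g, hired in records if g == group]
    return sum(outcomes) / len(outcomes)

rate_men = selection_rate(records, "men")      # 3 of 4 hired -> 0.75
rate_women = selection_rate(records, "women")  # 1 of 4 hired -> 0.25

# The "disparate impact" ratio compares the two selection rates.
# A value near 1.0 suggests balanced labels; this toy data is far from it.
ratio = rate_women / rate_men
```

A check like this does not fix the data by itself, but it tells you whether preprocessing (for example, rebalancing or reweighting the examples) is needed before training.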
One method is to preprocess the data so that bias is eliminated before the AI system is trained on it; the system is then unbiased because the data it learns from is unbiased. Another method is to post-process the AI system after it has been trained. This means altering some of the system’s predictions so that they satisfy a fairness constraint decided beforehand. Both of these methods are easier to apply when the AI algorithms can be explained. This matters because AI algorithms are normally black boxes, and it is very difficult to understand how they arrived at their conclusions, and therefore where the bias lies. But if the AI algorithms are easily explainable, the bias can be found and eliminated.
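The post-processing idea can be sketched in a few lines. In this hypothetical example (the score distributions and the 30% target are assumptions, not any company’s real system), a model scores applicants from two groups, and group B’s scores come out systematically lower. Instead of one global cutoff, a separate decision threshold is chosen for each group so that both groups are selected at the same rate, which is a simple form of the fairness constraint mentioned above:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical model scores for two demographic groups.
# Group B's scores are shifted lower, mimicking a biased model.
scores_a = rng.normal(0.6, 0.15, 1000)
scores_b = rng.normal(0.5, 0.15, 1000)

def threshold_for_rate(scores, target_rate):
    """Score cutoff at which `target_rate` of the group is selected."""
    return np.quantile(scores, 1.0 - target_rate)

target = 0.30  # select the top 30% of each group
t_a = threshold_for_rate(scores_a, target)
t_b = threshold_for_rate(scores_b, target)

rate_a = (scores_a >= t_a).mean()
rate_b = (scores_b >= t_b).mean()
# With per-group thresholds the selection rates now match;
# a single global threshold would have favored group A.
```

This is only one possible fairness constraint (equal selection rates, often called demographic parity); others, such as equalizing error rates across groups, lead to different threshold choices, and the right constraint depends on the application.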
For example, IBM Research is currently working on removing Artificial Intelligence Bias. IBM believes that the amount of bias in AI algorithms will only increase within the next five years as the use of AI grows, but it is working on new solutions to control this bias and create Artificial Intelligence systems that are free of it. The MIT-IBM Watson AI Lab is using recent advances in computational cognitive modeling and artificial intelligence to study ethical principles and how humans apply them to decision-making, so that these principles can be incorporated into machines with human values and ethical decision-making skills. IBM scientists have also created an independent bias rating system that can be used to determine the fairness of an Artificial Intelligence system.