5 Algorithms that Demonstrate Artificial Intelligence Bias

It is an unfortunate fact of our society that human beings are inherently biased. This may happen consciously, where humans are biased against racial minorities, religions, genders, or nationalities, or unconsciously, where biases develop as a result of society, family, and social conditioning since birth. Whatever the reason, biases exist in humans, and they are now being passed into the artificial intelligence systems that humans create.

Bias can enter AI systems when they are trained on data that includes human biases, historical inequalities, or different standards of judgement based on gender, race, nationality, sexual orientation, and so on. For example, Amazon found that its AI recruiting algorithm was biased against women. The algorithm was trained on the resumes submitted over the previous 10 years and on which candidates were hired, and since most of those candidates were men, the algorithm learned to favor men over women.

As this example shows, bias in artificial intelligence causes a lot of damage. It hurts the chances of the affected group to participate fully in the world and contribute equally to the economy. It also erodes people's trust in artificial intelligence algorithms to work without bias, which in turn reduces the chances of AI being adopted across business and industry, since people fear they may be discriminated against. So the technical industries that produce these algorithms need to ensure they are bias-free before releasing them to the market, for example by encouraging research on Artificial Intelligence Bias so it can be eradicated in the future. But before that can happen, we need to look at examples where AI bias has been demonstrated by different algorithms, so that we understand what algorithms should not do in the coming times.

Which algorithms demonstrate Artificial Intelligence Bias?

These are some well-known algorithms that have demonstrated Artificial Intelligence Bias. Notably, this bias is typically directed against minority or historically disadvantaged groups, such as Black people, Asian people, and women.

1. COMPAS Algorithm biased against black people

COMPAS, which stands for Correctional Offender Management Profiling for Alternative Sanctions, is an artificial intelligence algorithm created by Northpointe and used in the USA to predict which criminals are more likely to re-offend in the future. Based on these forecasts, judges make decisions about the future of these criminals, ranging from their jail sentences to the bail amounts set for their release. However, ProPublica, a Pulitzer Prize-winning nonprofit news organization, found that COMPAS was biased. Black defendants were judged to be much more likely to re-commit crimes than they actually were, while white defendants were judged to be less risky than they actually were. Even for violent crimes, black defendants were misclassified as more dangerous almost twice as often as white defendants. This discovery proved that COMPAS had somehow absorbed a bias that is frequent in humans: the assumption that black people commit more crimes than white people on average and are therefore more likely to commit crimes in the future as well.
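The kind of disparity ProPublica reported can be measured by comparing false positive rates across groups: among people who did not re-offend, how many were still labeled high risk? Below is a minimal sketch of that check. The column names and the handful of records are made up for illustration; they are not real COMPAS data.

```python
# Compare false positive rates of a risk label across two groups.
# All data here is illustrative, not from COMPAS or ProPublica.
import pandas as pd

records = pd.DataFrame({
    "group":      ["black"] * 4 + ["white"] * 4,
    "high_risk":  [1, 1, 1, 0,   1, 0, 0, 0],   # the algorithm's risk label
    "reoffended": [0, 1, 0, 0,   0, 0, 1, 0],   # what actually happened later
})

def false_positive_rate(df):
    # Among people who did NOT re-offend, the share still labeled high risk.
    did_not_reoffend = df[df["reoffended"] == 0]
    return did_not_reoffend["high_risk"].mean()

for group, df in records.groupby("group"):
    print(group, "false positive rate:", round(false_positive_rate(df), 2))
```

A gap between the two printed rates is exactly the pattern ProPublica described: the error burden falls much more heavily on one group than the other.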

2. PredPol Algorithm biased against minorities

PredPol, or predictive policing, is an artificial intelligence algorithm that aims to predict where crimes will occur in the future based on crime data collected by the police, such as arrest counts, the number of police calls in a place, etc. The algorithm is already used by police departments in California, Florida, Maryland, and other US states, and it aims to reduce human bias in policing by leaving crime prediction to artificial intelligence. However, researchers in the USA discovered that PredPol itself was biased: it repeatedly sent police officers to particular neighborhoods with large racial minority populations, regardless of how much crime actually happened there. This was caused by a feedback loop in PredPol, wherein the algorithm predicted more crime in regions where more police reports were made. But more police reports may have been made in these regions simply because the police concentration was already higher there, possibly due to existing human bias. The biased predictions then sent even more police to these regions, reinforcing the loop.
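The feedback loop is easier to see in a toy simulation. The sketch below is not PredPol's actual model; it just assumes that patrols go wherever the most incidents have been recorded, and that patrols record incidents wherever they are sent. Even though the true crime rate is identical everywhere, the neighborhood that starts with slightly more reports keeps getting flagged.

```python
# Toy simulation of a predictive-policing feedback loop (not PredPol's real model).
import random

random.seed(0)

TRUE_CRIME_RATE = 0.3          # identical in every neighborhood
recorded = [5, 5, 5, 8]        # neighborhood 3 starts with slightly more reports

for week in range(20):
    # "Prediction": patrol the neighborhood with the most recorded incidents so far.
    patrolled = recorded.index(max(recorded))
    # In the patrolled neighborhood, crimes are observed and recorded.
    if random.random() < TRUE_CRIME_RATE:
        recorded[patrolled] += 1
    # In unpatrolled neighborhoods, the same crimes happen but go unrecorded.

print("Recorded incidents after 20 weeks:", recorded)
# The initially over-reported neighborhood ends up with all of the new records.
```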

3. Amazon’s Recruiting Engine biased against women

The Amazon recruiting engine is an artificial intelligence algorithm that was created to analyze the resumes of job applicants applying to Amazon and decide which ones would be called for further interviews and selection. The algorithm was an attempt by Amazon to mechanize its hunt for talented individuals and remove the inherent human bias present in all human recruiters. However, the algorithm turned out to be biased against women. This may have occurred because it was trained to analyze candidates' resumes by studying Amazon's responses to the resumes submitted over the previous 10 years. The human recruiters who evaluated those resumes were mostly men, and their inherent bias against women candidates was passed on to the AI algorithm. When Amazon studied the algorithm, it found that the system automatically penalized resumes that contained words like "women's" and automatically downgraded graduates of two all-women's colleges. Amazon therefore discarded the algorithm and did not use it to evaluate candidates for recruitment.
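The mechanism here is that a model trained on biased historical decisions will reproduce them. The sketch below uses entirely synthetic data and a plain logistic regression (not Amazon's system) to show how a "mentions women's" feature ends up with a negative weight when the historical hiring labels already disfavored those resumes.

```python
# Synthetic illustration of a screener learning bias from historical decisions.
# Data, feature names, and the 1.5 penalty are made-up assumptions.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 1000
contains_womens = rng.integers(0, 2, size=n)   # 1 if the resume mentions "women's"
skill = rng.normal(size=n)                     # skill distributed equally across both groups

# Biased historical labels: recruiters favored skill but marked down "women's" resumes.
hired = ((skill - 1.5 * contains_womens + rng.normal(scale=0.5, size=n)) > 0).astype(int)

model = LogisticRegression().fit(np.column_stack([skill, contains_womens]), hired)
print("weight on skill:          ", round(model.coef_[0][0], 2))
print("weight on 'women's' flag: ", round(model.coef_[0][1], 2))  # clearly negative
```

The model never sees gender directly, yet it learns to penalize a proxy for it, which is essentially what was reported about Amazon's engine.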

4. Google Photos Algorithm biased against black people

Google Photos has a labeling feature that adds a label to a photo corresponding to whatever is shown in the picture. This is done by a Convolutional Neural Network (CNN) trained on millions of images with supervised learning, which then uses image recognition to tag photos. However, the algorithm was found to be racist when it labeled photos of a black software developer and his friend as gorillas. Google claimed it was appalled and genuinely sorry for the mistake and promised to correct it in the future. However, all Google had done two years later was remove gorillas and other primates from the CNN's vocabulary so that it would not label any photo as such: Google Photos simply displayed "no results" for search terms such as gorilla, chimp, chimpanzee, and monkey. This is only a temporary solution, as it does not solve the underlying problem. Image-labeling technology is still not perfect, and even the most complex algorithms depend entirely on their training data, with no way of handling corner cases in real life.
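In effect, the reported workaround amounts to suppressing a blocklist of labels rather than fixing the classifier. The sketch below shows what such a post-processing filter might look like; the function and label names are hypothetical, not Google's code.

```python
# Hypothetical post-processing filter: hide blocked labels instead of fixing the model.
BLOCKED_LABELS = {"gorilla", "chimp", "chimpanzee", "monkey"}

def filter_labels(predicted_labels):
    """Drop any blocked label before showing results to the user."""
    return [label for label in predicted_labels if label.lower() not in BLOCKED_LABELS]

print(filter_labels(["person", "gorilla", "outdoors"]))   # ['person', 'outdoors']
# The underlying misclassification is untouched; the label is just never surfaced.
```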

5. IDEMIA’S Facial Recognition Algorithm biased against black women

IDEMIA is a company that creates facial recognition algorithms used by police in the USA, Australia, France, and other countries. Around 30 million mugshots are analyzed using this facial recognition system in the USA to check whether anybody is a criminal or a danger to society. However, when the National Institute of Standards and Technology (NIST) tested the algorithm, it found that the system made significantly more mistakes identifying black women than white women, or black and white men. According to NIST, Idemia's algorithm falsely matched a white woman's face at a rate of one in 10,000, whereas it falsely matched a black woman's face at a rate of one in 1,000, which is ten times as many false matches. In general, facial recognition algorithms are considered acceptable if their false match rate is one in 10,000, and the rate found for black women was far above that threshold. Idemia claims that the algorithms NIST tested have not been released commercially, and that its algorithms are getting better at identifying different races, which it says are recognized at different rates because of physical differences.
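For clarity, here is the quoted gap restated as a quick threshold check. Only the two rates and the one-in-10,000 benchmark come from the article; everything else is just arithmetic.

```python
# Restate the NIST-quoted rates as a threshold check.
ACCEPTABLE_FALSE_MATCH_RATE = 1 / 10_000   # common acceptance benchmark cited above

rates = {"white women": 1 / 10_000, "black women": 1 / 1_000}
for group, rate in rates.items():
    verdict = "within threshold" if rate <= ACCEPTABLE_FALSE_MATCH_RATE else "exceeds threshold"
    print(f"{group}: false match rate {rate:.4f} ({verdict})")

print("ratio:", rates["black women"] / rates["white women"])   # 10.0
```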


Last Updated : 04 Jul, 2022