Artificial Intelligence is driving a new revolution in the technology industry, but nobody knows exactly how it will develop. Some people believe that AI needs to be controlled and monitored, or robots may one day take over the world. Others think that AI will improve the quality of life for humans and perhaps even make us a more advanced species. But nobody will know for certain what happens until it actually happens.
Currently, tech giants such as Google, Microsoft, Amazon, Facebook, and IBM are all racing to develop cutting-edge AI technology. But this also means that the ethical problems in Artificial Intelligence need to be discussed. What are the dangers of developing AI? What role should AI systems play in society? What responsibilities should be given to them, and what happens if they make mistakes? All of these questions (and more) need to be addressed by companies before investing heavily in AI research. So let's look at some of these ethical problems that the world of Artificial Intelligence needs to deal with.
1. How to Remove Artificial Intelligence Bias?
It is an unfortunate fact that human beings are sometimes biased against other religions, genders, nationalities, and so on. This bias may unconsciously enter the Artificial Intelligence systems that humans develop, or it may creep in through flawed data that humans generate. For example, Amazon found that its Machine Learning based recruiting algorithm was biased against women. The algorithm was trained on the resumes submitted over the previous 10 years and the candidates who were hired, and since most of those candidates were men, the algorithm learned to favor men over women.
So the question is: how do we tackle this bias? How do we make sure that Artificial Intelligence is not racist or sexist like some humans in this world? It is important that AI researchers specifically try to remove bias while developing and training AI systems and while selecting the data. Many companies are working toward unbiased AI systems, such as IBM Research, whose scientists have created an independent bias rating system to measure the fairness of an AI system so that failures like the one described above can be avoided in the future.
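To give a concrete sense of what "measuring the fairness of an AI system" can mean, here is a minimal sketch of one widely used check, the disparate-impact ratio: the rate of positive outcomes for one group divided by the rate for another. This is a generic illustration, not IBM's actual rating system, and all names and data below are hypothetical.

```python
# Disparate-impact ratio: a simple group-fairness check.
# A common rule of thumb (the "four-fifths rule") flags
# ratios below 0.8 as a sign of potential bias.

def selection_rate(outcomes):
    """Fraction of positive outcomes (e.g. 1 = hired) in a group."""
    return sum(outcomes) / len(outcomes)

def disparate_impact(group_a, group_b):
    """Selection rate of group_a divided by that of group_b."""
    return selection_rate(group_a) / selection_rate(group_b)

# Hypothetical hiring decisions (1 = hired, 0 = rejected)
women = [1, 0, 0, 0, 0, 0, 0, 0, 0, 0]   # 10% hired
men   = [1, 1, 1, 0, 0, 0, 0, 0, 0, 0]   # 30% hired

ratio = disparate_impact(women, men)
print(round(ratio, 2))  # 0.33 -- far below 0.8, so this model looks biased
```

A real audit would use many such metrics (equalized odds, calibration, and so on) over much larger datasets, but even a simple ratio like this can surface the kind of skew the Amazon recruiting tool exhibited.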
2. What rights should be provided to Robots? And to what extent?
Robots are currently just machines. But what happens when Artificial Intelligence becomes more advanced? There may come a time when robots not only look like human beings but also possess advanced intelligence. What rights should such robots be given? If robots become emotionally advanced enough, should they receive the same rights as humans, or lesser rights? And what if a robot kills someone? Should that be considered murder or a machine malfunction? These are ethical questions that need to be answered as Artificial Intelligence becomes more and more intelligent.
There is also the question of citizenship. Should robots be given citizenship of the country they are created in? This question was raised quite strongly in 2017, when the humanoid robot Sophia was granted citizenship by Saudi Arabia. While the move was widely seen as a publicity stunt rather than meaningful citizenship, it is still a question that governments may have to take seriously in the future.
3. How to make sure that Artificial Intelligence remains in Human Control?
Currently, human beings are the dominant species on Earth, and not because they are the fastest or the strongest. Humans are dominant because of their intelligence. So the critical question is: what happens when Artificial Intelligence becomes more intelligent than human beings? This point is known as the "technological singularity," at which AI may surpass human intelligence and become unstoppable. Humans might not even be able to shut such an intelligence down, as it could anticipate all our methods. This could make AI the dominant species on Earth and lead to enormous changes in human existence, or even human extinction.
However, is the "technological singularity" even a possibility, or just a myth? Ray Kurzweil, Google's Director of Engineering, believes it is very real and may happen as early as 2045. He also believes it is nothing to fear: in his view, it would simply expand human intelligence as humans merge with artificial intelligence. Whatever the case, it seems prudent for humans to prepare for the "technological singularity" and decide how to deal with it, just in case.
4. How to handle Human Unemployment because of Artificial Intelligence?
As Artificial Intelligence becomes more and more advanced, it will inevitably take over jobs that were once performed by humans. According to a report published by the McKinsey Global Institute, around 800 million jobs could be lost worldwide to automation by 2030. But then the question arises: what about the humans left unemployed? Some people believe that many new jobs will also be created because of Artificial Intelligence, which may balance the scales somewhat. People could move from physical and repetitive jobs to jobs that require creative and strategic thinking, and with less physically demanding work they could also gain more time to spend with friends and family.
But this is more likely to benefit people who are already educated and relatively wealthy, which could widen the gap between rich and poor even further. Robots in the workforce do not need to be paid like human employees, so the owners of AI-driven companies would keep the profits and grow richer while the workers who were replaced grow even poorer. A new societal setup will have to be devised so that all human beings can earn a living even in this scenario.
5. How to Handle Mistakes made by Artificial Intelligence?
Artificial Intelligence may evolve into a superintelligence someday, but right now it is still rudimentary, and it makes mistakes. For example, IBM Watson partnered with the Texas MD Anderson Cancer Center to help detect and treat cancer in patients, but the system failed badly, giving incorrect treatment suggestions. In another failure, Microsoft released an AI chatbot on Twitter that quickly learned Nazi propaganda and racist insults from other users and had to be taken down. These were relatively contained failures that were easily handled, but who knows: Artificial Intelligence may make far more complicated mistakes in the future. Then what is to be done?
The question is one of relativity. Do Artificial Intelligence systems make fewer or more mistakes than humans? Do their mistakes lead to actual loss of life, or merely to embarrassment for companies, as in the cases above? And if there is loss of life, is it more or less than when humans make the same mistakes? All of these questions need to be considered when developing AI systems for different applications, so that their mistakes remain bearable rather than catastrophic.