5 Dangers of Artificial Intelligence in the Future
Artificial Intelligence is already a powerful tool for development. It has transformed technology across industries and solved many problems for humanity. But AI is still in its early stages, and it can also cause great harm if it is not managed properly. There are many areas in which Artificial Intelligence could pose a danger to human beings, and it is best to discuss these dangers now so that they can be anticipated and managed in the future.
Success in creating effective Artificial Intelligence could be the biggest event in the history of our civilization. Or the worst. So we cannot know if we will be infinitely helped by AI, or ignored by it and side-lined, or conceivably destroyed by it.
– Stephen Hawking
Legendary physicist Stephen Hawking made this remark at a tech conference regarding Artificial Intelligence, and he was absolutely right. Keeping this in mind, let’s look at 5 dangers that Artificial Intelligence could pose in the future.
1. Invasion of Privacy
Privacy is a basic human right that everyone deserves, but Artificial Intelligence may erode it in the future. Even today, it is possible to track you easily as you go about your day. Technologies like facial recognition can pick you out of a crowd, and many security cameras are now equipped with them. AI’s data-gathering abilities also mean that a timeline of your daily activities can be assembled by combining your data from various social networking sites.
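To make the timeline claim concrete, here is a minimal sketch of how scattered data points from different services could be merged into one chronological activity log. The sources, events, and timestamps are entirely made up for illustration.

```python
# Hypothetical sketch: merging data points from separate services
# into a single chronological activity timeline.
from datetime import datetime

# Each "source" exposes (timestamp, description) pairs (all invented)
photo_app = [("2024-05-01 08:30", "photo geotagged at a downtown cafe")]
social_feed = [("2024-05-01 12:15", "post: 'lunch with colleagues'")]
fitness_tracker = [("2024-05-01 07:00", "morning run, 5 km")]

def build_timeline(*sources):
    """Flatten all sources and sort the events by timestamp."""
    events = [event for source in sources for event in source]
    return sorted(events, key=lambda e: datetime.strptime(e[0], "%Y-%m-%d %H:%M"))

timeline = build_timeline(photo_app, social_feed, fitness_tracker)
for timestamp, description in timeline:
    print(timestamp, "-", description)
```

Nothing here requires sophisticated AI; the danger the article describes is simply that AI systems can perform this aggregation automatically and at scale.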
In fact, China is currently working on a Social Credit System that will be powered by Artificial Intelligence. This system will give all Chinese citizens a score based on how they behave. Scored behavior may include defaulting on loans, playing loud music on trains, smoking in non-smoking areas, playing too many video games, etc. A low score may mean a ban on travel, a lower social status, and so on. This is a prime example of how Artificial Intelligence could reach into every part of life and lead to a total loss of privacy.
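The actual scoring rules of such a system are not public, but the mechanics described above can be sketched as a simple penalty-based score. Every point value, behavior name, and threshold below is invented purely to illustrate the idea.

```python
# Purely illustrative sketch of a behavior-based scoring system.
# All penalties and thresholds here are invented, not real policy.
BASE_SCORE = 1000
PENALTIES = {
    "loan_default": 200,
    "loud_music_on_train": 50,
    "smoking_in_no_smoking_area": 75,
    "excessive_video_games": 30,
}
TRAVEL_BAN_THRESHOLD = 800  # hypothetical cutoff for travel restrictions

def social_score(recorded_behaviors):
    """Subtract a penalty for each recorded behavior from the base score."""
    return BASE_SCORE - sum(PENALTIES.get(b, 0) for b in recorded_behaviors)

score = social_score(["loan_default", "loud_music_on_train"])
print(score)                          # 750
print(score < TRAVEL_BAN_THRESHOLD)   # True -> travel would be restricted
```

The unsettling part is not the arithmetic, which is trivial, but that AI-driven surveillance can supply the `recorded_behaviors` input continuously and without consent.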
2. Autonomous Weapons
Autonomous weapons or “killer robots” are military robots that can search for and engage their targets independently according to pre-programmed instructions, and almost all technologically advanced countries are developing them. In fact, a senior executive at a Chinese defense firm even stated that future wars would not be fought by humans and that the use of lethal autonomous weapons was inevitable.
But these weapons carry many dangers. What if they go rogue and kill innocent people? Or, even more tragically, what if they cannot distinguish between their targets and innocent bystanders and kill the wrong people by mistake? Who would be responsible in that situation? An even bigger problem would arise if these “killer robots” were developed by governments that do not value human life; destroying such robots would then be very difficult. With these problems in mind, it was agreed in 2018 that autonomous weapons would still require a final command from a human before attacking. But this risk could grow dramatically as the technology advances.
3. Loss of Human Jobs
As Artificial Intelligence becomes more advanced, it will inevitably take over jobs once performed by humans. According to a report published by the McKinsey Global Institute, around 800 million jobs could be lost worldwide to automation by 2030. That raises the question: what about the humans left unemployed as a result? Some people believe that Artificial Intelligence will also create many new jobs, which may balance the scales somewhat. People could move from physical and repetitive work to jobs that require creative and strategic thinking, and less physically demanding jobs could leave them more time to spend with friends and family.
But this transition is more likely to benefit people who are already educated and wealthy, which could widen the gap between rich and poor even further. Robots in the workforce do not need to be paid like human employees, so the owners of AI-driven companies would keep all the profits and grow richer while the workers they replaced grow poorer. A new societal framework will therefore have to be created so that all human beings can still earn a living in this scenario.
4. Artificial Intelligence Terrorism
While Artificial Intelligence can contribute immensely to the world, it can unfortunately also help terrorists carry out attacks. Many terrorist organizations already use drones to strike targets in other countries; in fact, ISIS carried out its first successful drone attack in 2016, killing 2 people in Iraq. If thousands of drones programmed to kill only a select type of person were launched from a single truck or car, the result would be a terrifying technology-assisted attack.
Terrorist organizations could also use autonomous vehicles to deliver and detonate bombs, or build guns that track movement and fire without any human help. Such guns are already deployed at the border between North and South Korea. Another fear is that terrorists could gain access to the “killer robots” mentioned above. While governments may act ethically and try to prevent the loss of innocent life, terrorists would have no such scruples and would use these robots for terror attacks.
5. Artificial Intelligence Bias
It is an unfortunate fact that human beings are sometimes biased against other religions, genders, nationalities, and so on, and this bias may unconsciously enter the Artificial Intelligence systems they develop. Bias can also creep into these systems through flawed data generated by human beings. For example, Amazon recently found that its Machine Learning based recruiting algorithm was biased against women. The algorithm was trained on the resumes submitted over the previous 10 years and the candidates who were hired; since most of those candidates were men, the algorithm learned to favor men over women.
In a separate incident, Google Photos used facial recognition to tag two African-American people as ‘gorillas’, a clear case of racial bias causing an algorithm to mislabel humans. So the question is: how do we tackle this bias? How do we make sure that Artificial Intelligence is not racist or sexist like some humans in this world? The only real way is for AI researchers to deliberately identify and remove bias while developing and training AI systems and selecting their data.
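The mechanism behind the Amazon example can be shown with a tiny sketch: a naive model that estimates hiring likelihood from past outcomes simply reproduces whatever skew those outcomes contained. The data below is entirely hypothetical, and real recruiting models are far more complex, but the principle is the same.

```python
# Minimal sketch (hypothetical data) of how historical bias leaks into a
# model: a score learned from past hiring decisions reproduces their skew.
from collections import Counter

# (gender, was_hired) pairs standing in for years of historical resumes
history = [
    ("male", True), ("male", True), ("male", True), ("male", False),
    ("female", True), ("female", False), ("female", False), ("female", False),
]

def learn_hire_rates(data):
    """'Train' by estimating P(hired | gender) from historical outcomes."""
    hired, total = Counter(), Counter()
    for gender, was_hired in data:
        total[gender] += 1
        hired[gender] += int(was_hired)
    return {g: hired[g] / total[g] for g in total}

rates = learn_hire_rates(history)
print(rates)  # {'male': 0.75, 'female': 0.25} -- the skew is learned, not corrected
```

Nothing in the training step is explicitly sexist; the bias comes entirely from the data, which is why researchers must audit and rebalance training data rather than trust the algorithm to be neutral.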