
ChatGPT Founder Sam Altman Says AI Could Surpass Humanity Within the Next 10 Years

Last Updated : 24 May, 2023

ChatGPT creator Sam Altman warns the world that the “superintelligence” of artificial intelligence systems could enable them to surpass humans within the next ten years.


On Monday, 22nd May 2023, OpenAI CEO Sam Altman, along with Greg Brockman and Ilya Sutskever, published a blog post on how AI systems have the potential to overtake humans in the next ten years.

The post highlighted the “superintelligence” risks of AI and the dire need to regulate these systems to ensure a safer future.

The ChatGPT developers believe that the risks posed by current AI systems need to be governed and mitigated as early as possible, while superintelligence will demand special, dedicated attention and treatment to be regulated properly.

The blog post read: “Given the picture as we see it now, it’s conceivable that within the next ten years, AI systems will exceed expert skill level in most domains, and carry out as much productive activity as one of today’s largest corporations. In terms of both potential upsides and downsides, superintelligence will be more powerful than other technologies humanity has had to contend with in the past. We can have a dramatically more prosperous future, but we have to manage risk to get there. Given the possibility of existential risk, we can’t just be reactive.”

They also cited nuclear energy and synthetic biology as examples of how destructive superintelligence could be and why it will require extraordinary treatment and coordination to reduce risks.

In the blog post, Sam Altman and his co-authors suggested ways to navigate the course of superintelligence and also reinforced that the development of AI should not be stopped altogether.

The executives at OpenAI proposed three main ideas: coordination between AI developers and governments to ensure integration and limit the growth of AI capability to a certain rate per year; the need for safety research; and an international authority, akin to the IAEA, that can set regulations, conduct tests and audits, and enforce safety standards above a specific capability threshold.

While answering the question of why the technology is being developed at all, given the risks and threats it poses to human existence, the OpenAI officials offered two distinct reasons:

“First, we believe it’s going to lead to a much better world than what we can imagine today. The world faces a lot of problems that we will need much more help to solve; this technology can improve our societies, and the creative ability of everyone to use these new tools is certain to astonish us. Second, we believe it would be unintuitively risky and difficult to stop the creation of superintelligence. Because the upsides are so tremendous, stopping it would require something like a global surveillance regime, and even that isn’t guaranteed to work. So we have to get it right.”

Before authoring this blog post, Sam Altman also appeared before the US Congress, where he addressed questions regarding the risks of advanced AI. He stated that AI chatbots are a significant area of concern and that strict rules and regulations will be required to govern AI development.

