OpenAI Is Exploring Wikipedia-Like Collective Decisions on AI


OpenAI, the creator of ChatGPT, is exploring ways to gather broad public input on decisions affecting its artificial intelligence systems, its president, Greg Brockman, said on Monday.

Speaking at AI Forward, a conference in San Francisco presented by Goldman Sachs Group Inc. and SV Angel, Brockman outlined in broad strokes how the maker of the enormously popular chatbot is approaching the global regulation of AI.

One of the ideas he hinted at resembles the Wikipedia model, which, as he described it, relies on people with differing points of view coming together to agree on the encyclopedia's articles.

“We’re not just sitting in Silicon Valley thinking we can write these rules for everyone,” he said of AI policy. “We’re starting to think about democratic decision-making.”

Brockman also raised the idea that international cooperation between governments is necessary to ensure that AI is developed safely. OpenAI expanded on this idea in a blog post published on Monday.

Since ChatGPT’s November 30 launch, the public has been enthralled by generative AI technology that can produce uncannily authoritative prose from text prompts, making the program the fastest-growing app ever. Concerns about AI’s capacity to produce deepfake images and other false information have also come to light.

Will AI Progress Halt?

Advances in artificial intelligence are happening so quickly that it is difficult to keep up with them, and practically every sphere of human endeavour is being affected.

Nobody will stop working on AI just because certain experts are concerned; companies know that even if they paused, their rivals would press on.

Last week, OpenAI CEO Sam Altman presented a range of suggestions to American lawmakers for establishing artificial intelligence guidelines, including requiring licenses to create the most complex AI models and setting up a corresponding governance structure. This week, he is meeting with European policymakers.

Elon Musk’s Stance on Artificial Intelligence 

“AI is more dangerous than, say, mismanaged aircraft design or production maintenance or bad car production, in the sense that it is, it has the potential — however small one may regard that probability, but it is non-trivial — it has the potential of civilization destruction,” Musk said in his interview with Tucker Carlson.


Last Updated : 31 May, 2023