
AI or Authentic? Europe’s Bold Proposal for Labels Will Blow Your Mind!

Last Updated : 17 Jun, 2023

The European Union is pressing platforms such as Google and Meta to step up their efforts against fake information by labeling text, pictures, and other content produced by artificial intelligence.

The development of artificial intelligence (AI) has generated both enthusiasm and concern. While AI has the power to transform a range of industries, it also makes it harder to tell artificially generated content from the real thing. Recognizing this, Europe has proposed a labeling system to distinguish AI-generated content from human-produced content as part of the battle against misinformation.

Vera Jourova, vice president of the EU Commission, stated that the potential of a new generation of AI chatbots to produce advanced content and visuals in a matter of seconds presents “fresh challenges for the fight against disinformation.”

She said that she contacted the 27-nation bloc’s voluntary disinformation pact signatories, including Google, Meta, Microsoft, TikTok, and other tech firms, asking them to cooperate in addressing the AI issue.

Online platforms that have integrated generative AI into their services, such as Microsoft’s Bing search engine and Google’s Bard chatbot, should build safeguards to prevent “malicious actors” from generating disinformation, Jourova said at a briefing in Brussels.

EU laws, according to Jourova, are intended to defend free speech, but when it comes to AI, “I don’t see any right for the machines to have the freedom of speech.”

The rapid development of generative AI technology, which can create text, graphics, and video that closely resemble human-made work, has astounded some people and alarmed others because of its potential to reshape many aspects of daily life. With the AI Act, Europe has taken a leading role in the worldwide effort to regulate artificial intelligence; however, the law still needs final approval and won't take effect for several years.

Officials from Europe and the United States announced last week that they are drafting a voluntary code of conduct for AI that could be completed in a matter of weeks as a means of bridging the gap before the EU’s AI regulations take effect.

To address this challenge, the European Commission, the executive arm of the European Union (EU), is considering a labeling system that would make it immediately clear whether a piece of content was produced by AI. The proposed approach aims to promote transparency in the digital environment and give people more control over the information they consume.

Under the labeling system, platforms and online services would have to indicate when AI is used to produce or distribute content. This could cover social media posts, news articles, videos, and other online material. Providing this information would let users judge the legitimacy and reliability of the content they encounter.

The EU disinformation code, which requires businesses to measure their efforts to counter false information and submit regular progress reports, has already been signed by most of the world's largest digital companies.

Twitter pulled out of the code last month, in what appeared to be Elon Musk's latest move to loosen restrictions at the company following his acquisition of the social media platform last year.

“Twitter has chosen the hard way. They chose confrontation,” Jourova said. “Make no mistake, by leaving the code, Twitter has attracted a lot of attention, and its actions and compliance with EU law will be scrutinized vigorously and urgently.”

Later this month, when European Commissioner Thierry Breton and a team visit Twitter’s San Francisco headquarters, they will conduct a “stress test” to evaluate the platform’s capacity to adhere to the Digital Services Act.

By giving people open access to information about the origin of the content they consume, labeling systems can help them make better decisions and become more critical consumers of online information. The initiative underscores the growing recognition of AI's impact on the information landscape and the need for proactive measures to uphold trust and authenticity, even as challenges and concerns remain.
