Europe was on the verge of leading the world in regulating artificial intelligence. But can the leaders reach an agreement?

Governments around the world are scrambling to regulate emerging AI technology, a rush that also risks derailing the European Union’s effort to approve the world’s first comprehensive AI regulations.

European Efforts to Regulate AI

For many years, Europe has been working to establish rules to limit AI, but these efforts have been hindered by the emergence of generative AI systems like OpenAI’s ChatGPT, which have amazed the world with their ability to create human-like work but have raised concerns about the risks they pose.

Ongoing Issues

In addition to regulating generative AI, EU negotiators need to resolve a long list of other contentious issues, such as a complete ban on the use of facial recognition systems by the police, which has raised privacy concerns.

Possibility of Reaching an Agreement

The chances of reaching a political agreement among Members of the European Parliament, member state representatives, and the European Commission are “relatively high due to all negotiators’ desire for political victory” in major legislative efforts, according to Kris Shrishak, a senior fellow specializing in AI governance at the Irish Council for Civil Liberties.

A consensus has already been reached on 85% of the technical text in the draft, according to Carme Artigas, Spain’s AI and digitalization minister, whose country holds the EU’s rotating presidency.

If no agreement is reached in the final round of talks, which begins Wednesday afternoon and is expected to continue late into the night, negotiators will have to resume next year. This raises the likelihood of delaying the legislation until after the EU elections in June or taking a different direction with new leaders assuming office.

Main Outstanding Points

One of the main outstanding points is foundation models, the advanced systems that underpin general-purpose AI services like OpenAI’s ChatGPT and Google’s Bard chatbot.

These systems are also known as large language models and are trained on massive datasets of written works and images gathered from the internet. They enable generative AI systems to create something new, unlike traditional AI, which processes data and completes tasks using predefined rules.

The AI Act was originally designed as product-safety legislation, similar to existing European rules for cosmetics, cars, and toys. Uses of AI will be classified according to four levels of risk, from the minimal or no risk posed by video games and spam filters to the unacceptable risk posed by social scoring systems that judge people based on their behavior.

The new wave of general-purpose AI systems released since the legislation was first drafted in 2021 has prompted European lawmakers to strengthen the proposal to cover foundation models.

Researchers have warned that powerful foundation models, built by a handful of large tech companies, could be used to supercharge online disinformation and manipulation, enable cyberattacks, or aid the creation of biological weapons. They serve as the basic structures for software developers who build AI-powered services, so “if these models are corrupted, anything built on top of them will also be corrupted – and users will not be able to fix it,” according to the non-profit group Avaaz.

France, Germany, and Italy have resisted the updates to the legislation, calling instead for self-regulation, a shift in position seen as an attempt to help European generative AI start-ups such as the French company Mistral AI and the German firm Aleph Alpha compete with large American tech companies like OpenAI.

Brando Benifei, the Italian Member of the European Parliament involved in negotiation efforts, is optimistic about resolving disputes with member states.

There is “some movement” regarding the foundation models, although there are “more issues in finding an agreement” on facial recognition systems, he said.

Technology writer Matt O’Brien of the Associated Press contributed to this report from Providence, Rhode Island.

Source: https://apnews.com/article/ai-act-talks-artificial-intelligence-regulation-e2bb57eef7ba6d2b0c85ef757fcd3bb0

