Artificial Intelligence

The European Union Approves Groundbreaking AI Act

The European Union has achieved a historic milestone by becoming the first legislative body in the world to adopt legislation regulating the use of artificial intelligence (AI) technology and establishing frameworks for its commercial and public application.

European Commissioner Thierry Breton stated, “Europe has positioned itself as a pioneer, recognizing the importance of its role as a global standard setter.” The AI Act was adopted by the European Commission, the European Council, and the European Parliament on Friday, December 8, after a marathon 36-hour negotiation.

President of the European Commission, Ursula von der Leyen, referred to the vote as a “historic moment,” stating that the AI Act will provide “legal certainty and pave the way for innovation in reliable artificial intelligence” and will “contribute to the development of global frameworks for trustworthy artificial intelligence.”

The AI Act aims to serve as a reference point for countries worldwide that seek to balance the promise and risks of AI technology. The legislation still needs final approval from the European Parliament and the Council before it becomes law, in a vote planned before the EU parliamentary elections in early June 2024. If the vote takes place on schedule, some parts of the legislation could come into effect as early as next year, but most of it will take effect in 2025 and 2026.

However, critics note that many of the technologies the AI Act attempts to regulate may still change significantly. The initial version of the AI Act was published in 2021, but the emergence of ChatGPT and other general-purpose AI models forced a thorough revision of the legislation to account for these new technological breakthroughs.

During the recent strikes by writers and actors, discussions on artificial intelligence focused on issues such as protecting actors' rights to their own likenesses and guaranteeing writers and other creatives that AI systems would not be used to replace them. The EU legislation is much broader in scope, covering the use of AI by companies and governments, including key sectors such as law enforcement and energy.

Key provisions include restrictions on the use of facial recognition software by police and governments, with certain exceptions for security and national safety, such as preventing terrorist attacks or identifying victims or suspects of a predefined list of serious crimes. The AI Act also introduces new transparency requirements for developers of the largest general-purpose AI models, such as those powering ChatGPT. Here the EU applied a standard similar to the one US President Joe Biden used in his executive order of October 30: the most powerful models, defined as those whose underlying foundation models were trained using more than 10²⁵ FLOPs (floating-point operations, a measure of the computation consumed in training), must comply with the new transparency requirements. Companies that violate the regulations may face fines of up to 7% of their total global revenue.

The impact of the new legislative document will largely depend on its implementation. The European Union has been a leader in digital privacy legislation, introducing the General Data Protection Regulation (GDPR) in 2016. However, the legislation has been criticized for its inconsistent implementation across the 27 EU member countries.

It is expected that companies affected by the AI Act will challenge some provisions in court, which could further delay implementation across the continent.

“There is much for companies to consider,” noted Barry Scannell, an Irish expert in the legal regulation of AI, in a post published after Friday's vote, pointing out that the “increased transparency requirements may call intellectual property protection into question” and require “significant strategic changes” from companies using AI systems.

In a statement released on Saturday, December 9, the Computer and Communications Industry Association in Europe (CCIA), a corporate lobbying group representing leading internet services, software, and telecommunications companies, including Amazon, Google, and Apple, called the EU proposal “poorly developed,” warning that it could over-regulate many aspects of AI and impede technological innovation on the continent.

Civil liberties organizations, on the other hand, argue that the new legislation does not go far enough, particularly regarding the use of AI-powered facial recognition by governments and police.

“The three European institutions – Commission, Council, and Parliament – have effectively greenlit dystopian digital surveillance in 27 EU member states, setting a devastating precedent worldwide in terms of AI regulation,” said Mher Hakobyan, AI technology advocacy advisor at Amnesty International. “The failure to introduce a complete ban on facial recognition is a massive missed opportunity to stop and prevent colossal harm to human rights, civil society, and the rule of law, which are already under threat throughout the EU.”
