
The European Commission Adopts the Artificial Intelligence Act


The European Commission is set to regulate the use of artificial intelligence (AI) in the European Union through the upcoming Artificial Intelligence Act (AI Act). Once adopted, the AI Act will become the world’s first comprehensive legislation on AI. Experts from FiSer Consulting present the key pillars of the new regulation and the impact it will have on the financial services sector.

As part of its digital strategy, the EU aims to regulate AI to ensure better conditions for the development and use of this technology. The AI Act was proposed in April 2021; it provides that AI systems used in various applications will be analyzed and classified according to the risks they pose to users, with different levels of risk subject to different degrees of regulation.

In June 2023, members of the European Parliament agreed on their negotiating position on the AI Act. Officials from EU member states have since begun discussions on the final shape of the law.

Goals of the EU AI Act and its Key Provisions

Supported by two goals – encouraging the adoption of AI and mitigating technology-induced risks – the EU AI Act sets out a vision for trustworthy AI. The purpose of this law includes:
– Protecting European citizens from the misuse of AI
– Ensuring transparency and trust
– Promoting innovation while ensuring safety and privacy

The AI Act takes a pragmatic risk-based approach, categorizing AI applications into four levels:
1) Unacceptable risks: AI applications deemed harmful to fundamental rights will be completely banned.
2) High risks: Thorough scrutiny and legal prerequisites are required before implementing these applications.
3) Limited risks: Organizations will be obliged to meet transparency requirements.
4) Low and minimal risks: These applications mostly remain unregulated, allowing for fast-paced innovation.
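Purely as an illustration (the Act defines these tiers in legal, not programmatic, terms), the four-tier scheme can be sketched as a simple lookup. The example use cases and their tier assignments below are hypothetical assumptions for the sketch, not an authoritative reading of the law:

```python
# Illustrative sketch of the AI Act's four-tier risk scheme.
# Tier assignments below are hypothetical examples, not legal guidance.
RISK_TIERS = {
    "unacceptable": "banned outright",
    "high": "conformity assessment and legal prerequisites required",
    "limited": "transparency obligations apply",
    "minimal": "largely unregulated",
}

# Hypothetical mapping of example use cases to tiers.
EXAMPLE_USE_CASES = {
    "social scoring of citizens": "unacceptable",
    "credit scoring": "high",
    "customer-service chatbot": "limited",
    "spam filtering": "minimal",
}

def obligation_for(use_case: str) -> str:
    """Return the regulatory consequence for a known example use case."""
    tier = EXAMPLE_USE_CASES.get(use_case)
    if tier is None:
        return "unknown use case: a case-by-case risk assessment is needed"
    return f"{tier} risk: {RISK_TIERS[tier]}"

print(obligation_for("credit scoring"))
# -> high risk: conformity assessment and legal prerequisites required
```

In practice the hard part is the classification step itself, as the article notes below; the lookup only captures the consequences once a tier has been determined.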

The AI Act regulates AI practices and systems by setting strict requirements for transparency, risk management, and governance.

Scope and Implementation Deadlines

The AI Act applies extraterritorially: it covers any AI system that is deployed, marketed, or operated in the European Union, regardless of where it was developed.

The journey of the EU AI Act began with its proposal in April 2021. By the end of 2022, the European Council had adopted a common legislative approach, and the European Parliament set out its position on the law in mid-2023. The path to implementation involves harmonizing national laws and translating the Act's core requirements into concrete technical standards.

The implementation phase is expected to start in early 2025.

Enforcement and Challenges

While visionary, the EU AI Act also poses challenges. Enforcement will require precise monitoring and continuous revision as the technology evolves. Potential shortcomings include inflexibility in the face of fast-moving AI development and the risk of overly broad exemptions.

In theory, classifying AI-based systems is simple; in practice, determining the specific risk level of a given system is a complex process. The success of the AI Act will also depend on global regulatory harmonization, which would foster a global AI ecosystem based on shared ethical principles.

Specific Impact on the Financial Services Sector

Much like the General Data Protection Regulation (GDPR) before it, the AI Act aspires to become a universal reference framework for the ethical use of AI, regardless of geographical boundaries. For the financial services sector, which relies on AI for fraud detection, algorithmic trading, risk analysis, and customer-experience enhancement, the AI Act is a double-edged sword.

Financial institutions must adapt their AI systems to comply with the AI Act’s guidelines, especially when it comes to high-risk systems such as credit scoring. The AI Act emphasizes the need for transparent and interpretable AI models and mandates the use of unbiased high-quality data. Non-compliance with the law may result in financial penalties.

However, for institutions, compliance with the AI Act also means gaining consumer trust, ensuring ethical AI practices, and potentially achieving a competitive edge in the market. Achieving this requires organizations to commit resources (money and time) and to invest in relevant regulatory expertise.

A “wait and see” approach is not recommended. Financial institutions need to proactively assess their AI systems to determine which ones fall into high-risk scenarios under the AI Act, and to conduct a comprehensive gap analysis against the core requirements stated in the law.
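One hedged sketch of such an inventory-and-gap exercise, with entirely hypothetical system names and a deliberately reduced set of requirement flags (the Act's actual obligations for high-risk systems are far more extensive):

```python
from dataclasses import dataclass

@dataclass
class AISystem:
    # Hypothetical inventory record for one AI system.
    name: str
    high_risk: bool           # e.g. credit scoring would be high-risk
    transparent_model: bool   # interpretable, documented model?
    bias_tested_data: bool    # training data checked for bias and quality?

def gap_analysis(systems):
    """List high-risk systems with the core requirements they still miss."""
    gaps = {}
    for s in systems:
        if not s.high_risk:
            continue  # lower-risk systems face lighter obligations
        missing = []
        if not s.transparent_model:
            missing.append("model transparency")
        if not s.bias_tested_data:
            missing.append("unbiased, high-quality data")
        if missing:
            gaps[s.name] = missing
    return gaps

# Hypothetical portfolio of AI systems at a financial institution.
portfolio = [
    AISystem("credit-scoring-v2", high_risk=True,
             transparent_model=False, bias_tested_data=True),
    AISystem("fraud-detection", high_risk=True,
             transparent_model=True, bias_tested_data=True),
    AISystem("marketing-chatbot", high_risk=False,
             transparent_model=False, bias_tested_data=False),
]

print(gap_analysis(portfolio))
# -> {'credit-scoring-v2': ['model transparency']}
```

The value of even a toy model like this is that it forces an institution to enumerate its AI systems and decide, per system, which tier it falls into before the compliance deadline arrives.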

Conclusion

The upcoming AI Act is a testament to Europe's proactive approach to AI governance, striving for a balance between innovation and ethics. For the financial services sector, this law is not just a compliance issue; it is an opportunity to redefine the ethos of financial services in the age of AI.