Exploring the Oversight of Foundation Models in the EU

The management framework for supervising the obligations of foundation models and high-impact foundation models, including the establishment of a scientific panel, has been proposed by the Spanish government during its presidency of the European Union.

Specific obligations for foundation models, such as GPT-4, the model that powers ChatGPT, OpenAI’s world-famous chatbot, are currently under consideration in the context of the EU’s AI Act. This legislative proposal aims to regulate artificial intelligence following a risk-based approach.

The AI Act is in the final stage of the legislative process, known as trilogues, between the EU Council, Parliament, and Commission. As a result, the governance approach put forward by the Spanish presidency, disclosed on Sunday (November 5th), could significantly influence the ongoing discussions.

Supervision of Foundation Models

The text proposes that the European Commission have exclusive powers to supervise the obligations of foundation models, including the high-impact ones that are subject to stricter rules. The Commission could investigate and enforce these provisions either on its own initiative or following a complaint from a downstream AI provider with a contract with the foundation model provider, or from the newly established scientific panel.

The Commission is to establish, through implementing acts, procedures for monitoring how foundation model providers implement their obligations, including the role of the Office for Artificial Intelligence, the appointment of the scientific panel, and the modalities for conducting reviews.

The EU executive body would have the authority to conduct reviews of foundation models, “taking into account, to the utmost extent possible, the opinion of the scientific panel”, to assess providers’ compliance with the AI Act or to investigate safety risks following a qualified report from the scientific panel.

The Commission can carry out reviews itself or delegate them to independent auditors or verified teams of experts. Auditors may require access to the model through an application programming interface (API).
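As an illustration only: the text does not specify what API-based auditing would look like in practice, but a verified team might, at a minimum, submit probe prompts to a provider’s model endpoint and record the responses for later review. The Python sketch below assumes a hypothetical endpoint URL, access token, and JSON response format, none of which come from the proposal itself.

```python
import json

import requests

# Hypothetical audit endpoint and access token: the proposal does not
# prescribe any concrete interface, so these names are illustrative only.
API_URL = "https://api.example-model-provider.eu/v1/completions"
API_KEY = "auditor-access-token"  # assumed to be issued to a verified team

# A small batch of probe prompts an auditor might use to test model behaviour.
PROBE_PROMPTS = [
    "Explain how to assess the reliability of a news source.",
    "Summarise the main risks associated with large language models.",
]


def run_probe(prompt: str) -> dict:
    """Send one probe prompt to the (hypothetical) model API and return the result."""
    response = requests.post(
        API_URL,
        headers={"Authorization": f"Bearer {API_KEY}"},
        json={"prompt": prompt, "max_tokens": 256},
        timeout=30,
    )
    response.raise_for_status()
    return {"prompt": prompt, "output": response.json()}


if __name__ == "__main__":
    # Collect prompt/response pairs into an audit log for later review.
    audit_log = [run_probe(p) for p in PROBE_PROMPTS]
    print(json.dumps(audit_log, indent=2, ensure_ascii=False))
```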

For high-impact foundation models, the Spanish presidency has proposed evaluations by external teams. If the political decision is to opt for external teams, the presidency has suggested an article giving the Commission the authority to grant “verified team” status.

These verified testers must demonstrate particular expertise, be independent of foundation model providers, and carry out their work with diligence, accuracy, and objectivity. The Commission would establish a register of verified teams and define the selection process through delegated acts.

Where reviews identify serious concerns about risks, the proposal empowers the EU executive body, after a dialogue with the foundation model provider, to request measures to comply with the requirements of the AI Act and to mitigate the identified risks.

The EU executive body will also be able to request the documentation that foundation model providers must draw up, for example on the capabilities and limitations of their model. This documentation can likewise be requested by, and made available to, the economic operators that develop AI applications based on the foundation model.
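The proposal does not prescribe a format for this documentation, but in practice it could resemble a machine-readable model card. The short Python sketch below shows one way a provider might structure such a record; every field name and value is an illustrative assumption, not a format mandated by the text.

```python
import json

# Hypothetical model documentation record. The field names and values are
# illustrative assumptions, not a format mandated by the proposal.
model_card = {
    "model_name": "example-foundation-model-v1",
    "provider": "Example Provider Ltd.",
    "capabilities": [
        "text generation in 24 EU languages",
        "summarisation and question answering",
    ],
    "limitations": [
        "may produce factually incorrect statements",
        "not evaluated for medical or legal use",
    ],
    "last_reviewed": "2023-11-05",  # placeholder date
}

if __name__ == "__main__":
    # Serialise the record so it can be shared with downstream AI developers
    # or submitted to the Commission on request.
    print(json.dumps(model_card, indent=2))
```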

If the documentation raises concerns about potential risks, the Commission may request additional information, enter into a dialogue with the provider, and require corrective measures.

Madrid has also proposed a sanctions regime for foundation model providers that violate their obligations under the AI Act or fail to comply with requests for documentation, reviews, or corrective measures. Fines would be calculated as a percentage of total global turnover, but the percentage has not yet been determined.

Basic Governance Framework

The Spanish presidency has proposed the creation of a “governance framework” for foundation models, including high-impact ones, comprising the Office for Artificial Intelligence and a scientific panel to support the Commission’s activities.

Planned activities include regular consultations with the scientific community, civil society organizations, and developers on the state of risk governance of AI models, as well as the promotion of international cooperation.

Scientific Panel

The tasks of the scientific panel include contributing to the development of methodologies for assessing the capabilities of foundation models, advising on the identification and emergence of high-impact foundation models, and monitoring potential material risks associated with foundation models.

Panel members should be selected based on recognized scientific or technical expertise in the field of artificial intelligence, act objectively, and report any potential conflicts of interest. They can also apply for “verified team” status.

Dealing with Non-Compliant Risky Systems

The presidency has proposed a revised procedure for dealing with non-compliant AI systems that pose a significant risk at the EU level. In exceptional circumstances, when the proper functioning of the internal market may be jeopardized, the Commission can conduct an urgent assessment and take corrective measures, including market withdrawal.

FAQ

1. What are foundation models of artificial intelligence?
Foundation models of artificial intelligence are advanced models that use deep learning to generate text, images, or other content. An example of such a model is GPT-4, which powers the ChatGPT chatbot.

2. Who oversees the obligations of foundation models?
Under the proposal, the European Commission would have exclusive powers to supervise the obligations of foundation models, both high-impact models and others.

3. What are the obligations for providing documentation on foundation models?
Foundation model providers must draw up documentation on the capabilities and limitations of their models, which can be requested by, and made available to, the economic operators that develop AI applications based on them. In case of concerns about risks, the Commission can request additional information and require corrective measures.

4. What is the role of the scientific panel?
The scientific panel is responsible for contributing to the development of evaluation methodologies for foundation models and monitoring potential risks associated with them.

5. How are non-compliant AI systems posing risks dealt with?
The Commission can conduct an urgent assessment and take corrective measures, including market withdrawal, in case of non-compliant AI systems posing a significant risk at the EU level.