Following the publication of the final version of the code of practice for general-purpose artificial intelligence, the European Commission has issued essential clarifications on the obligations set out in the AI Act. These guidelines address four key aspects of the topic:
1. The definition of general-purpose artificial intelligence models;
2. The identification of the providers that place such models on the market;
3. Exemptions for providers that release general-purpose artificial intelligence models under free or open-source licenses, provided they meet transparency requirements;
4. The fulfilment of obligations by providers of general-purpose artificial intelligence models.
The guidelines aim to clarify these points and offer practical guidance to all parties involved in the AI value chain, thereby complementing what has already been set out in the aforementioned code of practice.
Against this backdrop, the Commission aims to support companies with practical recommendations: technical documentation for artificial intelligence models must be drawn up, copyright-compliance policies implemented, and a public summary of the content used for training made available. Furthermore, given the importance of cybersecurity, providers should carry out continuous assessments, report incidents effectively, and, of course, ensure adequate data protection.
It is important to stress that these guidelines, while strongly recommended, are not legally binding. Oversight of the rules will rest with the AI Office, which intends to take a collaborative and proportionate approach; its enforcement powers, however, will apply only from August 2, 2026, giving companies an additional year to adapt to the new regime.
Specific requirements apply to providers of general-purpose artificial intelligence (GPAI) models with systemic risk. They must continuously assess and mitigate systemic risks, taking appropriate measures throughout the model's lifecycle and ensuring an adequate level of cybersecurity protection, including for the infrastructure associated with the model.
The notion of a model's "lifecycle" is crucial in determining the obligations imposed on providers of general-purpose artificial intelligence models, particularly those with systemic risk. These obligations include drawing up and keeping up to date detailed technical documentation on the model, communicating the model's capabilities and limitations to downstream providers, and complying with copyright law, using state-of-the-art technologies to ensure that rights are respected.
Providers of models released under free or open-source licenses may be exempt from some of these obligations under specific conditions. Providers of general-purpose artificial intelligence models with systemic risk, however, including open-source ones, remain subject to additional obligations: for example, they must notify the Commission when they develop a model with systemic risk and ensure the security of the model itself.
Companies active in artificial intelligence must determine whether they fall within the scope of these new rules. To that end, the Commission has laid down specific criteria, starting with the classification of models and the provider's role in the market.
A crucial aspect in assessing general-purpose artificial intelligence models is the concept of "systemic risk." This refers to a potential significant negative impact on the Union market, owing either to the reach of such models or to reasonably foreseeable negative effects on public health, safety, or fundamental rights.
GPAI models with systemic risk are therefore subject to more stringent obligations, including conducting model evaluations, reporting serious incidents, and adopting cybersecurity measures. The Commission has further established that a general-purpose artificial intelligence model is classified as having systemic risk when it meets specific high-impact capability criteria; in particular, high-impact capabilities are presumed when the cumulative compute used for training exceeds 10^25 floating-point operations.
Notifying the Commission is an essential step: providers must promptly inform it when a model meets…

