With the publication of the final version of the Code of Practice for general-purpose artificial intelligence, the European Commission needed to clarify the obligations established by the AI Act. The recently published guidelines focus on four central themes.
First, they provide a definition of general-purpose artificial intelligence models. Second, they clarify how to identify the providers who place these models on the market. Third, they describe an exemption for providers who release general-purpose AI models under free or open-source licenses, provided those providers meet specific transparency requirements. Finally, the guidelines address how providers of general-purpose AI models can demonstrate compliance.
These guidelines aim to clarify the responsibilities of all stakeholders involved and complement the aforementioned Code of Practice. In addition, the Commission commits to supporting companies with practical guidance, which covers drafting comprehensive technical documentation for the model, adopting copyright compliance policies, and publishing a public summary of the content used for training.
Particular attention is given to models with systemic risk, emphasizing the need for ongoing assessments, incident reporting, and data protection, with a focus on cybersecurity. It is important to note that while these guidelines are strongly recommended, they are not legally binding.
Under the supervision of the AI Office, a collaborative approach will be adopted, although enforcement powers will not come into effect until August 2, 2026, thus providing additional time to ensure compliance with the regulatory framework.
When discussing general-purpose artificial intelligence (GPAI) models, providers of models that present systemic risk must continuously “assess and mitigate systemic risks.” They must also ensure an adequate level of cybersecurity for the model throughout its lifecycle.
The concept of the model’s “lifecycle” becomes crucial for understanding the responsibilities of providers. The obligations outlined in the AI Act include drafting technical documentation for the model and providing information to downstream providers so that they can understand the model’s capabilities and limitations. An adequate copyright compliance policy is also necessary.
Providers of GPAI models released under a free license can benefit from exemptions under certain conditions, but providers of models with systemic risk face additional requirements, including the obligation to notify the competent authorities when a model presents a systemic risk.
To determine whether a company must comply with these obligations, the Commission has established parameters to consider, such as the type of model and whether the entity is the one placing the model on the European market. The guidelines provide examples for identifying providers in various scenarios.
GPAI models can entail "systemic risks": significant impacts that such models may have on the European market in terms of public health, safety, and fundamental rights, owing to their widespread use. These risks are associated with models trained using very large amounts of computational resources.
Providers of such models must fulfill additional obligations, including conducting model assessments and ensuring cybersecurity. The risk classification is determined on the basis of specific criteria, and providers must notify the Commission when a model meets the conditions for classification as having high-impact capabilities.
Notification is a priority and must occur within two weeks of the provider learning that the model meets, or will meet, those conditions. Planning and early estimation of computational resources are therefore fundamental to the proper management of the model training process.
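Because the classification hinges on cumulative training compute, providers can estimate early in planning whether a model is likely to cross the AI Act's presumption threshold of 10^25 floating-point operations (Article 51(2)). The sketch below illustrates this with the common 6 × parameters × tokens approximation of training compute; that heuristic is an assumption for illustration, not part of the Act, and actual compute accounting may differ.

```python
# Illustrative estimate of cumulative training compute versus the AI Act's
# systemic-risk presumption threshold of 10^25 FLOP (Art. 51(2)).
# The 6 * N * D rule of thumb is an assumed heuristic, not a legal standard.

SYSTEMIC_RISK_THRESHOLD_FLOP = 1e25  # AI Act, Article 51(2)


def estimated_training_flop(parameters: float, training_tokens: float) -> float:
    """Rough training-compute estimate via the 6 * N * D heuristic."""
    return 6 * parameters * training_tokens


def presumed_high_impact(parameters: float, training_tokens: float) -> bool:
    """True if the estimate meets or exceeds the presumption threshold."""
    return estimated_training_flop(parameters, training_tokens) >= SYSTEMIC_RISK_THRESHOLD_FLOP


# Example: a hypothetical 70-billion-parameter model trained on 15 trillion tokens
flop = estimated_training_flop(70e9, 15e12)
print(f"{flop:.1e}")                      # 6.3e+24, below the 1e25 threshold
print(presumed_high_impact(70e9, 15e12))  # False
```

A provider whose compute plan lands near the threshold would have to monitor cumulative compute during training, since the two-week notification window starts as soon as it becomes known that the conditions are, or will be, met.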
In this context, general-purpose AI models play a crucial role in promoting innovation within the European Union. These models can be used…

