The European Commission recently published the final version of its Code of Practice for general-purpose artificial intelligence (AI), alongside guidelines that significantly clarify the requirements set out in the AI Act. The guidelines address four key aspects: the definition of general-purpose AI models, the responsibilities of providers placing such models on the market, exemptions for models released under free or open-source licenses that meet certain transparency criteria, and how providers can ensure compliance with the stipulated obligations.
These guidelines aim to provide clarity to stakeholders across the AI value chain, complementing the Code of Practice. The Commission intends to support companies with practical guidance on preparing complete technical documentation, adopting copyright-compliance policies, and publishing public summaries of the content used for training.
A crucial aspect concerns AI models deemed to pose systemic risk. For these models, continuous risk assessment, serious-incident reporting, and adequate cybersecurity protections are essential. It is important to note that although the guidelines are not legally binding, following them is strongly recommended.
The Commission's AI Office will oversee these practices, favoring a collaborative approach, although its enforcement powers will not take effect until August 2, 2026, giving companies the necessary time to adapt.
Providers of general-purpose AI models with systemic risk are required to continuously assess and mitigate associated risks by taking preventive measures throughout the model’s lifecycle. This also includes ensuring an adequate level of cybersecurity protection.
Legal responsibilities for providers of AI models include drafting detailed technical documentation on the development process, supplying relevant information to downstream providers, and adopting copyright-compliance policies. Providers must also publish a summary of the content used for training and, for models developed outside the European Union, appoint an authorized representative before placing the model on the market.
Providers of general-purpose models released under free or open-source licenses may benefit from certain exemptions, but must still meet additional obligations if the model in question is deemed to pose systemic risk. Starting from August 2, 2025, the Commission, through the AI Office, will ensure compliance with these requirements.
For companies developing artificial intelligence, it is essential to determine whether they fall under the obligations for general-purpose models. To this end, the Commission has set out specific criteria for deciding whether a model qualifies as general-purpose, whether the provider is placing it on the market, and whether any exemptions apply.
A key element to consider is the classification of "systemic risks" associated with AI models. Systemic risks are those capable of having a significant impact on the European market, extending beyond individual sectors to areas such as public health and fundamental rights. These risks arise in models trained with computational resources above an established threshold: under the AI Act, a model is presumed to have high-impact capabilities when its cumulative training compute exceeds 10^25 floating-point operations.
To ensure safety, providers of general-purpose AI models must conduct ongoing assessments and report any serious incidents. The Commission has defined classification criteria, specifying that an AI model is considered to pose systemic risk if it has "high-impact capabilities" or if the Commission so designates it following a specific assessment.
Notification to the Commission becomes crucial when a model meets the high-impact criteria. Providers must notify the Commission promptly, and in any event within two weeks, after verifying that the criteria are met or when they have reason to believe they will be met. This notification step is critical and may therefore be required even before training is complete.
Even though the requirements and…

