### The Code of Practice of the EU AI Act: A Step Towards Responsible Governance of Artificial Intelligence
With the increasing prevalence of General Purpose Artificial Intelligence (GPAI) models, the regulation of such technologies has become a priority for European institutions. In this context, the EU AI Act has introduced a fundamental mechanism to ensure effective and responsible governance of these powerful technologies: the Code of Practice.
Developed under the facilitation of the AI Office with input from industry and other stakeholders, the Code of Practice serves as a "soft law" instrument. This means that, although it is not legally binding, it complements the Act's legal obligations with concrete, practical recommendations for companies. Participation is voluntary, but it is backed by incentives designed to make adherence a recognized mark of compliance in the sector.
The Code of Practice outlines a series of best practices in key areas that are crucial for the operation of GPAI models. Among these, transparency is fundamental: companies are encouraged to accurately document the training data used to develop their systems. This not only fosters greater trust from users but also helps ensure the ethical use of information.
Another central element is safety and security. Companies are called upon to identify and mitigate the systemic risks associated with their technologies, including the responsibility to prevent the generation of harmful content and the use of AI systems for nefarious purposes, such as election manipulation. Risk governance is therefore an indispensable discipline for any GPAI provider, which must reckon with the potentially severe consequences of poor risk management.
Finally, the Code of Practice also addresses copyright, urging companies to implement policies that respect the opt-out requests of rights holders. In an era where AI-generated content is becoming increasingly prevalent, respecting copyright is of fundamental importance and cannot be overlooked.
Although participation in the Code of Practice is optional, the consequences of non-compliance can be significant. For GPAI models classified as posing systemic risk, adherence to the code is set to become a key criterion in the risk assessments conducted by competent authorities. Companies that do not follow its guidelines may therefore find it harder to demonstrate compliance with the AI Act's obligations when operating in the European market.
Additionally, the AI Act provides for severe penalties. A GPAI provider that fails to meet the obligations set forth by the regulation—many of which are operationalized in the Code of Practice—can face fines of up to 3% of its annual worldwide turnover or €15 million, whichever is higher. At the top tier, the Act's penalties for prohibited AI practices reach 7% of turnover, exceeding even the GDPR's 4% ceiling. These figures create a strong incentive for companies, including those outside Europe, to comply with the guidelines.
The adoption of the Code of Practice thus represents a significant step forward in building a responsible artificial intelligence environment. Acting as a bridge between legislation and industrial practice, this code not only prepares companies for future legal obligations but also establishes a benchmark for accountability and security practices in the sector.
In the context of a rapidly evolving technological landscape, a proactive approach to the governance of AI models is essential. The best practices outlined in the Code of Practice are not merely recommendations; they represent a commitment to ethical and safe innovation. Industry players are called upon to seriously reflect on these directives, as the crafting of new rules governing the interaction between technology and society is in their hands.
In conclusion, the Code of Practice of the EU AI Act offers a unique opportunity: by engaging with it now, companies can prepare for binding obligations, demonstrate accountability, and help shape the standards that will govern AI in Europe.

