The adoption of the EU AI Act, officially Regulation (EU) 2024/1689, represents a significant step forward in the regulation of artificial intelligence (AI), creating the first comprehensive regulatory framework of its kind at a global level. Unlike sector-specific approaches or non-binding guidelines, this regulation introduces a system of legally binding rules built on a layered, risk-based approach.
To understand the importance of this act, it helps to look at how it categorizes AI practices by risk level. At the top of this classification are practices deemed entirely unacceptable because they contradict the fundamental values of the European Union. Examples include government social scoring and subliminal manipulation systems, whose use is expressly prohibited.
The core of the regulation revolves around AI systems identified as “high-risk.” This is a broad and significant category that includes critical applications in essential sectors such as medical devices, recruitment, credit analysis, law enforcement, and the management of essential infrastructures. Given their potential to profoundly influence people’s lives and the functioning of society, these systems must comply with a series of stringent obligations before they can enter the market.
Among the requirements imposed by the AI Act are a thorough risk assessment and the exclusive use of high-quality data. Technical documentation is essential, as is transparency towards end-users and the assurance of effective human oversight. These measures aim to minimize risks and ensure that the use of AI respects the rights and dignity of the individuals involved.
Further down the scale, the regulation imposes transparency obligations on limited-risk systems, such as chatbots, while minimal-risk systems face no additional obligations. This differentiated approach allows for innovation without compromising citizens’ safety.
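The tiered structure described above can be sketched as a simple lookup table. This is purely an illustration: the tier names, example systems, and obligation labels are paraphrased from this article, not taken from the regulation’s own text.

```python
# Hypothetical sketch of the AI Act's four risk tiers as described in this
# article. Labels are paraphrases for illustration, not legal terminology.
RISK_TIERS = {
    "unacceptable": {
        "examples": ["government social scoring", "subliminal manipulation"],
        "status": "prohibited",
    },
    "high": {
        "examples": ["medical devices", "recruitment", "credit analysis",
                     "law enforcement", "essential infrastructure"],
        "status": "allowed only after meeting strict obligations",
        "obligations": ["risk assessment", "high-quality data",
                        "technical documentation", "transparency",
                        "human oversight"],
    },
    "limited": {
        "examples": ["chatbots"],
        "status": "transparency obligations",
    },
    "minimal": {
        "examples": [],  # the article names no examples for this tier
        "status": "no additional obligations",
    },
}


def obligations_for(tier: str) -> list[str]:
    """Return the obligations listed for a tier (empty if none apply)."""
    return RISK_TIERS.get(tier, {}).get("obligations", [])
```

A quick lookup such as `obligations_for("high")` returns the five pre-market requirements the article lists, while `obligations_for("minimal")` returns an empty list, mirroring the regulation’s hands-off stance toward that tier.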
An innovative element of the AI Act concerns general-purpose artificial intelligence (GPAI) models, including those that power generative AI applications. Because such models can affect many aspects of daily life, the act subjects them to dedicated rules, ensuring that their effects remain aligned with European legal standards and ethical values.
The intent of the European legislator, through this legislative act, is clear: the European Union seeks to establish a global standard that can reflect its principles of ethics and responsibility internationally, a phenomenon often referred to as the “Brussels effect.” With this approach, the aim is to promote the development of artificial intelligence that is not only innovative but also human-centered, balancing the drive for innovation with the necessity of protecting individuals’ fundamental rights.
This regulation is set to spark significant debate on how artificial intelligence can be managed responsibly. It is essential for companies, research entities, and citizens to be actively involved in the discussion regarding the implementation of AI rules, so that the resulting ecosystem serves both innovation and the public interest.
In conclusion, the AI Act represents an important milestone in a rapidly evolving context like that of artificial intelligence. We encourage readers to stay informed and contribute to this critical debate by following our social media profiles, where we continue to explore the latest news and discussions related to these crucial topics. Your participation is vital in building an AI society that is safe, fair, and respectful of everyone’s rights.