The EU AI Act aims to promote reliability and safety in the use of artificial intelligence, but it has sparked widespread debate in the technology sector. Companies and industry associations have voiced concerns about the impact such a strict regulatory framework could have on innovation, particularly for start-ups and small and medium-sized enterprises (SMEs).
One of the most significant criticisms concerns the regulation's complexity and prescriptiveness. SMEs, which often operate with limited resources, may struggle to meet the compliance requirements the new rules impose. The added bureaucracy and the cost of conformity assessments risk hindering these companies' development and their capacity to innovate. Many fear that, as a result, Europe could fall behind global competitors such as the United States and China, which may benefit from more permissive regulatory environments.
Another point of concern relates to the intersection of artificial intelligence and cybersecurity. Industry experts warn that transparency requirements, such as the obligation to disclose details about the architecture of AI models and the data used to train them, could create vulnerabilities. If such information were accessible, malicious actors could exploit it to discover and attack weaknesses in those systems. This dilemma highlights the delicate balance between transparency, which is crucial for building trust among customers and citizens, and confidentiality, which is essential for system security.
Technology companies are therefore calling for a more flexible, risk-based approach. They propose a regulatory model that recognizes the dynamic nature of technological innovation and avoids imposing excessive burdens, especially on open-source software. Such an approach is vital to support the continued evolution of technologies that are becoming increasingly central to everyday life.
Furthermore, ambiguity in the interpretation of key terms in the regulation, such as "high-risk system," creates legal uncertainty that could slow the adoption of emerging technologies, further undermining Europe's competitiveness. Start-ups and small companies may face a confusing and unclear environment that discourages them from investing in new ideas and solutions.
It should also be considered that the field of artificial intelligence is continually evolving, with innovations emerging at an increasingly rapid pace. A static regulation could prove ineffective in keeping up with this fast transformation. Therefore, it is crucial to find a balance between the need for protection and the promotion of an environment where innovation can thrive.
In summary, while the intention to regulate artificial intelligence to ensure its reliability is undoubtedly important, it is crucial for lawmakers to consider the opinions and concerns of the various stakeholders in the sector. A balanced approach could lead to a regulatory framework that not only protects users but also facilitates the growth of innovative businesses and competition.

