## The Specificity of ISO 42001 in Cybersecurity
In the current landscape of artificial intelligence (AI) system management, ISO/IEC 42001:2023 stands out as the world’s first management system standard specifically focused on AI; the availability of an official Italian translation attests to its importance. The standard introduces several distinctive elements compared with other standards, particularly ISO/IEC 27001:2022, which is dedicated to information security.
One crucial aspect of ISO 42001 is its approach to the AI lifecycle and the management of the specific risks associated with this technology. Unlike ISO 27001, which primarily focuses on protecting the confidentiality, integrity, and availability of information, ISO 42001 addresses the broader impacts that AI can have on individuals and society. The object of protection is no longer just the data, but also the individuals and groups within the community.
### An Innovative Document
One of the most significant innovations of ISO 42001 is the introduction of an impact assessment process for AI systems, as required by Clause 6.1.4. This assessment is distinctly different from a traditional risk assessment: while the latter focuses on the likelihood and consequences of identifiable future events, the impact assessment aims to surface the long-term ethical and social consequences that may arise from the use of AI.
This paradigm shift requires a change in perspective towards risk: it is no longer a matter of identifying measurable harmful events, but of considering a complex set of potential consequences that may unfold over time through intricate social dynamics.
### PDCA Cycle: A New Interpretation
The Plan-Do-Check-Act (PDCA) cycle undergoes a significant semantic transformation when applied to AI systems. In ISO 27001, planning is based on mapping assets and defining measurable controls. In contrast, in ISO 42001, planning must start from an understanding of the social impacts and potentially unequal dynamics that an AI system can generate.
The implementation of controls is equally distinct: while ISO 27001 relies on technical measures and standardized procedures, ISO 42001 requires an ethical and contextual approach tailored to each system’s specific circumstances. Each AI system must be designed to ensure not only efficiency but also fairness and respect for human dignity.
### Auditing and Governance: New Challenges
The “Check” phase in ISO 27001 focuses on verifying technical compliance. For ISO 42001, however, this verification faces greater complexity. It is not enough to check that a system functions as intended; an evaluation of its social impact and of its ability to maintain ethical standards over time is also necessary.
One common misconception is that ISO 27001 certification provides significant advantages for obtaining ISO 42001. While it is true that familiarity with risk management processes helps, the challenges related to AI are fundamentally different. Information security management operates in a context where vulnerabilities can be identified and corrected, while artificial intelligence operates in an area of ambiguity that requires deep reflection on the implications of every algorithmic decision.
### Complexity of Governance in AI Systems
Governance in AI systems requires ethical consideration alongside technical aspects. Although both standards follow the same harmonized high-level structure, governance in ISO 42001 is more complex and requires an analysis of the social and environmental consequences of automated decisions.
For example, questions arise about how to audit the “fairness” of an algorithm or how to ensure that an automated decision-making process respects the dignity of individuals. These questions necessitate evaluation frameworks that are not easy to implement, requiring a recontextualization of governance compared to traditional practices.
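To make the auditing question concrete: one widely used (though, on its own, insufficient) fairness check is demographic parity, which compares positive-outcome rates across groups. A minimal sketch follows; the data, group labels, and the idea of reducing fairness to a single gap are all simplifications for illustration, not a method prescribed by the standard:

```python
from collections import defaultdict

# Sketch of one narrow fairness check (demographic parity difference).
# A real ISO 42001 evaluation framework would combine several metrics
# with qualitative review; the decision data below is invented.

def demographic_parity_gap(decisions: list[tuple[str, bool]]) -> float:
    """Difference between the highest and lowest positive-decision rate per group."""
    totals: dict[str, int] = defaultdict(int)
    positives: dict[str, int] = defaultdict(int)
    for group, approved in decisions:
        totals[group] += 1
        if approved:
            positives[group] += 1
    rates = [positives[g] / totals[g] for g in totals]
    return max(rates) - min(rates)

decisions = [("A", True), ("A", True), ("A", False), ("A", True),
             ("B", True), ("B", False), ("B", False), ("B", False)]
print(round(demographic_parity_gap(decisions), 2))  # 0.5: group A approved 75%, group B 25%
```

Even this toy example shows why auditing fairness is hard: the number alone says nothing about whether the gap is justified, which is exactly the kind of contextual judgment the recontextualized governance described above must supply.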
### Transparency and Explainability: New Necessities

