### The Specificity of ISO 42001 in Cybersecurity Management
The recent introduction of the ISO/IEC 42001 standard marks a significant development in the landscape of international standards for artificial intelligence (AI) management. The standard stands out for a systematic approach that covers the entire life cycle of AI systems, addressing their characteristic risks and adopting dedicated governance mechanisms. ISO 42001 should not be seen merely as an extension of the well-known ISO/IEC 27001 standard for information security, but rather as an autonomous standard aimed at meeting the specific needs of AI.
One of the main misunderstandings concerns the supposed similarity between ISO 27001 and ISO 42001. It is common to assume that holding ISO 27001 certification eases the transition to ISO 42001. However, although both standards follow the “High Level Structure” (HLS) and share the Plan-Do-Check-Act (PDCA) cycle, their practical applications and reference contexts differ fundamentally. While ISO 27001 focuses on the confidentiality, integrity, and availability of information, ISO 42001 aims to protect not only data but, more importantly, individuals and society as a whole.
ISO 42001 delves into the systemic impact of AI, emphasizing not only the risks borne by the organization developing AI systems but also the social and ethical consequences that the use of such systems may generate. Risks are no longer limited to measurable events but extend to potential consequences, sometimes difficult to predict, that may emerge even in the long term. For example, an algorithm may be technically accurate yet produce discrimination in real contexts. Consequently, the impact assessment introduced in clause 6.1.4 of the standard becomes crucial: it calls for ethical and social reflection on how AI systems can influence people’s lives.
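The gap between technical accuracy and real-world fairness mentioned above can be made concrete with a simple check. The sketch below is purely illustrative and is not part of the standard: the data, group labels, and the “four-fifths rule” threshold are assumptions commonly used in fairness audits, not requirements of clause 6.1.4.

```python
# Illustrative sketch: a system can look fine on aggregate metrics while
# producing very different outcomes across demographic groups.
# All data and the 0.8 threshold are synthetic assumptions.

def disparate_impact_ratio(outcomes):
    """Ratio of favorable-outcome rates between the worst- and best-treated groups."""
    rates = {group: sum(d) / len(d) for group, d in outcomes.items()}
    lo, hi = min(rates.values()), max(rates.values())
    return lo / hi if hi else 1.0

# Synthetic loan decisions (1 = approved) for two hypothetical groups.
outcomes = {
    "group_a": [1, 1, 1, 0, 1, 1, 1, 0, 1, 1],  # 80% approval rate
    "group_b": [1, 0, 0, 1, 0, 0, 1, 0, 0, 0],  # 30% approval rate
}

ratio = disparate_impact_ratio(outcomes)
# A common audit heuristic ("four-fifths rule") flags ratios below 0.8.
flagged = ratio < 0.8
```

Here the model could well be accurate against its training labels, yet the 0.375 outcome ratio between groups is exactly the kind of societal impact an ISO 42001 assessment would surface.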
Another distinctive element of this standard is the transformation of the PDCA cycle in the context of AI. While in ISO 27001 planning focuses on identifying information assets and implementing security controls, in ISO 42001, planning requires a deep understanding of the impacts of algorithmic decisions on social dynamics, discrimination, and inequalities. This necessitates a substantial reconsideration of strategies to ensure that AI systems operate not only correctly from a technical standpoint but also respect human dignity and promote the collective good.
The implementation phase (Do) marks another clear distinction: while in security standards measures are largely codified and standardized, in ISO 42001 the “doing” becomes an act of responsible design, since each context requires tailored solutions that account for specific ethical and social dynamics. Models must be designed not merely to maximize efficiency but also to reduce the risk of bias and to ensure adequate human controls.
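One concrete form that “adequate human controls” can take is a human-in-the-loop gate that keeps automated action within defined bounds. The following sketch is an illustration of that design pattern only; the confidence threshold, the `high_impact` flag, and the routing labels are hypothetical, not prescribed by the standard.

```python
# Sketch of a human-oversight control: model outputs that are low-confidence
# or high-impact are routed to a human reviewer instead of acting automatically.
# Threshold and field names are illustrative assumptions.

REVIEW_THRESHOLD = 0.85  # assumed minimum confidence for automated action

def route_decision(prediction, confidence, high_impact=False):
    """Return 'auto' for automated execution or 'human_review' for escalation."""
    if high_impact or confidence < REVIEW_THRESHOLD:
        return "human_review"
    return "auto"

# High-confidence, low-impact output may proceed automatically;
# anything else is escalated to a person.
a = route_decision("approve", 0.95)                  # -> 'auto'
b = route_decision("approve", 0.70)                  # -> 'human_review'
c = route_decision("deny", 0.99, high_impact=True)   # -> 'human_review'
```

The point of the pattern is that oversight is built into the design rather than bolted on afterward, which is the spirit of the “responsible design” described above.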
The “Check” phase in ISO 42001 is more complex than its counterpart in ISO 27001, as it implies not only technical verification but also an evaluation of the social consequences of decisions made by AI systems. Success criteria cannot be limited to technical measurements; it is essential to integrate indicators that reflect the well-being of the various categories of stakeholders involved.
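A minimal sketch of such a Check-phase report might pair one aggregate technical metric with per-stakeholder-group indicators, so that a strong headline number cannot mask an uneven impact. The report fields, the tolerance value, and the group error rates below are invented for illustration.

```python
# Sketch of a "Check"-phase report that combines a technical metric with
# per-group indicators; all names and thresholds are illustrative assumptions.

def check_report(accuracy, group_error_rates, max_gap=0.10):
    """Flag the system when the error-rate gap across groups exceeds max_gap."""
    gap = max(group_error_rates.values()) - min(group_error_rates.values())
    return {
        "accuracy": accuracy,
        "error_rate_gap": round(gap, 3),
        "within_tolerance": gap <= max_gap,
    }

report = check_report(0.94, {"group_a": 0.04, "group_b": 0.19})
# Aggregate accuracy looks strong, but the 0.15 gap across groups
# fails the check, so the system would not pass this review cycle.
```

A report of this shape makes the stakeholder-level indicators a first-class acceptance criterion rather than an afterthought.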
A further point concerns the presumed advantage conferred by a pre-existing ISO 27001 certification. It is natural for organizations already versed in risk management practices to feel better prepared to meet AI governance requirements. However, this confidence can be misleading: AI operates within a paradigm of intrinsic ambiguity, in which models can generate unforeseen outcomes driven by variables that are difficult to control.

