Katharina Koerner
PhD, AI Privacy Expert

ISO/IEC 42001: A Leap Forward in Responsible AI Management

ISO 42001 is a certifiable framework for an AI Management System (AIMS) that aims to support organizations in the responsible development, delivery, or use of AI systems.

As artificial intelligence (AI) becomes increasingly integrated into various sectors and applications, there is a growing need for standardized practices to ensure the responsible development and use of AI systems. Standardization plays a pivotal role in shaping the AI landscape by bringing together stakeholders and experts across sectors to define best practices, ultimately fostering trust and acceptance of AI in both industry and society.

December 18, 2023, marked a significant milestone in establishing such best practices for AI management systems. On this date, the International Organization for Standardization (ISO) and the International Electrotechnical Commission (IEC) jointly published ISO/IEC 42001:2023, "Information technology – Artificial intelligence – Management system".

This highly anticipated standard is designed to guide organizations in establishing, following, and refining processes to develop and govern AI systems responsibly from the outset and to improve them through continuous iteration.

Put simply, ISO 42001 is a certifiable framework for an AI Management System (AIMS) that helps to oversee AI systems within companies.

This standard is poised to play a vital role in conformity assessment and certification, thereby enhancing trust within the intricate AI supply chain. Its importance is anticipated to reach a level comparable to ISO 9001 for quality management or ISO/IEC 27001 for information security management.

ISO and IEC’s Joint Efforts in Standardization for ICT

In the realm of information and communication technology, the International Organization for Standardization collaborates with the International Electrotechnical Commission through their Joint Technical Committee 1 (JTC 1).

ISO operates as an international organization whose full members are the national standards bodies of 127 countries, making it a uniquely global, nation-state-oriented standardization body. The IEC brings together a comparable number of countries. As a result, JTC 1 currently draws on hundreds of experts from over 100 countries, each representing their country's national standards body via ISO or its national committee via IEC.

This organizational structure distinguishes it from other standard-setting bodies, such as industry consortia like the World Wide Web Consortium, professional organizations like the Institute of Electrical and Electronics Engineers, or government entities like the National Institute of Standards and Technology. Standardization in the area of artificial intelligence takes place in JTC 1 Subcommittee (SC) 42, which serves as the focus and proponent for JTC 1's standardization program on artificial intelligence.

Who Should Consider ISO/IEC 42001?

ISO/IEC 42001 is a voluntary standard intended for use by organizations of all sizes, across various industries, whether they offer or use products or services that involve AI systems.  It is designed to be adaptable, scalable, and not restricted by the type of products or services the organization deals with.

The Purpose of ISO/IEC 42001

The purpose of this versatile standard is to offer comprehensive guidance for establishing, implementing, maintaining, and continually improving an AI management system within any organization.

Its primary aim is to support organizations in the responsible development, delivery, or use of AI systems, helping them achieve their objectives, meet applicable regulations, and fulfill the obligations and expectations of their stakeholders. In essence, ISO/IEC 42001 centers on promoting responsible AI development, delivery, and use.

Here’s a breakdown of what this new standard addresses:

AI Governance: ISO/IEC 42001 guides organizations in establishing AI governance policies and procedures. This includes defining responsibilities, decision-making processes, and effective risk management strategies.

Impact Assessment: Organizations are encouraged to evaluate the societal, environmental, and individual impact of their AI systems. This helps ensure that AI technologies are used responsibly and ethically.

Data and Model Lifecycle Management: Best practices for managing data and model lifecycles are a critical part of the standard. This encompasses everything from data collection, labeling, and validation to model development, training, evaluation, deployment, and ongoing monitoring.

Diversity and Inclusiveness: The standard underscores the importance of considering diversity and inclusiveness in AI systems. It prompts organizations to think about how their AI technologies may impact people of various backgrounds, abilities, and characteristics.

Monitoring and Auditing: ISO/IEC 42001 emphasizes the need for ongoing monitoring and auditing of AI systems. This ensures that these systems perform as intended, allowing for necessary adjustments in policies, data, and models when issues arise.
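
To make the lifecycle and monitoring themes above more concrete, here is a minimal, hypothetical sketch (in Python) of how an organization might track an AI system's lifecycle stage, impact assessment status, and audit cadence in a simple internal registry. The field names and the review rule are illustrative assumptions, not terminology or requirements taken from ISO/IEC 42001.

```python
# Illustrative only: a hypothetical registry entry for a single AI system,
# loosely inspired by the lifecycle and monitoring themes above. Field names
# and the review rule are assumptions, not terms defined in ISO/IEC 42001.
from dataclasses import dataclass, field
from datetime import date
from typing import List, Optional

@dataclass
class AISystemRecord:
    name: str
    owner: str                        # accountable role under the organization's AI governance policy
    intended_purpose: str
    lifecycle_stage: str              # e.g. "development", "deployment", "monitoring"
    impact_assessment_completed: bool
    last_audit: Optional[date] = None
    open_issues: List[str] = field(default_factory=list)

    def needs_review(self, today: date, max_days: int = 180) -> bool:
        """Flag the system for review if no impact assessment or audit exists,
        the last audit is older than max_days, or any issues remain open."""
        if self.last_audit is None or self.open_issues or not self.impact_assessment_completed:
            return True
        return (today - self.last_audit).days > max_days

# Example usage: a deployed chatbot whose last audit is more than 180 days old.
record = AISystemRecord(
    name="customer-support-chatbot",
    owner="Head of Customer Service",
    intended_purpose="Answer routine customer questions",
    lifecycle_stage="monitoring",
    impact_assessment_completed=True,
    last_audit=date(2023, 6, 1),
)
print(record.needs_review(today=date(2024, 1, 15)))  # True
```

In practice, such a registry would be only one small piece of an AI management system, alongside documented policies, roles, and audit processes, but it illustrates how lifecycle and monitoring information can be made explicit and reviewable.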

Comparing ISO/IEC 42001 to Other Standardization Work

ISO/IEC 42001 isn’t the only standard related to AI but is part of a broader effort to create unified AI standards. It is helpful to understand how it compares to others in the field:

ISO/IEC 42001 vs. ISO/IEC 23894: These standards differ in scope. While ISO/IEC 23894 focuses on adapting generic risk management standards for AI, ISO/IEC 42001 zeroes in on internal procedures for managing AI systems within organizations.

ISO/IEC 42001 vs. NIST AI RMF: Both ISO/IEC 42001 and the National Institute of Standards and Technology (NIST) AI Risk Management Framework address high-level policies and procedures for AI system management. However, ISO/IEC 42001 can be used as an auditable standard and provides more detailed guidance, especially for continuous AI monitoring policies and procedures.

ISO/IEC vs. CEN/CENELEC: In parallel to the AI standardization work of ISO/IEC, the European Commission designated CEN/CENELEC to develop AI standards supporting the upcoming EU AI Act. CEN and CENELEC have formed a new Joint Technical Committee, CEN-CENELEC JTC 21 Artificial Intelligence, to develop and adopt AI standards and related data standards. It plans to adopt international standards already under development in committees such as ISO/IEC JTC 1/SC 42 Artificial Intelligence, focusing on addressing European market needs and supporting EU legislation and policies.

Why Adopt ISO/IEC 42001?

AI is rapidly permeating various industries, including healthcare, defense, transportation, finance, employment, and energy, offering a multitude of benefits in diverse applications. However, as AI’s presence grows, so do concerns related to fairness, the impact of automated decision-making on individuals, transparency, human oversight, and safety.

Industry players, including small and medium-sized enterprises (SMEs), have the flexibility to directly embrace this standard. When procuring AI components from third-party providers, they can also ensure that their suppliers adhere to these guidelines.

Implementing ISO, IEC, and ISO/IEC standards within organizations offers numerous advantages, whether or not certification is pursued.

  • Enhances trust and credibility: An ISO/IEC 42001 certification showcases an organization’s commitment to responsible AI practices, enhancing trust levels among customers and the public.
  • Competitive advantage: Organizations adhering to the standard gain a competitive edge in an AI-centric landscape.
  • Addresses pressing concerns: ISO 42001 provides a framework for addressing AI-related concerns such as fairness, transparency, and safety.
  • Flexible and adaptable: The standard is not rigid and can be tailored to an organization’s specific needs, making it more adaptable than sector-specific regulations.
  • Increases consumer confidence: Compliance with ISO/IEC 42001 can instill confidence in consumers, elevating their trust in AI-driven products and services.
  • Access to global markets: Standardization establishes consistent requirements, enabling organizations to access global markets effectively.
  • Third-party seal of approval: If a certification is pursued, it serves as a third-party seal of approval, showcasing accountability.
  • Contractual obligations: Some organizations may have contractual obligations to maintain certifications.
  • Internationally recognized risk mitigation: Certifications demonstrate a commitment to internationally recognized risk mitigation.
  • Signal of priority: ISO 42001 signals to customers and stakeholders that responsible AI system management is a top priority.
  • Internal governance: Implementing standards can strengthen internal governance and practices.
  • Board awareness: Standards make robust AI system management evident to the board, ensuring top-level awareness and support.
  • Alignment with best practices: Even without immediate certification, a thorough examination of ISO/IEC processes helps organizations stay aligned with best practices and evolving developments in AI governance.

Daiki’s Expertise: Navigating Responsible AI with ISO/IEC 42001

As the world of technology and AI continues to evolve, standards will play an increasingly significant role in shaping the responsible use of AI. Choosing the right standard that aligns with your organization’s unique needs and goals can have far-reaching effects, impacting not only your internal operations but also how you are perceived by customers, partners, and the public.

The adoption of ISO/IEC 42001 offers organizations a pivotal opportunity to set up a solid and globally recognized AI Management System and lay the necessary foundation for their responsible AI practices.

At Daiki, we are committed to supporting you in this critical endeavor. Our AI Enablement platform provides organizations and teams with practical step-by-step guidance to develop your AI Strategy and deliver your AI projects in a compliant and responsible manner. Daiki provides templates, processes, and guidance to adopt and implement ISO/IEC 42001 easily.

We encourage organizations to take initiative, prioritize responsible AI practices, and leverage the power of standards to drive innovation while ensuring ethical AI deployment. The time to act is now. Daiki is dedicated to guiding you on your journey towards excellence in responsible AI practices, helping you seize the opportunities while ensuring ethical and compliant AI implementation.
