Katharina Koerner
PhD, AI Privacy Expert

NIST AI Risk Management Framework and EU AI Act: Dual Forces in Multinational AI Governance

U.S. President Biden's Recent Executive Order on the Safe, Secure, and Trustworthy Development and Use of AI Broadens NIST's Role and Global Influence

In today’s artificial intelligence (AI) landscape, companies are closely monitoring the evolving global AI regulations that are shaping the industry’s future. The European Union’s Artificial Intelligence Act (EU AI Act) is currently undergoing tense negotiations and will likely become the world’s first comprehensive AI regulation with extraterritorial effect.

In the United States, President Biden’s Executive Order (EO) on the Safe, Secure, and Trustworthy Development and Use of AI, issued on October 30, 2023, complements a patchwork of sectoral and state laws overseen by bodies such as the Equal Employment Opportunity Commission (EEOC) and the Consumer Financial Protection Bureau (CFPB), alongside the active involvement of the Federal Trade Commission (FTC).

The recently issued EO directs various federal agencies to advance the safe, secure, and trustworthy use of AI. Notably, the National Institute of Standards and Technology (NIST) is poised to play a central role.

Under the October EO, NIST’s responsibilities have been significantly expanded to encompass a broader array of critical initiatives. These tasks now include:

  • Developing guidelines and best practices for consensus industry standards to ensure safe AI systems.
  • Creating companion resources for the AI Risk Management Framework and Secure Software Development Framework, focusing on generative AI and dual-use foundation models.
  • Initiating an effort to provide guidance and benchmarks for auditing potentially harmful AI capabilities.
  • Establishing guidelines and processes for AI red-teaming tests, especially for dual-use foundation models, to ensure safe deployment.
  • Collaborating with industry to develop screening specifications, best practices, and technical guides for synthetic nucleic acid sequence providers.
  • Identifying standards and techniques for content authentication, tracking provenance, labeling synthetic content, and detecting synthetic content, among others.
  • Developing guidelines for evaluating differential-privacy guarantees, including for AI (see the sketch following this list).
  • Creating tools and practices to support agencies in implementing minimum risk-management practices.
  • Coordinating with international partners and standards organizations to promote AI-related consensus standards globally.
  • Leading efforts to drive responsible AI development, guided by NIST’s AI Risk Management Framework and National Standards Strategy.
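To make the differential-privacy item in the list above concrete, the following is a minimal sketch of the classic Laplace mechanism, the textbook way a differential-privacy guarantee is parameterized. It illustrates standard practice rather than NIST’s forthcoming guidance, and the function name and parameter values are illustrative assumptions.

```python
import numpy as np

def laplace_mechanism(true_value: float, sensitivity: float, epsilon: float) -> float:
    """Release a statistic with epsilon-differential privacy via Laplace noise."""
    # Noise scale grows with the query's sensitivity and shrinks as the
    # privacy budget (epsilon) grows; smaller epsilon means stronger privacy.
    scale = sensitivity / epsilon
    return true_value + np.random.laplace(loc=0.0, scale=scale)

# Counting queries have sensitivity 1: adding or removing one person's
# record changes the count by at most 1.
noisy_count = laplace_mechanism(true_value=1204, sensitivity=1.0, epsilon=0.5)
print(f"Privately released count: {noisy_count:.1f}")
```

Evaluating a guarantee in this setting means checking that the claimed sensitivity bound actually holds for the query and that the cumulative epsilon spent across all releases stays within the stated privacy budget.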

The broader scope of responsibilities granted by the EO underscores NIST’s pivotal role in shaping the secure and responsible implementation of AI technologies. NIST will collaborate closely with various federal agencies and must complete the majority of its EO assignments within 270 days.
With proactive initiatives like the AI Risk Management Framework (AI RMF) already in place, NIST is well-positioned to wield significant influence and become a valuable partner in shaping the global landscape of AI governance.

The EU AI Act and NIST AI Risk Management Framework: A Complementary Approach

NIST, which has long been actively engaged in promoting the responsible development and use of AI, introduced its AI RMF in January 2023. For a deeper dive into the NIST AI RMF, see my guide here.

The primary goal of the NIST AI RMF is to assist organizations in building trust in AI technologies, fostering AI innovation, and adeptly managing risks. The framework is tailored for organizations engaged in AI system design, development, deployment, or utilization. It serves as a practical guideline for AI governance by outlining the core functions of an AI risk management system (govern, map, measure, and manage) and providing clear guidance for organizations to implement their priorities.
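As a rough illustration of how those four core functions can anchor a working governance artifact, the sketch below organizes a simple risk register around them. The data model and names are hypothetical assumptions; the AI RMF prescribes outcomes, not code or schemas.

```python
from dataclasses import dataclass, field

# The four core functions named in the NIST AI RMF 1.0.
FUNCTIONS = ("govern", "map", "measure", "manage")

@dataclass
class RiskEntry:
    description: str
    function: str  # one of FUNCTIONS: the core function this activity serves
    owner: str
    status: str = "open"

@dataclass
class AIRiskRegister:
    system_name: str
    entries: list = field(default_factory=list)

    def by_function(self, function: str) -> list:
        """Filter register entries by AI RMF core function for reporting."""
        return [e for e in self.entries if e.function == function]

register = AIRiskRegister(system_name="resume-screening-model")
register.entries.append(RiskEntry(
    description="Disparate impact across protected applicant groups",
    function="measure",
    owner="ml-fairness-team",
))
print([e.description for e in register.by_function("measure")])
```

Grouping entries by core function makes it straightforward to report which parts of the framework an organization has actually operationalized.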

Unlike the forthcoming EU AI Act, which will impose strict legal requirements, the NIST AI RMF is a voluntary guidance document, in that sense akin to self-regulatory best practices, codes of conduct, and other soft-law tools such as AI impact, risk, and safety assessments.

Despite its voluntary nature, however, the NIST AI RMF has the potential to play a vital role in complementing and augmenting the regulatory standards that the EU is preparing to introduce.

Specifically, Article 9 of the draft EU AI Act (in the European Parliament’s version) requires the creation of a risk management system that spans the entire lifecycle of high-risk AI systems. A comparison of the proposals put forth by the European Commission, the European Parliament, and the Council Mandate makes evident that the definition of and requirements for high-risk AI systems remain under negotiation and debate.

In general, the classification and enumeration of specific AI applications deemed high risk can be found in Article 6 and Annexes II and III of the drafts, encompassing areas such as critical infrastructure, biometric identification, educational and vocational training, employment, essential public and private services, law enforcement, border control, and the administration of justice and democratic processes.

While the definition of high-risk systems is still under debate, it is foreseeable that Article 9 of the EU AI Act will mandate a risk management system. That system will have to involve several steps, including identifying, estimating, and evaluating both known and reasonably foreseeable risks associated with high-risk AI systems, encompassing potential threats to the health and safety of individuals, their fundamental rights, democracy, the rule of law, and the environment, whether the system is used in accordance with its intended purpose or under conditions of reasonably foreseeable misuse.

Organizations have the flexibility to create their own AI risk management system, provided it adheres to key principles: implementing a continuous, iterative process; conducting regular reviews and updates to maintain ongoing effectiveness; and maintaining comprehensive documentation.
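To sketch what a continuous, iterative process with documentation might look like in practice, the snippet below scores each identified risk by likelihood and severity, flags entries above a threshold for mitigation, and appends every decision to a review log. The scales, threshold, and field names are hypothetical illustrations; Article 9 does not prescribe a scoring scheme.

```python
from datetime import date

# Hypothetical 1-5 scales for likelihood and severity.
risks = [
    {"id": "R1", "risk": "Unsafe output harms user health", "likelihood": 2, "severity": 5},
    {"id": "R2", "risk": "Biased scoring of job applicants", "likelihood": 4, "severity": 4},
    {"id": "R3", "risk": "Excessive energy use in retraining", "likelihood": 3, "severity": 2},
]

MITIGATION_THRESHOLD = 10  # hypothetical cut-off for mandatory mitigation
review_log = []  # comprehensive documentation is itself an Article 9 expectation

def review_cycle(current_risks: list) -> None:
    """One iteration: estimate, evaluate, and record each known risk."""
    for r in current_risks:
        score = r["likelihood"] * r["severity"]
        action = "mitigate" if score >= MITIGATION_THRESHOLD else "monitor"
        review_log.append({"date": date.today().isoformat(),
                           "id": r["id"], "score": score, "action": action})

review_cycle(risks)  # repeat on a regular schedule and after system changes
for entry in review_log:
    print(entry)
```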

In contrast to the focus of Article 9 in the draft EU AI Act, the NIST AI RMF is designed to assist in managing risks related to all types of AI systems, not limited to high-risk ones. 

However, with its structured approach, the NIST AI RMF can certainly facilitate the identification of AI systems posing unacceptable risks, which Article 5 of the draft EU AI Act prohibits outright, and support the creation of the risk management system for high-risk AI systems across the entire AI lifecycle.

For instance, the NIST AI RMF can assess risks related to health and safety by evaluating the likelihood and severity of AI system failures or errors that could harm individuals. It can address risks to fundamental rights and democracy by examining the impact of biased algorithms on fairness and accountability. Environmental risks and tradeoffs can be assessed by considering the potential consequences of AI systems on resource consumption or pollution.

Furthermore, the NIST Framework’s emphasis on continuous review and update aligns with the EU AI Act’s requirement for ongoing effectiveness. It allows organizations to adapt their risk management processes as new risks emerge or as AI technologies inevitably evolve.

Another area where the NIST AI RMF could support fulfilling certain requirements of the EU AI Act is Article 29a of the European Parliament’s version of the draft. If incorporated into the final text, Article 29a would require fundamental rights impact assessments for high-risk AI systems. These assessments would have to outline the intended purpose, geographic and temporal scope, affected individuals and groups, legal compliance, foreseeable impact on fundamental rights, risks to marginalized or vulnerable groups, environmental impact, and a mitigation plan for identified harms, along with a governance system that includes human oversight, complaint handling, and redress procedures.
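If Article 29a survives into the final text, such an assessment could be captured as a structured record whose fields mirror the elements listed above. The template below is a hypothetical mapping of that list, not an official format.

```python
# Hypothetical template mirroring the elements listed in draft Article 29a.
fria_template = {
    "intended_purpose": "",
    "geographic_scope": "",
    "temporal_scope": "",
    "affected_individuals_and_groups": "",
    "legal_compliance": "",
    "foreseeable_impact_on_fundamental_rights": "",
    "risks_to_marginalized_or_vulnerable_groups": "",
    "environmental_impact": "",
    "mitigation_plan": "",
    "governance": {
        "human_oversight": "",
        "complaint_handling": "",
        "redress_procedures": "",
    },
}

def missing_fields(assessment: dict, template: dict = fria_template) -> list:
    """Return top-level template fields left empty in an assessment draft."""
    return [k for k in template if not assessment.get(k)]

# A draft assessment with only one field filled in flags everything else.
print(missing_fields({"intended_purpose": "CV pre-screening"}))
```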

It is noteworthy that the general conceptualization of the NIST AI RMF core functions is also already making its mark in recently proposed U.S. state-level bills like California’s AB 331, which would mandate governance programs for “Automated Decision Tools” that “map, measure, manage, and govern” the reasonably foreseeable risks of algorithmic discrimination associated with the use or intended use of an automated decision tool (§ 22756.4).

Supporting the Risk-Based Approach to AI Governance Globally

In summary, as the EU AI Act takes its place as a cornerstone of international AI governance, the NIST AI RMF has the potential to serve as a complementary framework from the U.S. that supports responsible AI practices globally. The synergy of these approaches offers a powerful opportunity for multinational organizations to bolster their AI risk management strategies, demonstrate their commitment to the trustworthy use of AI, and address the most critical issues with limited resources.

The goal of aligning the risk-based approaches to AI was also demonstrated by a Joint Statement of the U.S.-EU Trade and Technology Council from December 2022, which expressed the intention to develop a joint roadmap on evaluation and measurement tools for trustworthy AI and risk management. 

Beyond the EU, the NIST AI RMF will offer significant value to countries worldwide that are adopting a risk-based approach as the cornerstone of their regulatory AI governance initiatives. For example, in May 2023, the countries of the Group of Seven (G7) signed a declaration emphasizing the importance of “risk-based” AI regulations that prioritize human-centric, democratic values, protection of human rights, fundamental freedoms, privacy, and personal data.

Some concrete examples of following a risk-based approach to AI include the AI risk toolkit published by the UK earlier this year, designed to provide practical support to reduce the risks to individuals’ rights and freedoms caused by AI systems. In Canada, the proposed Artificial Intelligence and Data Act (AIDA) aims to introduce a risk-based regulatory framework for AI systems, with a specific emphasis on high-impact systems that have the potential to infringe upon human rights or present risks of harm. In Brazil, the proposed comprehensive AI Bill would prohibit certain “excessive risk” AI systems. 

Against this backdrop, the NIST AI RMF is well-positioned to facilitate interoperability with a wide array of core standards, frameworks, and guidelines for AI risk management on a global scale, including those established by the Organization for Economic Co-operation and Development (OECD). 

In contrast, China takes a different perspective from approaches based on risk assessment. While risk-based assessments focus on evaluating potential threats and mitigating them to ensure responsible AI development, China’s approach, as taken in its regulation of generative AI that entered into force on August 15, 2023, places an emphasis on licensing for providers of generative AI services to the public, mandatory security assessments, and ideological alignment.

Embracing the NIST AI RMF as a Strategic Step Towards Responsible AI Governance

As organizations prepare for the implementation of the EU AI Act, navigate the evolving landscape of state-level and sectoral AI laws in the U.S., and seek to build trust through responsible AI practices, internal stakeholders working with AI systems are encouraged to acquaint themselves with the NIST AI RMF. This includes key actors in AI governance such as organizational management, senior leadership, and the board of directors, as well as domain experts with their varying risk perspectives.

The holistic approach to AI risk management provided by NIST offers practical guidance to actors at every stage of the AI lifecycle. It can significantly help businesses develop robust governance programs and address AI-related risks in alignment with the emerging global consensus on “risk-based” AI regulation. Embracing the NIST AI RMF is therefore a strategic step towards responsible AI governance: it demonstrates a dedication to responsible AI in practice and, as a result, mitigates harm and bolsters trust among regulators and customers alike.
