Katharina Koerner
PhD, AI Privacy Expert

The NIST AI Risk Management Framework: Your Guide to AI Governance

As the global landscape of AI governance continues to evolve, one thing becomes abundantly clear: responsible AI practices are non-negotiable. While the EU AI Act is set to introduce binding legal requirements, another influential player is emerging on the scene: the National Institute of Standards and Technology’s (NIST) AI Risk Management Framework (AI RMF), published in January 2023. What makes this framework a game-changer, and how can it shape the future of responsible AI?

Global Impact of the NIST AI Risk Management Framework on Responsible Innovation

The National Institute of Standards and Technology (NIST), a nonregulatory federal agency within the United States Department of Commerce, is a trusted authority responsible for setting industry standards and guidelines. NIST’s mission is to promote measurement science, standards, and technology to enhance productivity, facilitate trade, and improve the quality of life. As part of its efforts, NIST has developed the NIST AI Risk Management Framework (AI RMF) to provide vital guidance for organizations involved in AI system design, development, deployment, or utilization.

In contrast to the upcoming EU AI Act, a hard law regulation with binding legal requirements, the NIST AI RMF is a guidance document for voluntary use. Its primary objective is to help organizations cultivate trust in AI technologies, promote AI innovation, and effectively mitigate risks. 

The NIST AI RMF’s voluntary approach is evident in the fact that it includes no enforcement mechanism, nor does it mandate certifications or market authorization, such as the CE-marking procedure expected to be required for the market entry of high-risk AI systems under the EU AI Act (Article 49 of the draft EU AI Act).

The NIST AI RMF is evidently gaining substantial momentum in the U.S., both within the government and in the private sector. Notable endorsements from leading tech companies such as Microsoft underscore the self-regulatory, soft-law Framework’s pivotal role in shaping responsible AI practices and its increasing influence in the evolving AI landscape.

The growing prominence of the NIST AI RMF in the U.S. is also reflected in the robust support it garners from the U.S. National Artificial Intelligence Advisory Committee (NAIAC), which provides guidance to the U.S. President and the White House National AI Initiative Office. 

In its inaugural report from May 2023, NAIAC recommends several pivotal steps regarding the Framework. Firstly, it advocates for widespread adoption of the NIST AI RMF across both the public and private sectors. Additionally, it proposes allocating funding to bolster NIST’s AI initiatives and ensure their ongoing advancement and relevance. Lastly, NAIAC underscores the importance of internationalizing the NIST AI RMF, positioning it as a universally recognized standard for responsible AI management.

These recommendations demonstrate a determined effort to extend the reach of NIST’s initiatives beyond national borders. AI would not be the first domain in which NIST’s risk management frameworks, tailored to specific technological areas, have gained international recognition. Notably, the widely recognized NIST Cybersecurity Framework is frequently referenced alongside ISO/IEC 27001’s Information Security Management System, both championing comprehensive cybersecurity practices and shaping the global cybersecurity landscape.

The recently announced collaboration between Singapore and the U.S. represents another instance of NIST’s efforts to internationalize its AI RMF. Singapore’s Infocomm Media Development Authority (IMDA) and NIST have successfully synchronized their respective AI frameworks and plan an ongoing partnership dedicated to promoting AI innovations that prioritize safety, trustworthiness, and responsibility. A crosswalk aligning IMDA’s own AI governance testing framework “AI Verify” with NIST’s AI RMF has been published.

A Closer Look at the NIST AI Risk Management Framework

The objective of the NIST AI Risk Management Framework is to facilitate the effective management of diverse risks associated with AI in organizations of varying sizes and capabilities engaged in designing, developing, deploying, or utilizing AI systems. 

The ideal outcome of putting the Framework into practice is not only to mitigate risk, but to build trustworthy AI systems in alignment with universal principles of responsible AI, which the Framework refers to as “characteristics of trustworthy AI”. The Framework is based on the premise that characteristics of AI trustworthiness, including reliability, safety, security, accountability and transparency, explainability and interpretability, privacy enhancement, and fairness with harmful bias managed, can reduce negative AI risks.

The Framework provides a crosswalk to other established sources of responsible AI principles and offers guidance for addressing them, thereby supporting the development and implementation of responsible AI programs.

Unpacking the Core Components of the NIST AI Risk Management Framework

The NIST AI RMF comprises two main sections:

The first section explains how organizations can identify AI-related risks and describes the characteristics of trustworthy AI systems.

Risk depends on two pivotal factors: the extent of harm that would occur as the result of a particular event and the likelihood of that event happening. Harm, as understood by the Framework, can affect individuals, groups, communities, organizations, society, the environment, and even the planet at large.
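To make these two factors concrete, here is a minimal, hypothetical sketch of how an organization might combine an estimated likelihood with an estimated severity of harm into a single risk score; the scoring formula and scales are illustrative assumptions, not something prescribed by the NIST AI RMF:

```python
# Hypothetical illustration only: the NIST AI RMF does not prescribe a scoring formula.

def risk_score(likelihood: float, severity: int) -> float:
    """Combine the likelihood of an event (0-1) with the severity of its harm
    (e.g., 1 = negligible, 5 = catastrophic) into a simple risk score."""
    if not 0.0 <= likelihood <= 1.0:
        raise ValueError("likelihood must be between 0 and 1")
    return likelihood * severity

# Example: a moderately likely event (0.5) causing severe harm (4) scores 2.0,
# which the organization can then compare against its documented risk tolerance.
print(risk_score(likelihood=0.5, severity=4))
```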

NIST acknowledges that AI risk management comes with a number of challenges that need to be taken into account.

Among others, these include challenges in measuring risks related to third-party software, hardware, and data; difficulties in tracking emergent risks; the lack of reliable metrics; measuring risk at different stages of the AI lifecycle; and varying results when measuring risk in real-world versus controlled settings.

Further, the Framework emphasizes that while it can be used to prioritize risk, it does not prescribe risk tolerance.

The second section constitutes the core of the Framework and centers on four essential functions: govern, map, measure, and manage, with the “govern” function informing the other three.

The functions are intended to be adjusted for specific situations and applied at different points during the AI lifecycle:

1. Govern: This function positions sound governance as the cornerstone of effective risk management within organizations. Put into practice, it establishes accountability structures and processes, encourages workplace diversity and accessibility, and fosters a culture of safety-first AI practices.

2. Map: The map function serves as a valuable tool for organizations to contextualize and proactively manage risks associated with AI systems. By categorizing AI systems according to their capabilities, intended usage, goals, and expected benefits and costs, understanding the systems’ context, and identifying their potential impacts on individuals and groups, organizations can gain critical insights to anticipate, assess, and address these risks more effectively.

3. Measure: The measure function supports the analysis, assessment, benchmarking, and monitoring of AI risks. It requires identifying and applying appropriate methods and metrics, whether quantitative, qualitative, or mixed, and assessing the trustworthiness characteristics of AI systems. It emphasizes monitoring identified risks over time and gathering feedback through test, evaluation, verification, and validation (TEVV) processes.

4. Manage: The manage function builds on the outcomes of the map and measure functions and calls for prioritizing risks, allocating resources, and establishing regular monitoring and improvement mechanisms. It involves strategizing to maximize AI benefits and minimize harms, particularly from third-party sources.

Customizing AI RMF Functions: NIST’s Playbook for Practical Guidance

Each of the four functions comprises several categories and subcategories that provide detailed descriptions and practical guidance. They help organizations effectively manage the risks associated with AI systems and make informed decisions about implementing responsible AI practices that align with their specific needs and objectives.

For instance, within the core function “Map,” there are five categories with several subcategories each. 

Category 1 is called “MAP 1: Context is established and understood,” and has six subcategories. In MAP 1.1 to 1.6, the focus is on understanding and documenting various aspects, including the intended purposes and potential impacts of AI systems, interdisciplinary collaboration, organizational goals, business context, risk tolerances, and system requirements, all aimed at ensuring responsible AI development and implementation.
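To illustrate this function, category, and subcategory structure, the following sketch shows how an organization might represent MAP 1 as a simple internal checklist for tracking which subcategories have already been addressed. The class names and the abbreviated subcategory descriptions are assumptions for illustration, not the official NIST wording:

```python
# Hypothetical sketch of an internal tracker for AI RMF subcategories.
# Descriptions are abbreviated paraphrases, not the official NIST text.
from dataclasses import dataclass, field

@dataclass
class Subcategory:
    identifier: str
    description: str
    addressed: bool = False

@dataclass
class Category:
    identifier: str
    title: str
    subcategories: list[Subcategory] = field(default_factory=list)

    def open_items(self) -> list[str]:
        """Return the identifiers of subcategories not yet addressed."""
        return [s.identifier for s in self.subcategories if not s.addressed]

map_1 = Category(
    identifier="MAP 1",
    title="Context is established and understood",
    subcategories=[
        Subcategory("MAP 1.1", "Intended purposes and potential impacts are understood"),
        Subcategory("MAP 1.5", "Organizational risk tolerances are determined and documented"),
    ],
)

map_1.subcategories[0].addressed = True
print(map_1.open_items())  # -> ['MAP 1.5']
```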

The AI RMF functions, categories, and subcategories are meant to be customized by organizations to fit a specific setting or application in alignment with their business objectives, legal requirements, resources, and risk management priorities. 

To assist in this customization process, NIST has developed a comprehensive Playbook that supplements the main Framework, offering additional guidance and practical recommendations in conjunction with the provided categories and subcategories.

The Playbook describes the meaning of every subcategory in detail and offers specific actions to cover the aspect of the Risk Management Framework in question, permitting organizations to selectively adopt recommendations aligned with their industry or interests.

For instance, in the case of the subcategory MAP 1.5, which pertains to “Determining and Documenting Organizational Risk Tolerances,” the Playbook recommends that organizations establish and formally record the level and type of risk they are willing to accept while pursuing their mission and strategy. These risk tolerances may align with existing regulations, guidelines, or industry standards. They play a crucial role in making “go/no-go” decisions regarding the development or deployment of AI systems. At this stage, the pyramid of criticality, which is often used to depict low risk at the bottom and unacceptable risk at the top, could be utilized.
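For illustration, the sketch below shows how a documented risk tolerance could feed such a “go/no-go” decision, using tiers loosely modeled on the pyramid of criticality. The tier names, their ordering, and the decision rule are assumptions made for this example; neither the Framework nor the Playbook prescribes them:

```python
# Hypothetical go/no-go helper; the tiers and decision rule are illustrative
# assumptions, not values prescribed by the NIST AI RMF or its Playbook.
RISK_TIERS = ["low", "limited", "high", "unacceptable"]  # bottom to top of the pyramid

def go_no_go(assessed_tier: str, tolerated_tier: str) -> str:
    """Return 'go' if the assessed risk tier does not exceed the documented
    organizational risk tolerance, otherwise 'no-go'."""
    if assessed_tier not in RISK_TIERS or tolerated_tier not in RISK_TIERS:
        raise ValueError(f"tiers must be one of {RISK_TIERS}")
    if assessed_tier == "unacceptable":
        return "no-go"  # unacceptable risk is never tolerated
    return "go" if RISK_TIERS.index(assessed_tier) <= RISK_TIERS.index(tolerated_tier) else "no-go"

# Example: the organization has documented a tolerance for at most 'limited' risk.
print(go_no_go("high", "limited"))     # -> 'no-go'
print(go_no_go("limited", "limited"))  # -> 'go'
```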

This is one example of how the NIST AI Risk Management Framework and the Playbook provide organizations with a practical, comprehensive, and customizable roadmap to navigate the complexities of AI risk management and facilitate informed and responsible decision-making at every stage of the AI lifecycle.

Tackling Responsible AI Practices: A Practical Path Forward with the NIST AI RMF

Embracing and working with the NIST AI Risk Management Framework presents an impactful opportunity to promote and establish responsible AI practices. As an initial step, internal stakeholders, including board members, legal experts, engineers, and data scientists, are encouraged to familiarize themselves with the framework to harness its potential benefits.

A thorough examination of the framework’s core functions, categories, and subcategories will provide an initial understanding of potential gaps in elements crucial for effective AI risk management and help identify and prioritize starting points. Furthermore, the actionable steps suggested by the Playbook can ideally translate into direct discussions within the context of specific AI/ML projects, accompanied by documentation, planning, improvement, and monitoring of the relevant processes. Additionally, NIST is publishing best-practice examples of implementation efforts that can help organizations learn from successful AI RMF integrations and navigate the complexities of responsible AI adoption more effectively.

Opting for a pragmatic and hands-on approach to bolster organizational AI governance is a strategic move that can offer a competitive advantage. By adhering to the principles and guidance offered by frameworks like the NIST AI RMF, organizations not only foster trust among stakeholders and mitigate the risk of potential legal challenges, safeguarding their reputation and success, but also position themselves as leaders in the responsible and ethical use of AI technologies.
