🤖 AI Compliance

NIST AI RMF: What It Is and How to Implement It

The practical guide to the NIST AI Risk Management Framework — how it works, what the four core functions mean, and how to put it into practice.

⏱ 12 min read 🎯 AI builders & deployers

What Is the NIST AI RMF?

The NIST AI Risk Management Framework (AI RMF 1.0) is a voluntary framework published by the National Institute of Standards and Technology in January 2023. It gives organizations a structured, flexible way to identify, assess, and manage risks across the full lifecycle of AI systems — from design through deployment and monitoring.

Unlike a checklist or a certification standard, the AI RMF is a framework — a set of principles, practices, and structures you adapt to your context. NIST developed it through extensive collaboration with industry, academia, and civil society over two years. The result is something that applies equally to a startup deploying a recommendation algorithm and a federal agency using automated decision systems.

The framework is built around four core functions: GOVERN, MAP, MEASURE, and MANAGE. Each function has categories and subcategories that break down into specific outcomes you work toward.

Voluntary, but increasingly expected. The AI RMF is not legally required in the U.S. today. But federal agencies are already referencing it in procurement requirements, several states are incorporating it into AI legislation, and enterprise customers increasingly ask vendors if they follow it. Early adoption pays dividends.

Who Should Implement the NIST AI RMF?

NIST designed the framework to be useful across the entire AI supply chain. That includes:

If you're building or using AI in any meaningful way, the framework has something for you. This is particularly true if you're also navigating the EU AI Act compliance guide — the two frameworks are highly complementary and implementing one makes the other significantly easier.

Function 1: GOVERN

GOVERN Build the foundation for responsible AI

GOVERN is the only function that runs continuously across the entire framework — it's what makes the other three functions possible. Without governance, MAP, MEASURE, and MANAGE become ad hoc exercises that don't stick.

GOVERN covers organizational accountability structures, policies, roles, culture, and processes for AI risk management. It's less about technical controls and more about the human and institutional infrastructure required to manage AI responsibly.

What GOVERN requires in practice

To implement GOVERN effectively, your organization needs to:

Common GOVERN gaps

Organizations often have policies on paper but no real ownership. A VP signs off on an "AI Ethics Policy" and it sits in a SharePoint folder. NIST's GOVERN function isn't satisfied by documentation — it requires demonstrable practice. Ask: if an AI incident happened today, who would own the response? If the answer is unclear, your governance is incomplete.

Function 2: MAP

MAP Understand your AI risks in context

MAP is where you establish the context for each AI system and identify what risks are relevant. Before you can measure or manage risks, you have to know what you're dealing with. MAP forces you to think systematically about who is affected by your AI, how it could fail, and where harms could emerge.

What MAP requires in practice

MAP activities happen at the system level — you do MAP for each AI system you build or deploy. Key activities include:

The MAP mindset

MAP is fundamentally an exercise in intellectual honesty. It requires asking uncomfortable questions about your own system: where could it discriminate? where could it be manipulated? where could it fail in ways that harm users? Teams that rush through MAP or treat it as a compliance checkbox end up with MEASURE and MANAGE processes that miss the most important risks.

Function 3: MEASURE

MEASURE Analyze and quantify AI risks

MEASURE takes what you identified in MAP and subjects it to rigorous analysis. The goal is to go from "we think this could be a problem" to "here's what we've tested, what we've found, and what the data shows." MEASURE is where AI risk management gets technical.

What MEASURE requires in practice

MEASURE in LLM contexts

If you're deploying large language models, MEASURE requires extra thought. Traditional accuracy metrics don't capture hallucination rates, prompt injection vulnerabilities, or harmful output tendencies. You'll need red teaming, adversarial testing, and output monitoring pipelines specifically designed for generative AI.

Function 4: MANAGE

MANAGE Treat, respond, and continuously improve

MANAGE is where risk knowledge becomes risk action. Using what you learned in MAP and MEASURE, MANAGE involves prioritizing risks, implementing mitigations, and creating response processes for when things go wrong. It's also where you build the organizational muscle for continuous improvement over time.

What MANAGE requires in practice

AI RMF Profiles and the Playbook

NIST introduces two additional concepts that make the framework practical: Profiles and the AI RMF Playbook.

Profiles

A Profile is a customized prioritization of the framework's outcomes for a specific AI system or organizational context. You create a "Current Profile" (where you are today) and a "Target Profile" (where you want to be). The gap between them becomes your implementation roadmap.

Creating Your First Profile

A basic profile documents, for each of the four functions:

  • Which categories are relevant to this AI system?
  • What is our current maturity level for each category (1–4 scale)?
  • What is our target maturity level?
  • What actions close the gap?

Start with your highest-risk AI system. Even a rough first profile surfaces the most important gaps.

The AI RMF Playbook

The NIST AI RMF Playbook is a free companion document that translates each framework category into specific suggested actions. It's the most practical starting point for teams implementing the framework, because it moves from "you should have governance" to "here are 12 specific things you can do to build AI governance."

NIST AI RMF vs. the EU AI Act

If you're building AI that touches European users, you're likely thinking about both the EU AI Act and the NIST AI RMF. Here's how they relate:

Dimension NIST AI RMF EU AI Act
Nature Voluntary framework Binding regulation
Geographic scope U.S.-origin, globally applicable EU/EEA + any company serving EU users
Risk approach Organization-defined risk tiers Prescriptive risk categories (Unacceptable, High, Limited, Minimal)
Requirements Principles and practices Specific conformity obligations by risk tier
Enforcement Market/reputational Fines up to €35M or 7% of global revenue
Certification No formal certification Conformity assessment required for high-risk AI

The good news: NIST AI RMF implementation does significant heavy lifting toward EU AI Act compliance. The GOVERN function maps well to the Act's requirements for quality management systems and oversight. The MEASURE function overlaps with its testing and validation requirements. Organizations implementing the RMF thoughtfully are rarely starting from scratch when they approach EU AI Act compliance.

How to Get Started

Implementing the NIST AI RMF doesn't require a multi-year program. Here's a practical sequencing for most organizations:

  1. Take stock of your AI systems Build a simple inventory: what AI systems does your organization build or use? For each, note the use case, affected populations, and whether you've done any formal risk assessment.
  2. Run a gap assessment Evaluate your current state against each of the four functions. Our free gap assessment tool can help surface the biggest gaps quickly.
  3. Prioritize your highest-risk system Pick the AI system that poses the most risk — either because of its use case or because it's most central to your business — and start your first Profile there.
  4. Establish GOVERN foundations Define ownership, document your risk tolerance, and set up a basic AI incident response process. These pay dividends across every AI system you operate.
  5. Work through MAP and MEASURE for your priority system Conduct a structured risk identification exercise, then test the hypotheses it generates. Document your findings and close critical gaps.
  6. Build MANAGE processes Establish monitoring, incident response, and a regular cadence for risk review. Set a calendar reminder to revisit your Profile quarterly.
  7. Expand to additional AI systems With governance foundations in place, extend the framework to your other AI systems. Leverage the Playbook to guide each new Profile.

Know Your AI Governance Gaps

Our free assessment benchmarks your current AI risk management practices against the NIST AI RMF and flags the highest-priority gaps in minutes.

Start Free Assessment →

Frequently Asked Questions

Is the NIST AI RMF mandatory?

No. The framework is voluntary for private-sector organizations. However, it is increasingly referenced in federal procurement requirements, state AI legislation, and enterprise customer due diligence. It's also being cited in international AI governance standards, so voluntary today doesn't mean voluntary forever.

Does implementing the NIST AI RMF lead to a certification?

There is no official NIST AI RMF certification. The framework is a self-directed risk management tool. That said, NIST is developing supporting infrastructure, and third-party assurance offerings are emerging in the market. Some organizations document their implementation in a formal attestation they share with customers.

What's the difference between the AI RMF and NIST's Cybersecurity Framework?

The NIST Cybersecurity Framework (CSF) addresses information security risks — protecting confidentiality, integrity, and availability of data and systems. The AI RMF addresses AI-specific risks: bias, explainability, reliability failures, misuse, and the broader societal impacts of AI systems. Many AI systems need both frameworks: CSF for the security controls, AI RMF for the AI-specific governance layer.

How does the NIST AI RMF apply to generative AI?

NIST has released supplementary guidance specifically for generative AI (NIST AI 100-1 and subsequent publications). Generative AI introduces unique risks — hallucination, harmful content generation, prompt injection, and intellectual property concerns — that require tailored approaches within the MAP and MEASURE functions. The core framework still applies; you adapt the practices for the generative AI context.

Can small companies implement the NIST AI RMF?

Yes, and NIST designed it to be scalable. A startup with one AI product can do a meaningful first implementation in a few weeks. You don't need a dedicated AI risk team or an enterprise GRC platform. A documented Profile, clear ownership, and a testing protocol is a solid foundation — and a competitive differentiator when enterprise customers come asking about your AI governance.

How does the NIST AI RMF relate to ISO 42001?

ISO/IEC 42001 is the international standard for AI management systems, published in 2023. It's more prescriptive than the NIST AI RMF and follows the ISO management system structure (similar to ISO 27001). Organizations operating globally may find ISO 42001 compliance useful for certification purposes, while using the NIST AI RMF as a complementary risk identification and management tool. The two frameworks are broadly compatible.