What Is the NIST AI RMF?
The NIST AI Risk Management Framework (AI RMF 1.0) is a voluntary framework published by the National Institute of Standards and Technology in January 2023. It gives organizations a structured, flexible way to identify, assess, and manage risks across the full lifecycle of AI systems — from design through deployment and monitoring.
Unlike a checklist or a certification standard, the AI RMF is a framework — a set of principles, practices, and structures you adapt to your context. NIST developed it through extensive collaboration with industry, academia, and civil society over two years. The result is something that applies equally to a startup deploying a recommendation algorithm and a federal agency using automated decision systems.
The framework is built around four core functions: GOVERN, MAP, MEASURE, and MANAGE. Each function has categories and subcategories that break down into specific outcomes you work toward.
Voluntary, but increasingly expected. The AI RMF is not legally required in the U.S. today. But federal agencies are already referencing it in procurement requirements, several states are incorporating it into AI legislation, and enterprise customers increasingly ask vendors if they follow it. Early adoption pays dividends.
Who Should Implement the NIST AI RMF?
NIST designed the framework to be useful across the entire AI supply chain. That includes:
- AI developers — companies building AI models, systems, or infrastructure
- AI deployers — organizations integrating third-party AI into their products or operations
- AI operators — teams running AI systems in production
- Procurers — enterprises and agencies buying AI solutions
If you're building or using AI in any meaningful way, the framework has something for you. This is particularly true if you're also navigating the EU AI Act compliance guide — the two frameworks are highly complementary and implementing one makes the other significantly easier.
Function 1: GOVERN
GOVERN is the only function that runs continuously across the entire framework — it's what makes the other three functions possible. Without governance, MAP, MEASURE, and MANAGE become ad hoc exercises that don't stick.
GOVERN covers organizational accountability structures, policies, roles, culture, and processes for AI risk management. It's less about technical controls and more about the human and institutional infrastructure required to manage AI responsibly.
What GOVERN requires in practice
To implement GOVERN effectively, your organization needs to:
- Define AI risk tolerance. What level of risk is acceptable for your AI systems? This should connect to your broader organizational risk appetite and vary by use case (a content recommendation system vs. an AI-assisted medical diagnosis tool have very different thresholds).
- Assign clear ownership. Who is responsible for AI risk decisions? Who is accountable when something goes wrong? NIST recommends naming a senior AI risk function — not just delegating it to engineering.
- Establish policies and processes. Document how you design, test, deploy, monitor, and retire AI systems. These don't need to be long — they need to be real and followed.
- Create feedback channels. Build mechanisms for internal teams and external stakeholders to raise AI-related concerns without fear of retaliation.
- Promote AI risk literacy. Train employees at all levels — not just technical staff — to understand AI risks relevant to their roles.
Common GOVERN gaps
Organizations often have policies on paper but no real ownership. A VP signs off on an "AI Ethics Policy" and it sits in a SharePoint folder. NIST's GOVERN function isn't satisfied by documentation — it requires demonstrable practice. Ask: if an AI incident happened today, who would own the response? If the answer is unclear, your governance is incomplete.
Function 2: MAP
MAP is where you establish the context for each AI system and identify what risks are relevant. Before you can measure or manage risks, you have to know what you're dealing with. MAP forces you to think systematically about who is affected by your AI, how it could fail, and where harms could emerge.
What MAP requires in practice
MAP activities happen at the system level — you do MAP for each AI system you build or deploy. Key activities include:
- Define the intended purpose and context. What is the AI system designed to do? In what settings will it operate? What population does it affect?
- Identify stakeholders and affected groups. Who benefits from this system? Who could be harmed by errors or misuse? Include people who interact with the system indirectly.
- Catalog potential harms. Think broadly — technical failures, bias and discrimination, privacy violations, security vulnerabilities, misuse, and broader societal impacts. NIST uses the term "negative impacts" broadly.
- Assess AI system categorization. Is this a high-risk system? What regulatory or legal frameworks apply? Map your system against risk tiers.
- Understand the supply chain. If you're using third-party models, data, or infrastructure, what do you know about their provenance and risk posture?
The MAP mindset
MAP is fundamentally an exercise in intellectual honesty. It requires asking uncomfortable questions about your own system: where could it discriminate? where could it be manipulated? where could it fail in ways that harm users? Teams that rush through MAP or treat it as a compliance checkbox end up with MEASURE and MANAGE processes that miss the most important risks.
Function 3: MEASURE
MEASURE takes what you identified in MAP and subjects it to rigorous analysis. The goal is to go from "we think this could be a problem" to "here's what we've tested, what we've found, and what the data shows." MEASURE is where AI risk management gets technical.
What MEASURE requires in practice
- Define metrics for trustworthiness. NIST identifies several trustworthiness dimensions: accuracy, reliability, interpretability, privacy, security, fairness, and accountability. Define measurable proxies for each that matters to your system.
- Test before deployment. Evaluate AI performance on diverse datasets, adversarial inputs, edge cases, and population subgroups. Document your methodology and results.
- Evaluate bias and fairness. Run disaggregated analysis on performance metrics across demographic groups relevant to your use case. "The model is 95% accurate" is meaningless if it's 99% accurate for one group and 85% for another.
- Assess robustness and security. Test how the system behaves under distribution shift, data poisoning, adversarial prompts, or unusual inputs. AI systems often fail in unexpected ways.
- Monitor in production. Measurement doesn't stop at launch. Implement ongoing monitoring for model drift, performance degradation, and emerging failure modes.
MEASURE in LLM contexts
If you're deploying large language models, MEASURE requires extra thought. Traditional accuracy metrics don't capture hallucination rates, prompt injection vulnerabilities, or harmful output tendencies. You'll need red teaming, adversarial testing, and output monitoring pipelines specifically designed for generative AI.
Function 4: MANAGE
MANAGE is where risk knowledge becomes risk action. Using what you learned in MAP and MEASURE, MANAGE involves prioritizing risks, implementing mitigations, and creating response processes for when things go wrong. It's also where you build the organizational muscle for continuous improvement over time.
What MANAGE requires in practice
- Prioritize risks. You can't fix everything at once. Use the severity and likelihood data from MEASURE to rank risks and allocate resources rationally.
- Implement mitigations. Technical controls (output filtering, human-in-the-loop, rate limiting), process controls (approval workflows, audit logging), and monitoring alerts. Document what you did and why.
- Define incident response procedures. What happens when an AI system causes harm? Who is notified? How is the system suspended or modified? Who communicates with affected parties?
- Plan for decommissioning. AI systems don't run forever. Plan how you will retire them safely, including data handling, user notification, and knowledge transfer.
- Feed lessons back into GOVERN. After incidents or reviews, update your policies, training, and risk tolerances. MANAGE feeds GOVERN, which improves MAP and MEASURE — this is a cycle, not a one-time project.
AI RMF Profiles and the Playbook
NIST introduces two additional concepts that make the framework practical: Profiles and the AI RMF Playbook.
Profiles
A Profile is a customized prioritization of the framework's outcomes for a specific AI system or organizational context. You create a "Current Profile" (where you are today) and a "Target Profile" (where you want to be). The gap between them becomes your implementation roadmap.
Creating Your First Profile
A basic profile documents, for each of the four functions:
- Which categories are relevant to this AI system?
- What is our current maturity level for each category (1–4 scale)?
- What is our target maturity level?
- What actions close the gap?
Start with your highest-risk AI system. Even a rough first profile surfaces the most important gaps.
The AI RMF Playbook
The NIST AI RMF Playbook is a free companion document that translates each framework category into specific suggested actions. It's the most practical starting point for teams implementing the framework, because it moves from "you should have governance" to "here are 12 specific things you can do to build AI governance."
NIST AI RMF vs. the EU AI Act
If you're building AI that touches European users, you're likely thinking about both the EU AI Act and the NIST AI RMF. Here's how they relate:
| Dimension | NIST AI RMF | EU AI Act |
|---|---|---|
| Nature | Voluntary framework | Binding regulation |
| Geographic scope | U.S.-origin, globally applicable | EU/EEA + any company serving EU users |
| Risk approach | Organization-defined risk tiers | Prescriptive risk categories (Unacceptable, High, Limited, Minimal) |
| Requirements | Principles and practices | Specific conformity obligations by risk tier |
| Enforcement | Market/reputational | Fines up to €35M or 7% of global revenue |
| Certification | No formal certification | Conformity assessment required for high-risk AI |
The good news: NIST AI RMF implementation does significant heavy lifting toward EU AI Act compliance. The GOVERN function maps well to the Act's requirements for quality management systems and oversight. The MEASURE function overlaps with its testing and validation requirements. Organizations implementing the RMF thoughtfully are rarely starting from scratch when they approach EU AI Act compliance.
How to Get Started
Implementing the NIST AI RMF doesn't require a multi-year program. Here's a practical sequencing for most organizations:
-
Take stock of your AI systems Build a simple inventory: what AI systems does your organization build or use? For each, note the use case, affected populations, and whether you've done any formal risk assessment.
-
Run a gap assessment Evaluate your current state against each of the four functions. Our free gap assessment tool can help surface the biggest gaps quickly.
-
Prioritize your highest-risk system Pick the AI system that poses the most risk — either because of its use case or because it's most central to your business — and start your first Profile there.
-
Establish GOVERN foundations Define ownership, document your risk tolerance, and set up a basic AI incident response process. These pay dividends across every AI system you operate.
-
Work through MAP and MEASURE for your priority system Conduct a structured risk identification exercise, then test the hypotheses it generates. Document your findings and close critical gaps.
-
Build MANAGE processes Establish monitoring, incident response, and a regular cadence for risk review. Set a calendar reminder to revisit your Profile quarterly.
-
Expand to additional AI systems With governance foundations in place, extend the framework to your other AI systems. Leverage the Playbook to guide each new Profile.
Know Your AI Governance Gaps
Our free assessment benchmarks your current AI risk management practices against the NIST AI RMF and flags the highest-priority gaps in minutes.
Start Free Assessment →Frequently Asked Questions
Is the NIST AI RMF mandatory?
No. The framework is voluntary for private-sector organizations. However, it is increasingly referenced in federal procurement requirements, state AI legislation, and enterprise customer due diligence. It's also being cited in international AI governance standards, so voluntary today doesn't mean voluntary forever.
Does implementing the NIST AI RMF lead to a certification?
There is no official NIST AI RMF certification. The framework is a self-directed risk management tool. That said, NIST is developing supporting infrastructure, and third-party assurance offerings are emerging in the market. Some organizations document their implementation in a formal attestation they share with customers.
What's the difference between the AI RMF and NIST's Cybersecurity Framework?
The NIST Cybersecurity Framework (CSF) addresses information security risks — protecting confidentiality, integrity, and availability of data and systems. The AI RMF addresses AI-specific risks: bias, explainability, reliability failures, misuse, and the broader societal impacts of AI systems. Many AI systems need both frameworks: CSF for the security controls, AI RMF for the AI-specific governance layer.
How does the NIST AI RMF apply to generative AI?
NIST has released supplementary guidance specifically for generative AI (NIST AI 100-1 and subsequent publications). Generative AI introduces unique risks — hallucination, harmful content generation, prompt injection, and intellectual property concerns — that require tailored approaches within the MAP and MEASURE functions. The core framework still applies; you adapt the practices for the generative AI context.
Can small companies implement the NIST AI RMF?
Yes, and NIST designed it to be scalable. A startup with one AI product can do a meaningful first implementation in a few weeks. You don't need a dedicated AI risk team or an enterprise GRC platform. A documented Profile, clear ownership, and a testing protocol is a solid foundation — and a competitive differentiator when enterprise customers come asking about your AI governance.
How does the NIST AI RMF relate to ISO 42001?
ISO/IEC 42001 is the international standard for AI management systems, published in 2023. It's more prescriptive than the NIST AI RMF and follows the ISO management system structure (similar to ISO 27001). Organizations operating globally may find ISO 42001 compliance useful for certification purposes, while using the NIST AI RMF as a complementary risk identification and management tool. The two frameworks are broadly compatible.