🤖 AI Compliance

AI Governance Controls for SOC 2 and ISO 27001

Your team is using AI. Your auditor is going to ask about it. Here's exactly what to add to your existing compliance program — and what auditors actually look for.

⏱ 12 min read 🎯 Security leads, compliance teams, CTOs

The problem with AI and existing compliance programs

Most SOC 2 and ISO 27001 programs were built before AI became a daily operational reality. The policies reference acceptable use of company systems. The risk assessments cover phishing, access control, and vendor risk. The controls address encryption, logging, and incident response.

None of that is wrong. But it's increasingly incomplete. Staff are using ChatGPT, Claude, Copilot, and dozens of other AI tools to do their jobs — often without any formal policy, risk assessment, or oversight. Customer data is being pasted into public LLM prompts. Code is being generated by AI and shipped without AI-specific review. Decisions are being influenced by AI outputs with no accountability trail.

When your auditor asks "how do you govern AI usage within your organization?" — and in 2026 they will — your existing program may have nothing to say.

The good news: you don't need a separate AI compliance program. You need to extend the one you have. A handful of policy additions, a few new controls, and an updated risk register cover the vast majority of what auditors expect to see.

The data is stark: IBM's 2025 Cost of a Data Breach Report found that among organizations reporting an AI-related security incident, 97% lacked proper AI access controls and 63% lacked AI governance policies entirely. This is not a theoretical risk.

What auditors are actually asking in 2026

Neither SOC 2 nor ISO 27001 has AI-specific criteria — but both frameworks are broad enough to catch AI risks under existing requirements. Here's what's showing up in real audits:

| Area | SOC 2 auditor questions | ISO 27001 auditor questions |
|---|---|---|
| Policy | Do you have an acceptable use policy that addresses AI tools? | Does your ISMS scope address AI risks? Is AI covered in your information security policy? |
| Risk | Have you assessed risks introduced by AI tools your staff or systems use? | Do AI-related risks appear in your risk register with documented treatment decisions? |
| Vendor | Have you assessed your AI service providers? Do you have DPAs or contracts covering AI data handling? | Are AI vendors in your supplier inventory? Have you completed Annex A 5.19–5.22 assessments for AI providers? |
| Access | Who has access to AI systems? Are AI API keys treated as secrets under your access control program? | Are AI systems included in your asset inventory and access control policy? |
| Monitoring | Do you log AI system usage? How do you detect misuse or anomalous AI behavior? | Are AI systems included in your logging and monitoring controls? |
| Output validation | If AI influences decisions or outputs affecting customers, how do you validate accuracy and reliability? | How do you manage risks from AI-generated outputs — hallucination, bias, confidentiality breaches? |

The auditor framing that matters: SOC 2 auditors don't need a dedicated AI section in your report. They need your existing controls to credibly cover AI systems that are in scope. If an AI system touches customer data, it's in scope — and the evidence requirements are the same as any other system.

Policies to add to your existing program

These are the policy documents that directly address AI governance. Some can be added as standalone documents; others can be incorporated into existing policies as a new section.

Track your progress: Use our interactive AI Governance Checklist to work through every policy, control, risk item, and vendor check in one place — with progress that saves in your browser.

Controls to add or update: SOC 2

SOC 2 doesn't have AI-specific Trust Services Criteria — but existing criteria apply directly to AI systems. Here's where AI governance slots into your existing control structure:

CC6 — Logical Access

AI API Key and Credential Management

AI service API keys are secrets and must be treated as such — stored in secrets management, rotated on schedule, scoped to least privilege, and included in access reviews. Revocation on offboarding is commonly missed for AI tools.
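As a sketch of what "treated as secrets" can look like in code, here is a minimal loader that refuses missing or stale keys rather than falling back to a hardcoded value. The environment variable name and the 90-day rotation window are illustrative choices, not requirements of either framework:

```python
import os
from datetime import datetime, timedelta

MAX_KEY_AGE = timedelta(days=90)  # example rotation window; set per your policy

def load_ai_api_key(env_var: str, rotated_at: datetime) -> str:
    """Fetch an AI API key from the environment (populated by your
    secrets manager) and fail loudly if it is missing or overdue
    for rotation, instead of silently using a hardcoded fallback."""
    key = os.environ.get(env_var)
    if not key:
        raise RuntimeError(f"{env_var} not set; fetch it from secrets management")
    if datetime.utcnow() - rotated_at > MAX_KEY_AGE:
        raise RuntimeError(f"{env_var} is past its rotation window")
    return key
```

Failing hard on a stale key is a deliberate design choice: it turns a missed rotation into a visible operational event instead of a silent audit finding.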

CC6 — Logical Access

AI System Access Controls

If you operate AI models or platforms internally, access to model management, training pipelines, and output systems must follow your existing access control and review cadence. Role-based access, MFA, and quarterly reviews apply.

CC7 — System Operations

AI Usage Logging and Monitoring

Log AI system interactions that touch in-scope data — prompts, outputs, API calls, and access events. Define retention periods. Include AI system logs in your SIEM or log aggregation. Auditors will ask what you can see.
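One lightweight way to get this visibility is a wrapper around whatever provider SDK you use. The `call_model` callable and the record fields below are placeholders, and hashing the prompt keeps sensitive text out of the audit log itself:

```python
import hashlib
import json
import logging
import time

audit_log = logging.getLogger("ai.audit")

def logged_ai_call(user: str, prompt: str, call_model) -> str:
    """Wrap an AI API call so every interaction leaves an audit record.
    We log a hash of the prompt, not the raw text, so the log does not
    become a second copy of in-scope data."""
    record = {
        "ts": time.time(),
        "user": user,
        "prompt_sha256": hashlib.sha256(prompt.encode()).hexdigest(),
    }
    output = call_model(prompt)         # your provider SDK call goes here
    record["output_chars"] = len(output)
    audit_log.info(json.dumps(record))  # ship to SIEM / log aggregation
    return output
```

The same pattern extends to API gateways or proxies if you prefer to capture usage centrally rather than in application code.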

CC9 — Risk Mitigation

AI Vendor Risk Assessment

Third-party AI providers (OpenAI, Anthropic, Google, Azure AI) must be included in your vendor risk program. Review their SOC 2 reports, data processing agreements, and sub-processor lists. Document the assessment and any compensating controls.

PI1 — Processing Integrity

AI Output Validation

If AI outputs influence decisions or deliverables that affect customers, document how you validate accuracy and reliability. This doesn't require exhaustive testing — it requires a documented process and human review where it matters.

C1 — Confidentiality

Data Classification Enforcement for AI Inputs

Your data classification policy must explicitly address what data can be input into which AI tools. Confidential and restricted data must be prohibited from public AI services. This needs to be documented, trained, and technically enforced where possible.
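Technical enforcement can start small. The sketch below is a hypothetical pre-submission check, not a complete DLP solution; the two patterns are examples only and should be extended from your own classification policy:

```python
import re

# Illustrative patterns for restricted data; extend per your
# classification policy. This is an example gate, not a full DLP.
RESTRICTED_PATTERNS = {
    "email": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
}

def restricted_data_found(prompt: str) -> list[str]:
    """Return the names of restricted-data patterns detected in a
    prompt. An empty list means the prompt may go to an approved
    AI tool; a non-empty list should block or escalate."""
    return [name for name, rx in RESTRICTED_PATTERNS.items() if rx.search(prompt)]
```

Even a crude gate like this gives you something auditable: a documented, enforced rule rather than a policy that exists only on paper.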

Controls to add or update: ISO 27001

ISO 27001's Annex A controls map well to AI governance. These are the existing controls most directly implicated by AI usage — update them to explicitly address AI, or add AI as a documented risk and treatment in your risk register.

A.5.1 — Policies

Information Security Policy — AI Addendum

Your top-level information security policy should reference AI governance. A brief addendum or new subsection stating that AI tools are subject to the organization's information security requirements and acceptable use policy is sufficient to satisfy auditors.

A.5.9 — Asset Inventory

AI Systems as Information Assets

AI tools, models, and AI service subscriptions are information assets. Add them to your asset inventory with documented ownership, data classification, and risk tier. Auditors reviewing your asset inventory will check whether AI is represented.
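As an assumed shape for such entries (the field names are illustrative and should be aligned with your existing inventory schema), a minimal representation plus one auditor-style check might look like:

```python
from dataclasses import dataclass

@dataclass
class AIAsset:
    """One row of an AI asset inventory. Field names here are
    examples; mirror whatever schema your inventory already uses."""
    name: str
    owner: str               # accountable individual or team
    data_classification: str # highest class of data it may touch
    risk_tier: str

def unowned_assets(inventory: list[AIAsset]) -> list[str]:
    """Flag what an auditor would flag first: AI assets with no
    documented owner."""
    return [a.name for a in inventory if not a.owner]
```

Running checks like this on a schedule turns the inventory from a static document into evidence of an operating control.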

A.5.19–5.22 — Supplier Security

AI Vendor Due Diligence

AI providers are suppliers that process your data. Complete supplier security assessments for all material AI vendors. Review their security certifications (SOC 2, ISO 27001), DPA terms, data retention policies, and sub-processor lists. Document findings and risk acceptance.

A.6.3 — Awareness

AI Security Awareness Training

Add an AI module to your annual security awareness program. Cover approved tools, data handling rules for AI inputs, AI-powered social engineering risks, and individual accountability. Document completion — this is one of the most commonly tested controls.

A.8.15 — Logging

AI System Logging

Extend your logging controls to include AI system interactions where sensitive data is processed. Define what is logged, retention periods, and review procedures. This integrates with your existing SIEM or log management without requiring separate infrastructure.

A.8.22 — Web Filtering

Shadow AI Detection and Control

Most organizations underestimate shadow AI usage. Use DNS filtering, web proxy logs, or endpoint tools to detect unapproved AI tool usage. Define a process for staff to request approval for new AI tools. Shadow AI is the most commonly raised observation in ISO 27001 audits touching AI.
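A first-pass detector can be as simple as diffing proxy or DNS logs against a known-AI domain list. The domain sets below are small illustrative samples you would maintain yourself, fed by your tool approval process:

```python
# Illustrative lists only; maintain these from your own threat intel
# and AI tool approval process.
KNOWN_AI_DOMAINS = {"chat.openai.com", "claude.ai", "gemini.google.com", "perplexity.ai"}
APPROVED_AI_DOMAINS = {"chat.openai.com"}

def shadow_ai_hits(proxy_log_lines: list[str]) -> set[str]:
    """Scan proxy/DNS log lines for AI services that are known but
    not approved; each hit is a candidate for follow-up, not an
    automatic violation."""
    hits = set()
    unapproved = KNOWN_AI_DOMAINS - APPROVED_AI_DOMAINS
    for line in proxy_log_lines:
        for domain in unapproved:
            if domain in line:
                hits.add(domain)
    return hits
```

Pair the detection with the approval-request process described above, so a hit leads to a conversation and a decision rather than just a block.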

Updating your risk register

For ISO 27001, every AI-related risk must appear in your risk register with a documented likelihood, impact, and treatment decision. For SOC 2, AI risks should feed into your risk assessment process and inform control selection.

At minimum, a risk register updated for 2026 should cover data leakage through AI inputs, shadow AI usage, AI vendor dependency, and unvalidated AI outputs.

Practical tip: You don't need to add all of these at once. Add the risks that are most material to your organization first — typically data leakage and shadow AI — then expand the register over subsequent review cycles. A risk register that addresses five AI risks credibly is better than one that lists fifteen superficially.

Use our AI Governance Checklist to track which risks you've documented and what's still outstanding.

AI vendor due diligence: what to actually check

AI service providers deserve the same vendor risk treatment as any other supplier that processes your data. In practice, the questions to answer for each material AI vendor cover data handling and retention terms, whether your data is used for model training, security certifications such as SOC 2 or ISO 27001, DPA availability, and sub-processor transparency.

Consumer vs. enterprise tiers matter enormously. ChatGPT Free and ChatGPT Enterprise have completely different data handling terms. Claude.ai (consumer) and Claude API or Workspaces have different protections. Audit which tier your organization is actually using — not which tier you assume you're using.

When to consider ISO 42001

The controls and policies above extend your existing SOC 2 or ISO 27001 program to cover AI governance adequately for most organizations. But if AI is central to your product — if you build AI systems, train models, or deploy AI in high-stakes decisions — you should consider ISO 42001, the dedicated AI management system standard.

Before pursuing ISO 42001, make sure your foundational AI governance documents are in place. The AI acceptable use policy is the starting point every auditor — for SOC 2, ISO 27001, and ISO 42001 alike — will ask for first.

ISO 42001 goes further than what SOC 2 and ISO 27001 require. It adds 38 controls specifically designed for AI governance: impact assessments for AI systems, AI lifecycle controls from design through decommissioning, transparency requirements, bias management, and human oversight mechanisms.

The relationship between the three standards is complementary, not competitive. Many organizations are now pursuing all three: SOC 2 for US customer assurance, ISO 27001 for information security governance, and ISO 42001 for AI-specific governance. If you have ISO 27001, you have a significant head start on ISO 42001 — the management system infrastructure transfers directly.

| Standard | What it governs | Best for |
|---|---|---|
| SOC 2 | Security, availability, confidentiality, processing integrity, privacy of your service | US enterprise customer assurance |
| ISO 27001 | Information security management system — people, processes, technology | Global information security governance and certification |
| ISO 42001 | AI management system — governance, risk, lifecycle, transparency, oversight | Organizations building, deploying, or operating AI systems at scale |

See Where Your AI Governance Gaps Are

Our free gap assessment tool covers AI governance controls alongside SOC 2, ISO 27001, HIPAA, and more — in a single pass. Find out exactly where your program falls short before your next audit.

Start Free Assessment →

Frequently Asked Questions

Does SOC 2 require AI-specific controls?

SOC 2 doesn't have dedicated AI criteria, but existing Trust Services Criteria apply to AI systems in scope. If AI processes customer data or influences security, availability, confidentiality, or processing integrity, auditors expect to see controls. In 2026, auditors are increasingly asking about AI validation, access controls, output monitoring, and AI vendor due diligence as standard parts of the examination.

Does ISO 27001 require AI governance?

ISO 27001 doesn't explicitly mandate AI controls, but if your risk assessment doesn't mention AI and your acceptable use policy is silent on it, auditors may raise observations or nonconformities around risk management and policy coverage. Adding AI to your risk register, asset inventory, and acceptable use policy is relatively straightforward and provides solid audit evidence without a heavyweight program.

What's the biggest AI compliance mistake organizations make?

Underestimating shadow AI. Most organizations assume their staff use one or two approved AI tools. A shadow AI audit typically reveals far more — browser extensions, productivity tools with AI features, consumer AI subscriptions used for work. You can't govern what you haven't inventoried. An AI tool audit is the right first step before writing any policy.

What is an AI acceptable use policy?

An AI acceptable use policy defines which AI tools staff can use, which data classifications are permitted with each tool, prohibited uses (such as inputting customer PII into public LLMs), and accountability for AI-generated outputs. It's the foundational governance document for AI usage and the first thing auditors ask for under both SOC 2 and ISO 27001. We've published a full guide with a ready-to-use template covering every required section.

Can I use the same AI governance controls for SOC 2 and ISO 27001?

Yes — and you should. The AI controls described in this post are designed to satisfy both frameworks simultaneously. An AI acceptable use policy, vendor risk assessment, access controls, and logging program all satisfy requirements under both SOC 2 and ISO 27001. Build once, evidence twice.

How is ISO 42001 different from adding AI controls to ISO 27001?

Adding AI controls to ISO 27001 addresses AI as an information security risk — data leakage, access control, vendor risk. ISO 42001 addresses AI as a governance subject in its own right — fairness, transparency, bias, human oversight, AI system lifecycle, and impact on affected populations. For organizations building or deploying AI products, ISO 42001 is the more comprehensive framework. See our full guide: ISO 42001: What It Is and How to Get Certified.

Do I need a DPA with my AI provider?

If any EU personal data flows through your AI provider — including employee data — you likely need a Data Processing Agreement under GDPR. Most enterprise API tiers of major providers (OpenAI, Anthropic, Google, Microsoft) include DPAs. Consumer and free tiers typically don't. Audit which tier your organization is actually using and get the appropriate agreements in place.