The problem with AI and existing compliance programs
Most SOC 2 and ISO 27001 programs were built before AI became a daily operational reality. The policies reference acceptable use of company systems. The risk assessments cover phishing, access control, and vendor risk. The controls address encryption, logging, and incident response.
None of that is wrong. But it's increasingly incomplete. Staff are using ChatGPT, Claude, Copilot, and dozens of other AI tools to do their jobs — often without any formal policy, risk assessment, or oversight. Customer data is being pasted into public LLM prompts. Code is being generated by AI and shipped without AI-specific review. Decisions are being influenced by AI outputs with no accountability trail.
When your auditor asks "how do you govern AI usage within your organization?" — and in 2026 they will — your existing program may have nothing to say.
The good news: you don't need a separate AI compliance program. You need to extend the one you have. A handful of policy additions, a few new controls, and an updated risk register cover the vast majority of what auditors expect to see.
The data is stark: IBM's 2025 Cost of a Data Breach Report found that among organizations reporting an AI-related security incident, 97% lacked proper AI access controls and 63% lacked AI governance policies entirely. This is not a theoretical risk.
What auditors are actually asking in 2026
Neither SOC 2 nor ISO 27001 has AI-specific criteria, but both frameworks are broad enough to catch AI risks under existing requirements. Here's what's showing up in real audits:
| Area | SOC 2 auditor questions | ISO 27001 auditor questions |
|---|---|---|
| Policy | Do you have an acceptable use policy that addresses AI tools? | Does your ISMS scope address AI risks? Is AI covered in your information security policy? |
| Risk | Have you assessed risks introduced by AI tools your staff or systems use? | Do AI-related risks appear in your risk register with documented treatment decisions? |
| Vendor | Have you assessed your AI service providers? Do you have DPAs or contracts covering AI data handling? | Are AI vendors in your supplier inventory? Have you completed Annex A 5.19–5.22 assessments for AI providers? |
| Access | Who has access to AI systems? Are AI API keys treated as secrets under your access control program? | Are AI systems included in your asset inventory and access control policy? |
| Monitoring | Do you log AI system usage? How do you detect misuse or anomalous AI behavior? | Are AI systems included in your logging and monitoring controls? |
| Output validation | If AI influences decisions or outputs affecting customers, how do you validate accuracy and reliability? | How do you manage risks from AI-generated outputs — hallucination, bias, confidentiality breaches? |
The auditor framing that matters: SOC 2 auditors don't need a dedicated AI section in your report. They need your existing controls to credibly cover AI systems that are in scope. If an AI system touches customer data, it's in scope — and the evidence requirements are the same as any other system.
Policies to add to your existing program
These are the policy documents that directly address AI governance. Some can be added as standalone documents; others can be incorporated into existing policies as a new section.
Track your progress: Use our interactive AI Governance Checklist to work through every policy, control, risk item, and vendor check in one place — with progress that saves in your browser.
- AI Acceptable Use Policy (both frameworks): Defines which AI tools are approved for use, which data classifications are permitted with each tool tier (public vs. enterprise), prohibited uses (no customer PII in public LLMs), and who is accountable for AI-generated outputs. This is the foundational AI governance document and the first thing auditors ask for. See our full AI AUP guide and ready-to-use template.
- AI Tool Classification and Inventory (both frameworks): A register of all AI tools in use across the organization: approved, unapproved (shadow AI), and under evaluation. Classifies each by data handling risk tier. Required for vendor risk assessments and risk register completeness. Most organizations discover far more AI tools than expected when they actually audit.
- AI Risk Assessment Addendum (ISO 27001): Extends your existing risk assessment to cover AI-specific risks: data leakage via public LLMs, shadow AI, prompt injection, model output reliability, training data poisoning, and AI-related social engineering. Each risk needs a documented likelihood, impact score, and treatment decision consistent with your existing methodology.
- AI System Description Addendum (SOC 2): If AI systems are in your SOC 2 scope, your system description must address them. Document what AI components do, what data they process, what third-party AI services are used, and how your controls apply. Auditors need to understand the system before they can evaluate controls over it.
- AI Incident Response Addendum (both frameworks): Extends your incident response plan to cover AI-specific scenarios: prompt injection attacks, sensitive data exposure via AI outputs, AI-generated phishing at scale, and model or API compromise. Response procedures and escalation paths should mirror your existing IR structure.
- Security Awareness Training Update (both frameworks): Your existing security awareness training needs an AI module. Cover approved vs. unapproved tools, data classification rules for AI inputs, recognition that AI-powered phishing no longer has poor grammar, and individual accountability for AI-generated work products. This satisfies training requirements under both SOC 2 CC1.4 and ISO 27001 (Clause 7.2 and Annex A 6.3).
Controls to add or update: SOC 2
SOC 2 doesn't have AI-specific Trust Services Criteria — but existing criteria apply directly to AI systems. Here's where AI governance slots into your existing control structure:
AI API Key and Credential Management
AI service API keys are secrets and must be treated as such — stored in secrets management, rotated on schedule, scoped to least privilege, and included in access reviews. Revocation on offboarding is commonly missed for AI tools.
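For teams on AWS, a quick audit script can surface stale keys before an auditor does. A minimal sketch, assuming AI API keys live in AWS Secrets Manager and are tagged "ai-service" (both assumptions; adapt to your secrets tooling):

```python
from datetime import datetime, timedelta, timezone

import boto3  # assumes AWS credentials are configured in the environment

MAX_KEY_AGE = timedelta(days=90)  # align with your rotation policy

def stale_ai_keys() -> list[str]:
    """Return names of AI service secrets overdue for rotation."""
    client = boto3.client("secretsmanager")
    stale = []
    for page in client.get_paginator("list_secrets").paginate(
        Filters=[{"Key": "tag-value", "Values": ["ai-service"]}]
    ):
        for secret in page["SecretList"]:
            # Fall back to creation date if the key has never been rotated.
            last = secret.get("LastRotatedDate") or secret.get("CreatedDate")
            if last and datetime.now(timezone.utc) - last > MAX_KEY_AGE:
                stale.append(secret["Name"])
    return stale

if __name__ == "__main__":
    for name in stale_ai_keys():
        print(f"ROTATE: {name}")
```

Running this on a schedule and filing the output with your access review evidence turns a policy statement into a demonstrable control.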
AI System Access Controls
If you operate AI models or platforms internally, access to model management, training pipelines, and output systems must follow your existing access control and review cadence. Role-based access, MFA, and quarterly reviews apply.
AI Usage Logging and Monitoring
Log AI system interactions that touch in-scope data — prompts, outputs, API calls, and access events. Define retention periods. Include AI system logs in your SIEM or log aggregation. Auditors will ask what you can see.
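In practice this can be as simple as a wrapper that emits one structured log line per AI call into your existing aggregation. A minimal sketch; log_ai_call is a hypothetical helper, and whether you log raw text or only metadata depends on your data classification rules:

```python
import json
import logging
import time

# Route these records to your existing SIEM or log aggregation via handlers.
logging.basicConfig(level=logging.INFO)
logger = logging.getLogger("ai_usage")

def log_ai_call(user_id: str, model: str, prompt: str, response_text: str) -> None:
    """Emit one structured log line per AI API call."""
    logger.info(json.dumps({
        "event": "ai_api_call",
        "ts": time.time(),
        "user": user_id,
        "model": model,
        # Log sizes (or hashes) instead of raw text when prompts may
        # contain sensitive data and retention rules apply.
        "prompt_chars": len(prompt),
        "response_chars": len(response_text),
    }))

log_ai_call("jdoe", "gpt-4o", "Summarize this ticket...", "Summary: ...")
```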
AI Vendor Risk Assessment
Third-party AI providers (OpenAI, Anthropic, Google, Azure AI) must be included in your vendor risk program. Review their SOC 2 reports, data processing agreements, and sub-processor lists. Document the assessment and any compensating controls.
AI Output Validation
If AI outputs influence decisions or deliverables that affect customers, document how you validate accuracy and reliability. This doesn't require exhaustive testing — it requires a documented process and human review where it matters.
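One lightweight way to enforce the human-review step is a release gate in code. A minimal sketch; the use-case names and the review mechanism are hypothetical stand-ins for whatever your own process uses:

```python
from typing import Optional

# Use cases whose AI outputs reach customers and therefore require review.
CUSTOMER_FACING = {"support_reply", "marketing_copy", "contract_summary"}

def release_ai_output(output: str, use_case: str, reviewed_by: Optional[str]) -> str:
    """Block release of customer-facing AI output without a named reviewer."""
    if use_case in CUSTOMER_FACING and not reviewed_by:
        raise PermissionError(
            f"AI output for '{use_case}' requires documented human review"
        )
    # Recording reviewed_by gives you the accountability trail auditors look for.
    return output

# Raises unless a reviewer is recorded:
release_ai_output("Dear customer...", "support_reply", reviewed_by="a.singh")
```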
Data Classification Enforcement for AI Inputs
Your data classification policy must explicitly address what data can be input into which AI tools. Confidential and restricted data must be prohibited from public AI services. This needs to be documented, trained, and technically enforced where possible.
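Technical enforcement can start small. A minimal illustrative pre-submission filter; the patterns below are assumptions and no substitute for a proper DLP tool, but they show the shape of the control:

```python
import re

# Illustrative patterns only; a real deployment needs a maintained DLP ruleset.
BLOCKED_PATTERNS = {
    "email": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "us_ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "card_number": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
}

def check_ai_input(text: str) -> list[str]:
    """Return the names of blocked patterns found in a prompt."""
    return [name for name, rx in BLOCKED_PATTERNS.items() if rx.search(text)]

hits = check_ai_input("Customer SSN is 123-45-6789, email jane@example.com")
if hits:
    raise ValueError(f"Prompt blocked before reaching the LLM: {hits}")
```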
Controls to add or update: ISO 27001
ISO 27001's Annex A controls map well to AI governance. These are the existing controls most directly implicated by AI usage — update them to explicitly address AI, or add AI as a documented risk and treatment in your risk register.
Information Security Policy — AI Addendum
Your top-level information security policy should reference AI governance. A brief addendum or new subsection stating that AI tools are subject to the organization's information security requirements and acceptable use policy is sufficient to satisfy auditors.
AI Systems as Information Assets
AI tools, models, and AI service subscriptions are information assets. Add them to your asset inventory with documented ownership, data classification, and risk tier. Auditors reviewing your asset inventory will check whether AI is represented.
AI Vendor Due Diligence
AI providers are suppliers that process your data. Complete supplier security assessments for all material AI vendors. Review their security certifications (SOC 2, ISO 27001), DPA terms, data retention policies, and sub-processor lists. Document findings and risk acceptance.
AI Security Awareness Training
Add an AI module to your annual security awareness program. Cover approved tools, data handling rules for AI inputs, AI-powered social engineering risks, and individual accountability. Document completion — this is one of the most commonly tested controls.
AI System Logging
Extend your logging controls to include AI system interactions where sensitive data is processed. Define what is logged, retention periods, and review procedures. This integrates with your existing SIEM or log management without requiring separate infrastructure.
Shadow AI Detection and Control
Most organizations underestimate shadow AI usage. Use DNS filtering, web proxy logs, or endpoint tools to detect unapproved AI tool usage. Define a process for staff to request approval for new AI tools. Shadow AI is the most commonly raised observation in ISO 27001 audits touching AI.
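A proxy or DNS log sweep is often enough to get started. A minimal sketch; the log layout, domain list, and approved set are all assumptions to adapt:

```python
# Known AI tool domains; maintain your own list, this one is a starting point.
AI_DOMAINS = {
    "chatgpt.com", "chat.openai.com", "claude.ai", "gemini.google.com",
    "copilot.microsoft.com", "perplexity.ai",
}
APPROVED = {"chatgpt.com"}  # whatever your AUP actually approves

def shadow_ai_hits(log_lines):
    """Yield (user, domain) pairs for unapproved AI tool traffic."""
    for line in log_lines:
        parts = line.split()
        # Assumed log layout: "timestamp user domain ..."; adapt to yours.
        if len(parts) < 3:
            continue
        user, domain = parts[1], parts[2]
        if domain in AI_DOMAINS and domain not in APPROVED:
            yield user, domain

sample = ["2026-01-15T09:12:03 jdoe claude.ai GET /chat"]
for user, domain in shadow_ai_hits(sample):
    print(f"Unapproved AI tool: {user} -> {domain}")
```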
Updating your risk register
For ISO 27001, every AI-related risk must appear in your risk register with a documented likelihood, impact, and treatment decision. For SOC 2, AI risks should feed into your risk assessment process and inform control selection.
These are the AI risks that should appear in any risk register updated for 2026 (a structured example entry follows the list):
- Data leakage via public AI tools — staff inputting confidential or personal data into public LLMs where data may be used for training or exposed. Likelihood: high. Treatment: acceptable use policy, technical controls, training.
- Shadow AI usage — unapproved AI tools in use across the organization without security review. Likelihood: high. Treatment: detection controls, approval process, awareness training.
- Prompt injection attacks — adversarial inputs to AI systems that manipulate outputs or extract data. Likelihood: medium for organizations with customer-facing AI. Treatment: input validation, output monitoring, security testing.
- AI vendor supply chain risk — third-party AI providers suffering breaches, outages, or policy changes that affect your data or operations. Likelihood: medium. Treatment: vendor assessments, contractual protections, backup procedures.
- AI-generated phishing at scale — attackers using AI to generate highly convincing, personalized phishing at volume. Likelihood: high and rising. Treatment: updated security awareness training, technical email controls.
- AI output reliability failure — AI hallucinations or biased outputs influencing decisions or customer-facing content. Likelihood: medium. Treatment: human review processes, output validation controls, documented accountability.
- Regulatory risk from AI-assisted decisions — AI outputs influencing decisions about individuals in ways that implicate GDPR, anti-discrimination law, or sector-specific regulations. Likelihood: context-dependent. Treatment: human oversight, impact assessment, legal review.
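To keep treatment decisions consistent and auditable, it helps to record entries in a structured form. A minimal sketch of one entry under a simple likelihood-times-impact methodology; the field names and scales are assumptions, so mirror whatever your existing register uses:

```python
from dataclasses import dataclass, field

@dataclass
class Risk:
    risk_id: str
    description: str
    likelihood: int       # 1 (rare) to 5 (almost certain)
    impact: int           # 1 (negligible) to 5 (severe)
    treatment: str        # accept / mitigate / transfer / avoid
    controls: list[str] = field(default_factory=list)

    @property
    def score(self) -> int:
        """Simple likelihood x impact score for prioritization."""
        return self.likelihood * self.impact

data_leakage = Risk(
    risk_id="AI-01",
    description="Data leakage via public AI tools",
    likelihood=4,
    impact=4,
    treatment="mitigate",
    controls=["AI acceptable use policy", "input filtering", "awareness training"],
)
```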
Practical tip: You don't need to add all of these at once. Add the risks that are most material to your organization first — typically data leakage and shadow AI — then expand the register over subsequent review cycles. A risk register that addresses five AI risks credibly is better than one that lists fifteen superficially.
Use our AI Governance Checklist to track which risks you've documented and what's still outstanding.
AI vendor due diligence: what to actually check
AI service providers deserve the same vendor risk treatment as any other supplier that processes your data. In practice, the questions to answer for each material AI vendor (see the gap-check sketch after this list) are:
- Does the vendor have a SOC 2 Type II report or ISO 27001 certificate? Enterprise tiers of major providers (OpenAI, Anthropic, Google, Azure, AWS) typically do. Free and consumer tiers often don't have the same contractual protections.
- Do you have a Data Processing Agreement in place? Required under GDPR if any EU personal data flows through the system. Most enterprise AI tiers include DPAs — consumer tiers typically don't.
- Does the vendor use your data to train models? Most enterprise API tiers explicitly opt out of training data use. Verify this in the contract and document it.
- Who are the sub-processors? Large AI providers use cloud infrastructure sub-processors. Know who they are and whether your contractual protections flow down.
- What is the data retention period? Understand how long prompt and completion data is retained and under what conditions it may be reviewed by vendor staff.
- What is the incident notification commitment? Your vendor contracts should require breach notification within a defined timeframe — typically 72 hours to align with GDPR obligations.
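A small script can turn that checklist into a repeatable gap check per vendor. A minimal sketch; the evidence keys and the sample record are illustrative, not a statement about any vendor's actual posture:

```python
REQUIRED_EVIDENCE = [
    "soc2_or_iso27001",      # current SOC 2 Type II report or ISO 27001 cert
    "dpa_signed",            # Data Processing Agreement in place
    "training_opt_out",      # contractual opt-out from model training
    "subprocessor_list",     # sub-processor list reviewed
    "retention_documented",  # prompt/completion retention period documented
    "breach_notification",   # incident notification commitment (e.g. 72 hours)
]

def due_diligence_gaps(vendor: dict) -> list[str]:
    """Return the checks that are missing or failing for a vendor record."""
    return [item for item in REQUIRED_EVIDENCE if not vendor.get(item)]

example_vendor = {"soc2_or_iso27001": True, "dpa_signed": True}
print(due_diligence_gaps(example_vendor))
# -> ['training_opt_out', 'subprocessor_list', 'retention_documented', 'breach_notification']
```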
Consumer vs. enterprise tiers matter enormously. ChatGPT Free and ChatGPT Enterprise have completely different data handling terms. Claude.ai (consumer) and Claude API or Workspaces have different protections. Audit which tier your organization is actually using — not which tier you assume you're using.
When to consider ISO 42001
The controls and policies above extend your existing SOC 2 or ISO 27001 program to cover AI governance adequately for most organizations. But if AI is central to your product — if you build AI systems, train models, or deploy AI in high-stakes decisions — you should consider ISO 42001, the dedicated AI management system standard.
Before pursuing ISO 42001, make sure your foundational AI governance documents are in place. The AI acceptable use policy is the starting point every auditor — for SOC 2, ISO 27001, and ISO 42001 alike — will ask for first.
ISO 42001 goes further than what SOC 2 and ISO 27001 require. It adds 38 controls specifically designed for AI governance: impact assessments for AI systems, AI lifecycle controls from design through decommissioning, transparency requirements, bias management, and human oversight mechanisms.
The relationship between the three standards is complementary, not competitive. Many organizations are now pursuing all three: SOC 2 for US customer assurance, ISO 27001 for information security governance, and ISO 42001 for AI-specific governance. If you have ISO 27001, you have a significant head start on ISO 42001 — the management system infrastructure transfers directly.
| Standard | What it governs | Best for |
|---|---|---|
| SOC 2 | Security, availability, confidentiality, processing integrity, privacy of your service | US enterprise customer assurance |
| ISO 27001 | Information security management system — people, processes, technology | Global information security governance and certification |
| ISO 42001 | AI management system — governance, risk, lifecycle, transparency, oversight | Organizations building, deploying, or operating AI systems at scale |
See Where Your AI Governance Gaps Are
Our free gap assessment tool covers AI governance controls alongside SOC 2, ISO 27001, HIPAA, and more — in a single pass. Find out exactly where your program falls short before your next audit.
Start Free Assessment →
Frequently Asked Questions
Does SOC 2 require AI-specific controls?
SOC 2 doesn't have dedicated AI criteria, but existing Trust Services Criteria apply to AI systems in scope. If AI processes customer data or influences security, availability, confidentiality, or processing integrity, auditors expect to see controls. In 2026, auditors are increasingly asking about AI validation, access controls, output monitoring, and AI vendor due diligence as standard parts of the examination.
Does ISO 27001 require AI governance?
ISO 27001 doesn't explicitly mandate AI controls, but if your risk assessment doesn't mention AI and your acceptable use policy is silent on it, auditors may raise observations or nonconformities around risk management and policy coverage. Adding AI to your risk register, asset inventory, and acceptable use policy is relatively straightforward and provides solid audit evidence without a heavyweight program.
What's the biggest AI compliance mistake organizations make?
Underestimating shadow AI. Most organizations assume their staff use one or two approved AI tools. A shadow AI audit typically reveals far more — browser extensions, productivity tools with AI features, consumer AI subscriptions used for work. You can't govern what you haven't inventoried. An AI tool audit is the right first step before writing any policy.
What is an AI acceptable use policy?
An AI acceptable use policy defines which AI tools staff can use, which data classifications are permitted with each tool, prohibited uses (such as inputting customer PII into public LLMs), and accountability for AI-generated outputs. It's the foundational governance document for AI usage and the first thing auditors ask for under both SOC 2 and ISO 27001. We've published a full guide with a ready-to-use template covering every required section.
Can I use the same AI governance controls for SOC 2 and ISO 27001?
Yes — and you should. The AI controls described in this post are designed to satisfy both frameworks simultaneously. An AI acceptable use policy, vendor risk assessment, access controls, and logging program all satisfy requirements under both SOC 2 and ISO 27001. Build once, evidence twice.
How is ISO 42001 different from adding AI controls to ISO 27001?
Adding AI controls to ISO 27001 addresses AI as an information security risk — data leakage, access control, vendor risk. ISO 42001 addresses AI as a governance subject in its own right — fairness, transparency, bias, human oversight, AI system lifecycle, and impact on affected populations. For organizations building or deploying AI products, ISO 42001 is the more comprehensive framework. See our full guide: ISO 42001: What It Is and How to Get Certified.
Do I need a DPA with my AI provider?
If any EU personal data flows through your AI provider — including employee data — you likely need a Data Processing Agreement under GDPR. Most enterprise API tiers of major providers (OpenAI, Anthropic, Google, Microsoft) include DPAs. Consumer and free tiers typically don't. Audit which tier your organization is actually using and get the appropriate agreements in place.