🤖 AI Compliance

AI Governance Checklist

Every policy, control, risk item, and vendor check you need to add to your SOC 2 or ISO 27001 program. Tick items off as you go — progress saves in your browser.

📋 Policies (10 items)

📖 Deep dive: See our full AI Acceptable Use Policy guide for a complete template covering all required sections, and AI governance controls for SOC 2 and ISO 27001 for the broader program context.
AI Acceptable Use Policy (AI AUP) is documented
Covers approved tools, prohibited uses, data classification rules, output accountability, and enforcement. The first document auditors request.
SOC 2 + ISO 27001 · Must have
AI AUP scope includes contractors and third parties — not employees only
A scope limited to employees misses a common source of AI data leakage.
SOC 2 + ISO 27001 · Must have
AI tool tier classification system defined (Approved Enterprise / Approved General / Under Evaluation / Unapproved)
Tier-based policies stay current as the AI tool landscape changes — avoid naming specific products in policy text.
SOC 2 + ISO 27001 · Must have
AI Tool Register maintained and accessible to staff
Lists every approved, under-evaluation, and denied AI tool with tier assignment, assessment date, and owner. Auditors will ask to see this alongside the AUP.
SOC 2 + ISO 27001 · Must have
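Where the register lives as structured data rather than only a spreadsheet, it becomes scriptable for access reviews and shadow AI triage. A minimal Python sketch — the tool names, dates, and owners are illustrative placeholders; the fields mirror the checklist item (tier assignment, assessment date, owner):

```python
from datetime import date

# Illustrative register entries - real tools, dates, and owners will differ.
TOOL_REGISTER = [
    {"tool": "ExampleChat Enterprise", "tier": "Approved Enterprise",
     "assessed": date(2025, 1, 10), "owner": "security@example.com"},
    {"tool": "ExampleChat Free", "tier": "Unapproved",
     "assessed": date(2025, 1, 10), "owner": "security@example.com"},
    {"tool": "SummarizeBot", "tier": "Under Evaluation",
     "assessed": date(2025, 2, 3), "owner": "it@example.com"},
]

def tool_status(name: str) -> str:
    """Look up a tool's tier; anything absent from the register is shadow AI."""
    for entry in TOOL_REGISTER:
        if entry["tool"].lower() == name.lower():
            return entry["tier"]
    return "Not in register - treat as shadow AI"
```

A lookup like `tool_status("SummarizeBot")` returns the assigned tier, and unknown tools fall through to a shadow AI disposition rather than a silent pass.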
AI tool approval process defined with documented SLA (target: 5–10 business days)
A slow or missing approval process drives shadow AI adoption. Staff need a fast, clear path to get tools approved.
ISO 27001
AI Incident Response Addendum added to existing IR plan
Covers AI-specific scenarios: data leakage via prompts, prompt injection, AI-generated phishing, model or API compromise. Mirrors existing IR structure.
SOC 2 + ISO 27001 · Must have
Information security policy references AI governance (addendum or subsection)
ISO 27001 Annex A 5.1 requires the top-level policy to set the frame. A brief AI reference is sufficient.
ISO 27001 A.5.1
AI System Description Addendum added to SOC 2 system description
If AI is in scope, the system description must explain what AI components do, what data they process, and what third-party AI services are used.
SOC 2
AI AUP reviewed within the last 12 months
Auditors note when policies are stale. Given the velocity of the AI landscape, review annually at minimum — semi-annually for organizations with high AI exposure.
SOC 2 + ISO 27001
Policy owner assigned with documented accountability
ISO 27001 Clause 5.3 requires roles and responsibilities to be assigned and documented. Typically CISO or Head of Security.
ISO 27001 Clause 5.3
⚠️ Risk Register (8 items)
Data leakage via public AI tools — documented in risk register
Staff inputting confidential or personal data into public LLMs. Likelihood: high. Treatment: AUP, technical controls, training.
SOC 2 + ISO 27001 · Must have
Shadow AI usage — documented in risk register
Unapproved tools in use without security review. Likelihood: high. Treatment: detection controls, approval process, awareness training.
SOC 2 + ISO 27001 · Must have
Prompt injection attacks — documented in risk register
Adversarial inputs that manipulate AI outputs or extract data. Likelihood: medium for customer-facing AI. Treatment: input validation, output monitoring, security testing.
SOC 2 + ISO 27001
AI vendor supply chain risk — documented in risk register
Third-party AI providers suffering breaches, outages, or policy changes. Likelihood: medium. Treatment: vendor assessments, contractual protections.
SOC 2 + ISO 27001
AI-generated phishing at scale — documented in risk register
Attackers using AI to generate highly convincing, personalized phishing at volume. Likelihood: high and rising. Treatment: updated awareness training, email controls.
SOC 2 + ISO 27001 · High priority
AI output reliability failure — documented in risk register
Hallucinations or biased outputs influencing decisions or customer-facing content. Likelihood: medium. Treatment: human review, output validation, documented accountability.
SOC 2 PI1 · ISO 27001
Regulatory risk from AI-assisted decisions — documented in risk register
AI outputs influencing decisions about individuals can implicate GDPR, anti-discrimination law, or sector regulations. Treatment: human oversight, impact assessment, legal review.
SOC 2 + ISO 27001
Each AI risk has documented likelihood, impact score, and treatment decision
ISO 27001 requires this structure for every risk. SOC 2 auditors expect to see AI risks feeding into your risk assessment process.
ISO 27001 Clause 6.1 · Must have
🔒 Controls (10 items)
AI API keys and credentials stored in secrets management (not hardcoded or in shared docs)
AI service API keys are secrets — must be stored, rotated, scoped to least privilege, and revoked on offboarding.
SOC 2 CC6 · Must have
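As a minimal sketch of the "no hardcoded keys" rule: read the key from an environment variable that a secrets manager injects at runtime, and fail loudly when it is absent. The variable name `AI_PROVIDER_API_KEY` is a placeholder:

```python
import os

def load_ai_api_key(env_var: str = "AI_PROVIDER_API_KEY") -> str:
    """Fetch an AI service key from the environment (populated by a secrets
    manager at deploy time) rather than from source code or shared docs."""
    key = os.environ.get(env_var)
    if not key:
        raise RuntimeError(
            f"{env_var} is not set; fetch it from your secrets manager "
            "instead of hardcoding it or pasting it into shared documents"
        )
    return key
```

Failing at startup when the variable is missing keeps a forgotten key out of source control and makes offboarding revocation a secrets-manager operation rather than a code change.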
AI API keys included in access reviews and offboarding procedures
Offboarding revocation of AI tool access is the most commonly missed gap in access control programs.
SOC 2 CC6 · ISO 27001 A.5.18
AI systems included in asset inventory with owner, classification, and risk tier
AI tools, models, and SaaS subscriptions are information assets. Missing from asset inventory = missing from audit evidence.
ISO 27001 A.5.9 · Must have
AI system interactions logged where sensitive data is processed (prompts, outputs, API calls)
Retention periods defined, logs included in SIEM or log aggregation. Auditors will ask what you can see.
SOC 2 CC7 · ISO 27001 A.8.15
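One way to log AI interactions without copying sensitive prompt text into the log stream is to record hashes and metadata per call; whether full text must also be retained depends on your own classification rules. A sketch with illustrative field names:

```python
import hashlib
import json
import time

def log_ai_interaction(user: str, tool: str, prompt: str, output: str) -> str:
    """Build one JSON log line per AI call. Hashes give an audit trail
    and tamper-evidence without duplicating sensitive content."""
    record = {
        "ts": time.time(),
        "user": user,
        "tool": tool,
        "prompt_sha256": hashlib.sha256(prompt.encode()).hexdigest(),
        "output_sha256": hashlib.sha256(output.encode()).hexdigest(),
        "prompt_chars": len(prompt),
    }
    return json.dumps(record)  # ship this line to your SIEM / log aggregator
```

JSON lines feed directly into most SIEM and log-aggregation pipelines, which is where retention periods get enforced.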
Data classification policy explicitly addresses which data can be input into which AI tool tiers
Must be documented, trained, and technically enforced where possible. Confidential and restricted data prohibited from public AI services.
SOC 2 C1 · ISO 27001 A.5.12 · Must have
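Where "technically enforced where possible" applies, a pre-flight check on prompts bound for public-tier tools is one option. The regex patterns below are crude illustrative stand-ins for a real DLP engine:

```python
import re

# Crude illustrative patterns - a production deployment would use a DLP engine.
BLOCK_PATTERNS = {
    "api_key": re.compile(r"(?i)(api[_-]?key|secret)\s*[:=]\s*\S+"),
    "email": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "card_number": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
}

def check_prompt(prompt: str, public_tier: bool) -> list[str]:
    """Return the names of patterns found in a prompt. Callers block the
    request when the destination is a public-tier tool and anything matches;
    enterprise tiers may permit more under your classification policy."""
    if not public_tier:
        return []
    return [name for name, pat in BLOCK_PATTERNS.items() if pat.search(prompt)]
```

The tier flag matters: the same prompt that is blocked for a public service may be acceptable for an enterprise tier with a no-training DPA in place.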
AI output validation process documented for customer-facing or regulated use cases
Human review process defined and documented — this doesn't require exhaustive testing, but it does require a documented accountability chain.
SOC 2 PI1
Shadow AI detection controls in place (DNS filtering, web proxy logs, or endpoint tools)
Most organizations underestimate shadow AI. You can't govern what you haven't detected.
ISO 27001 A.8.22 · High priority
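The DNS-filtering option can start as a simple log scan that counts queries to known AI domains. In the sketch below, both the domain list and the assumption that the queried domain is the last field on each line are things to adapt to your resolver or proxy export:

```python
from collections import Counter

# Seed list only - maintain your own from the AI Tool Register and threat intel.
AI_DOMAINS = {"chat.openai.com", "claude.ai", "gemini.google.com", "perplexity.ai"}

def scan_dns_log(lines: list[str]) -> Counter:
    """Count queries to known AI domains, assuming each log line ends
    with the queried domain (adapt parsing to your log format)."""
    hits: Counter = Counter()
    for line in lines:
        fields = line.split()
        if fields and fields[-1].lower() in AI_DOMAINS:
            hits[fields[-1].lower()] += 1
    return hits
```

Anything that shows up here but is absent from the AI Tool Register is a shadow AI candidate for the approval queue.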
Role-based access controls applied to internally operated AI systems and model management
If you operate AI models or platforms internally, access to training pipelines and output systems must follow existing access control cadence.
SOC 2 CC6 · ISO 27001 A.5.15
AI systems included in vulnerability management and security testing scope
Prompt injection and model manipulation are testable attack surfaces — include them in penetration testing scope where applicable.
SOC 2 CC7 · ISO 27001 A.8.8
AI-specific incident reporting channel communicated to all staff
Staff must know what constitutes an AI incident and where to report it. Link to existing IR procedure rather than creating a parallel process.
SOC 2 + ISO 27001
๐Ÿข
0/8 โ–ผ
All material AI service providers included in vendor risk register
AI providers are suppliers that process your data. ISO 27001 Annex A 5.19–5.22 applies. SOC 2 CC9.2 applies.
SOC 2 CC9 · ISO 27001 A.5.19 · Must have
SOC 2 Type II report or ISO 27001 certificate reviewed for each material AI vendor
Enterprise tiers of major providers (OpenAI, Anthropic, Google, Azure, AWS) typically have these. Consumer tiers often don't — verify which tier you're actually using.
SOC 2 + ISO 27001 · Must have
Data Processing Agreement (DPA) in place for any AI vendor processing EU personal data
Required under GDPR. Most enterprise API tiers include DPAs — consumer and free tiers typically don't. Audit which tier is actually in use.
GDPR · SOC 2 + ISO 27001 · Must have
Confirmed in contract that vendor does not use your data to train models
Most enterprise API tiers explicitly opt out of training data use. Verify this contractually and document it — don't rely on default assumptions.
SOC 2 + ISO 27001 · Must have
Sub-processor list reviewed and documented for each material AI vendor
Large AI providers use cloud infrastructure sub-processors. Know who they are and whether contractual protections flow down.
ISO 27001 A.5.19
Data retention periods documented for each AI vendor (how long prompt and completion data is held)
Understand what data is retained, for how long, and under what conditions it may be reviewed by vendor staff.
SOC 2 + ISO 27001
Breach notification SLA confirmed in vendor contract (typically 72 hours)
Aligns with GDPR notification obligations and SOC 2 incident response evidence requirements.
SOC 2 + ISO 27001
Consumer vs. enterprise tier confirmed in use for each AI service (not assumed)
ChatGPT Free and ChatGPT Enterprise have completely different data handling terms. Audit the tier actually in use — not the tier you assume is in use.
SOC 2 + ISO 27001 · High priority
🎓 Training (6 items)
AI security awareness module added to annual security training program
Covers: approved vs. unapproved tools, data classification rules for AI inputs, AI-powered social engineering, individual accountability for AI outputs.
SOC 2 CC1.4 · ISO 27001 A.6.3 · Must have
Training completion records maintained with timestamps
Auditors need evidence the training happened — completion logs in your HR or LMS system. Without records, the control can't be evidenced.
SOC 2 + ISO 27001 · Must have
All in-scope personnel have acknowledged the AI AUP (signed or electronic confirmation)
The most common SOC 2 finding: a solid policy, no evidence anyone acknowledged it. Collect acknowledgments at onboarding and annually.
SOC 2 + ISO 27001 · Must have
New hire onboarding includes AI AUP acknowledgment and AI tool training
Don't wait for the annual training cycle — new staff need AI governance context from day one.
SOC 2 + ISO 27001
Training covers concrete examples — not just policy text
Permitted vs. prohibited input examples (code with credentials, customer PII, support tickets) are more effective than abstract policy statements.
ISO 27001 A.6.3
Training updated to reflect AI-powered phishing (no longer identifiable by poor grammar)
AI-generated phishing is highly convincing and personalized. Legacy phishing training that teaches "look for typos" is no longer sufficient.
SOC 2 + ISO 27001 · High priority
👻 Shadow AI (6 items)
Shadow AI discovery exercise completed (DNS logs, web proxy, or endpoint tools)
Run this before publishing any AI policy. You can't govern what you haven't found — and most organizations find more tools than expected.
ISO 27001 A.8.22 · Do this first
All discovered tools classified into AI Tool Register (approved, denied, or under evaluation)
Tools found in the shadow AI audit need a formal disposition — not just an email telling staff to stop using them.
SOC 2 + ISO 27001 · Must have
Widely-used shadow AI tools fast-tracked for approval review (don't just ban popular tools)
Banning tools staff depend on without offering an approved alternative drives underground usage rather than eliminating it.
SOC 2 + ISO 27001
Browser extensions with AI features included in shadow AI scope
AI browser extensions often have access to page content including customer data and are frequently missed in shadow AI discovery exercises.
SOC 2 + ISO 27001 · Commonly missed
Productivity tools with embedded AI features reviewed (Notion AI, Grammarly, Copilot in Office, etc.)
AI features embedded in approved productivity tools may process data under different terms than the base product. Review separately.
SOC 2 + ISO 27001 · Commonly missed
Shadow AI detection controls running continuously (not just a one-time audit)
New AI tools launch constantly. One-time discovery becomes stale quickly — ongoing monitoring is needed to catch new shadow adoption.
ISO 27001 A.8.22


See Your Full Compliance Posture

AI governance is one piece. Our free gap assessment covers SOC 2, ISO 27001, HIPAA, CMMC, and AI in a single pass — with a scored readiness report.

Start Free Assessment →

For deeper context on any section, see: AI Governance Controls · AI Acceptable Use Policy Template · ISO 42001 · NIST AI RMF