Why the AI AUP is the foundation of AI governance
Every AI governance conversation — with auditors, with customers, with regulators — eventually circles back to the same question: do you have a policy? Not a risk register entry. Not a control. A policy that says what your staff can and cannot do with AI tools, what data is and isn't permitted to flow through them, and who is accountable when something goes wrong.
In real SOC 2 and ISO 27001 audits in 2025 and 2026, the AI acceptable use policy (AI AUP) has become the baseline expectation. If you have one and can show evidence of staff acknowledgment, you're ahead of most organizations. If you don't have one, auditors will note the gap regardless of what else your AI governance controls program looks like.
The reason it matters so much is structural: policies are the foundation everything else rests on. Access controls, logging, vendor due diligence — auditors assess all of these against the policy. Without the policy, there's no baseline to measure controls against, no standard employees are expected to meet, and no documented accountability chain.
The audit sequence matters: Auditors typically request policy documents before examining controls. An AI AUP that predates your audit — with evidence of staff training and sign-off — shows intent and maturity. A policy written after the auditor asks for it shows reactivity. Both satisfy the requirement; only one signals a functioning program.
Standalone document vs. policy section: which approach is right?
You have two structural choices: a standalone AI acceptable use policy, or an AI section added to your existing acceptable use policy. Both satisfy auditors. The right choice depends on your organization's size, AI exposure, and document management preferences.
| Approach | Best for | Advantages | Watch out for |
|---|---|---|---|
| Standalone AI AUP | Organizations with significant AI usage, AI-forward products, or multiple AI tools across functions | Easier to keep current as AI landscape evolves; targeted training artifact; clearer ownership | Document sprawl; policy library becomes harder to maintain; duplication with existing AUP |
| Section in existing AUP | Organizations with limited AI footprint, tight document management, or early-stage programs | Fewer documents to maintain; no duplication; existing review cadence applies | AI section can get lost in a long document; harder to surface for targeted training; may lag the pace of AI change |
For most organizations starting their AI governance journey, a dedicated section in the existing AUP is the faster path. Once AI usage grows to the point where the section is longer than the rest of the policy — typically 18–24 months in for most tech companies — a standalone document makes more sense.
This post uses the standalone format for the template section. If you're adding a section to an existing policy, the content requirements are identical — just embedded in your existing document structure.
Defining AI tool tiers: the cornerstone of a workable policy
The most common failure mode in AI acceptable use policies is trying to name specific tools: "ChatGPT is approved, Gemini is not." The AI landscape changes faster than any policy review cycle. A tool-specific policy is outdated before the ink is dry.
The better approach — and the one most mature programs use — is a tiered classification system. You define the tiers once; you classify tools into tiers as they're evaluated. The policy governs behavior by tier, not by product name.
| Tier | Description | Public data | Internal data | Confidential data | Customer data |
|---|---|---|---|---|---|
| Tier 1 — Approved Enterprise | Vendor-assessed, DPA in place, enterprise API or workspace subscription confirmed, security review complete | ✓ Permitted | ✓ Permitted | ⚠ With approval | ⚠ With DPA review |
| Tier 2 — Approved General | Approved for use but without full enterprise controls — consumer or professional tiers without completed DPA or security assessment | ✓ Permitted | ⚠ Non-sensitive only | ✗ Prohibited | ✗ Prohibited |
| Tier 3 — Under Evaluation | Requested by staff, security review in progress, not yet formally approved or denied | ⚠ With manager approval | ✗ Prohibited | ✗ Prohibited | ✗ Prohibited |
| Tier 4 — Unapproved / Shadow AI | Not reviewed, not approved, discovered in use or under consideration without IT/security submission | ✗ Prohibited | ✗ Prohibited | ✗ Prohibited | ✗ Prohibited |
Maintain a living AI tool register alongside this policy. The policy defines the tiers; a separate register (a spreadsheet or your GRC tool) lists each tool with its assigned tier, assessment date, and reviewing owner. Auditors will ask to see both — the policy tells them what the rules are; the register shows them you're actively managing the tool landscape.
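If the register lives in a machine-readable export, the tier rules can double as a programmatic guardrail — for example, a pre-flight check in an internal AI gateway or browser extension. The sketch below is illustrative only: the tool names and register fields are hypothetical, and the table's ⚠ cells are conservatively mapped to an "approval" outcome that routes to a human approver.

```python
from enum import Enum

class Data(Enum):
    PUBLIC = "public"
    INTERNAL = "internal"
    CONFIDENTIAL = "confidential"
    CUSTOMER = "customer"

# Transcription of the tier table above. "approval" covers the table's
# ⚠ cells: use is possible, but only after human sign-off.
RULES = {
    1: {Data.PUBLIC: "permitted", Data.INTERNAL: "permitted",
        Data.CONFIDENTIAL: "approval", Data.CUSTOMER: "approval"},
    2: {Data.PUBLIC: "permitted", Data.INTERNAL: "approval",   # non-sensitive only
        Data.CONFIDENTIAL: "prohibited", Data.CUSTOMER: "prohibited"},
    3: {Data.PUBLIC: "approval",                               # manager approval
        Data.INTERNAL: "prohibited", Data.CONFIDENTIAL: "prohibited",
        Data.CUSTOMER: "prohibited"},
    4: {d: "prohibited" for d in Data},
}

# Hypothetical register rows -- in practice, export these from the
# spreadsheet or GRC tool that holds your AI tool register.
REGISTER = {
    "example-llm-enterprise": {"tier": 1, "assessed": "2026-01-15", "owner": "it-security"},
    "example-chat-free": {"tier": 2, "assessed": "2026-02-01", "owner": "it-security"},
}

def check(tool: str, data: Data) -> str:
    """Answer the day-to-day question: may this data go into this tool?"""
    entry = REGISTER.get(tool)
    tier = entry["tier"] if entry else 4  # unknown tool = Tier 4 / shadow AI
    return RULES[tier][data]

print(check("example-chat-free", Data.CONFIDENTIAL))  # prohibited
print(check("not-in-register", Data.PUBLIC))          # prohibited
```

The branch that matters most is the default: any tool not found in the register is treated as Tier 4.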
What every AI acceptable use policy must include
The sections below are the minimum required content for an AI AUP that satisfies both SOC 2 and ISO 27001 auditor expectations. The framework mapping table later in this post shows the specific criteria and controls each section addresses — though in practice, every section is relevant to both.
Purpose and scope
States why the policy exists, who it applies to, and when it takes effect. Scope must be explicit: all employees, contractors, and third parties with access to company systems. Auditors check this to confirm the policy applies to the relevant population.
Include the policy version number and review cadence here. Annual review is the minimum; quarterly review is best practice for organizations with high AI exposure.
Definitions
Define key terms precisely so the policy is enforceable and unambiguous. Minimum definitions required:
- AI tool / AI system — any software using machine learning, large language models, or generative AI to produce outputs, recommendations, or decisions
- Generative AI — AI capable of producing text, code, images, audio, or other content in response to prompts
- AI-generated output — any content, code, recommendation, or decision produced by an AI tool
- Shadow AI — any AI tool used for company work without formal approval through the process defined in this policy
- Data classification tiers — reference your existing data classification policy here, or define Public / Internal / Confidential / Restricted if you don't have one
AI tool approval process
Describes how staff request approval for new AI tools and how IT/security evaluates them. This is the mechanism that converts shadow AI into managed AI. Without a documented approval process, even well-intentioned staff have no path to get tools approved.
The process should take no longer than 5–10 business days for standard tools — a slow process drives shadow AI usage just as much as a missing one. Define who owns the approval decision (typically the CISO or IT security lead) and where the tool register is maintained.
Permitted and prohibited uses
The operational core of the policy. Defines explicitly what is and is not allowed. Permitted uses are best framed as "approved for [tier] tools with [data classification]." Prohibited uses should be concrete, not abstract.
Prohibited uses that must be named explicitly:
- Inputting customer personal data, financial data, or health data into Tier 2 or unapproved AI tools
- Inputting authentication credentials, API keys, or secrets into any AI tool
- Using AI tools to generate content that will be represented as purely human-authored in contexts where that distinction is material (legal, regulatory, certain client deliverables)
- Using AI tools to make final decisions about individuals without human review in regulated contexts
- Using consumer-tier AI accounts for work when an enterprise equivalent is available
- Sharing AI-generated outputs externally without review against accuracy and confidentiality requirements
Data classification rules for AI inputs
The tier table from the previous section belongs here. This is the day-to-day reference section staff will return to: "I have this type of data and want to use this tool — is that allowed?" The answer should be findable in under 30 seconds.
Explicitly address the most common edge cases auditors have seen cause incidents:
- Screenshots or document uploads containing embedded confidential data
- Code that contains environment variables, connection strings, or API keys
- Support ticket content that includes customer identifiers or usage data
- Prompt templates stored in shared tools that may contain sensitive context
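Tooling helps with these edge cases more than policy text does. As a sketch — with deliberately simple, illustrative regexes that are nowhere near exhaustive — a pre-send check can flag the most obvious credential patterns before a prompt leaves for an AI tool. A real deployment would use a maintained secret-scanning library; treat this as a speed bump, not a control.

```python
import re

# Illustrative patterns only -- they catch the obvious cases named in
# this policy section, not the long tail a dedicated scanner covers.
PATTERNS = {
    "aws_access_key": re.compile(r"\bAKIA[0-9A-Z]{16}\b"),
    "private_key": re.compile(r"-----BEGIN (?:RSA |EC )?PRIVATE KEY-----"),
    "connection_string": re.compile(r"\b\w+://[^\s:@]+:[^\s:@]+@[\w.-]+", re.I),
    "env_secret": re.compile(r"\b\w*(?:API_KEY|SECRET|TOKEN|PASSWORD)\w*\s*=\s*\S+", re.I),
}

def scan_for_secrets(text: str) -> list[str]:
    """Return the names of any suspicious patterns found in outbound text."""
    return [name for name, pattern in PATTERNS.items() if pattern.search(text)]

prompt = "Debug this: DATABASE_URL=postgres://admin:hunter2@db.internal/prod"
hits = scan_for_secrets(prompt)
if hits:
    print(f"Blocked: possible secrets detected ({', '.join(hits)})")
```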
Output review and accountability
AI outputs require human review before use in contexts that affect customers, compliance obligations, or external communications. This section defines the accountability principle: the individual who uses an AI-generated output is responsible for its accuracy and appropriateness — not the AI tool.
Define the review requirement by risk tier:
- High risk (customer-facing, legal, regulatory, financial) — mandatory human review and approval before use
- Medium risk (internal communications, internal documentation) — review recommended, individual accountability applies
- Lower risk (drafts, brainstorming, personal productivity) — individual judgment applies
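Teams that route work product through a review workflow can encode this rule directly. A minimal sketch, assuming hypothetical context labels — adapt them to however your organization tags deliverables:

```python
# Hypothetical context labels; the mapping mirrors the risk tiers above.
HIGH_RISK = {"customer_facing", "legal", "regulatory", "financial"}
MEDIUM_RISK = {"internal_comms", "internal_docs"}

def review_requirement(context: str) -> str:
    """Map a work-product context to its review obligation."""
    if context in HIGH_RISK:
        return "mandatory human review and approval before use"
    if context in MEDIUM_RISK:
        return "review recommended; individual accountability applies"
    return "individual judgment applies"

print(review_requirement("regulatory"))
# mandatory human review and approval before use
```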
Incident reporting
Staff must know what constitutes an AI-related incident and how to report it. Link to your existing incident response procedure rather than duplicating it. Define AI-specific reportable events:
- Accidental input of confidential or personal data into an unapproved or Tier 2 tool
- Discovery of a colleague using an unapproved AI tool for work purposes
- AI-generated output containing data that should not have been accessible (potential data leakage)
- Suspected prompt injection or manipulation of an AI system in your product
- AI output distributed externally that was later found to be materially inaccurate
Enforcement
Policies without enforcement teeth are observations waiting to happen. Reference your existing disciplinary procedures by name — this links the AI AUP into your broader HR and security policy framework and shows auditors that violations have consequences.
State clearly that violations will be handled consistently with other information security policy violations, scaled to severity. This is not about punishing people for honest mistakes — it's about creating a documented accountability structure that auditors can point to.
Roles and responsibilities
ISO 27001 specifically requires documented roles and responsibilities for information security controls. This section satisfies that requirement for AI governance. Minimum roles to define:
- Policy owner — responsible for maintaining and reviewing this policy (typically CISO or Head of Security)
- AI tool approver — responsible for evaluating and approving tool requests
- All employees and contractors — responsible for reading, acknowledging, and complying with this policy
- Managers — responsible for ensuring their teams complete AI awareness training and use only approved tools
Find Your AI Governance Gaps Before Your Auditor Does
Our free gap assessment covers AI governance controls alongside your full SOC 2 compliance program and ISO 27001 program — in a single pass.
Start Free Assessment →
Full policy template
The following is a ready-to-adapt template. Replace all bracketed placeholders with your organization's specifics. This template is designed to satisfy both SOC 2 and ISO 27001 requirements simultaneously.
§1 Purpose and Scope
This policy governs the use of artificial intelligence tools and systems by [Organization Name] personnel. It applies to all employees, contractors, consultants, and third parties who use company systems or perform work on behalf of [Organization Name], regardless of location.
The purpose of this policy is to ensure that AI tools are used in a manner consistent with [Organization Name]'s information security requirements, data protection obligations, and applicable laws and regulations — and to protect customers, employees, and the organization from risks that arise from unmanaged AI usage.
Effective date: [Date] | Version: [1.0] | Review cycle: Annual
§2 Definitions
- AI tool / AI system: Any software application, platform, or API that uses machine learning, large language models, or generative AI techniques to produce outputs, recommendations, or automated decisions in response to inputs.
- Generative AI: AI systems capable of generating text, code, images, audio, video, or other content in response to prompts or other inputs.
- AI-generated output: Any content, code, recommendation, decision, or other artifact produced by an AI tool, whether used directly or incorporated into a larger work product.
- Shadow AI: Any AI tool used for work purposes without having been submitted for and granted approval through the AI tool request process described in §3.
- Approved AI tool: An AI tool that has been evaluated and formally approved by [IT Security / CISO] and listed in the [Organization Name] AI Tool Register.
- Data classification: The sensitivity tier assigned to information under [Organization Name]'s Data Classification Policy — Public, Internal, Confidential, or Restricted.
§3 AI Tool Approval Process
All AI tools used for work purposes must be approved prior to use. The approval process is as follows:
- Submit an AI tool request via [ticketing system / email to security@] including: tool name, intended use, data types that will be processed, subscription tier, and vendor security documentation if available.
- [IT Security / CISO] will complete evaluation within [5–10] business days and assign the tool to an approval tier.
- Approved tools are listed in the AI Tool Register maintained by [IT Security] and accessible at [internal link].
- Using a tool while it is under evaluation is prohibited unless explicitly authorized in writing by [CISO / IT Security Lead].
- Previously approved tools may be downgraded or removed from the approved list if vendor terms change materially, a security incident occurs, or on annual re-review. Staff will be notified within [5] business days of any such change.
§4 Permitted Uses
AI tools may be used for work purposes subject to the data handling rules in §5. Approved uses include but are not limited to: drafting and editing internal and external communications, writing and reviewing code, summarizing internal documents, research and analysis using public information, generating templates and frameworks, and productivity tasks not involving sensitive data.
All AI tool use must be consistent with the tier and data classification rules in §5. Any use case not clearly covered by this policy requires advance written approval from [CISO / IT Security Lead].
§5 Data Classification Rules for AI Inputs
The following rules govern what data may be input into AI tools based on the tool's approval tier:
- Tier 1 (Approved Enterprise): Public and Internal data permitted. Confidential data permitted with written approval from [CISO]. Restricted and customer personal data permitted only with written approval from [CISO] and confirmation that a Data Processing Agreement is in place.
- Tier 2 (Approved General): Public and non-sensitive Internal data only. Confidential, Restricted, and all customer or personal data strictly prohibited.
- Tier 3 (Under Evaluation): Public data only, with manager approval. All other data prohibited.
- Unapproved / Shadow AI: No use permitted. Discovery of shadow AI usage must be reported per §7.
Regardless of tool tier, the following are prohibited in all AI tool inputs: authentication credentials, API keys, and secrets; source code containing embedded credentials; personal health information; payment card data; and government-issued identification numbers.
§6 Output Review and Accountability
AI tools produce outputs that may be inaccurate, biased, confidential, or legally problematic. The individual using the AI tool is solely responsible for reviewing and validating AI-generated outputs before use.
AI-generated outputs used in customer-facing deliverables, legal or regulatory filings, financial reporting, or external communications must be reviewed and approved by a qualified human reviewer before distribution. The use of AI does not transfer or reduce individual accountability for the accuracy, appropriateness, or consequences of any work product.
[Organization Name] does not represent AI-generated content as purely human-authored in contexts where that distinction is material to the recipient.
§7 Incident Reporting
The following events must be reported immediately via [incident reporting channel]:
- Accidental input of Confidential, Restricted, or personal data into a Tier 2 or unapproved AI tool
- Discovery of shadow AI usage within the organization
- AI-generated output that appears to contain data not intentionally provided as input (potential data leakage or injection)
- Suspected prompt injection, manipulation, or compromise of an AI system used in a product or service
- External distribution of AI-generated content later found to be materially inaccurate or to contain confidential information
Incident response procedures are governed by the [Organization Name] Incident Response Policy. AI-related incidents are in scope for that policy and subject to the same severity classification, escalation, and notification requirements.
§8 Enforcement
Violations of this policy are subject to disciplinary action consistent with [Organization Name]'s [HR / Employee Conduct Policy]. The severity of the response will be proportionate to the nature and impact of the violation, ranging from mandatory remedial training for unintentional minor violations to termination and legal action for willful or serious breaches.
This policy applies equally to all personnel within scope regardless of seniority or tenure.
§9 Roles and Responsibilities
- Policy Owner ([CISO / Head of Security]): Maintains, reviews, and updates this policy on the defined review cycle; approves exceptions; owns the AI Tool Register.
- IT Security Team: Evaluates AI tool requests; maintains the AI Tool Register; monitors for shadow AI usage; provides technical guidance on data handling requirements.
- All Employees and Contractors: Read and acknowledge this policy; complete AI security awareness training; use only approved tools; report incidents and suspected violations promptly.
- People Managers: Ensure direct reports complete required training; enforce compliance within their teams; escalate observed policy violations.
§10 Policy Review and Acknowledgment
This policy is reviewed annually by the Policy Owner. Out-of-cycle reviews are triggered by material changes to [Organization Name]'s AI tool landscape, significant changes to vendor terms, applicable regulatory changes, or a serious AI-related security incident.
All personnel within scope must acknowledge this policy upon joining [Organization Name] and annually thereafter. Acknowledgment records are maintained by [HR / IT Security] and constitute evidence of policy distribution for audit purposes.
Framework mapping: where the AI AUP fits in SOC 2 and ISO 27001
The AI AUP doesn't live in isolation — it feeds into and satisfies specific requirements across both frameworks. Here's how each policy section maps to audit evidence:
| Policy section | SOC 2 criteria satisfied | ISO 27001 controls satisfied |
|---|---|---|
| Purpose and Scope (§1) | CC1.1 — COSO principle of demonstrating commitment to integrity | Clause 5.1 — Leadership and commitment; Clause 4.3 — Scope |
| Tool Approval Process (§3) | CC6.7 — Restriction of information transmission and movement; CC9.2 — Vendor and business partner risk management | A.5.9 — Inventory of assets; A.5.10 — Acceptable use of assets; A.5.19 — Supplier security |
| Data Classification Rules (§5) | C1.1 — Confidentiality commitments; CC6.1 — Logical access security | A.5.12 — Classification of information; A.5.13 — Labelling of information; A.5.10 — Acceptable use |
| Output Review and Accountability (§6) | PI1.1 — Processing integrity; CC4.1 — Monitoring controls | A.8.16 — Monitoring activities; Clause 8.1 — Operational planning and control |
| Incident Reporting (§7) | CC7.3 — Incident response procedures | A.5.26 — Response to information security incidents; Clause 10.2 — Nonconformity and corrective action |
| Roles and Responsibilities (§9) | CC1.3 — Structures, reporting lines, and authorities; CC2.2 — Information security communication | Clause 5.3 — Organizational roles, responsibilities, and authorities; A.6.3 — Information security awareness |
| Acknowledgment records (§10) | CC1.4 — Commitment to competence | Clause 7.2 — Competence; Clause 7.3 — Awareness; A.6.3 — Training |
Rolling out the AI AUP: making it stick
Writing the policy is the easy part. Getting genuine compliance — the kind that holds up in an audit and actually protects the organization — requires a rollout that goes beyond "email it to everyone and log who opens it."
- Conduct a shadow AI audit first. Before publishing the policy, run a 2-week shadow AI discovery exercise using DNS logs, web proxy data, or your endpoint security tool (a minimal log-scanning sketch follows this list). The results tell you which tools are actually in use, which need to be fast-tracked into Tier 1 or Tier 2, and where the highest-risk behaviors are happening. A policy that ignores the tools people are already using will generate immediate workarounds.
- Classify your existing AI tools before launch. Populate the AI Tool Register with every tool discovered in the audit, assigned to a tier. On day one of the policy, staff should be able to look up any tool they're already using and know its status. A blank register on launch day undermines credibility.
- Build a fast-path approval process. The approval process in §3 is only effective if it's actually faster than just using the tool and hoping no one notices. Target 3–5 business days for standard consumer tools and 7–10 days for enterprise evaluation. Announce the SLA and hold to it.
- Deliver targeted training, not a policy link. Staff need to understand the why behind the rules — specifically, that inputting customer data into a consumer AI tool may put that data in a training dataset. A 10-minute training video with concrete examples of permitted vs. prohibited inputs is more effective than a policy PDF. Track completion as audit evidence.
- Collect acknowledgment signatures. Every person in scope must sign (electronically is fine) acknowledging they've read the policy. This is audit evidence. Store acknowledgment records in your HR system or GRC tool with timestamps. Do this at onboarding and annually thereafter.
- Review and update at least annually. Set a calendar reminder. The AI landscape in 12 months will look different. New tools will have been adopted, old tools may have changed their data terms, and your own product may have evolved. An outdated policy is almost as bad as no policy — auditors check version dates.
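For step 1, the discovery pass can start simply. The sketch below assumes a CSV export of DNS or proxy logs with the queried domain in a known column, plus an illustrative (and deliberately incomplete) domain list — in practice you'd maintain that list alongside the AI Tool Register and lean on your proxy or endpoint vendor's reporting.

```python
import csv
from collections import Counter

# Illustrative, incomplete list -- maintain your own alongside the register.
AI_DOMAINS = {
    "chat.openai.com", "api.openai.com", "claude.ai", "api.anthropic.com",
    "gemini.google.com", "copilot.microsoft.com", "perplexity.ai",
}

def shadow_ai_hits(log_path: str, domain_field: str = "query") -> Counter:
    """Count log lines whose queried domain matches a known AI tool domain.

    Assumes a CSV export with one row per request; adjust `domain_field`
    to match your log schema.
    """
    hits = Counter()
    with open(log_path, newline="") as f:
        for row in csv.DictReader(f):
            domain = (row.get(domain_field) or "").lower().rstrip(".")
            if any(domain == d or domain.endswith("." + d) for d in AI_DOMAINS):
                hits[domain] += 1
    return hits

# Usage (hypothetical export file):
# for domain, count in shadow_ai_hits("dns_export.csv").most_common():
#     print(f"{count:6d}  {domain}")
```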
Track your full program: The AUP is step one. Our interactive AI Governance Checklist covers all 48 items across policies, controls, risk register, vendor due diligence, training, and shadow AI — with progress that saves in your browser.
Common mistakes that create audit findings
These are the AI AUP failures that show up most frequently in real audit examinations. All are avoidable with the right policy structure.
- Policy scope that excludes contractors. AI data leakage via contractor usage is as damaging as via employees. If your scope says "employees" only, you have a gap. Make sure the scope explicitly covers all personnel and third parties with system access.
- Naming specific tools instead of defining tiers. "ChatGPT is approved" becomes an observation when Anthropic releases Claude and your team starts using it. Tier-based policies stay current without requiring constant policy revisions.
- No acknowledgment records. The most common SOC 2 finding: a solid policy, no evidence it was distributed or acknowledged. Auditors need signatures or training completion records to satisfy the awareness criteria. A policy no one has acknowledged is a control that can't be evidenced.
- Policy date is older than 12 months. Auditors note when policies haven't been reviewed in over a year. In a space moving as fast as AI, a 2023 policy in a 2026 audit signals that AI governance is not being actively managed.
- No AI Tool Register to accompany the policy. The policy defines the tier system; auditors will ask to see the register. A policy with no supporting register is a framework with no implementation evidence.
- Prohibited uses are too abstract. "Do not use AI in ways that violate data protection obligations" is unenforceable. Staff need concrete examples: no customer PII in Tier 2 tools, no credentials in any AI tool, no unreviewed AI content in regulatory filings.
- No connection to the incident response policy. AI incidents — data leakage via prompt, shadow AI discovered in use — need a clear reporting path. If your AI AUP doesn't reference your IR policy, staff don't know what to do when something goes wrong, and auditors see a gap in your incident management chain.
Related reading: The AI AUP is one piece of a broader AI governance program. For the full picture of controls, risk register requirements, and vendor due diligence, see our guide to AI governance controls for SOC 2 and ISO 27001. For organizations building AI products, ISO 42001 and the NIST AI RMF add governance layers beyond what an AUP alone covers. Use the AI Governance Checklist to track your full program in one place.
If your organization is subject to the EU AI Act: An AI AUP is necessary but not sufficient. The EU AI Act imposes additional requirements around high-risk AI systems, transparency, and documentation that go beyond acceptable use governance. Your AUP should be consistent with — but does not replace — EU AI Act compliance obligations.
Check Your Full AI Compliance Posture
The AI AUP is the foundation. Our free gap assessment checks it alongside your complete AI governance controls, SOC 2 program, and ISO 27001 program.
Start Free Assessment →
Frequently Asked Questions
Is an AI acceptable use policy required for SOC 2?
SOC 2 doesn't require an AI AUP by name, but if AI tools touch in-scope systems or data, auditors expect policies to cover them. The AI AUP satisfies CC1.4 (competence and training), CC6.1 (logical access policies), and C1.1 (confidentiality commitments) when AI systems are in scope. In 2026, most SOC 2 auditors are explicitly asking for it as part of examination fieldwork.
Is an AI acceptable use policy required for ISO 27001?
ISO 27001 Annex A 5.10 requires acceptable use of information assets to be documented. AI tools are information assets. If your acceptable use policy doesn't address them, you're technically out of compliance with A.5.10 and likely also A.5.9 (asset inventory) and A.5.12 (information classification). A dedicated AI AUP or a clear AI section in your existing AUP resolves this.
What's the difference between an AI AUP and an AI governance policy?
An AI AUP governs individual behavior — what you can and cannot do with AI tools. An AI governance policy governs organizational decisions about AI — how AI systems are approved, risk-assessed, monitored, and decommissioned. For SOC 2 and ISO 27001, the AUP is the more immediate gap for most organizations. AI governance policy becomes relevant when you're building or deploying AI systems as products, and especially when pursuing ISO 42001.
Should the AI AUP be standalone or part of our existing AUP?
Either works for auditors. Standalone is easier to keep current and makes a cleaner training artifact. Embedded is simpler to manage if your policy library is already large. Most organizations start embedded and move to a standalone document once the AI section outweighs the rest of the policy.
How do we handle staff who are already using unapproved AI tools?
Run a shadow AI audit before publishing the policy. Any tool found in widespread use deserves an expedited review and either fast-track approval or an explicit statement of why it's prohibited. Announcing a policy that bans tools people already depend on — without offering an approved alternative — drives underground usage rather than eliminating it. The goal is managed AI use, not no AI use.
How often should the AI AUP be reviewed?
Annually at minimum. Given how fast the AI vendor landscape changes — and how quickly data handling terms can shift — some organizations with high AI exposure do semi-annual reviews. Any material change to your AI tool stack, a relevant vendor incident, or a new regulatory development should trigger an out-of-cycle review. Always update the version number and date when changes are made.