AI Acceptable Use Policy Template

by Jon Lober | NOC Technology

Set Your Business Up for Acceptable AI Usage

Surveys consistently show that a majority of employees use AI tools at work without telling their managers. They're drafting emails, summarizing documents, and generating reports—often with company data flowing into systems you don't control and may not have vetted.


Most small businesses across the St. Louis area fall into two camps: they've banned AI outright (which employees quietly ignore), or they have no policy at all (which creates real liability). A middle path exists—one that lets your team use these tools productively while keeping sensitive data where it belongs. This guide walks through what a strong AI acceptable use policy should include, with sample language you can customize for your business.


Why You Need an AI Policy (Even If You're Small)


The assumption that "we're too small for formal policies" breaks down quickly when it comes to AI. Here's why even a 15-person company needs written guidelines.


Liability exposure is the most immediate concern. When an employee pastes client financial data into ChatGPT to help with analysis, that data may become part of a training set (on free tiers) or accessible in ways that are hard to predict. If that client's confidential information surfaces elsewhere, your company may bear responsibility—regardless of whether you knew it was happening.


Data exposure from AI happens quietly. There's no breach notification, no incident alarm—just gradual risk accumulation as employees make individual judgment calls about what's appropriate to share with an AI tool.


Regulatory compliance adds another layer. If you're in healthcare, legal, or financial services, specific data handling requirements apply. HIPAA doesn't consider intent when an employee asks an AI to summarize patient notes. Missouri healthcare practices and law firms are especially exposed here because AI's efficiency gains are highest in exactly the document-heavy industries where compliance requirements are strictest.


What Your AI Acceptable Use Policy Should Cover


A solid AI policy doesn't need to be lengthy. It needs to be clear, practical, and specific enough that employees can actually follow it. Here are the core elements.


Approved tools form the foundation. Your policy should specify which AI tools employees may use for work purposes. This ensures you've vetted the tools your team relies on. You might approve Microsoft Copilot under your enterprise agreement while restricting free-tier ChatGPT because of its data training policies.


Prohibited uses need explicit definition. Be specific: "Employees may not input client names, financial data, personally identifiable information, trade secrets, or confidential business information into any AI tool." Vague language like "don't share sensitive data" leaves too much room for interpretation.


Data classification tells employees what falls into each category. Your policy should include concrete examples so the line between "okay to use" and "never use" is clear.


Review requirements establish when human oversight is mandatory before using AI-generated output in client-facing or official communications.


Training expectations make the policy actionable. Require employees to complete training before using AI tools for work. A policy people don't understand is a policy people won't follow.


Sample policy language:

"Employees may use approved AI tools to assist with general research, drafting, summarization, and ideation tasks. AI-generated output must be reviewed for accuracy before use in any client-facing or official communication. Employees must not input confidential company information, client data, personally identifiable information, financial records, or trade secrets into any AI tool without explicit written approval."


Building and Maintaining Your Approved Tools List


Shadow AI is a real problem. Employees find tools that make their jobs easier and use them—whether or not those tools appear on an approved list. Your policy needs to address this reality directly.


When evaluating a new AI tool, consider: Does the enterprise tier handle data differently than the free tier? (Many do—ChatGPT's free tier trains on user input; Team and Enterprise tiers do not.) What are the vendor's data retention and training policies? Does the tool integrate with your existing security controls like SSO and audit logging?


Your approved tools list should be a living document with a clear owner—someone responsible for reviewing new requests and periodically re-evaluating existing approvals as vendor policies change.
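One way to keep that list honest is to make it machine-readable, so re-evaluation dates can be checked automatically rather than remembered. The sketch below is a minimal, hypothetical example—the tool names, tiers, and dates are illustrative placeholders, not recommendations, and a real registry would likely live in a shared document or ticketing system rather than code.

```python
from datetime import date

# Hypothetical registry entries; names, tiers, and dates are placeholders.
APPROVED_TOOLS = [
    {"name": "Microsoft Copilot", "tier": "Enterprise", "owner": "IT",
     "approved": date(2025, 1, 15), "review_months": 12},
    {"name": "ChatGPT Team", "tier": "Team", "owner": "IT",
     "approved": date(2025, 3, 1), "review_months": 6},
]

def tools_due_for_review(tools, today=None):
    """Return names of tools whose periodic re-evaluation window has passed."""
    today = today or date.today()
    due = []
    for t in tools:
        months_elapsed = ((today.year - t["approved"].year) * 12
                          + (today.month - t["approved"].month))
        if months_elapsed >= t["review_months"]:
            due.append(t["name"])
    return due

print(tools_due_for_review(APPROVED_TOOLS, today=date(2026, 1, 1)))
```

The point of the structure is the `owner` and `review_months` fields: every entry names who is responsible and when it must be looked at again, which is exactly what keeps the list "living" as vendor policies change.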


Sample language for tool approval:

"Only AI tools appearing on the Company Approved AI Tools List may be used for work purposes. Employees who wish to use a tool not on the list must submit a request for evaluation. Using unapproved tools for work-related tasks is prohibited and may result in disciplinary action."


Data Classification for AI Use


Abstract rules about "sensitive data" don't help in the moment. Your policy needs concrete categories with examples.


Green — Generally Safe for AI Input:

  • Publicly available information (anything on your website, public press releases)
  • General industry questions ("What are common approaches to project management?")
  • Grammar and style checks on non-confidential text
  • Brainstorming and ideation without client specifics


Red — Never Input into AI Tools:

  • Client names, contact information, or any identifying details
  • Personally identifiable information (Social Security numbers, dates of birth, addresses)
  • Financial data (account numbers, transaction records)
  • Trade secrets and proprietary processes
  • Legal documents, contracts, or privileged communications
  • Healthcare information (PHI under HIPAA)
  • Employee HR records or performance data


Yellow — Requires Judgment or Approval:

  • Internal processes not documented publicly
  • Aggregated or anonymized data (verify it's truly anonymized)
  • Vendor communications that may contain contractual details


Sample language:

"Before inputting information into an AI tool, employees must classify the data per Company Data Classification Guidelines. Green-classified information may be used freely. Yellow requires manager approval. Red must never be entered into any AI system regardless of the tool's security claims. When classification is unclear, consult IT or Compliance before proceeding."
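For teams that want a technical backstop behind the classification rules, a simple pre-flight scan can catch obvious red-category patterns before text leaves the building. This is a minimal sketch, not a data-loss-prevention product: the patterns below are illustrative (SSNs, card-number-like digit runs, a "DOB" marker), and a real deployment would need far broader coverage plus the human judgment the policy already requires.

```python
import re

# Illustrative red-category patterns only; real coverage (names, addresses,
# account formats) would be much broader and still imperfect.
RED_PATTERNS = {
    "SSN": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "card number": re.compile(r"\b(?:\d{4}[ -]?){3}\d{4}\b"),
    "date of birth": re.compile(r"\bDOB[:\s]", re.IGNORECASE),
}

def preflight_check(text):
    """Return the red-category labels detected in text, if any."""
    return [label for label, pattern in RED_PATTERNS.items()
            if pattern.search(text)]

print(preflight_check("Client SSN is 123-45-6789, please summarize."))
```

A non-empty result means "stop and reclassify," not "sanitize and send"—pattern matching misses most identifying details (names, context, free-text descriptions), so it supplements the classification guidelines rather than replacing them.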


Enforcement and Training: Making the Policy Stick


A policy that sits in a handbook unread creates the illusion of protection without the reality. When you introduce the policy, explain the reasoning behind each restriction. Employees who understand that free-tier ChatGPT may train on their inputs will make better decisions than employees who just know "don't use free ChatGPT."


Train employees on the specific tools you've approved, the specific data categories you've defined, and the specific workflows where AI makes sense for your business—not generic ethics modules.


Make compliance easy. If your approved enterprise AI tool is harder to access than the free alternative, employees will default to the free alternative. Approved tools should be readily available, properly configured, and genuinely useful.

Handle violations progressively. First violations—especially when the employee didn't understand the risk—should trigger retraining and a conversation, not termination. Reserve serious action for deliberate disregard or actual data exposure.


Sample enforcement language:

"Violations will be addressed through progressive discipline. Unintentional first-time violations will result in additional training and a documented conversation. Repeated violations or intentional disregard for data protection requirements may result in formal disciplinary action up to and including termination."


Getting Started


Start with what you have. If you already have acceptable use policies for technology, add an AI-specific addendum rather than starting from scratch. Focus on the five elements: approved tools, prohibited uses, data classification, review requirements, and training expectations.


For organizations looking to implement AI governance alongside a broader technology strategy, NOC's Managed Intelligence services include policy development, tool evaluation, and ongoing monitoring. We also cover the security implications of AI tools as part of our cybersecurity services for St. Louis area businesses.


Frequently Asked Questions

Does our company really need an AI policy if we only have 20 employees?
Yes. Company size doesn't reduce your liability exposure when AI use leads to data leakage or compliance violations. Smaller companies often face greater relative risk because a single incident can damage client relationships representing a larger portion of revenue. A clear policy takes a few hours to implement and can prevent significant problems.
What's the difference between ChatGPT's free tier and enterprise tier for business use?
The key difference is data training. ChatGPT's free and Plus tiers may use your conversations to train future models. ChatGPT Team and Enterprise tiers explicitly exclude your data from training and offer additional controls like SSO and admin oversight. For any business use involving non-public information, enterprise tiers are the appropriate choice.
Can employees use AI tools for personal tasks on work devices?
Most organizations allow limited personal use, but AI tools present unique considerations. Even personal AI use on work devices can create data exposure risks if employees aren't careful about context. Your policy should address this explicitly — either prohibiting personal AI use on work devices or requiring the same data protections regardless of task type.
How do we handle employees who have already been using unapproved AI tools?
Treat policy rollout as a fresh start rather than a retroactive enforcement moment. Announce the policy, provide training, and give employees time to transition to approved tools. For past usage, focus on understanding what data may have been exposed rather than assigning blame. The goal is compliance going forward.
Are there specific AI policy requirements for HIPAA-covered entities?
HIPAA doesn't specifically address AI, but its existing requirements apply directly to AI use. Protected health information cannot be entered into AI tools without a Business Associate Agreement with the vendor and appropriate technical safeguards. For most small healthcare practices, this means PHI should not go into general-purpose AI tools.
How often should we update our AI acceptable use policy?
Review your policy at least annually, and also when major AI tools change their terms of service, when you adopt new AI capabilities, or when your industry issues new guidance. The AI landscape changes quickly — assign specific ownership for policy maintenance rather than leaving it to "someone in IT."