As I observe my clients navigating the intense push for AI innovation, I’m seeing its impact on virtually every area of my core business in audit and risk management. From documentation to governance, internal controls, internal audit, and even board-level decision making, leaders need to figure out how to win and manage in the new AI era. And they need to protect their business with the right level of policies, guardrails, and communication. Having recently returned from the AI+IM Global Summit (by AIIM International) in Atlanta, I’m eager to share insights I gathered from sessions and countless conversations, including whether a formal policy is even necessary.
You Might Not Need an AI Policy (Really!)
At the AI+IM Global Summit, I had the pleasure of listening to Lewis S. Eisen, JD CIP, an international policy expert. He brought up a thought-provoking point: You probably don’t need an AI policy.
Cue the collective gasp from compliance officers everywhere.
In theory, the risks and behaviors associated with AI should already be covered by other policies. After all, AI is just a tool. Policy should be technology agnostic. Remember the early days of social media when organizations had separate policies for LinkedIn, Facebook, and Twitter (now X)? Today, most have consolidated these into a single social media policy.
The same principle applies here. You shouldn’t need to explicitly tell employees “Don’t upload our critical HR data into ChatGPT” if your Acceptable Use, Information Management, Information Security, and Cloud-Based Apps policies are properly written. It’s a bit like not having to specify “Don’t use the office printer to create counterfeit money”—some things should be implicitly covered by existing guidelines.
Reality Check: Why You Might Need One Anyway
In practice, policies serve as formal communication tools designed to drive behaviors. I see the value in crafting an AI policy to clearly convey management’s and the board’s message. Let’s be honest: AI doesn’t necessarily create new risks, but it does spotlight existing vulnerabilities and expose gaps in your current governance structure. Many companies are already behind the curve: information management policies gathering digital dust, privacy programs playing catch-up, and concerns about cloud-based apps multiplying faster than streaming service subscriptions. It’s a bit like installing a security system after noticing that your neighbor’s house has been broken into. Is it reactive? Sure. Is it still prudent? Absolutely.
Policy Basics to Roll Out AI: 3 Focus Areas
Whether you decide to implement a formal AI policy or not, here are critical considerations for your AI rollout:
1. Vision and Communication from Leadership
Policy is less important than clear communication from leadership. A compelling vision for AI in your organization must be driven by leadership. In conversations with organizations worldwide, I’ve observed a spectrum of approaches. Some are aggressively proactive, encouraging AI adoption and providing numerous implementation options. Many others are taking a more conservative stance. They may have the tools—especially Microsoft Copilot—but haven’t taken a definitive position. It’s as if they’ve purchased an expensive exercise bike that’s now serving as an artful clothing rack. Some want AI to complement existing processes, while others seek radical transformation. All these positions are valid, but the key is communicating your stance clearly.
I’ve repeatedly seen advice suggesting we need comprehensive plans before implementing AI. While that sounds logical, it’s not always realistic. I don’t run IBM, Walmart, or Shell—but I do run a company, and I’ve implemented AI without a flawless roadmap and helped enterprise-level clients take steps forward. It’s perfectly acceptable to acknowledge fears around AI and to admit you don’t yet have a perfect vision for how to use it. I’ll be the first to admit: this is challenging. Yet leadership transparency about concerns builds trust faster than pretending to have all the answers.
We all have what I call “mini plans”—using AI to identify and solve specific problems. My biggest challenges? Time constraints and scaling limitations. I’m exploring numerous ways AI might help with these issues, even without a pristine strategic document.
2. Review of Existing Policies and Your Policy Program
Whether you’ve decided to create an AI policy or not, the emergence of AI risks presents an excellent opportunity to review your existing policies. I’m working with clients this year, as part of their internal control, audit, and governance programs, to evaluate policies in light of AI risks and opportunities. This task applies to companies of all sizes, whether their policies are formalized or not. Even smaller businesses need to evaluate their practices and risks in areas such as handling sensitive information and using technology for appropriate purposes.
Pay particular attention to these policy areas:
IT Acceptable Use Policy:
- Where might staff misuse AI tools (ChatGPT, Copilot, Claude) for sensitive or inappropriate tasks?
- Is “shadow AI” occurring—employees using AI tools without IT’s knowledge, creating unsanctioned data flows and compliance gaps?
- Does the policy clearly prohibit uploading company information to cloud-based services without proper authorization?
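One lightweight way to surface shadow AI is to scan web-proxy logs for traffic to known AI-tool domains. Here is a minimal sketch in Python, assuming a CSV proxy log with `user` and `domain` columns and a hand-maintained domain list — both the log format and the domain list are assumptions you would adapt to your own environment:

```python
import csv
from collections import Counter

# Hypothetical watch list of AI-tool domains; extend for your environment.
AI_DOMAINS = {
    "chatgpt.com",
    "chat.openai.com",
    "claude.ai",
    "gemini.google.com",
    "copilot.microsoft.com",
}

def flag_shadow_ai(proxy_log_path):
    """Count requests to known AI-tool domains per user, from a CSV
    proxy log with 'user' and 'domain' columns (a hypothetical format)."""
    hits = Counter()
    with open(proxy_log_path, newline="") as f:
        for row in csv.DictReader(f):
            if row["domain"] in AI_DOMAINS:
                hits[row["user"]] += 1
    return hits
```

A report like this won’t catch everything (personal devices, VPNs), but it gives IT a starting inventory of who is using which tools, which is the first step toward sanctioning or redirecting that usage.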
Information Management and Governance Policy:
- Are data quality and lifecycle risks adequately addressed? Do employees understand that AI models are only as good as the data they consume—like a gourmet chef working with ingredients from the discount bins at the supermarket?
- Is data properly controlled? Do users understand the risks associated with AI generating vast volumes of content that complicate classification, retention, and deletion processes?
- What constitutes a corporate record? Does AI-generated content qualify as a business record subject to retention rules?
IT Security Policy:
- Is the deployment of AI applications secured with clear usage standards for both hosted and third-party models?
- Are routine security assessments of AI models and APIs required?
- Is there monitoring and alerting for AI-related anomalies or unusual behavior patterns?
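Monitoring for AI-related anomalies doesn’t have to start with a full SIEM rule set. As an illustration only — the function name and data shape here are hypothetical — a simple per-user baseline check could flag accounts whose AI API call volume suddenly spikes well above their own history:

```python
from statistics import mean, stdev

def flag_anomalous_usage(daily_counts, threshold_sigma=3.0):
    """Flag users whose AI API call count today is far above their own
    historical mean (a simple z-score check; production monitoring would
    live in your SIEM or observability platform).

    daily_counts: {user: [day1, day2, ...]} with today's count last.
    """
    alerts = []
    for user, counts in daily_counts.items():
        history, today = counts[:-1], counts[-1]
        if len(history) < 2:
            continue  # not enough history to establish a baseline
        mu, sigma = mean(history), stdev(history)
        if sigma == 0:
            if today > mu:  # flat history: any increase is unusual
                alerts.append(user)
        elif (today - mu) / sigma > threshold_sigma:
            alerts.append(user)
    return alerts
```

Even a crude baseline like this can prompt the right follow-up question: is the spike a legitimate new use case, or data leaving the building?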
Privacy Policy:
- Are there restrictions on applying AI to personal or private data?
- Does the collection of personal data require explicit consent, especially when AI is used for surveillance, profiling, or analysis?
- How does AI decision-making affect individuals (hiring, pricing, access to services), and could those uses contravene data protection laws such as GDPR, PIPEDA, or AIDA?
3. Procedures and Guideline Development with Iteration in Mind
As Lewis S. Eisen, JD CIP, wisely notes, policy should address non-negotiables—ethical behavior, regulatory compliance, legal requirements. In my business, AI provides more templates, research, and ideas (all wonderful), but it doesn’t fundamentally change our non-negotiables: ethical behavior, high-value work, tailored content, specific expertise, and professional judgment.
But business, like life, involves countless negotiables, and this is especially true with AI. Unlike previous technological rollouts, AI isn’t binary—it fluctuates, evolves, creates, and sometimes contradicts itself. Your AI strategy will require frequent reassessment and updating. This is precisely why procedures and guidelines (unlike policies) are crucial for capturing evolving details. Procedures and guidelines are where you can get specific about technology. You can modify procedures more frequently, as they address negotiable elements likely to be revised over the coming year. Place detailed instructions about AI usage here.
Conclusion: Pragmatic AI Governance
An AI policy may or may not be the right approach for your specific context, though guidelines surely are. What matters more is clear messaging, thorough communication, a review of existing policies to identify vulnerabilities, and the development of procedures to address the multitude of negotiables that will inevitably arise. If you do decide to create an AI policy, don’t overcomplicate it. I recently experimented with using ChatGPT to draft an AI policy that addressed information, legal, and governance considerations. It performed admirably, proving that you can, in fact, use AI to help write your AI policy.