AI is everywhere, and I suspect some organisations have been caught off guard by the rapid growth and adoption of AI tools. You can’t ignore it. A total ban is probably unworkable and somewhat naive; the same applies to a policy requiring approval for every use. A robust, but also flexible and evolving, policy is probably the safest bet. So, if you or your organisation are scrambling to put together an AI workplace policy, read on.
What makes a good AI workplace policy?
The organisations getting this right aren’t the ones trying to control every aspect of AI use, nor are they the ones throwing caution to the wind with a do-what-you-want non-existent policy. They’re the ones building frameworks that acknowledge both the potential and the pitfalls of AI.
A good AI workplace policy will:
- encourage experimentation – innovation happens when people feel safe to explore new tools and approaches
- embed fact-checking, so users know never to assume any AI output is 100% correct
- list approved AI tools – those that have been vetted for security and functionality
- set clear guidelines on appropriate use – with examples of what good practice looks like
- embed data security in training, eg not feeding customer data into AI tools
- include accountability measures and review processes for AI-assisted work
- set out when and where to use transparency statements about the use of AI
This isn’t about creating bureaucratic hurdles. It’s about giving people the confidence to use the tools effectively while protecting the organisation.
The evolving policy document matters, but it needs backing up with practical training.
The AI staff training should include:
Prompts
AI is ONLY as good as the instructions you give it. Specific, contextual prompts yield better results. The difference between “write a sales report for the board” and “write a 500-word executive summary of Q3 sales performance for the board, highlighting the top three growth areas and main challenges, using the attached data, and in the style of previous sales reports” is the difference between generic fluff and a useful report you can work with. Prompt engineering is a digital literacy skill. AI tools are not mind readers – they need clear, detailed instructions to deliver what you actually want.
Data security
Establish data protection protocols, ie clear rules on what information should NOT go into prompts – for example customer data, identifiable financial details and confidential strategies. If you wouldn’t put it in an email to a competitor, don’t put it in an AI prompt either.
Hallucination (Don’t trust facts!)
AI hallucinates – it makes up case law, statistics, and references. It can also be accused of “overegging the pudding”: AI tends toward overconfidence and rarely admits what it doesn’t know. Factual claims should always be verified independently. In some sectors – eg legal, medical, and academic – the stakes of getting it wrong are far higher, and these require sector-specific guidance.
Use your own voice
AI-generated content can sometimes sound generic and hollow – technically correct but soulless. Your voice (personal and/or corporate) is as crucial as ever. Your expertise, experience, and perspective are irreplaceable. AI is a tool, not a replacement for humans or for critical thinking; it’s one part of a digital toolbox. Human insight, context, experience and judgment remain the differentiating factors in most professional work. The organisations that will thrive are the ones that get the balance right between harnessing the technology and maintaining the human voice.
We may currently be hitting a plateau in AI development, which gives organisations some breathing room to develop thoughtful approaches rather than knee-jerk, reactive policies.
This represents the current landscape as of August 2025. Given the pace of change in AI development, policies and approaches should be reviewed regularly to remain effective and relevant.
