Escape the shadow
In some recent emails, I talked about chatbots and privacy. I got a couple of very similar questions in return, which basically boiled down to: “How do I even know where AI is being used in our organisation?”
The honest answer is: you probably don’t.
When we think about AI at work, we mostly picture people typing into ChatGPT or Copilot. But that’s just the most visible side.
I’ve mentioned before how most tools (if not all) are adding AI features these days, often enabled by default. If you’re not careful, some AI feature will suddenly get access to your calendar or cloud storage. I’ve seen it happen, mostly due to consent fatigue: people just hitting “yes” buttons until they go away.
On top of that, you get vendors using external AI in the background: support tools routing questions through AI for triage, or CRMs using AI to score leads.
Both of those are examples of “shadow AI”: AI that’s touching your organisation’s data without any official approval, oversight or even awareness.
Knowing about this isn’t just good hygiene. The EU AI Act’s obligations for high-risk systems come into application in August this year, and one of the first things they require is an inventory of all your AI use.
Even if you don’t consider yourself in the high-risk category, it’s probably a good idea to make this inventory anyway (and you might need it to prove you’re not high risk).
How do you do this? It’s complicated.
Start by inventorying your subscriptions: check release notes, review which integrations have been authorised, and look out for AI-sounding feature names. Then ask your vendors directly whether they use AI, and push for clear answers.
And, obviously, talk to your team. Don’t scold anyone; just try to figure out what’s being used, why, and whether it’s actually useful. If it is, it might be worth making it official.
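If it helps, the inventory itself doesn’t need to be fancy. Here’s a minimal sketch in Python of what a structured record could look like; the field names and example entries are entirely made up for illustration, not taken from any standard or from the AI Act itself:

```python
from dataclasses import dataclass

@dataclass
class AIUsage:
    tool: str                 # the product or vendor
    feature: str              # what the AI feature actually does
    data_accessed: list[str]  # which of your data it touches
    approved: bool            # has it been officially signed off?

# Hypothetical example entries
inventory = [
    AIUsage("CRM", "lead scoring", ["contact records"], approved=False),
    AIUsage("Office suite", "meeting summaries",
            ["calendar", "transcripts"], approved=True),
]

# Shadow AI: anything touching your data without approval
shadow = [u for u in inventory if not u.approved]
for u in shadow:
    print(f"Unapproved: {u.tool} / {u.feature} "
          f"(accesses: {', '.join(u.data_accessed)})")
```

A spreadsheet with the same columns works just as well; the point is simply that each entry records what the tool is, what data it touches and whether anyone approved it.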
You might not catch everything, but a rough picture is better than none at all.
Colin