Even hype-sensitive zoomers know: AI brings not only fascinating opportunities, but also plenty of risks. Ethics, law, tech: there is lots to discuss. But in day-to-day practice there’s another risk, one that often stays invisible.
It lives in the seams of organizations: the handovers between departments that nobody formally owns, or that simply aren’t visible.
Up to now, smart people patched those gaps pragmatically. A quick call here, an extra check there, a bit of intuition or experience. AI can’t, doesn’t, and usually isn’t even allowed to do that. Which means that a tiny crack in the organization can quickly grow into a deep fault line.
Everyone covers their own turf (and rightly so)
Every department has AI on its radar by now. IT checks security, uptime, system prompts and other technical aspects. Compliance looks at laws and regulations. Content makes sure brand voice and writing rules are followed. All legitimate, all neatly managed within their own domains.
But the real problems creep in at the edges. Where one team’s checks stop, and the other’s haven’t yet started.
How the gaps play out in practice
IT: KPIs are uptime, security, performance. Whether a source file is factually correct, or whether the wording matches brand voice? Out of scope. That’s for the business to solve. In practice, they assume that whatever they’re handed is correct.
Content: they know exactly how to apply rules contextually: tone, audience, type of communication. But they have no access or mandate to decide how texts are ingested and reused by AI systems. In practice, they’re already patching gaps intuitively, interpreting on the fly, or fixing errors after the fact when someone notices.
Compliance: their frameworks were written for static documents and fixed processes. With dynamic AI output, that’s far less effective, because every run can be different depending on context and prompt. In practice, they often handle this ad hoc, relying on spot checks or reacting after an incident.
And to be fair: these pragmatic fixes often work… until they don’t. And with AI as a giant multiplier, that moment comes faster than ever.
The cost of frayed seams
Take Air Canada. Their chatbot told a customer he could get a refund after his flight. Not true, but the chatbot had picked it up from somewhere in the knowledge base. To IT the system looked safe, to compliance it looked compliant, and to content the tone of voice was fine. The court decided it was still Air Canada’s responsibility. Result: compensation and reputational damage.
Or DPD. Their chatbot suddenly started swearing at customers. A technical glitch? Not exactly. Somewhere in the source there was language that got misinterpreted. No department had flagged it: for IT the source was stable, for content it was internal and not their concern, and for compliance not risky. The result: reputational damage far larger than the slip itself.
Or the McDonald’s drive-thru case. Technically the system worked fine. But it started placing absurd orders: hundreds of nuggets, ice cream with burgers. For IT, not a security issue. For content, not a tone-of-voice matter. For compliance, not a legal problem. But for operations? A nightmare: wrong orders flooding stock, billing, and customer service.
The double damage
The consequences of these gaps cut both ways.
Externally: customers receive promises chatbots can’t keep, AI advisors work with outdated information, systems talk to people in odd or even offensive tones.
Internally: risk assessments sound less risky than they are, compliance reports lose nuance, quarterly results are presented rosier than reality.
Mistakes that used to be manageable suddenly carry bigger risks, because AI repeats them at scale, interprets them with misplaced enthusiasm, and sometimes just hallucinates.
It’s not that people are careless. On the contrary. IT rightly focuses on tech, content on communication, compliance on rules. But in the seams, where domains overlap, there’s still risky, often invisible, space.
Content: isn’t that just cosmetic?
For content specialists there’s an extra layer to the problem. Many of these errors show up in content: wrong text, outdated policies, unclear information. Which means fingers often point at them when things go wrong. And that doesn’t help, because it makes the problem look cosmetic.
That’s a misunderstanding.
Content isn’t a coat of paint. It’s infrastructure. It’s the pipeline through which information flows across all systems. AI doesn’t just widen that pipeline, it makes it brutally transparent. Including the mistakes you used to quietly patch up backstage.
Drip -> storm
To make it more sinister: AI rarely fails in one big spectacular meltdown. It starts with thousands of small drips. A wrong number here, an outdated procedure there, an ambiguous phrasing somewhere else. A content guideline people understand, but that AI interprets into nonsense. Individually harmless, together a downpour.
A few suggestions
The problem should be clear by now. GenAI can turn a tiny chip in your organizational windshield into a shattered pane. And yes, that’s yet another metaphor, but let me have this one.
The point is: don’t underestimate the multiplier effect of AI. You can no longer afford to squint and hope for the best. You’ll need to find those cracks, describe them, and mitigate them.
Here are five things you can do if you’re using AI with internal information sources:
- Treat unstructured content as data. A forgotten FAQ or old policy can suddenly be taken by AI as current advice, with all the risks that entails.
- Make AI access conditional on ownership. No owner? Then AI doesn’t get to use it. That’s how you stop random notes from being elevated to official policy.
- Automatic expiry dates with a kill switch. Make sure outdated content is automatically removed from AI systems, just like you patch or retire old software.
- Monitor information as infrastructure. Outdated content is just as dangerous as outdated software. Treat information updates like security patches.
- Implement strict AI confidence thresholds. Don’t let AI guess when confidence is low. A chatbot unsure about refund rules shouldn’t “have a go”, it should hand over to a human. (A rough sketch of what the ownership, expiry, and confidence checks could look like follows after this list.)
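To make the ownership, expiry, and confidence ideas a bit more concrete, here is a minimal sketch in Python. It assumes a simple setup in which every piece of content exposed to an AI system carries an owner and an expiry date as metadata; the names (Document, is_eligible_for_ai, MIN_CONFIDENCE) and the threshold value are purely illustrative assumptions, not references to any particular tool.

```python
from dataclasses import dataclass
from datetime import datetime, timezone
from typing import Optional

# Illustrative threshold; the right value depends on your own risk appetite.
MIN_CONFIDENCE = 0.75

@dataclass
class Document:
    """A piece of internal content that an AI system might ingest."""
    text: str
    owner: Optional[str]            # named person or team accountable for this content
    expires_at: Optional[datetime]  # review-by / expiry date set when the content is published

def is_eligible_for_ai(doc: Document, now: Optional[datetime] = None) -> bool:
    """Gate content before it reaches the AI: no owner or past expiry means no access."""
    now = now or datetime.now(timezone.utc)
    if doc.owner is None:
        return False  # "no owner, no AI access"
    if doc.expires_at is not None and doc.expires_at <= now:
        return False  # automatic expiry acts as the kill switch
    return True

def answer_or_escalate(answer: str, confidence: float) -> str:
    """Apply a confidence threshold: below it, hand over to a human instead of guessing."""
    if confidence < MIN_CONFIDENCE:
        return "I'm not certain about this one. Let me connect you with a colleague."
    return answer
```

The exact mechanics will differ per stack, but the principle stays the same: the checks sit in front of the AI, not behind it, so content without an owner or past its expiry date simply never reaches the model.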
This won’t fix everything, of course. But it helps close some of the gaps, or at least shows you where to look for them.
The takeaway
Yes, AI introduces new risks. But more than that, it amplifies old ones. The ones that used to be quietly patched by experienced people with pragmatism. AI doesn’t patch. It passes everything along, multiplying it with mathematical enthusiasm.
And as long as nobody deals with the devil in the details, tiny cracks will keep widening into deep chasms. Because AI multiplies problems faster than most organizations can solve them.
Don’t stress… it’s just me!
I’ve spent over 25 years working in content strategy and digital transformation, which means I’ve seen enough technology hype cycles to be skeptical and enough genuine innovation to stay curious.
Want to talk shop? Do get in touch!
I have a newsletter and it's bearable. Subscribe to read my (Gen)AI articles!



