So yeah, I was on ChatGPT the second it launched. Obviously I started experimenting immediately. Could it write good articles for me? (Nope.) How do prompts actually work, and why? And of course, I tried to trick it (for ages you could get it to say anything if you claimed it was a thought experiment).
But real insights didn’t come from those experiments. They came later, and fast, when I started prompting for other people, ended up developing a prompt framework, and eventually joined the GenAI & Content Strategy team.
Now that gig is wrapping up and I finally have time to breathe and look back with some perspective. It feels like now is a good moment to write down what I learned. We’re doing a listicle because I haven’t done one in a while. And god, do I love a good listicle. Just for you, I’m keeping it to a nice round 10. Nobody’s ever accused me of being brief, but stick with me and you might actually get something useful out of this.
AI doesn’t fix broken processes
To automate well (AI or otherwise), you need serious insight into every relevant step of a process.
You can’t automate a process you don’t explicitly know or understand. But I regularly see people thinking: let’s just throw AI at it, it’ll work out. It won’t.
If you don’t know exactly what steps make up the process you want to automate, AI can’t execute those steps sensibly either. You’ll miss automation opportunities. Worse: if you’re not careful, you’ll automate mistakes you’re already making, but at scale.
The solution doesn’t start with AI. It starts with you. Map your process: describe what steps you go through, what standards matter at each step (when is it good?), what the decision points are, what risks are involved. Only when you have that clarity can you determine where AI makes sense and where it’s safe.
What you can do today: Pick one task you want to automate and write down step by step what you currently do. Describe each intermediate step toward the end result. For each step ask: what happens here, when is that done well, what happens if this goes wrong? How much time does this step take? Then you’ll see where AI can help and where it can’t.
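If it helps to make that concrete: here’s one way to capture a process map as structured data, so the gaps become visible. A minimal sketch in Python; the step names, fields, and the support-mail example are all invented for illustration.

```python
from dataclasses import dataclass

@dataclass
class ProcessStep:
    """One step in a process you're considering automating."""
    name: str
    what_happens: str    # what happens here?
    done_well_when: str  # when is this step done well?
    if_it_fails: str     # what happens if this goes wrong?
    minutes: int         # how much time does this step take?

# Hypothetical example: triaging incoming support mail.
steps = [
    ProcessStep(
        name="classify request",
        what_happens="Read the mail, decide which team owns it",
        done_well_when="Correct team picked on the first try",
        if_it_fails="Customer gets bounced around, response time doubles",
        minutes=3,
    ),
    ProcessStep(
        name="draft reply",
        what_happens="Write a first response from the knowledge base",
        done_well_when="Factually correct, on-brand tone",
        if_it_fails="Wrong information goes out under our name",
        minutes=15,
    ),
]

# Time-consuming steps with low failure impact are the obvious AI
# candidates; high-impact steps keep a human in the loop.
for step in steps:
    print(f"{step.name}: {step.minutes} min; if it fails: {step.if_it_fails}")
```

The point isn’t the code; it’s that you can’t fill in these fields without genuinely knowing your process.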
Efficiency is nice, but quality is much nicer
If AI can’t do it at least as well as a professional, you’re not making gains. You’re making a mess.
Working faster is obviously great. But producing bad output faster? That’s no progress at all.
In GenAI projects, efficiency is often the main goal. The number of times I’ve heard someone say “Eh, that’s good enough” is disturbing. Because if the quality of what you make suffers from your AI deployment, you’re just making a worse mess, faster. And once the race to the bottom on quality begins, and every next “is this good enough?” decision gets answered with “eh… sure,” you hit bottom fast. Really fast, given the volumes involved in AI deployment.
Quality must be the starting point. You only deploy AI if its output is at minimum as good as what a professional would make. Preferably better. That requires very clear quality criteria: what IS good output? How do you check that? Who decides? If you can’t answer those questions, you’re not ready to automate.
What you can do today: For one use case, establish: what’s the minimum quality level we accept? Write it down, quantify it, define it. What are your standards? Then test properly whether AI meets them.
AI doesn’t know what quality is
If you don’t know what’s good, AI doesn’t either. Let me elaborate a bit on this part of lesson 2.
LLMs optimize on patterns, not on your style guide, internal processes, or other lovely specific knowledge. So if you yourself can’t explain what a good blog post is, a strong customer email, or a clear FAQ, then AI can’t either. It falls back to the average of the patterns it was trained on. You know what I am talking about: that GenAI language we’re starting to recognize. The uncanny valley of content.
You need to be able to define what quality means for you. What makes this text good? What makes this email effective? What criteria apply here? Only when you have that clarity can you sensibly set GenAI loose on it.
And it’s not “write a good headline.” It’s: “Persuasive headline of maximum 8 words, containing this keyword [keyword] and summarizing the provided article.” Something along those lines. To do that well and consistently, you need to establish what standards apply for what counts as quality output in your organization.
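And once your spec is that explicit, part of the check can even be mechanical. A minimal sketch; the function name and thresholds are my own illustration, not a real tool:

```python
def check_headline(headline: str, keyword: str, max_words: int = 8) -> list[str]:
    """Check a generated headline against an explicit spec.
    Returns a list of violations; an empty list means it passes."""
    violations = []
    if len(headline.split()) > max_words:
        violations.append(f"more than {max_words} words")
    if keyword.lower() not in headline.lower():
        violations.append(f"keyword '{keyword}' missing")
    return violations

print(check_headline("AI Won't Fix Your Broken Content Process", keyword="content"))
# [] -> passes the mechanical checks. Whether it's persuasive and actually
# summarizes the article still needs a human (or a second model) to judge.
```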
What you can do today: Same advice as point 2. But important enough to address separately.
Prompts and prompting: not an afterthought
Your prompt is a crucial steering mechanism for good output or nasty hallucinations. You need to take that seriously AND manage it.
Prompting is how you steer GenAI. You type something, you get something back. In business practice, the prompt has major influence on your GenAI project.
On the quality of your output (have you figured out what that quality looks like yet? Quick reminder.), on consistency, on the number of correction rounds (and thus efficiency and costs). Plus, worst case: a bad prompt is a cute little hallucination grenade. But prompting still gets left to individual talent or skill way too often.
Prompts, like everything else around GenAI, need management, standards, and governance.
Because without standards, it’s very hard to determine prompt quality. What are you even looking at?
Without management, you have zero insight into what prompts exist, what they actually do, whether they carry intrinsic risk, or anything else you want visibility on.
And without governance, all kinds of systems (and plain Word docs) contain outdated information baked into prompts. Sometimes it’s literally data that’s baked in (yes, really), sometimes instructions that have become obsolete.
What you can do today: Establish prompt standards and (this is unpopular): enforce them! Document your prompt lifecycle process. Who checks whether prompts still work, are effective, when do they check, how do they check, and what do you do when prompts are out of date?
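To show how small “prompt management” can start: a registry entry per prompt, with an owner, a version, and a review date. A sketch with invented fields, not a real tool:

```python
from dataclasses import dataclass
from datetime import date

@dataclass
class PromptRecord:
    """Minimal governance metadata for one production prompt."""
    prompt_id: str
    version: str
    owner: str           # who answers for this prompt?
    purpose: str         # what is it supposed to do?
    last_reviewed: date  # when was it last checked against reality?
    review_every_days: int = 90

    def is_stale(self, today: date) -> bool:
        return (today - self.last_reviewed).days > self.review_every_days

record = PromptRecord(
    prompt_id="faq-answers",
    version="2.3",
    owner="content-team",
    purpose="Answer FAQs from the verified knowledge base only",
    last_reviewed=date(2024, 11, 1),
)

if record.is_stale(date.today()):
    print(f"'{record.prompt_id}' is overdue for review; ask {record.owner}")
```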
People are overwhelmed, not necessarily empowered
AI raises an endless stream of questions that go way beyond the technology. Help your people navigate this new and impactful reality.
When it comes to AI, the room is quite polarized. One extreme: people who literally see every problem as an opportunity to slap AI on it. The other end: people who want nothing to do with it.
Both reactions are legitimate. Both need attention.
Both groups benefit from knowledge, insight, and understanding. For the AI enthusiasts: what the limitations and conditions are for smart, responsible, good GenAI deployment. For the skeptics: what it CAN do for them AND how.
Because most people understand that AI can do something. But they don’t know exactly what it means for them. How does this affect my role? Do I have to do everything differently now? Will I be redundant soon? That uncertainty leads to resistance, to misuse, or to passivity. What a waste.
Guidance isn’t a luxury. It’s a requirement. People need time to learn, to experiment, to make mistakes. And they need context: this is where AI is good, this is where you’re still indispensable.
Don’t forget that AI affects all of us broadly. The issues are endless: existential, business, scientific, personal, societal, ecological, you name it. You can’t solve all that overnight, but you can help people with the knowledge, insights, and vocabulary to have good conversations about it with each other AND continue to apply critical thinking.
What you can do today: Talk to one colleague who uses AI (or doesn’t). Ask: what makes you uncertain? Listen. Help. And as an organization? Find balance between hype stories and messy reality. Help your people with usable vocabulary. They can think perfectly well themselves, but it’s useful if they know what needs thinking about. Even better if they understand they don’t need to be technical to do that.
Risks are contextual
No AI deployment is inherently low-risk. Whether the risk is acceptable depends on what you’re deploying it for.
A FAQ bot that invents ice cream flavors? Funny. A FAQ bot that hallucinates about medication? Life-threatening. Technically speaking, both bots are the same. But the impact is totally different.
Many organizations still underestimate this. They look at the tech instead of the context. Simple AI use seems low-threshold, but small errors can have enormous consequences. Especially in regulated sectors like finance, pharma, and healthcare, where virtually nothing is low-risk.
Risk analysis must be context-sensitive. Not just: what can AI do? But: what happens if it goes wrong? In this context? For this audience? With these stakes?
Even an innocent-looking hallucination (our office closes at 2pm) can lead someone in a financial setting to make a different decision about their money than they intended. Possibly with very real consequences. Fictional example, obviously, but you get the point.
What you can do today: Yeah, that depends on what sector you work in and what decisions are yours. But pointing out that low-risk is extremely relative? Good start.
Content claims its place or doesn’t get it
Content is what LLMs make. Content is information/data in context. So you’d think content specialists play a relevant role in LLM implementations. So far, disappointing.
GenAI largely revolves around language. LLMs produce content. But AI projects are mainly populated by techies, product owners, data scientists. Not the people who wrestle daily with tone of voice, audience, effectiveness, and contextual meaning (semantic integrity). The result: decisions about content use get made by people who can’t tell the difference between good and bad. But they often decide what counts as good or bad anyway.
That’s bad for customers. Bad for the business. Because bad content damages your brand and impacts conversion. But beyond that, content specialists have the skill to turn data, stakeholders, machine-generated drafts, technical limitations, and guidelines into something actually useful and effective.
And yes, content specialists are sometimes involved, sometimes not. But being involved and having an advisory role isn’t enough to guarantee quality and effectiveness. For that, content people need to participate in conversations at the GenAI design table. And that needs to be a real conversation.
That also requires something from the content specialists themselves. In a corporate setting, content specialists aren’t artists. A product page really isn’t art. Letting go of that idea is already a good step.
Next step: sufficient knowledge of what the GenAI pipeline (and things like chunking and embedding and all that fun stuff) looks like, so you can actually talk about it. You don’t have to do it yourself. You do need to understand it well enough to have a meaningful role at the GenAI table.
What you can do today: Are you a content specialist? Go immerse yourself in GenAI. Pick your direction, take a course, read up, listen to podcasts. Make sure you know what you’re talking about, so those tech and data people can take you seriously.
LLMs aren’t the solution for literally everything
A probabilistic system is the wrong choice for deterministic questions.
LLMs are good at generating language, recognizing patterns, and writing human-sounding texts. But they’re not good at exact calculations, not great at retrieving specific data, and not talented at preserving meaning.
Yet I see LLMs being deployed for tasks where you need certainty. A specific, grounded answer. An output that isn’t drawn from the probability space an LLM operates within. Think opening hours, prices, account balances: facts that live in a system of record, not in a model’s weights.
Only once you’ve retrieved that information exactly from a deterministic solution should you let the LLM explain it in human language.
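In code, that division of labor can look something like this. A sketch: `ask_llm` is a placeholder for whatever model call you actually use, and the opening-hours lookup stands in for any system of record.

```python
# Deterministic part: the fact comes from a system of record, not the model.
OPENING_HOURS = {"amsterdam": "09:00-17:00", "utrecht": "08:30-16:30"}

def get_opening_hours(branch: str) -> str:
    # A KeyError here is a feature: failing loudly beats guessing.
    return OPENING_HOURS[branch.lower()]

def ask_llm(prompt: str) -> str:
    """Placeholder for your actual model call."""
    raise NotImplementedError

def answer_opening_hours(branch: str) -> str:
    hours = get_opening_hours(branch)  # the exact answer, retrieved first
    # Probabilistic part: the LLM only rephrases a fact it was handed.
    return ask_llm(
        f"Rewrite as one friendly sentence, changing no facts: "
        f"Our {branch} office is open {hours}."
    )
```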
Choose your tools based on what they can do, not based on what’s trendy.
What you can do today: Make a list of tasks you use AI for. Ask per task: is this a probabilistic question (approximately is good enough) or deterministic (it must be exactly that answer)? Choose your tool accordingly.
Your source is your foundation
If your source sucks, everything you do with it will suck too. And doing 26 interventions to solve that problem isn’t necessarily the best or most efficient idea.
Garbage in, garbage out. We’ve known that for years. But with GenAI it gets an extra dimension. Because AI doesn’t make bad data into better data. It makes data that SOUNDS better. And that’s maybe worse, because then you almost believe it’s good data.
You can’t RAG (Retrieval-Augmented Generation), prompt-engineer, or fine-tune your way out of that. If your source data is incomplete, outdated, incorrect, unclear, not sensibly readable for machines, then everything that follows is meh.
So invest seriously in your data sources. Clean them, keep them current, make sure they’re meaningful for machines, make sure you know what can and can’t be used for your automations. Only then does GenAI really have something to work with.
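One very basic way to act on that: an automated inventory check before a document is allowed into the pipeline. A sketch; the fields and the one-year threshold are invented for illustration, and in practice this data would come from your CMS:

```python
from datetime import date, timedelta

# Hypothetical source inventory; in practice this comes from your CMS.
sources = [
    {"title": "Returns policy", "updated": date(2025, 3, 1), "owner": "legal"},
    {"title": "Pricing FAQ", "updated": date(2022, 6, 15), "owner": None},
]

MAX_AGE = timedelta(days=365)  # invented threshold; pick your own

for doc in sources:
    problems = []
    if date.today() - doc["updated"] > MAX_AGE:
        problems.append("stale")
    if doc["owner"] is None:
        problems.append("no owner")
    if problems:
        print(f"Not pipeline-ready: {doc['title']} ({', '.join(problems)})")
```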
What you can do today: Be critical about what’s being used as a source. If it’s messy, do something about it.
Preserving meaning is the real problem
Even with clean data and smart systems, you have no guarantee of “semantic integrity”: maintaining relational meaning.
You can build the perfect knowledge graph, an ontology that’s correct, and have a data source that’s endlessly well organized. And STILL the AI can produce something that’s technically correct but wrong in terms of meaning. That’s because in the GenAI pipeline you have all kinds of points where context and nuance can be lost (and regularly are).
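One concrete example of such a loss point: naive fixed-size chunking. A sketch (with an invented policy text) showing how a hard character cut can strand an exception in a different chunk than the claim it qualifies:

```python
def naive_chunks(text: str, size: int) -> list[str]:
    """Split text into fixed-size chunks, ignoring sentence boundaries."""
    return [text[i:i + size] for i in range(0, len(text), size)]

policy = ("Refunds are available within 30 days. This does not apply "
          "to discounted items or opened software.")

for i, chunk in enumerate(naive_chunks(policy, 40)):
    print(i, repr(chunk))
# Chunk 0 ends mid-sentence: a retriever that returns only that chunk
# 'knows' refunds are available and has lost the exception entirely.
# Technically correct text, wrong meaning.
```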
That’s a problem that still isn’t really solved, not even with the newest techniques. And yes, there’s definitely smart patching happening, but guarantees? Not yet.
So this is the fundamental challenge when you combine language and systems: how do you safeguard meaning? How do you prevent AI’s interpretation from subtly (or less subtly) deviating from what you actually want/need to say?
And factuality is contextual, right? The statement “Xaviera is tall” is true in Peru but a fat lie in the Netherlands (because at 1.62m I’m medium at best here). A simple example, but it tells you something about context and nuance, which machines are notoriously bad at.
What you can do today: Well, if you solve it 100%, you are now rich. Until then, at least be aware of it. Also be aware that shifts in factuality happen slowly and subtly, until the tipping point where true > slightly less true > factually wrong suddenly arrives, with all the potential consequences. Keep that in mind.
Ha, you made it
I think you’ve spotted the pattern in all my lessons by now? If not (because you’re scanning, like a good web reader should), here’s the TL;DR:
If you don’t have the basics in order (data, knowledge, people, processes, etc.), then GenAI fixes little for you. In fact: the volume at which you can then make mistakes is inspiring. That volume part is the newest of everything. The rest is the same song, different verse.
Don’t stress… it’s just me!
I’ve spent over 25 years working in content strategy and digital transformation, which means I’ve seen enough technology hype cycles to be skeptical and enough genuine innovation to stay curious.
Want to talk shop? Do get in touch!
I have a newsletter and it's bearable. Subscribe to read my (Gen)AI articles!