“Could you create a prompt for…” My conversations increasingly start with that sentence. At work, that is; thankfully not yet when I’m dancing bachata and persuading my friends to have just one more tequila. But still. The question (not the tequila) sometimes makes me slightly uneasy.
Because while I absolutely see the value in clever collaborations with AI, I also see the risks: bias, hallucinations, generic language… you know the drill. But there’s another one, less visible yet rather persistent: output blindness.
Yes, that’s a thing
Perhaps you recognise it: you get AI to make something for you, scan the output briefly and think… yeah, that’s fine. While you actually have no idea whether it’s genuinely good or what exactly it says. Your brain nods enthusiastically, but is actually already thinking about lunch.
Output blindness occurs when you let AI generate things for you and you’re no longer sharp enough to properly examine what comes out.
No idea if it’s an official term yet, but it’s definitely a phenomenon. Research now confirms that this excessive trust in AI systems is a real risk.
Your brain is a lazy bugger
You see this effect particularly with tasks you repeat often, but really no one is safe. GenAI sometimes spits out endless chunks of text, and that alone makes it difficult to stay alert.
Meanwhile, your brain does exactly what it always does: it looks for ways to save energy. The more often you experience AI output as ‘good enough’, the faster your assessment reflex dulls. Especially when it looks perfectly fine at first glance.
And before you know it, you’re scrolling past half a page of text without a single critical thought emerging.
That’s output blindness.
Craftsmanship requires friction
And that’s not just a risk of more errors, but also of superficiality. Because the more often we park our substantive sharpness, the easier it becomes to find mediocrity normal. Especially when no one else protests, because hey, the text looks smart… right?
But somewhere along the way, you lose something. And that’s not just your grip on quality, but also on what makes your work actually good: the thinking, the doubting, the sharpening. Everything that craftsmanship is.
This doesn’t just affect the quality of your output, but also your own thinking capacity. Because while you can create wonderful things with GenAI, we now also know that if you plonk your brain in the back seat too often, you’ll eventually be driving around with a loudly snoring passenger. MIT researchers already observed weaker brain connectivity and poorer memory in intensive AI users.
Three small interventions
So I started thinking. How can I make small interventions in my prompts that counteract this effect somewhat?
I wanted to achieve two things:
- Keep the user’s brain active during the GenAI content process
- Simultaneously stimulate learning capacity
With the last two prompt requests I received, I deliberately approached the design along these two lines.
This yielded three small but effective interventions. Nothing spectacular, mind you, but they do the job. Which one I choose depends on the context and what I want that user’s brain to do.
First hero: Eddie Thor
Eddie Thor is your friendly final editor. Not the sort who opens every Word document with a sigh, but one who invites you to improve your text together. He’s personal, mildly sarcastic, and always comes with concrete suggestions, including examples. Sometimes he asks a few pointed questions, sometimes he makes a rewrite proposal, and at the end he politely asks if you’d like to think further together.
The intervention seems small, but does exactly what I want:
- Eddie’s tone is so personal and varied that it pulls your brain out of automatic mode just long enough
- His output is well-formatted: white space, subheadings, clear structure. Not visual mush, but something your eyes (and thus your head) can move through easily
All of this makes even a repetitive task feel less mechanical. And ensures you’re still participating in the thinking process rather than just pressing buttons.
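As a sketch, a persona like Eddie Thor can live in a reusable prompt template. The exact wording below is my own illustration of the behaviour the article describes (suggestions with examples, pointed questions, a polite invitation to continue), not the author’s actual prompt:

```python
# Illustrative sketch of an "Eddie Thor"-style editor persona prompt.
# The instruction wording is an assumption, not the original prompt.

EDDIE_SYSTEM = (
    "You are Eddie Thor, a friendly final editor. Be personal and mildly "
    "sarcastic, never dismissive. For every text you review:\n"
    "- Give concrete suggestions, each with an example.\n"
    "- Ask one or two pointed questions where the intent is unclear.\n"
    "- Offer a rewrite proposal only when it clearly helps.\n"
    "- End by politely asking if the user wants to think further together.\n"
    "Format your answer with white space, subheadings, and clear structure."
)

def build_eddie_prompt(text: str) -> str:
    """Wrap a draft in the Eddie Thor editing instruction."""
    return f"{EDDIE_SYSTEM}\n\nText to review:\n{text}"
```

The point of the template is not the persona’s jokes but the fixed structure: every response arrives formatted in a way your eyes can actually move through.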
Second hero: The crossroads
Sometimes I let a prompt begin with a crossroads. No automatic rewrite instruction, but first a small substantive choice. Which direction do you want to go?
For example, with a prompt for campaign copy. You input your original text, and GenAI then asks you this question:
Do you want the text to become:
- A. More persuasive: with clear arguments and call-to-actions?
- B. More personal: with a more human tone and greater relatability?
- C. More emotional: with greater emphasis on feeling and experience?
That explanation is deliberate. It helps prevent you from just clicking the first option and actually makes you think for a moment: what am I actually trying to achieve here?
The intervention is small: a choice, with context. But precisely because of that, your brain stays active between input and output. You have to switch gears for a moment. And that prevents the copy-paste reflex that GenAI can sometimes make so tempting.
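The crossroads can also be sketched as a template: the prompt instructs the model to present the explained menu first and only rewrite after a choice. The phrasing below is illustrative, based on the options listed above:

```python
# Minimal sketch of the "crossroads" intervention: the model must present
# a small, explained menu and wait for a choice before rewriting.
# The instruction wording is an assumption, mirroring the article's options.

CROSSROADS_PREAMBLE = (
    "You are a copywriter. Before rewriting anything, ask the user to "
    "choose a direction and wait for their answer:\n"
    "A. More persuasive: clear arguments and calls to action\n"
    "B. More personal: a more human tone and greater relatability\n"
    "C. More emotional: more emphasis on feeling and experience\n"
    "Only after the user has chosen A, B, or C do you rewrite the text."
)

def build_crossroads_prompt(original_text: str) -> str:
    """Combine the crossroads preamble with the user's original copy."""
    return f"{CROSSROADS_PREAMBLE}\n\nOriginal text:\n{original_text}"
```
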
Third hero: Questions, questions, questions
Sometimes I simply postpone the moment of output. First the user must say something back, think, or choose. Because for some tasks, a direct response from GenAI is too quick. Then you miss the cognitive work that should actually have happened in between.
With this method, I let GenAI first ask a few questions, tailored to the input. Not endlessly, mind you, just two or three pointed questions. For example, with a prompt for a LinkedIn article or long-form piece:
- What insights do you want the reader to really take away?
- What mustn’t be lost in the tone or nuance?
- Should this piece primarily inform, activate, or position?
The questions force the user to make a half-conscious intention explicit. Not a complete brief, but a micro-reflection. Just enough to switch the brain back on.
It’s a bit like The Crossroads, because there too you’re slowing down the process. But there’s a fundamental difference in what happens.
With The Crossroads you choose from a few directions that are already laid out for you, and here you must formulate something yourself. No menu of choices, but putting words to what you want, mean, or find important.
That difference, between choosing and articulating, is important. Because it prevents you from falling into the paste-prompt-copy-input-copy-output fever dream and keeps your brain exactly where it belongs: switched on.
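The questions-first pattern fits the same kind of template. Here the model is told to ask the two or three pointed questions before drafting anything; the questions mirror the examples above, while the surrounding instruction is my own illustration:

```python
# Sketch of the "questions first" intervention: the model asks a few
# pointed questions and waits before producing any output.
# Framing text is an assumption; the questions come from the article.

QUESTIONS_PREAMBLE = (
    "Before writing anything, ask the user these questions and wait for "
    "their answers:\n"
    "1. What insights do you want the reader to really take away?\n"
    "2. What mustn't be lost in the tone or nuance?\n"
    "3. Should this piece primarily inform, activate, or position?\n"
    "Only draft the article once all three are answered."
)

def build_questions_prompt(topic: str) -> str:
    """Prepend the questions-first instruction to an article request."""
    return f"{QUESTIONS_PREAMBLE}\n\nTopic: {topic}"
```
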
But there’s one more thing
Lovely. But that still left the learning capacity to work on. So I also made a stepped version of the same prompt (which fits within a single prompt). Not because that’s earth-shattering, but because it works. Instead of “make it”, the essence of this variant is: first tell me what’s not good, then explain why, and only then ask if you should make something.
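That stepped variant can be sketched the same way: critique first, explanation second, generation only on explicit request. The instruction wording is illustrative:

```python
# Sketch of the stepped variant: the model critiques, explains, and only
# then asks whether it should generate anything. Wording is an assumption.

STEPPED_INSTRUCTION = (
    "Work in three steps:\n"
    "1. First tell me what is not good about my text.\n"
    "2. Then explain why each point matters.\n"
    "3. Only then ask whether I want you to rewrite anything.\n"
    "Never skip straight to a rewrite."
)

def build_stepped_prompt(draft: str) -> str:
    """Attach the critique-first instruction to a draft."""
    return f"{STEPPED_INSTRUCTION}\n\nMy text:\n{draft}"
```
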
Perhaps you’re thinking: duh. And that’s fine. But I’ve now had to acknowledge that the temptation to switch off your brain when using GenAI is really quite overwhelming. And that’s not just laziness. Your brain actually quite likes that. The brain is an optimiser. Always busy with reorganisations. Everyone out who doesn’t contribute effectively to the ROI. Including you.
By postponing the output a bit longer and making the reflection explicit, you force yourself to think about what’s actually happening before you shoot into generation mode. Not big, not revolutionary. But enough to go from mindless clicking back to serious thinking.
Auto-destroy
And that touches on something bigger. Because the more often we outsource the thinking work, the more we’re also dismantling the value of that thinking work itself. Not just individually, but as a profession. Researchers found a clear connection between heavy AI use and declining critical thinking ability. In any case, we’re still in a phase where content without an expert(!) in the loop is often just… meh.
And from one content professional to another: if we elevate mediocre output to the norm by not deploying our knowledge and skills, then we’re making our field rapidly and unnecessarily obsolete.
I don’t just find that a shame because I love this profession. I also find it unfair to those we ultimately do it for: the target audience, customers, clients, citizens, and members. They deserve better than mediocre rubbish.
AI can sing false notes too
And yes, there’s content that can just be meh and still work. But there’s also content that you can’t have made by the modern equivalent of the musical box from The Emperor’s Nightingale.
Naturally, I don’t solve all of this with these little methods. But it’s a start. Have you got better ones? Share them, I’d love to hear. And if I have better ones, you’ll hear from me.
It’s just me, don’t stress.
I’ve spent over 25 years working in content strategy and digital transformation, which means I’ve seen enough technology hype cycles to be skeptical and enough genuine innovation to stay curious.
Want to talk shop? Do get in touch!