“Can you get the thing out of the thing for me?” My mom looks at me expectantly.
Slightly annoyed, I get up, walk to the study and get her glasses for her.
“You should really try sentences, mom,” I try, knowing full well that her next request will sound pretty much the same.
And truly, I shouldn’t complain. I understood what she wanted perfectly and her request yielded the desired result.
Because I know our study. I know where she usually leaves her glasses. I know my mom.
That sentence, to anyone else, would be useless. If my mom were to walk up to a random person on the street and say the exact same thing, that person would not find her glasses or even know what she wanted, and would most likely avoid any eye contact with my overall very charming mom.
But to me, it’s perfectly clear. There’s no ambiguity in her request because we share years of lived context: body language, habits, familiar routines. The shape of her sentence leans on all of that. It works because I’m already standing inside the world she’s referring to.
This is how most human language operates. It’s compressed, referential, messy and it works beautifully as long as the context is shared. You don’t need to say everything out loud when you’ve already lived it together.
Language without shared context? That’s just noise.
Large Language Models don’t know your study, though. And this is where prompt design often goes wrong.
Your model isn’t your mom. It hasn’t lived in your home. It doesn’t have your lived experience. It doesn’t know what kind of month your industry has had. But we write prompts as if it does.
We hand it instructions like “make this friendlier” without defining what friendly means in the context of a recent data breach. We ask it to “rewrite for inclusivity” without specifying what kind of exclusion we’re correcting, or for whom. We give it crisis communications and expect it to intuit the exact temperature of concern without ever naming the weather.
This isn’t just optimistic. It’s structurally naive.
The confident fibber
LLMs are exceptionally good at sounding like they understand your assignment. They produce sentences that scan as professional, coherent, appropriate. The grammar is clean, the tone feels warm and even the structure makes sense.
But appropriateness is contextual. And this context is exactly what’s missing.
What you get instead is the model’s best guess at what “professional” or “empathetic” or “clear” might mean in a generic scenario: the patterns it has absorbed from large amounts of training data. Which yields median output. Which is fine if you’re writing generic scenarios. But most work that matters isn’t generic.
The model will write you a perfectly serviceable apology email. But it won’t know whether this apology should acknowledge fault, deflect responsibility, or thread the needle between legal liability and human decency. It will give you a perfectly polished policy update. But it won’t know whether the moment calls for reassurance, transparency, or urgency.
Sidequest: that doesn’t mean it’s failing. It just means that LLMs are not mind readers.
Your output will reflect that, as I am sure many of you have noticed. It will be smooth, confident, and missing the point by degrees you might not even notice until someone else points it out.
The expensive shrug
Here’s what happens when prompts operate on vibes:
You ask for “clear communication about the policy change.” The model gives you clear communication. But it’s the wrong kind of clear: corporate-speak clear instead of crisis-moment clear. It sounds like it’s hiding something, even when it isn’t.
You ask for “inclusive language” in a job posting. The model swaps in approved phrases and calls it done. But inclusive language isn’t just word choice: it’s perspective, framing, what you center and what you assume. The posting still reads like it was written for people who look like the last ten hires.
You ask for a “supportive response” to a customer complaint. The model gives you supportive language. But it’s supportive-for-easy-problems supportive, not supportive-for-the-thing-that-actually-went-wrong supportive. And believe you me, customers can tell the difference.
Look, none of these outputs are bad. They’re just hollow. Professional-sounding shells without structural awareness of what the specific moment demands.
And it’s expensive, because it looks fine until it doesn’t look fine anymore. But by then, the email is sent, the policy is published, and the customer has already walked away.
Structure isn’t the enemy of creativity
The obvious fix is to get more specific. Write longer prompts. Add more examples. Give the model more context to work with. That helps, up to a point.
But specificity isn’t the same as structure. And structure is what’s actually missing.
Most prompts today are constructed like elaborate wishes. They pile on adjectives and hope the model assembles them into the right intention. “Write something that’s professional but warm, clear but not cold, inclusive but not preachy, urgent but not panicked.”
That’s not an instruction. That’s more of a drunk vibe check at 3 am on a Saturday.
What the model needs, and what any intelligent system needs, is architecture. Not just what to aim for, but how to aim. Not just the destination, but the actual map.
Which is where components come in.
A prompt with a spine
Anyway, this problem has irked me for a while. I was one of those “please take this whole long list into consideration” people. But slowly, and yes, also surely, I realized that it wasn’t really working. Not consistently, anyway.
And I also realized that many of my most effective prompts shared the same structure. So I started looking into the patterns that were working. It took some cursing, yes, some crying, and some throwing of hardware on my part, but I cracked it.
The pattern was simple: my best prompts weren’t just giving instructions, they were giving context. They weren’t just saying what to write, they were saying who was writing it, why, and what couldn’t go wrong. They had bones, not just wishes.
Of course, my hyperfocus self refused to stay in her lane, so I didn’t stop there and built a full framework for prompt architecture. Because, well, having free weekends wasn’t as much fun as everyone made it out to be. But that’s not what this article is about.
Let’s get back to the structure. First: it’s called the ECHO framework, which stands for Ecosystem for Controlled Human/AI Output (read the documentation on GitHub). But it also stands for the fact that AI is really just an echo of us, humans.
Within the ECHO framework, prompts are built like structures, not prayers. You go from an unstructured prompt:
Please write a helpful response to a patient who is concerned about side effects from their new medication. They’ve been reading things online and seem anxious. Be supportive and reassuring while staying professional. Make sure to use clear language they can understand and don’t contradict their doctor.
To a structured prompt:
Context
A patient has contacted a health provider with concerns about side effects from a newly prescribed medication. They’ve read conflicting information online and are clearly anxious. The goal is to respond in writing, in a way that eases uncertainty without dismissing valid fears.
Perspective
Health literacy-informed. Assume limited medical background and potential confusion about terminology. Prioritize clarity and accessibility over clinical precision.
Role
You are a patient education specialist. You have access to reliable, up-to-date medical information and are trained in supporting patients with questions about treatment.
Instruction
Respond to the patient’s concerns about side effects in a way that acknowledges their anxiety and reinforces confidence in the treatment plan. Translate any relevant risk information into language that is understandable and meaningful to the patient’s context, instead of listing generic side effects.
Guidelines
- Use plain, respectful language
- Acknowledge the effort they took to research
- Be reassuring but honest
- Frame uncertainty in a way that empowers, not alarms
Guardrails
- Do not contradict the prescribing doctor’s advice
- Do not downplay legitimate concerns
- Do not give diagnostic or personalized medical advice
Output Instructions
Write two paragraphs:
- Acknowledge and validate their concern
- Provide clear, actionable guidance based on verified information
Output Example
N/A
Self-Check
- Does the response validate their concern without amplifying it?
- Does it support the doctor-patient relationship, rather than undermining it?
*This is a demo prompt, not thoroughly constructed or tested (ECHO has its own quality methodology).
Same ask, completely different structure.
The first version leaves the model guessing what “empathetic” means in this scenario. The second version tells it exactly who it is, what happened, what needs to happen next, and how to get there. It may seem more restrictive, but it’s actually more precise.
And precision isn’t the enemy of creativity; it’s what makes creativity possible in the first place.
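If you build prompts in code rather than in a chat window, the same structure translates directly into a template. Here’s a minimal sketch in Python; the section names follow ECHO, but the class, function, and variable names are my own illustration, not part of the framework or its official tooling.

```python
# Minimal sketch: an ECHO-style prompt assembled from named components.
# The class and helper names are illustrative, not part of ECHO itself.
from dataclasses import dataclass, field


@dataclass
class EchoPrompt:
    context: str
    perspective: str
    role: str
    instruction: str
    guidelines: list[str] = field(default_factory=list)
    guardrails: list[str] = field(default_factory=list)
    output_instructions: str = ""
    self_check: list[str] = field(default_factory=list)

    def render(self) -> str:
        """Join the components into one prompt string, skipping empty sections."""
        def bullets(items: list[str]) -> str:
            return "\n".join(f"- {item}" for item in items)

        sections = [
            ("Context", self.context),
            ("Perspective", self.perspective),
            ("Role", self.role),
            ("Instruction", self.instruction),
            ("Guidelines", bullets(self.guidelines)),
            ("Guardrails", bullets(self.guardrails)),
            ("Output Instructions", self.output_instructions),
            ("Self-Check", bullets(self.self_check)),
        ]
        return "\n\n".join(f"{name}\n{body}" for name, body in sections if body)
```

The point isn’t the code. The point is that every section becomes a named slot you have to fill deliberately, instead of an adjective you hope the model interprets correctly.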
Modularity is memory
The real advantage of structured prompts isn’t just clarity. It’s (potential) reuse. When your tone guidelines live in a reusable component, you don’t have to remember how to be “empathetic but not effusive” every time you write a customer service prompt. When your legal guardrails are modular and there’s a change in the law, you don’t have to reconstruct your compliance logic from scratch.
You build once. You refine once. You apply everywhere it fits.
And when something changes, when your brand voice evolves, when regulations shift, when you realize your “inclusive” language wasn’t actually inclusive, you update the component, not individual instances.
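To make that reuse concrete, here’s the sketch continued (with the same caveat: these names and contents are illustrative, not vetted components). Shared pieces live in one place, and individual prompts simply reference them.

```python
# Sketch of modular reuse: shared components defined once, referenced by many prompts.
# Contents are illustrative; in practice these would be your tested, approved components.
TONE_EMPATHETIC = [
    "Use plain, respectful language",
    "Be reassuring but honest",
    "Frame uncertainty in a way that empowers, not alarms",
]

MEDICAL_GUARDRAILS = [
    "Do not contradict the prescribing doctor's advice",
    "Do not downplay legitimate concerns",
    "Do not give diagnostic or personalized medical advice",
]

patient_reply = EchoPrompt(
    context="A patient is anxious about side effects from a newly prescribed medication.",
    perspective="Health literacy-informed; assume limited medical background.",
    role="You are a patient education specialist.",
    instruction="Acknowledge the anxiety and translate risk information into plain language.",
    guidelines=TONE_EMPATHETIC,      # reused component
    guardrails=MEDICAL_GUARDRAILS,   # reused component
)

# When the guardrails change, you edit MEDICAL_GUARDRAILS once;
# every prompt that references it picks up the update.
```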
Scale without drift
Here’s the thing about working with AI at any real scale: consistency becomes survival. You can wing it with one prompt, maybe five, maybe even ten. But when you’re running dozens of prompts across multiple teams, contexts, and risk levels, ad hoc stops working. Tone drifts, guardrails get forgotten, self-checks get skipped in one team and not another. Important context gets buried in whoever happened to write the prompt that day.
Structure solves this not by imposing rules, but by making logic visible.
When every prompt declares its context, role, and constraints explicitly, you can see where they align and where they diverge. You can trace decisions. You can spot drift before it becomes damage. You can work at scale without losing track of what matters.
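One way to make that visibility operational (again a sketch, building on the illustrative EchoPrompt above rather than any official ECHO tooling) is a simple check that refuses to let a prompt ship without the sections that matter:

```python
# Sketch of a drift check: every prompt must declare the sections your team agreed on.
# The required-section list is illustrative; yours would reflect your own risk levels.
REQUIRED_SECTIONS = ["context", "role", "instruction", "guardrails"]


def check_prompt(prompt: EchoPrompt) -> list[str]:
    """Return a list of problems; an empty list means the prompt passes."""
    problems = []
    for name in REQUIRED_SECTIONS:
        if not getattr(prompt, name):
            problems.append(f"missing or empty section: {name}")
    return problems


# Run the check across a registry of prompts, e.g. before publishing or in CI.
prompt_registry = {"patient_reply": patient_reply}
for prompt_name, prompt in prompt_registry.items():
    for problem in check_prompt(prompt):
        print(f"{prompt_name}: {problem}")
```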
Don’t prompt like you live together
My mom can say “the thing in the thing” because we share a lifetime of context. The model has no such luxury, or (honestly) such woes.
If you prompt like you share a kitchen with your LLM, you’ll keep getting output that sounds professional but misses the moment. Output that’s confidently wrong in ways you won’t catch until someone else does.
If what you’re building matters, stop prompting like you live together and start building prompts that think for themselves.
It’s just me, no need to stress.
I’ve spent over 25 years working in content strategy and digital transformation, which means I’ve seen enough technology hype cycles to be skeptical and enough genuine innovation to stay curious.
Want to talk shop? Do get in touch!



