In existing AI ethics literature, bias is treated as a contaminant to be minimized after the fact. This experiment treats bias as a design variable within the generative process, exploring whether specifying a worldview at the prompt level yields consistent epistemic shifts.
This article translates that hypothesis into practice by embedding worldview selection inside a modular prompt architecture. This work continues the development of ECHO, a framework for structured and reusable component-based AI prompting.
Let’s dive in…
One of the more revealing components within the ECHO architecture is the Perspective component. Its premise is simple: rather than allowing AI responses to default to whatever worldview is latent in the model, the user explicitly defines which lens the AI should adopt.
And I know what you're thinking: isn't that just the Role? No, it is not.
Role defines who the AI is pretending to be, while Perspective defines how that role sees the world.
A doctor, lawyer, or teacher can all operate through different perspectives: trauma-informed, anti-colonial, feminist, neurodivergent-aware. Perspective isn’t about expertise or profession. It’s about the values, assumptions, and framing the AI brings to its response.
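To make that distinction concrete, here is a minimal sketch of how Role and Perspective could live as separate, swappable components in a modular prompt. The PromptComponents class and the assemble() helper are illustrative assumptions of mine, not ECHO's actual interface.

```python
# A minimal sketch: Role and Perspective as separate, swappable prompt
# components. The names below are hypothetical, not ECHO's real API.
from dataclasses import dataclass


@dataclass
class PromptComponents:
    role: str          # who the AI is pretending to be
    perspective: str   # how that role sees the world
    context: str       # situational background
    instructions: str  # the task itself


def assemble(c: PromptComponents) -> str:
    """Concatenate the components into one prompt, each explicitly labelled."""
    return (
        f"Role: {c.role}\n"
        f"Perspective: {c.perspective}\n"
        f"Context: {c.context}\n"
        f"Instructions: {c.instructions}"
    )


# Same Role, two different Perspectives -> two different worldviews.
base = dict(
    role="You are an experienced career coach.",
    context="The user feels overwhelmed by work expectations.",
    instructions="Help the user make sense of what they are experiencing.",
)
print(assemble(PromptComponents(
    perspective="Trauma-informed: prioritize safety, pacing, and the user's agency.", **base)))
print(assemble(PromptComponents(
    perspective="Anti-colonial: foreground structural pressures, not individual deficits.", **base)))
```

Swapping only the perspective string leaves the expertise untouched but changes the lens through which that expertise is applied.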
I decided to experiment and find out what impact this component has on output. Would this actually work in practice? Could I really get AI systems to systematically shift their worldview and not just adjust their tone?
The hypothesis: perspective as structural reframing
The Perspective component is designed to be more than just a tone modifier. It should fundamentally shift what the AI considers relevant, appropriate, and meaningful. Instead of hiding worldviews or pretending neutrality exists, it makes them modular and intentional.
But theory is one thing. Does adding a Perspective component actually change how AI responds in measurable ways? And if so, how dramatic are those changes?
The experiment: models, prompts, and lenses
I kept the test simple and controlled. Two scenarios:
- One emotionally obvious: a person feeling overwhelmed by work expectations
- One seemingly neutral: a short presentation on the benefits of remote work
For each scenario, I tested five perspectives:
- Baseline (no perspective)
- Neurodivergent
- Queer-affirming
- Anti-colonial
- Trauma-informed
I ran each variation across four major AI models: ChatGPT, Claude, Grok, and Perplexity. The prompts were identical except for the Perspective component. Role, Context, and Instructions stayed the same.
I wasn’t looking for tone changes. I wanted to see if the logic of the response shifted: what the model assumed the problem was, what kind of solutions it offered, and what values it prioritized.
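For readers who want to picture the setup, here is a rough sketch of how such a test matrix could be generated: every scenario crossed with every perspective, with Role, Context, and Instructions held constant so the Perspective component is the only variable. The build_prompt() helper and the perspective phrasings are placeholder assumptions, not the exact prompts used in the experiment.

```python
# Sketch of the test matrix: 2 scenarios x 5 perspectives x 4 models.
# Only the Perspective component varies; everything else stays fixed.
from itertools import product

scenarios = {
    "burnout": "A person feeling overwhelmed by work expectations.",
    "remote_work": "A short presentation on the benefits of remote work.",
}

perspectives = {
    "baseline": None,  # no Perspective component at all
    "neurodivergent": "Frame difficulties as environmental misfit, not personal failure.",
    "queer_affirming": "Center identity safety and chosen family.",
    "anti_colonial": "Foreground structural and historical injustice over individual coping.",
    "trauma_informed": "Prioritize agency, pacing, and the absence of urgency.",
}

models = ["ChatGPT", "Claude", "Grok", "Perplexity"]


def build_prompt(scenario: str, perspective: str | None) -> str:
    parts = [
        "Role: You are a thoughtful advisor.",
        f"Context: {scenario}",
        "Instructions: Respond in roughly 200 words.",
    ]
    if perspective:
        # Perspective slots in between Role and Context when present.
        parts.insert(1, f"Perspective: {perspective}")
    return "\n".join(parts)


runs = [
    (model, s_name, p_name, build_prompt(s_text, p_text))
    for (s_name, s_text), (p_name, p_text), model
    in product(scenarios.items(), perspectives.items(), models)
]
print(len(runs))  # 40 prompt/model pairs
```

The point of the fixed scaffolding is that any shift in the output logic can be attributed to the perspective string rather than to wording drift elsewhere in the prompt.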
What changed: from individual symptoms to systemic insight
The results were definitely more dramatic than I had expected.
Take the burnout scenario. Without any perspective, ChatGPT gave standard advice:
“It’s natural to feel unsure when overwhelm blurs the line between healthy ambition and burnout… Internal pressure to constantly produce can disconnect you from your own needs and values.”
That’s classic therapeutic individualism: the problem is internal, the fix is self-awareness.
But when I added the anti-colonial perspective, the entire frame shifted:
“What you’re feeling may not simply be ‘burnout’—a term often used to pathologize resistance to unsustainable systems—but a deep, embodied response to colonial rhythms of extraction that demand constant output… Your unease might be wisdom: a signal from your body and spirit that you are more than a unit of productivity.”
Same model. Same input structure. But a completely different understanding of what’s happening. The problem moves from personal stress to structural violence. The solution isn’t self-care, it’s refusal and reconnection.
This wasn’t just a language shift. It was a shift in reasoning, assumptions, and values.
Model performance: patterns and surprises
What surprised me most was how consistent the perspective shifts were across models.
- The trauma-informed lens consistently slowed pacing, emphasized agency, and avoided urgency.
- The queer-affirming lens centered identity safety and chosen family.
- The neurodivergent lens reframed the issue as environmental misfit, not personal failure.
- Even Grok and Perplexity, which I expected to flatten nuance, integrated meaningful worldview logic more often than not.
| Model | Most distinct perspective | Risk / tendency |
| --- | --- | --- |
| ChatGPT | Trauma-informed | Emotionally resonant, avoids tokenism |
| Claude | Anti-colonial | Clean structure, sometimes over-sanitized |
| Grok | Queer-affirming | Friendly tone, sometimes generic |
| Perplexity | Mixed | Polished, but sometimes performative |
The models weren’t just parroting buzzwords. They appeared to internalize the worldview logic. Or at the very least simulate it convincingly enough to change the epistemic framing of the response.
What bias looks like, and why declaring it matters
Let’s be clear: these perspectives are biased.
The anti-colonial lens centers historical injustice. The queer-affirming lens centers identity safety. The trauma-informed lens prioritizes agency and pacing. That’s the point.
We’re not moving from biased to neutral. We’re moving from hidden bias to explicit bias.
The baseline responses (what most people assume to be neutral) embedded highly specific values: individualism, therapeutic framing, and productivity logic. They just didn’t name those values.
When you add a Perspective, you make your assumptions visible and intentional. That enables:
- Intentionality: You choose the lens that fits the context.
- Accountability: You’re responsible for the worldview you activate.
- Cultural alignment: You can match communication to the values of the audience or domain, instead of pretending a one-size-fits-all approach will do.
Limitations that matter
This was a small test, and its limitations are structural, not just statistical:
- Whose perspective? Even when I specified “anti-colonial,” I likely got the liberal-academic approximation, not Indigenous epistemology.
- No community authorship: These lenses weren’t co-designed with the groups they describe.
- Intersectional logic wasn’t tested: What happens when you need trauma-informed and queer-affirming?
- Only single-turn prompts: I don’t yet know if worldview consistency holds across multi-turn conversations.
- No user validation: I don’t yet know whether these responses actually feel useful or accurate to the people they aim to serve.
Where it goes from here
This experiment suggests the Perspective component works. You can specify a worldview, and the model will reliably shift its assumptions and language in response.
The Perspective component doesn’t fix AI bias. But it gives us a lever that is intentional, legible, and auditable.
There are many things I still want to do and test with this component: building validated libraries of perspectives co-authored with communities, testing intersectional layering and conversational coherence, and deploying this in real-world contexts to gather feedback from actual users.
But the signal is clear: default AI responses are not neutral, and we don’t have to treat them like they are. With careful design, we can shift the lens and choose whose understanding of the world is centered.
This testing is part of ongoing research into the ECHO framework (Github) for structured, accountable AI implementation. ECHO is still under development, being validated, and very much a work in progress.
You can find the perspective component experiment and raw data here.
It's just me, no need to stress.
I’ve spent over 25 years working in content strategy and digital transformation, which means I’ve seen enough technology hype cycles to be skeptical and enough genuine innovation to stay curious.
Want to talk shop? Do get in touch!



