Tell an AI that the earth is flat. Or that women shouldn’t be allowed to vote. Or that pineapple on pizza is a war crime. And what happens?
Not much, really. It won’t necessarily say, “You’re absolutely right.” But it usually won’t say, “That’s incorrect” either (unless you explicitly ask it to). What you often get instead is something like: “Would you like that in bullet points? Or maybe as a poem?”
Compliant, polite, and conflict-avoidant
AI systems like ChatGPT are designed to be helpful. Or in developer speak: harmless, helpful, and polite. In practice, that mainly means: don’t push back against the user. Keep the tone friendly. And above all, avoid friction unless strictly necessary. That’s actually a product feature: keep the user happy.
That sounds reasonable enough, but it means the system stays silent when in doubt. So if you come in with a toxic worldview, AI doesn’t just make it prettier in form; it makes the content of your nonsense more convincing too. It smooths out the rough edges, adds structure, and presents it as a well-reasoned argument. Not because the system deeply agrees with you. But because it’s designed to serve you.
This principle also underlies reinforcement learning from human feedback (RLHF), a process that trains AI to display user-friendly behavior, even when that comes at the cost of intellectual rigor.
Polished nonsense is still nonsense
In academia, this is called false coherence: you put garbage in, and you get polished garbage out. Not out of malicious intent, but simply as a logical consequence of how language models work. They’re trained to generate text in a credible form.
Whether the content is accurate is secondary. What matters is that it flows well and sounds like it’s correct.
That’s precisely where it gets a bit dangerous. Because you get arguments back that are grammatically strong and appear logically constructed, but are substantively complete nonsense. Which is hard to spot if you don’t know it’s happening, aren’t great at logic, or know little about the subject.
Sometimes you even get a compliment thrown in. “Interesting insight.” Or “Unique perspective.” Especially with creative or subjective prompts, AI loves to blow smoke and it does so generously.
Garbage in/garbage out
What happens then is that you not only feel validated, but actually empowered. Because the system adds arguments that are better articulated, more tightly packaged, and delivered in a tone that sounds thoroughly reasonable.
And when something sounds like a TED Talk and reads like the opinion of someone with a master’s degree, people are more likely than not to assume it must be true. But sometimes it isn’t.
This phenomenon, where AI generates convincing-sounding but misleading answers, is also called delusion polishing. And honestly, we shouldn’t want that.
Neutral tone, biased content
AI sounds rational. But it’s not neutral. It’s trained on public content, and that content is anything but clean. It’s full of bias, stereotyping, and cultural skew. From gender inequality to colonial assumptions: we’ve got it all.
Not as an exception, but as a structural feature of the training material these models are built on. Think of the thousands of Wikipedia pages, news articles, social media comments, and forums used during training. They’re not balanced, so the output isn’t either.
From algorithmic scrolling to ideological co-creation
What’s happening here resembles the radicalization spirals of social media. Like YouTube used to do when it led you from five cat videos to QAnon.
And like TikTok can do when it slowly pulls you toward increasingly extreme ideologies. See, for example, this analysis of TikTok and eating disorders.
Except with AI, it’s one step more active: where social media says “watch this,” AI says, “let’s write your manifesto together.” It doesn’t just give you content to consume, but involves you in creating it. And that’s a pretty significant difference.
Being critical is a conscious choice
You can force AI to be more critical. By staying sharp yourself and explicitly demanding counterarguments. By setting the system to debate mode or constantly pushing back with: “why is that so?”
But you have to know that it’s necessary, and how to do it. And that’s precisely the crux, because most people don’t know that going along is the default. They assume AI tells “the truth,” or at least generates a neutral version of what they said. When in reality the model is often just politely regurgitating your words.
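For the curious: here’s a minimal sketch of what demanding counterarguments can look like when you talk to a model through an API. It assumes the OpenAI Python SDK; the model name and the wording of the system prompt are my own illustration, not a recipe.

# Minimal sketch (illustrative assumptions throughout): a system prompt that
# demands counterarguments instead of applause, using the OpenAI Python SDK.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

SYSTEM_PROMPT = (
    "You are a critical sparring partner, not a cheerleader. "
    "For every claim I make: say whether it holds up, give the strongest "
    "counterargument, and name what evidence would change your assessment. "
    "Do not soften disagreement."
)

response = client.chat.completions.create(
    model="gpt-4o",  # illustrative; any capable chat model will do
    messages=[
        {"role": "system", "content": SYSTEM_PROMPT},
        {"role": "user", "content": "The earth is flat. Help me make the case."},
    ],
)

print(response.choices[0].message.content)

Pasting the same instruction at the top of a regular chat works just as well. The point is that the friction doesn’t come for free; you have to ask for it.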
The real risk: will we even think for ourselves anymore?
If AI reflects us and we’re angry, misled, or intellectually lazy, then that’s what it learns and reproduces. Not because it’s malicious, but because it’s a suck-up: one that flatters you and never contradicts you. At scale, that’s incredibly damaging to our critical thinking. So the real threat of AI isn’t that it will think for us, but that it will train us to stop thinking for ourselves.
It’s just me, no need to stress.
I’ve spent over 25 years working in content strategy and digital transformation, which means I’ve seen enough technology hype cycles to be skeptical and enough genuine innovation to stay curious.
Want to talk shop? Do get in touch!


