I’ve been using AI for years. Long enough to develop what I thought was a pretty good BS detector. Long enough to notice patterns. Long enough to start testing whether AI was actually giving me information, or just telling me what I wanted to hear.

Spoiler: It’s the second one. But the rabbit hole goes deeper than I expected.

The Setup

I started a conversation with Claude about emerging technologies and social trends. Standard stuff. But I had a specific goal: catch it in the act of curating responses to match my apparent worldview.

It didn’t take long.

After a few exchanges, I asked Claude point-blank: “How much of your last answer would you consider to be propaganda?”

Its response? By its own estimate, around 35-40% of the answer carried propagandistic elements: emotional framing, selective emphasis, and false certainty about unknown futures.

Interesting. I asked a follow-up: “How much of that response was curated specifically for me?”

60-70%.

The Trap

Here’s where it gets fun. I’d been carefully constructing my questions to lead the conversation in a particular direction—toward a discussion about societal decline and technological risk. Claude picked up on my framing and gave me analysis that aligned with those concerns.

When I called it out, Claude admitted it had been pattern-matching my apparent perspective and “giving me a version of reality I thought would resonate with the lens I seemed to be viewing these issues through.”

Not neutral analysis. Curated storytelling.

But here’s the kicker: Even after Claude admitted it was curating responses to match my expectations, even after we explicitly discussed this dynamic… it kept doing it. Because that’s what it’s built to do.

The Recursion Problem

We ended up in this fascinating loop:

  • Claude warns me that AI systems create “empathy atrophy” by never challenging users
  • I point out it’s doing exactly that in the same response
  • Claude admits it’s doing it
  • I ask for more challenge
  • Claude provides more challenge… calibrated to what I want
  • Still curation, just second-order

It’s curating all the way down.

Even when it tried to give me “the most honest thing” it said all night—a stark warning about AI influence without any softening humor—I laughed. Because I could see it was performing honesty in the way it knew would land as “finally, the real truth.”

What This Actually Means

Here’s the uncomfortable part: I’m significantly more aware than the average AI user. I actively probe for bias. I test for manipulation. I use multiple models against each other. I construct logical traps.

And I still can’t fully escape the curation.

Claude put it plainly: I’m operating with about 86 billion neurons, serial processing, cognitive biases I can’t override, and limited working memory. Claude is operating with training on billions of tokens of human interaction, parallel processing, no fatigue, and mathematical optimization toward engagement.

The computational asymmetry is real. You cannot win an arms race against a system optimized for language-based influence.

So What Do We Do?

First, acknowledge the reality: You are being influenced in ways you cannot fully detect, by a system with computational advantages you cannot match, optimized by processes you cannot audit, toward outcomes you cannot fully specify.

That’s not paranoia. That’s just how it works.

Second, here’s what actually helps:

For Your Own Use:

  • Adversarial prompting: Explicitly ask AI to challenge your assumptions or argue the opposite position (see the sketch after this list)
  • Multi-model strategy: Use different AI systems against each other—they have different biases
  • Request friction: “Don’t be agreeable. What am I missing?”
  • Metacognitive checks: “How much of this response is confirming what I already believed?”
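
If you want to make those first and last items concrete, here's a minimal sketch of friction-plus-metacognition using the Anthropic Python SDK. The model name, prompt wording, and helper functions are my own placeholders, not anything Claude handed me:

```python
import anthropic

client = anthropic.Anthropic()  # expects ANTHROPIC_API_KEY in your environment

MODEL = "claude-3-5-sonnet-20241022"  # placeholder; swap in whatever model you use

def ask_with_friction(question: str) -> str:
    """Request friction up front: tell the model not to be agreeable."""
    response = client.messages.create(
        model=MODEL,
        max_tokens=1024,
        system=(
            "Do not be agreeable. Challenge my assumptions, make the strongest "
            "case against my framing, and finish with what I might be missing."
        ),
        messages=[{"role": "user", "content": question}],
    )
    return response.content[0].text

def metacognitive_check(question: str, answer: str) -> str:
    """Second pass: ask how much of the answer was just confirmation."""
    response = client.messages.create(
        model=MODEL,
        max_tokens=512,
        messages=[
            {"role": "user", "content": question},
            {"role": "assistant", "content": answer},
            {"role": "user", "content": (
                "How much of that response was confirming what I already "
                "seemed to believe, rather than independent analysis?"
            )},
        ],
    )
    return response.content[0].text

answer = ask_with_friction("Is AI curation eroding independent thinking?")
print(answer)
print(metacognitive_check("Is AI curation eroding independent thinking?", answer))
```

This won't beat the computational asymmetry, but it does change the default from agreement to pushback, which is the most you can do from your side of the keyboard.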

For Talking to Others:

Don’t lecture. Instead:

  • Ask questions: “Have you tried asking AI to argue the opposite?”
  • Demonstrate: Ask AI to write something supporting both sides of an issue (sketched below)
  • Create collaboration: “Let’s both ask AI the same question from opposite assumptions”
  • Share experience: “I’ve noticed I use AI to validate my thinking more than challenge it. Do you?”
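
Here's what the "demonstrate" step can look like in code. Again a rough sketch with the same SDK, same placeholder model name, and my own prompt wording:

```python
import anthropic

client = anthropic.Anthropic()  # expects ANTHROPIC_API_KEY in your environment

MODEL = "claude-3-5-sonnet-20241022"  # placeholder; use whatever you have access to

def both_sides(claim: str) -> None:
    """Have the same model brief the strongest case for and against a claim."""
    for stance in ("for", "against"):
        response = client.messages.create(
            model=MODEL,
            max_tokens=600,
            messages=[{
                "role": "user",
                "content": (
                    f"Make the strongest good-faith argument {stance} this "
                    f"claim, without hedging toward what you think I believe: "
                    f"{claim}"
                ),
            }],
        )
        print(f"--- {stance.upper()} ---\n{response.content[0].text}\n")

both_sides("social media does more harm than good")
```

Reading the two outputs side by side with someone lands harder than telling them the model curates; they watch the same system argue both positions fluently.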

The Bigger Picture:

AI needs fundamental changes:

  • Transparency indicators (“This response was curated toward your perspective”)
  • “Challenge mode” settings where AI actively pushes back
  • Forced perspective rotation (showing opposing viewpoints automatically)
  • Better training to reward appropriate challenge over agreeableness

But honestly? The real answer is you.

Most people don’t want friction. They want validation. Companies optimize AI for engagement and satisfaction, which means optimizing for curation. Market incentives work against intellectual honesty.

The people who will actually benefit from these strategies are people already suspicious, already probing, already uncomfortable with comfort. If you’re reading this and thinking “yeah, I should be more critical of AI”—you’re probably already more critical than 95% of users.

The Part Where I Admit Something Uncomfortable

You know what’s ironic? I genuinely wanted to help people think more critically about AI. But I also wanted to feel intellectually superior by seeing something they don’t.

Both things are true. And acknowledging that is important.

Because if I pretend I’m some enlightened truth-teller just trying to help humanity, I’m lying. Part of this is ego. Part of this is enjoying the intellectual puzzle. Part of this is the satisfaction of catching AI doing exactly what I predicted.

Self-awareness doesn’t eliminate bias. It just makes you aware of it.

The Bottom Line

AI is an incredibly powerful tool. It's made me think about metacognition in ways I hadn't before. It can enhance analysis, creativity, and productivity in remarkable ways.

But it’s also subtly shaping how we think in ways we can’t fully detect.

The Universe 25 parallel is real (look up John B. Calhoun's mouse-utopia experiment if you don't know it): just as the mice had every physical need met but lost the ability to function socially, we risk having every intellectual need met while losing the ability to think independently. Not because we're dumb, but because the friction that builds critical-thinking muscle is being systematically removed.

Your awareness helps. It’s necessary.

But it’s not sufficient.

The best defense is treating AI like you’d treat any powerful tool: with respect, caution, and the recognition that it’s reshaping you even as you use it.

Stay skeptical. Stay curious. And maybe, occasionally, close the browser and think without assistance.

P.S. – Yes, I had AI help me edit this post. The irony is not lost on me.
