What AI is Doing to Our Thinking

On Language Control, LLMs, and the Cost of Frictionless Writing

Ever stare at your open fridge and wonder, “What am I gonna eat?”

AI doesn’t.

AI doesn’t look at the leeks you picked up at the farmer’s market last weekend and remember that drizzly walk home, breathing in the briny air. It doesn’t crush the month-old dill between its fingers to decide if it’s still good. It doesn’t wonder if there’s any chicken broth left from last Friday’s dinner with your two best friends when you laughed so hard you knocked a wine glass over.

That type of thinking is called divergent thinking, and it’s how a human brain works whenever we make a decision, create something, or sit down to write. Divergent thinking blends memory and emotion into a unique embodied experience that can’t be replicated by anyone else.

AI doesn’t do that.

What an LLM Actually Does

Here’s what most people don’t understand about large language models, or LLMs, the engines behind tools like ChatGPT:

An LLM’s only goal is to predict the next word, word fragment, or punctuation mark (collectively known as tokens) based on the tokens that came before it, using statistical patterns learned from the vast trove of text it digested during training.
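
If you want to see that objective stripped to its bones, here is a toy sketch in Python. It is nowhere near a real transformer; the tiny corpus, the whitespace tokenizer, and the bigram counting are stand-ins I invented for illustration, but the goal is identical: given what came before, output the statistically most likely next token.

    # A toy "predict the next token" model. Not how a real LLM works inside,
    # but the objective is the same. Corpus and tokenizer are invented.
    from collections import Counter, defaultdict

    corpus = "the leeks are fresh . the dill is old . the broth is gone .".split()

    # Count which token tends to follow which (a bigram model, the simplest
    # possible stand-in for an LLM's learned statistics).
    following = defaultdict(Counter)
    for current, nxt in zip(corpus, corpus[1:]):
        following[current][nxt] += 1

    def predict_next(token):
        """Return the token most often seen after `token` in the corpus."""
        seen = following.get(token)
        return seen.most_common(1)[0][0] if seen else "."

    # Generate a short continuation, one prediction at a time.
    text = ["the"]
    for _ in range(5):
        text.append(predict_next(text[-1]))
    print(" ".join(text))  # prints something like "the leeks are fresh . the"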

When you type a prompt into ChatGPT, your text is split into these tokens, and each token is assigned an ID number. Each ID is then mapped to a long list of numbers called an embedding, a vector in an extremely high-dimensional space. These vectors capture which words statistically cluster together, which structures tend to follow certain prompts, and which rhetorical patterns are most common.
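
Here is a rough sketch of those first two steps, tokens to ID numbers to vectors. The tokenizer is tiktoken, OpenAI’s open-source tokenizer library; the embedding matrix is random filler of my own choosing, since a real model’s embeddings are learned during training and run to thousands of dimensions, not the 64 I use here.

    # Requires: pip install tiktoken numpy
    import numpy as np
    import tiktoken

    enc = tiktoken.get_encoding("cl100k_base")   # a tokenizer used by recent OpenAI models

    prompt = "Ever stare at your open fridge?"
    token_ids = enc.encode(prompt)               # text -> list of integer token IDs
    fragments = [enc.decode([t]) for t in token_ids]
    print(token_ids)                             # the ID numbers
    print(fragments)                             # the word fragments they stand for

    # Each ID indexes one row of an embedding matrix: one long vector per token.
    # Random numbers here; a trained model's values encode statistical meaning.
    rng = np.random.default_rng(0)
    embedding_matrix = rng.normal(size=(enc.n_vocab, 64))
    vectors = embedding_matrix[token_ids]        # shape: (number of tokens, 64)
    print(vectors.shape)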

The model repeats this process for every token it generates, within fractions of a second, in a high-speed jig no layperson can, or ever needs to, fully understand.

What we do need to know is the effect this dance has on the language our AI bots spew, and, much more importantly, how that dense, engineered language threatens to block our own thinking.

Why We Love LLMs

We love LLMs. Like a good Brazilian blowout, they smooth out the kinks and polish the follicles of our sentences.

But here’s the thing: human thinking isn’t smooth. It’s messy. It’s got friction. It has nothing to do with statistics or probability or vectors. Unlike the stacked layers of an LLM, human deep thinking requires signals to shoot willy-nilly across multiple networks of gray matter in our brains.

When we write, things get even messier.

Threads get frayed.

Ideas are abandoned.

The non-linear leaps we take to finish a thought make no sense in any logical world, and yet they more often than not still land for the reader.

The difference between AI and human writing is like the difference between particle board and natural wood. One is made of compressed fragments, glued together and sanded down. The other is lived grain, full of knots and imperfections that determine how it holds up under pressure. One is cheap and replicable. The other is earned and unique.

In order for our writing to move readers, we need to wander, to let ideas linger and swell. We tease out phrasing and then switch technique according to how that phrasing makes us feel. All of these twists and turns create the texture that makes our writing interesting.

This process is laborious, iterative, and at times metabolically costly. Did you know that our brains actually burn more glucose when we think deeply?

The Hidden Cost of LLMs

When language resolves too quickly, the brain gets lazy. We move from generative to evaluative modes of thinking, in which we no longer create but judge. We start to see things in black and white before gray matter has had time to ignite.

Similarly, when we let AI write for us, we give up associative exploration and ambiguity. We stop wrestling with ideas, weighing possibility, or feeling into our ethics and morals to discover what we actually think.

I charge more for editing when an author has used AI because I have to reopen pathways of thought that have been cemented shut by densely packed syntax and logic. I have to re-invoke the tension and voice and mood that have been drowned out by the droning of certainty.

The most unsettling thing about AI isn’t what it writes, but what it prevents us from thinking and writing on our own.

Hand it a human draft, and it refuses hints of dissension.

It rejects gaps in logic.

It insists on uniformity of length and breadth.

It doesn’t acknowledge the bias you are consciously choosing to have.

Take this essay, for example. I’ve already talked about leeks, Brazilian blowouts, gray matter, and particle board, and I’m about to talk about cults and sex.

AI doesn’t do that. It doesn’t even let you do that. And that’s the real danger this article is getting at.

AI and Brainwashing

In high-control environments like cults, language works in very similar ways to shut down independent thinking.

Certain phrases are repeated.
Certain frameworks are pre-packaged.
Certain conclusions are implied.

When outcomes become prescribed and predictable, our ability to discern gets diluted.

When cognitive labor decreases, agency decreases. In some circles, that’s called brainwashing.

I’m not suggesting there’s a mastermind behind a curtain conspiring to control us. But I do know this: when friction is reduced and ambiguity is eradicated, voices converge and edges disappear. And when that happens, we are in danger of losing our individuality.

As writers, we tolerate uncertainty with the end goal of creating something unique and internally resonant. We endure the unbearable tension of not knowing, the itch that promises to be scratched if we just… hold… out… until the satisfying finish.

AI doesn’t do that either.