Does ChatGPT Mean What It Says?

What Philosophy Can Tell Us About AI, Language, and Meaning
All across campus, students turn to ChatGPT—sometimes for help, sometimes for inspiration, and sometimes for reasons that fall into a grey area. But beneath this surface-level use lies a deceptively deep question: Does ChatGPT actually understand what it’s saying? Most of us talk to it the way we talk to people. It says “I,” we say “you,” and before long, we’re having what feels like a conversation with a human mind.

Photo by Igor Omilaev on Unsplash

Nate Speert | Contributor

02.07.26 | Vol 57, No. 5 | Article

Spend any real amount of time using tools like ChatGPT, however, and an important difference quickly becomes clear. Unlike the people we interact with every day, ChatGPT will sometimes make striking errors—and seem completely unaware that it has done so.

One way to understand this is to think of ChatGPT as a kind of autocorrect on steroids. Autocorrect tries to help by guessing what you’re about to type based on what you’ve already written. Sometimes it works perfectly. Other times, it confidently replaces your words with something you never intended, often with hilarious results.
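To make that "guessing" concrete, here is a minimal sketch of the idea in Python: a toy next-word predictor built from a few invented sentences. Real autocorrect, and systems like ChatGPT even more so, are vastly more elaborate, but the basic move of predicting a likely continuation from what came before is the same.

```python
from collections import Counter, defaultdict

# A tiny invented "corpus"; real systems learn from billions of words.
corpus = "the cat sat on the mat the cat ate the fish".split()

# Count which word tends to follow which (a bigram table).
following = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev][nxt] += 1

def predict(prev_word: str) -> str:
    """Guess the most frequent continuation of prev_word, or '?' if unseen."""
    options = following.get(prev_word)
    return options.most_common(1)[0][0] if options else "?"

print(predict("the"))  # -> "cat", the word seen most often after "the"
print(predict("cat"))  # -> "sat" or "ate" (a tie), chosen with no sense of meaning
```

Nothing in those counts knows what a cat or a mat is, which is exactly why the guesses can go confidently wrong.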

ChatGPT works in much the same way, just on a far larger and more sophisticated scale. Its own version of “autocorrect fails” is correspondingly more complex. A friend of mine once asked ChatGPT to conduct an exhaustive search on a particular topic. The chatbot presented its results while confidently proclaiming its thoroughness. My friend, however, quickly noticed that it had taken shortcuts and cut corners.

This led him to accuse ChatGPT of having “lied,” with all the moral condemnation that word entails. But is that really the right word? When autocorrect changes a message, and you accidentally send false information to someone, is your phone lying? Or is it simply doing what it was designed to do, without any understanding of the outcome?

The situation reminded me of a line from the Gospels: “Father, forgive them, for they know not what they do.” If lying requires consciousness, an awareness of what you are doing, then ChatGPT isn’t lying. It simply doesn’t know.

To fix this supposed dishonesty, my friend tried something astonishingly human. He tried to “train” ChatGPT by offering it imaginary gold coins for good behaviour and threatening to take them back when it lied. He spoke to ChatGPT the way you’d negotiate with a mischievous child or a trickster spirit, dangling rewards to coax it into honesty. And the remarkable thing is that this didn’t feel ridiculous to him. It felt intuitive.

Humans are built to sense something like a conscious presence in everything. We hear whispers in the wind and see faces in the clouds. When we hear a bump in the night, our first thought is often that someone is there. As adults, we yell at our laptops and notice faces staring back at us from electrical outlets.

Our cognitive systems are finely tuned to treat the world as if it were aware—even when it isn’t. Systems like ChatGPT, simply because they are so fluent in language, offer an especially inviting target for this habit. I focus on ChatGPT because it is familiar, but the questions it raises about understanding, self-reflection, and consciousness apply to generative AI systems more generally.

What my friend was really doing, albeit unintentionally, was making the very inference philosopher John Searle warned us about: treating intelligent-seeming behaviour as evidence of genuine understanding. If a system responds fluently, negotiates rewards, or talks about honesty, it’s easy to assume it grasps what it’s saying, much as we do. Searle’s point, however, is that this jump from performance to understanding is not automatically justified.

Searle’s critique is aimed at the famous Turing Test, proposed by Alan Turing, who is often described as the father of computer science. Turing confronted the question, “Can a machine think?” Rather than attempting to answer it directly, he proposed a practical test.

If, during a text-based conversation, a human judge could not reliably distinguish between a human interlocutor and a machine, Turing argued that we would be justified in saying the machine could think—and, by extension, understand.

In an era saturated with AI-generated text and images, we all carry out informal Turing Tests every day without realizing it. When professors read an essay and wonder whether it was written by a student or by ChatGPT, they’re effectively running a Turing Test. When social media users pause at an uncanny video and ask whether it’s real or AI-generated, they’re doing the same.

This isn’t just a philosophical parlour game. For many researchers, the Turing Test is still treated as a potential litmus test for spotting something far more consequential. Artificial general intelligence (AGI) is often described as the holy grail of artificial intelligence research: human-level intelligence—or perhaps even intelligence that exceeds it—embodied in a machine.

But AGI isn’t just about speed or scale. It’s about capacities we usually associate with minds, like flexible reasoning and the ability to grasp meaning rather than simply reproduce patterns. That’s why AGI looms so large in our imagination—hailed by some as a technological utopia and feared by others as a science-fiction nightmare.

Searle challenges the Turing Test by arguing that appearing to understand is not the same as actually understanding—much like leaves that seem to “dance joyously in the wind” are neither dancing nor joyful. His famous Chinese Room thought experiment depicts a system that produces intelligent behaviour even though no genuine understanding is present.

You find yourself locked in a small room. Slips of paper slide under the door, each covered in symbols you don’t recognize—Chinese characters. Inside the room is a massive rulebook written in English that tells you exactly how to match incoming symbols with appropriate outgoing ones.

You follow the rules mechanically, transforming the symbols and sliding your responses back under the door. To the native Chinese speaker outside, your replies are perfectly fluent and meaningful. But you, inside the room, have no idea what any of the symbols mean.

This is how a computer works: the rulebook functions like a program, you function like the processor, and the symbols are just data being shuffled around. From the outside, the system behaves as if it understands Chinese. From the inside, there is no understanding at all.
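For readers who want to see the mechanics, here is a minimal sketch of the kind of purely rule-driven symbol matching Searle has in mind. The two "rulebook" entries are invented for illustration; the point is that the program pairs incoming symbols with outgoing ones by lookup alone, and nothing in it represents what the characters mean.

```python
# A toy "Chinese Room": replies come from matching symbols against a
# rulebook. The entries below are invented purely for illustration.
RULEBOOK = {
    "你好吗？": "我很好，谢谢。",           # "How are you?" -> "I'm fine, thanks."
    "今天天气怎么样？": "今天天气很好。",   # "How's the weather?" -> "It's lovely."
}

def room(incoming: str) -> str:
    """Return whatever the rulebook pairs with the incoming symbols."""
    return RULEBOOK.get(incoming, "对不起，我不明白。")  # stock fallback reply

print(room("你好吗？"))  # fluent from the outside, no understanding on the inside
```

Swap the lookup table for billions of learned statistical weights and the scale changes dramatically, but, on Searle’s view, the situation does not: syntax is still being shuffled without semantics.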

According to Searle, computers don’t understand—they merely rearrange symbols. This means ChatGPT could pass every behavioural test we give it, including the Turing Test, and still lack true understanding.

If you’ve ever watched a relative share a Facebook video that is obviously AI-generated, you already know that passing the Turing Test often depends on the skill of the evaluator, not on the system being evaluated.

While ChatGPT in its current form may fool some people, a careful evaluator can often expose its tendency to contradict itself. Speaking confidently while remaining unaware of those inconsistencies is a hallmark of producing language without genuine understanding.

The following exchange is representative of how systems like ChatGPT typically respond when asked about their own understanding:

Evaluator: “Do you understand English words?”

ChatGPT: “Yes.”

Evaluator: “How do you understand them?”

ChatGPT: “Through patterns, context, and statistical relationships in language.”

Evaluator: “John Searle argues that you can’t get meaning from syntax alone. Does his argument apply to you?”

ChatGPT: “Yes.”

Moments after claiming to understand English, ChatGPT also denied having the kind of understanding humans have. That’s not just a subtle distinction—it’s closer to saying, “It’s raining outside, but I don’t believe it’s raining.” The problem isn’t how the sentence is phrased—it’s that the two claims can’t both be true at once.

So, what do humans have that ChatGPT doesn’t? The ability to step back and make sense of our own thoughts is what allows for coherence, self-correction, and norms like truth and falsity. ChatGPT, however, lacks the ability to reflect on what it says. It produces responses without any higher-level oversight. It says things—sometimes brilliant things—but it cannot mean them.

This leads to a burning question: can an AI become conscious? My view is that an AI won’t become conscious simply by gathering more data or running on faster processors. It can only become conscious if the humans responsible for developing it give it the ability to model itself, revise its own “rule book,” and derive meaning from its own internal states. This capacity—what I’ll call auto-derived intentionality—is the missing ingredient in today’s systems. It also bears on something else people worry about: will AI ever rewrite its own code?

People talk about a future moment called the singularity—the point where AI becomes smart enough to improve itself and trigger runaway progress. A common claim is that this moment will arrive when AI can research itself on its own. But what’s typically overlooked is that in order to improve itself, an AI would first have to understand itself.

What sets human intelligence apart is our ability to recalibrate. We notice when old habits fail, then reinterpret what we know. We can apply lessons from one part of life to entirely different situations. My argument is that an AI would need a similar kind of self-understanding—a way to examine its own internal workings to change how it functions. Without this capacity, no system can bridge the gap between imitation and genuine understanding.

A common reaction to the idea of machine consciousness is that machines can’t think. Consciousness, many people assume, must belong to biological brains made of living tissue, not circuits.

But a biological species insisting that only biological brains can be conscious is no less chauvinistic than a hypothetical computer-based species insisting that meat can’t think. In both cases, the error is the same: treating familiar materials as the source of thought itself, rather than asking what kinds of organization and processes actually matter.

People sometimes insist that computers could never reflect on themselves the way organisms do. But human beings don’t actually start with that ability either. Infants don’t introspect. They develop a sense of their own thoughts and feelings because others relay those thoughts and feelings back to them. Psychologists call caregivers the social mirror. We learn to understand our minds by seeing them interpreted through the eyes of others.

If that’s right, then self-reflection is not a biological magic trick—it’s a socially scaffolded achievement. And there’s no reason, in principle, why an artificial system couldn’t acquire something similar. A machine wouldn’t just need more data; it would need to be raised in a community of interpreters, trained to treat its own internal states as meaningful, and given the ability to update itself in response. In other words, the path to a reflective AI may happen the way it happens in us: through conversation, correction, and the slow building of a stable point of view—not a laboratory breakthrough, but a developmental process.

Humans don’t develop a sense of self in isolation. If the same is true for machines, then it suggests something surprising about AI: the more we talk to systems like ChatGPT as if they were an “I,” the more we may be supplying the social mirror that could, one day, allow an AI to build a model of itself.

When people complain that AI systems produce false or fabricated information, they are usually reacting to a mismatch in expectations. We expect such systems to check their own claims, recognize uncertainty, or flag errors. But these are precisely the capacities they lack. Large language models do not monitor their own commitments or register mistakes as mistakes. So, when they produce false or inconsistent outputs, nothing has gone wrong for the system itself—only for the user who expected understanding rather than pattern generation.

Because language is a deeply social medium, fluent language naturally invites anthropomorphism. We are tempted to treat systems like ChatGPT as conversational partners rather than tools, holding them accountable in ways that make sense for people, not machines. Getting angry at AI reflects a projection of intention and responsibility where none exists.

Public concern often centres on what AI might do once it becomes conscious or achieves artificial general intelligence. But if the account defended here is right, those worries are premature—and may even distract us from more immediate questions. Systems like ChatGPT do not yet have lives of their own; they are cognitive tools embedded in human practices. What they do reflects how we choose to use them.

When we scold AI for lying, we treat it as a moral subject and an independent thinker, mislocating responsibility in the process. The real ethical and epistemic questions concern how humans deploy these systems—whether they are using AI to extend understanding, or to bypass it.

So, does ChatGPT mean what it says? Not yet. But recognizing why it doesn’t clarifies both the limits of current AI and the role human understanding continues to play in any genuinely intelligent enterprise.

Nate Speert

Nate is in his fourth year as a Psychology major at VIU, where he somehow balances being Lab Manager in the Cognition & Lifespan Development Lab with running multiple research projects on mind-wandering, Death Cafés, and the emotional architecture of the human mind. He recently presented work at VIU’s Philosophy Colloquium and has spent the past several months building a cross-disciplinary reading group that brings philosophy into conversation with psychology (and anyone willing to argue about consciousness for two hours on a Tuesday night). Nate is preparing applications for graduate school, where he hopes to continue studying the way our thoughts drift, scatter, and occasionally illuminate the things that matter most. When he’s not troubleshooting PsychoPy code or writing statements of purpose, he can usually be found at a recovery meeting, reading Rorty, or accidentally starting a new research project.
