All in my head: Mr. Feynman on Knowing vs. Understanding
Philosophy is one of my best investments; let's hear from the champ.
We’ve all seen the power of LLMs: models solving Raven’s matrices, acing standardized tests like the MCAT and the Bar, even “writing” poetry and music. But ask them to step off the beaten path, to invent a new twist on a classic puzzle or carry out formal logic, and they fall short. Apple’s recent paper, The Illusion of Thinking, showed exactly this: today’s “reasoning” models shine on familiar benchmarks but collapse on simple multi-step logic once you nudge them outside their training comfort zone. They know a lot, but they don’t understand. By the end of this essay, I will draw on Richard Feynman’s insights to argue that LLMs cannot truly understand, and to explain how genuine understanding will give SOULR its edge at the inception of the LVM.
Richard Feynman foresaw this distinction long before “LLMs” were a thing. He painted a thought experiment about two theories, A and B, that at first glance look utterly different, yet yield the same predictions:
“Suppose you have two theories, A and B, which at first glance look completely different, but all the consequences you compute from them are the same. They even agree with the experiment to the same extent. How are you going to decide which one is right? No way—not by science.”
The punchline isn’t that physics is indecisive; it’s that the framework you carry in your head is the creative spark. Feynman continues:
“Although they’re identical before they’re changed, there are certain ways of changing one which look natural…in the other way not so natural. Therefore…we must keep all the theories in our head…hoping that they’ll give us different ideas for guessing.”
In other words, understanding isn’t just matching output to input—it’s having the philosophy (mental model and values) that tells you where to poke the system next.
LLMs, for all their spectacle, are confined to knowing patterns and probabilities. Show them a familiar pathway and they’ll walk it flawlessly, but give them a fresh fork and they lack the mental model to choose the turn that leads forward. They’ll recite the steps of the Tower of Hanoi and mimic Socrates, but they fail to ask truly novel questions.
Humans, by contrast, carry multiple, overlapping (sometimes contradictory) frameworks, experiences, and models that let us leap to new vistas without checking every inch of the ground. We don’t just store facts; we hold them in value-based contexts that highlight which corners to explore and which to abandon at any given moment. That’s why a human physicist can juggle six equivalent formulations of the same law yet find that one representation suddenly sparks the brilliant tweak needed to solve a stubborn problem.
Philosophy as Human Thinking
Philosophy is simply a toolbox of thinking habits that lets us hold multiple models in tension and weigh which of them is most evidently true. An LLM, by contrast, is a single, colossal probability table: brilliant at replaying known patterns, but blind to the serendipitous gap where creativity lives.
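A toy way to see that “probability table” framing: a bigram model, vastly simpler than a real LLM but the same in spirit. It counts which word follows which in its training corpus and can only ever replay those transitions; nothing outside the table can be reached. (The corpus and function names here are illustrative, not from any real system.)

```python
from collections import defaultdict

def build_bigram_table(corpus):
    """Count word-to-next-word transitions, then normalize to probabilities."""
    counts = defaultdict(lambda: defaultdict(int))
    for sentence in corpus:
        words = sentence.split()
        for prev, nxt in zip(words, words[1:]):
            counts[prev][nxt] += 1
    # Normalize each row so the options after a given word sum to 1.
    table = {}
    for prev, nexts in counts.items():
        total = sum(nexts.values())
        table[prev] = {w: c / total for w, c in nexts.items()}
    return table

corpus = ["the cat sat", "the cat ran", "the dog sat"]
table = build_bigram_table(corpus)
# After "the", the model can only replay what it has seen:
# cat with probability 2/3, dog with probability 1/3.
# A word the corpus never paired with "the" has probability zero.
print(table["the"])
```

Scale the table up by many orders of magnitude and condition on long contexts instead of one word, and you have the skeleton of the pattern-replay Feynman’s “philosophy” is meant to transcend.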
Consider Isaac Newton’s theory of gravity. It was perfectly simple, until Mercury’s orbit refused to comply. To tweak Newton’s equations just enough to match that tiny precession, you can’t graft an “imperfection” onto a perfect thing; you need an entirely new perfect thing. Feynman puts it bluntly:
“You can’t make imperfections on a perfect thing; you have to have another perfect thing. The philosophical ideas between Newton’s theory of gravitation and Einstein’s theory…are enormous…These philosophies are…tricky ways to compute consequences quickly.”
Einstein wasn’t a computer, so he didn’t brute-force the data; he carried a different mental model of space and time, and that let him see Mercury’s anomaly differently.
The Mayan Astronomer
Feynman’s favorite illustration comes from the Maya. Their priests predicted eclipses with uncanny precision, all by arithmetic. They never speculated that “those balls of rock” were orbiting the Earth. When a student suggested an orbital model, the elder replied:
“How accurately can you predict eclipses?”
And even when told the orbital theory could do better, he shrugged it off: “We’ve already developed the scheme further.” The Maya’s arithmetic knew what happened; it never understood why.
Lived Experience and Creation
True understanding springs from lived experience: a child’s scraped knees after learning to ride a bike for the first time, a parent’s lullaby gracing your ears, losing a best friend, the feeling of a warm, smelly hug from grandma, and the adrenaline of falling in love for the first time. These aren’t data points; they’re the soil in which creation takes root. AI can’t hard-code grief or hope because it cannot truly live among us: it doesn’t reproduce, grieve, celebrate, or dream. It lacks the social and sensual context that gives ideas their generative power. I believe, for now, the best it can do is return agency to humans to turn these experiences into form.
Humans are creators because we stand inside our narratives and build outwards. We hold memories and values, what SOULR calls orbs, in just the right mental geometry to spark new connections. That’s what Feynman meant when he said a “philosophy is simply a way that a person holds the laws in his mind to guess quickly at consequences.”
Instead of flattening you to a behavioral vector, SOULR models you as a sovereign set: an evolving constellation of memories, values, aspirations, knowledge, and experiences. Its Large Value Model isn’t trained just on web text, but on your personal memory stream and active internal logic. It doesn’t merely echo what you already know; it reframes your internet experience in ways that let you explore yourself and the world.
In Feynman’s words, SOULR gives you the “philosophy” to guess the next unknown, rather than brute-forcing more data for marginal gains.
At the natural inception of the LVM, understanding becomes our core superpower. LLMs supply scale and recall; we will supply the frameworks that transform recall into insight. Together, we form a creative monopoly—the only path to the genuine breakthroughs that neither humans nor machines can achieve alone. To know is to map the territory; to understand is to redraw the map.
SOULR empowers you to do both.
When you see AI perform something powerful, ask yourself: did it know, or did it understand? If you think it understands, interrogate it, and let me know if you still believe. ;)