Blog Archive

Monday, December 8, 2025

How Large Language Models View Our World: Summary & Review of a Big Think Interview with Dan Shipper, CEO and AI Podcaster


     Dan Shipper, the CEO of Every and an AI podcaster, notes that AI is being used in various ways. He contrasts the way large language models see the world with the way computers and science do, both of which try to:

 “…reduce the world into a set of really clean universal laws that apply in any situation. If X is true, then Y will happen.”

     He says that large language models (LLMs) see something different, a:

“…dense web of causal relationships between different parts of the world that all come together in unique, very context specific ways to produce what comes next.”

     He says LLMs think, or rather operate, much like human intuition in that they are trained by many hours of direct experience.

     First, he talks about rationalism, its journey from Socrates to LLMs, and its limitations. He posits rationalism as the way we see the world. He calls Socrates the father of rationalism because, with his emphasis on inquiry, he was the first to “officially” study truth and what makes things true. He points to one of Plato’s dialogues in which Socrates debates Protagoras over whether excellence, sometimes translated as virtue, can be taught. Here, Socrates stresses the need to define what excellence really is before determining whether or not it can be taught. Defining terms and ideas, putting them into words, is one way we come to know things. Thus, in a sense, to define is to know.

     Shipper says that the rationalist worldview came to the fore in the Age of Enlightenment due to thinkers like Descartes, Newton, and Galileo, who used it to explain the world. With them came the affirmation that one needed to be able to rationally describe ideas, preferably mathematically.

     Shipper sees the social sciences as following the same framework or structure as the physical sciences. However, he acknowledges that there are limits to doing so, since the “soft” or social sciences have subjective aspects that do not lend themselves well to objective analysis. He mentions the ongoing replication crisis in experimental psychology as a kind of proof that the social sciences are less amenable to being understood in the way the physical sciences are.

     AI is similarly difficult to rationalize, he suggests. Here, he notes that there is no universally agreed-upon definition for AI. AI began in the 1950s with a method known as symbolic AI:

“The idea that you could embody thinking in, essentially logic, logical symbols, and transformations between logical symbols, which is, it’s very similar to just basic philosophy.”

     Early AI, he says, could only solve simple problems, not the complex ones that permeate the world. AI follows rules, but rules have exceptions, those exceptions must be defined, and some may require continuous retraining, as the exceptions are dynamic and constantly changing.
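     To make that brittleness concrete, here is a minimal sketch of my own (not from the interview) of a symbolic, hand-written email filter in Python. The senders and keywords are invented; the point is that every new exception has to be coded in by hand:

# A toy symbolic-AI style email filter: every rule and exception is written by hand.
def is_important(sender, subject):
    # Rule: mail from the boss is important...
    if sender == "boss@example.com":
        # ...exception: unless it is an automated calendar notice...
        if "calendar" in subject.lower():
            # ...exception to the exception: unless it mentions the board meeting.
            return "board" in subject.lower()
        return True
    # Rule: anything mentioning "invoice" is important...
    if "invoice" in subject.lower():
        # ...exception: unless it looks like a marketing blast.
        return "sale" not in subject.lower()
    # Everything else falls through; new cases need yet more hand-written rules.
    return False

print(is_important("boss@example.com", "Calendar: board meeting moved"))   # True
print(is_important("noreply@shop.example", "Invoice inside - HUGE SALE"))  # False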

     Interestingly, the idea of neural networks was around when AI was young, but wasn’t taken seriously until the 80s and 90s. Neural networks are inspired and informed by the way the human brain works.

“What you can do with a neural network is you can get it to recognize patterns by giving it lots of examples. For example, if you want it to recognize whether an email is important, what you can do is you can give it an example, say here’s an email from a coworker, and have it guess the answer. And if the answer is wrong, what we’ve done is we’ve created a way to train the network to correct its wrong answer.”
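     The guess-and-correct loop he describes can be sketched as a tiny perceptron-style classifier in Python. The features and examples below are invented purely for illustration; a real network learns far richer patterns, but the training idea is the same: guess, compare with the correct answer, and nudge the weights:

# Minimal guess-and-correct training loop (perceptron-style); everything here is a toy.
# Each email is reduced to invented features: (from_coworker, mentions_deadline, is_newsletter).
examples = [
    ((1, 1, 0), 1),  # coworker + deadline        -> important
    ((1, 0, 0), 1),  # coworker                   -> important
    ((0, 0, 1), 0),  # newsletter                 -> not important
    ((0, 1, 1), 0),  # newsletter with "deadline" -> still not important
]

weights = [0.0, 0.0, 0.0]
bias = 0.0
learning_rate = 0.1

for _ in range(20):                       # a few passes over the examples
    for features, label in examples:
        score = sum(w * x for w, x in zip(weights, features)) + bias
        guess = 1 if score > 0 else 0     # the network's guess
        error = label - guess             # nonzero only when the guess is wrong
        # Correct the weights in the direction that would have fixed the guess.
        weights = [w + learning_rate * error * x for w, x in zip(weights, features)]
        bias += learning_rate * error

print(weights, bias)  # ends up favoring coworker/deadline and penalizing newsletters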

“Language models are a particular neural network that operates by finding complex patterns inside of language and using that to produce what comes next in a sequence.”
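     As a toy illustration of “producing what comes next in a sequence,” here is a next-word predictor of my own built from simple bigram counts over a made-up corpus. Real language models use neural networks trained on vastly more text; this only sketches the pattern-then-predict idea:

from collections import Counter, defaultdict

# Count which word follows which in a tiny, invented corpus.
corpus = (
    "the model predicts the next word . "
    "the model learns patterns in language . "
    "patterns in language help predict the next word ."
).split()

following = defaultdict(Counter)
for current_word, next_word in zip(corpus, corpus[1:]):
    following[current_word][next_word] += 1

def predict_next(word):
    # Return the continuation seen most often after this word during "training".
    return following[word].most_common(1)[0][0]

print(predict_next("the"))       # the most common word after "the" in the toy corpus
print(predict_next("patterns"))  # -> "in"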

     LLMs can draw on essentially the whole internet to predict what comes next in a sequence. He points out that the rules a neural network learns are not explicit; they are more like intuitions, and often only some of them can even be identified. Here, he returns to the similarities with human intuition:

“And what’s really interesting about neural networks is the way that they think, or the way that they operate it looks a lot like human intuition. Human intuition is also trained by thousands of hours of direct experience.”

     We have long-standing metaphors of the human mind being “like” a computer. Shipper sees this as problematic. Rather, he sees rationality as emerging out of intuition. He goes on to suggest that intuition is beyond the reach of rationalism and, rather than being overshadowed by it, retains its importance in human knowledge and understanding. He goes back to the dialogue in which Protagoras argues with the aid of myths and metaphors instead of rational definitions. He also says that neural networks are the first thing we have invented that works like human intuition. It is now widely acknowledged that intuition can be another way of knowing in addition to rationalism. He also observes that many people who use ChatGPT find they can intuit what it will and will not be good at, and when it is hallucinating, much as we can intuit the feelings of a close friend.

“The interesting difference between how a language model sees the world and how a traditional computer sees the world is this: a traditional computer tries to reduce everything into a set of clean, universal laws that apply in any situation — essentially, “if X is true, then Y will happen.” It relies on clear, context-free chains of cause and effect.”

“And what language models see instead is a dense web of causal relationships between different parts of the world that all come together in unique, very context-specific ways to produce what comes next. I think language models do something really, really unique, which is that they can give you the best of what humanity knows, at the right place, at the right time in your particular context, for you specifically.”

     Thus, he says, neural networks and language models are contextual. They are more based on pattern matching and a kind of fuzzy logic, and they use previous experience to predict the future.

“…the way that a more intuitive relational fuzzy pattern matching type experiential, contextual type way of knowing about the world has to be underneath the rational stuff for the rational stuff to work at all. It’s really about recognizing the more intuitive ways of knowing about the world as being the original parent and partner of rationality, and appreciating that for what it is.”   

     The contextual knowledge of LLMs and neural networks can enable hyper-personalized answers to specific problems. They don’t need explicit definitions or rational models to be successful. These networks and models are trained to detect and predict. He says they turn a science problem into an engineering problem. He thinks that better data access and sharing, especially by the Big Tech companies, will lead to more thorough AI training. Shipper thinks we can, in effect, capture our intuition in a machine and pass it around.

     Shipper thinks AI will seriously enrich our understanding of ourselves. He sees AI as a mirror and as a metaphor.  However, limiting AI to only what is provable limits it.

“The thing about it that makes it powerful is that it works on probability, it works on thousands of correlations coming together to figure out what the appropriate response is in this one very unique, very rich context. And allowing it to say only things that are provable, obviously begs the question: what is true and how do we know?”

     AI can be messy, but the key thing to know is that with adequate training, it works, though we may not be able to entirely explain how it works.
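     What “works on probability” means in practice can be sketched as follows (the candidate words and scores are invented, not from any real model): at each step the model ends up with a score for every possible continuation, turns those scores into probabilities, and samples from them rather than following a hard if-then rule.

import math
import random

# Invented scores ("logits") for a handful of candidate next words.
candidate_scores = {"intuition": 2.1, "logic": 1.4, "context": 0.9, "banana": -3.0}

def softmax(scores):
    # Turn raw scores into probabilities that sum to 1.
    exps = {token: math.exp(s) for token, s in scores.items()}
    total = sum(exps.values())
    return {token: e / total for token, e in exps.items()}

probs = softmax(candidate_scores)
print({token: round(p, 3) for token, p in probs.items()})

# Sampling: likely words appear often, unlikely ones rarely, none is ruled out outright.
tokens, weights = zip(*probs.items())
print(random.choices(tokens, weights=weights, k=5))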

“Something to remember is each model builds on the models that came before it. They actually have a dense, rich idea of what it is to be good from all the data that they get. They also have a dense, rich idea of what it is to be bad. But in a lot of ways, the training that we’re doing makes them less likely to do any of that stuff.”

“There’s something very practical and pragmatic about, we have a machine, we don’t know fully how it works, but we’re just going to teach it, and we’re going to iterate with it over, and over again until we basically get it to work.”

     Shipper compares working with AI to gardening rather than to sculpting, where something is created from nothing. A gardener gives his plants the conditions to succeed without forcing that success directly; training a model, he suggests, is much the same.

“AI does a lot of the more repetitive specialized tasks, and it will allow individuals to be more generalistic in the work that they do. And I think that would be a very good thing.”

     In the last section, he mentions the idea that we’re moving from a knowledge economy to an allocation economy. He also says that an allocation economy will require more of a certain kind of worker with specific skills. Those specific skills will often be the skills of human managers, which include knowing one’s human assets and capabilities, such as “knowing what any given person on your team can do, what are they good at, what are they not good at.”

“In a knowledge economy, you are compensated based on what you know. In an allocation economy, you’re compensated based on how well you allocate the resources of intelligence. There’s a particular set of skills that are useful today but are not particularly widely distributed that will become some of the main skills in this new economy, in this new allocation economy. And that is the skills of managers, those are the skills of human managers, which make up a very small percentage of the economy right now. I think it’s like 7% of the economy is a human manager.”

     Some of the ongoing problems to be solved include how to manage the other humans working on a problem. He also suggests that managing humans and managing a model are similar: both require some intuitive ability. Shipper mentions an interesting idea, that intelligence is like compression, in that it squeezes a wide range of possible answers into a small space, or rather, it can find the right answers quickly (a small numeric sketch of this idea follows the quote below). He thinks that is partly how brains and consciousness work as well. He suggests that, in a sense, these models may have consciousness, or at least that we can benefit from understanding them that way. Just in case, he says:

“I always say please and thank you to ChatGPT because you never know when the machine apocalypse is going to come.”
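     To make the compression aside concrete: in information theory, encoding a word a model assigns probability p costs about -log2(p) bits, so the better a model predicts what actually comes next, the fewer bits it needs. The probabilities below are invented; this is standard arithmetic, not anything from the interview:

import math

# Suppose the true next word is "intuition" (the probabilities are made up).
uniform_prob = 1 / 50_000    # a clueless model: 50,000 equally likely words
good_model_prob = 0.25       # a model that has learned the context

print(f"clueless model: {-math.log2(uniform_prob):.1f} bits")     # about 15.6 bits
print(f"good model:     {-math.log2(good_model_prob):.1f} bits")  # 2.0 bits
# Better prediction means fewer bits, i.e. better compression.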

  Well, now I understand neural networks and LLMs better than I did, and I’m grateful, so a worthwhile article. It was actually a transcribed podcast that I worked from. There is a video as well at the link in the references.



References:

 

How Large Language Models View Our World. Big Think. August 27, 2025.

