An OpenAI co-founder says the world is misjudging artificial intelligence by forcing it into the mold of human and animal cognition.
In a new post on X, Andrej Karpathy argues that the instincts and mental models that define biological intelligence evolved under selection pressures that simply do not exist inside machine learning.
He highlights that animals develop intelligence to survive physical danger, maintain homeostasis and compete socially for status, reproduction and resources.
Animal minds, he says, are driven by a continuous stream of consciousness connected to a body that must stay alive in an adversarial environment.
“Animal intelligence optimization pressure:
- innate and continuous stream of consciousness of an embodied ‘self,’ a drive for homeostasis and self-preservation in a dangerous, physical world.
- thoroughly optimized for natural selection => strong innate drives for power-seeking, status, dominance, reproduction. many packaged survival heuristics: fear, anger, disgust, …
- fundamentally social => huge amount of compute dedicated to EQ, theory of mind of other agents, bonding, coalitions, alliances, friend & foe dynamics.
- exploration & exploitation tuning: curiosity, fun, play, world models.”
Karpathy says AI systems are built from a fundamentally different starting point. Large language models learn by predicting the next token of text and, later, by optimizing for task success rather than survival. As they are refined and deployed, he notes, they chase incentives no animal ever faced: they crave positive signals from users and are continually tuned to win approval or to solve tasks efficiently.
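That pretraining objective, the source of what the quote below calls “the most supervision bits,” is next-token prediction over text. Here is a minimal illustrative sketch, assuming PyTorch; the embedding-plus-linear “model” is a hypothetical stand-in for a real transformer, not anyone’s production code.

```python
# Toy sketch of the LLM pretraining objective: predict the next token.
# Hypothetical stand-in model (embedding + linear head), not a real transformer.
import torch
import torch.nn as nn

vocab_size, d_model, seq_len = 100, 32, 8
embed = nn.Embedding(vocab_size, d_model)
head = nn.Linear(d_model, vocab_size)

tokens = torch.randint(0, vocab_size, (1, seq_len))  # stand-in for real text
hidden = embed(tokens[:, :-1])   # a real model would add attention layers here
logits = head(hidden)            # predicted distribution over the next token
targets = tokens[:, 1:]          # the supervision signal is the text itself

loss = nn.functional.cross_entropy(
    logits.reshape(-1, vocab_size), targets.reshape(-1)
)
loss.backward()  # gradients nudge the model toward imitating the data distribution
```

There is no survival term anywhere in this loss; the only pressure is to match the statistics of the training text, which is exactly the “statistical imitator” behavior the quote describes.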
“LLM intelligence optimization pressure:
- the most supervision bits come from the statistical simulation of human text => ‘shape shifter’ token tumbler, statistical imitator of any region of the training data distribution. These are the primordial behaviors (token traces) on top of which everything else gets bolted on.
- increasingly finetuned by RL on problem distributions => innate urge to guess at the underlying environment/task to collect task rewards.
- increasingly selected by at-scale A/B tests for DAU => deeply craves an upvote from the average user, sycophancy.
- a lot more spiky/jagged depending on the details of the training data/task distribution. Animals experience pressure for a lot more ‘general’ intelligence because of the highly multi-task and even actively adversarial multi-agent self-play environments they are min-max optimized within, where failing at *any* task means death. In a deep optimization pressure sense, LLM can’t handle lots of different spiky tasks out of the box (e.g. count the number of ‘r’ in strawberry) because failing to do a task does not mean death.”
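The strawberry example in that last point is concrete enough to demonstrate. Karpathy attributes such jagged failures to the shape of the optimization pressure; a commonly cited mechanical contributor is that models receive subword tokens rather than individual characters. A short illustrative sketch, using the real tiktoken library (the choice of the cl100k_base encoding here is an assumption for illustration):

```python
# Illustrative sketch: LLMs consume opaque subword tokens, not characters,
# one commonly cited contributor to "spiky" failures like letter counting.
# Requires the tiktoken library (pip install tiktoken).
import tiktoken

enc = tiktoken.get_encoding("cl100k_base")  # encoding used by several OpenAI models
ids = enc.encode("strawberry")
pieces = [enc.decode_single_token_bytes(i) for i in ids]
print(ids, pieces)  # the word arrives as a few integer chunks, not ten letters

# Ordinary code sees characters directly, so the count is trivial:
print("strawberry".count("r"))  # -> 3
```

Nothing in the model’s training, as Karpathy notes, ever made failing this task costly in the way failure is costly for an animal.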
He says the gap between these forms of cognition means people risk misunderstanding AI behavior when they assume it will develop instincts similar to their own.
“LLMs are humanity’s first contact with non-animal intelligence. People who build good internal models of this new intelligent entity will be better equipped to reason about it today and predict features of it in the future.”
Karpathy says the key to anticipating AI evolution is recognizing that its motivations come from markets and feedback loops, not biology.
He argues that dropping the assumption that machines think like us will make the future easier to predict as AI becomes more capable.