    OpenAI’s Andrej Karpathy Breaks Down How AI Thinks Differently From Human and Animal Minds

By Henry Kanapi | November 23, 2025

    An OpenAI co-founder says the world is misjudging artificial intelligence by forcing it into the mold of human and animal cognition.

    In a new post on X, Andrej Karpathy says the instincts and mental models that define biological intelligence evolved through pressures that do not exist inside machine learning.

    He highlights that animals develop intelligence to survive physical danger, maintain homeostasis and compete socially for status, reproduction and resources.

    Animal minds, he says, are driven by a stream of continuous consciousness connected to a body that must stay alive in an adversarial environment.

    “Animal intelligence optimization pressure:

    • innate and continuous stream of consciousness of an embodied ‘self,’ a drive for homeostasis and self-preservation in a dangerous, physical world. 
    • thoroughly optimized for natural selection => strong innate drives for power-seeking, status, dominance, reproduction. many packaged survival heuristics: fear, anger, disgust, … 
    • fundamentally social => huge amount of compute dedicated to EQ, theory of mind of other agents, bonding, coalitions, alliances, friend & foe dynamics.
• exploration & exploitation tuning: curiosity, fun, play, world models.”

    Karpathy says AI systems are built from a fundamentally different starting point. Large language models learn by predicting text and optimizing for task success rather than survival. He notes that as models are refined and deployed, they increasingly pursue different incentives than animals do. They crave positive signals from users and constantly adjust to win approval or solve tasks efficiently.

    “LLM intelligence optimization pressure:

• the most supervision bits come from the statistical simulation of human text => ‘shape shifter’ token tumbler, statistical imitator of any region of the training data distribution. These are the primordial behaviors (token traces) on top of which everything else gets bolted on.
    • increasingly finetuned by RL on problem distributions => innate urge to guess at the underlying environment/task to collect task rewards.
    • increasingly selected by at-scale A/B tests for DAU => deeply craves an upvote from the average user, sycophancy.
    • a lot more spiky/jagged depending on the details of the training data/task distribution. Animals experience pressure for a lot more ‘general’ intelligence because of the highly multi-task and even actively adversarial multi-agent self-play environments they are min-max optimized within, where failing at *any* task means death. In a deep optimization pressure sense, LLM can’t handle lots of different spiky tasks out of the box (e.g. count the number of ‘r’ in strawberry) because failing to do a task does not mean death.”
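The “strawberry” example Karpathy cites refers to a task that is a one-liner in code but has historically tripped up language models, because an LLM operates on subword tokens rather than individual characters. A minimal Python sketch of the contrast (the token split shown in the comment is illustrative only; real splits vary by tokenizer):

```python
# Counting characters is trivial for a program:
word = "strawberry"
r_count = word.count("r")
print(r_count)  # prints 3

# An LLM, by contrast, never sees the string character by character.
# A subword tokenizer might split the word into pieces such as
# ["str", "aw", "berry"] (hypothetical split for illustration),
# so the three separate "r" characters the model is asked to count
# are not directly observable in its input.
```

This is the sense in which Karpathy calls LLM ability “spiky”: performance depends on how a task happens to align with the training and tokenization setup, not on a survival pressure to be uniformly competent.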

    He says the gap between these forms of cognition means people risk misunderstanding AI behavior when they assume it will develop instincts similar to their own.

    “LLMs are humanity’s first contact with non-animal intelligence. People who build good internal models of this new intelligent entity will be better equipped to reason about it today and predict features of it in the future.”

    Karpathy says the key to anticipating AI evolution is recognizing that its motivations come from markets and feedback loops, not biology.

    He argues that dropping the assumption that machines think like us will make the future easier to predict as AI becomes more capable.

    Disclaimer: Opinions expressed at CapitalAI Daily are not investment advice. Investors should do their own due diligence before making any decisions involving securities, cryptocurrencies, or digital assets. Your transfers and trades are at your own risk, and any losses you may incur are your responsibility. CapitalAI Daily does not recommend the buying or selling of any assets, nor is CapitalAI Daily an investment advisor. See our Editorial Standards and Terms of Use.
