Andrej Karpathy’s latest conversation with Sequoia Capital captures a turning point in software development: coding is no longer just about writing syntax—it is increasingly about directing intelligent systems.
In the interview, Karpathy reflects on his own shift from traditional coding habits to what he calls agentic engineering: a workflow where AI agents generate, refactor, and execute large chunks of software while humans provide architecture, taste, and judgment.
If you build products, lead engineering teams, or simply want to understand where software is heading, this discussion offers one of the clearest frameworks so far.
1) Why Karpathy Suddenly Felt “Behind”
Karpathy opens with a candid admission: around December 2025, he started feeling “behind” as a programmer despite years at the frontier of AI.
What changed was trust. Earlier AI coding tools still required frequent edits and guardrails. Newer models started producing code that worked end-to-end often enough that he could increasingly accept output directly and keep moving. That shift pulled him into a fast loop of side projects built mostly through prompting and iteration.
This is the emotional core of the interview: even elite engineers are adapting to a new baseline where leverage comes from orchestration, not manual throughput.
2) Software 1.0 → 2.0 → 3.0
Karpathy lays out a useful progression:
- Software 1.0: Humans write explicit rules in code.
- Software 2.0: Humans curate data and train neural networks; behavior lives in learned weights.
- Software 3.0: Humans program through prompts and context; the model is the runtime.
His point is not that old paradigms disappear. It is that the center of gravity changes.
A practical example he discusses is a menu-visualization app. In a classic stack, you would assemble OCR pipelines, image tooling, backend logic, and UI glue. In a Software 3.0 mindset, you can increasingly provide raw input, tool access, and constraints, then let the model coordinate the full transformation.
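The Software 3.0 pattern above can be sketched in code: the "program" is a prompt plus tool access, and the model acts as the runtime that coordinates the steps. Everything below is a hypothetical stand-in, not from the interview; the tool functions are stubs and `model_runtime` hard-codes one plausible plan in place of a real LLM API with tool use.

```python
# Sketch: Software 3.0 as "prompt + tools + model-as-runtime".
# All functions here are stubs standing in for real services.

def read_image(path: str) -> bytes:
    """Tool: raw image access (stubbed; a real tool would return pixels)."""
    return b"fake-menu-image-bytes"

def render_gallery(items: list[dict]) -> str:
    """Tool: turn structured dishes into a simple HTML gallery."""
    cards = "".join(f"<div>{d['name']}: {d['desc']}</div>" for d in items)
    return f"<main>{cards}</main>"

TOOLS = {"read_image": read_image, "render_gallery": render_gallery}

def model_runtime(prompt: str, tools: dict) -> str:
    """Stub for the model-as-runtime. A real agent would choose which
    tools to call and in what order; here one plan is hard-coded."""
    tools["read_image"]("menu.jpg")                     # "OCR" step
    dishes = [{"name": "Ramen", "desc": "pork broth"}]  # "extraction" step
    return tools["render_gallery"](dishes)              # "UI" step

html = model_runtime("Turn this menu photo into a visual gallery.", TOOLS)
```

The point of the sketch is the shape of the system: the human supplies intent and tools, while the sequencing that a classic stack would hard-wire lives inside the model call.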
3) Vibe Coding Raises the Floor, Agentic Engineering Raises the Ceiling
Karpathy separates two ideas that are often conflated:
- Vibe coding lowers barriers. More people can build useful software faster.
- Agentic engineering increases upper-bound performance for professionals.
Agentic engineering is not “just let the model do everything.” It is a discipline for steering stochastic systems toward reliable production outcomes:
- decomposing tasks so outputs are verifiable,
- creating fast feedback loops,
- constraining tool access,
- reviewing architecture instead of micro-syntax,
- and hardening for security and correctness.
In other words, the new advantage is not typing speed—it is system direction under uncertainty.
4) Jagged Intelligence: Brilliant Here, Confused There
One of the most important concepts in the discussion is jagged intelligence: models can be exceptional in one domain and weak in an adjacent one.
Karpathy argues this pattern is tightly connected to verifiability. Where there are clear right answers (math, coding tests, deterministic checks), labs can run large-scale reinforcement loops and improve rapidly. Where evaluation is noisy or ambiguous, capabilities remain uneven.
That’s why a model might solve deeply technical codebase tasks yet fail at mundane real-world judgment in the next turn. Teams that internalize this “jagged” profile build safer workflows: they trust AI where verification is strong and add tighter human review where it is not.
5) The Human Role Is Shifting Up the Stack
As AI agents absorb implementation detail, the human role becomes more managerial and creative:
- defining product intent,
- choosing architecture,
- setting quality bars,
- making tradeoffs under constraints,
- and deciding what should exist at all.
Karpathy describes the agent as a very fast intern with near-infinite recall of APIs and syntax. The intern can execute quickly, but leadership still owns direction.
This framing is useful for teams: if you only optimize for “lines written,” you miss the real opportunity. The compounding gains come from better problem selection and better system design.
6) Toward an Agent-Native Computing Environment
Karpathy also points toward a broader infrastructure shift: many current tools, docs, and interfaces are designed for humans reading step-by-step instructions.
In an agent-native future, software surfaces should be built for machine interaction first:
- structured interfaces instead of click-path tutorials,
- explicit sensors/actuators for agent control,
- predictable tool contracts,
- and documentation optimized for execution, not just comprehension.
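One way to read "predictable tool contracts": publish typed, self-validating interfaces rather than prose instructions a human must interpret. A minimal Python sketch, where the `ToolContract` class and the `search_orders` tool are illustrative assumptions, not anything named in the interview:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class ToolContract:
    """A machine-first surface: a name, typed parameters, and a
    description an agent can read instead of a click-path tutorial."""
    name: str
    params: dict[str, type]
    description: str

    def validate(self, args: dict) -> None:
        """Reject malformed calls up front with a precise error."""
        for key, typ in self.params.items():
            if key not in args or not isinstance(args[key], typ):
                raise TypeError(f"{self.name}: expected {key}: {typ.__name__}")

search_orders = ToolContract(
    name="search_orders",
    params={"customer_id": str, "limit": int},
    description="Return up to `limit` recent orders for a customer.",
)

search_orders.validate({"customer_id": "c42", "limit": 10})  # passes
```

An agent operating against a contract like this can discover the parameter types, construct valid calls, and get deterministic failures when it doesn't, which is exactly the predictability the bullet list asks for.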
If this happens, the “best” products may be those that are easiest for trusted agents to operate on behalf of users.
7) You Can Outsource Thinking, Not Understanding
The interview closes on a line Karpathy quotes and endorses: you can outsource parts of thinking, but not understanding.
That may be the key leadership principle for the next decade. As intelligence gets cheaper and more accessible, the scarce asset becomes informed judgment:
- knowing when outputs are correct,
- seeing second-order effects,
- distinguishing novelty from noise,
- and identifying problems worth solving.
What This Means for Builders Right Now
If you are building with AI today, this conversation suggests a practical playbook:
- Adopt AI for implementation speed where outcomes are easy to verify.
- Invest in evaluation harnesses (tests, checks, metrics) before scaling autonomy.
- Move senior talent toward architecture and product judgment rather than code volume.
- Design tools and internal platforms for agent usability as a first-class requirement.
- Train teams on “supervision skills”: prompt strategy, decomposition, review patterns, and risk controls.
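The first two playbook items can be combined into a simple gate: measure pass rate on a fixed evaluation set, then grant autonomy in proportion to measured reliability. A toy sketch, where the thresholds, mode names, and the trivial agent are illustrative assumptions:

```python
def pass_rate(agent, cases) -> float:
    """Score an agent against a fixed eval set of (input, check) pairs."""
    passed = sum(1 for inp, check in cases if check(agent(inp)))
    return passed / len(cases)

def allowed_autonomy(rate: float) -> str:
    """Gate autonomy on measured reliability, not on vibes."""
    if rate >= 0.95:
        return "auto-merge"
    if rate >= 0.80:
        return "human-review"
    return "suggestion-only"

toy_agent = lambda x: x * 2  # stand-in for a real model-backed agent
cases = [
    (1, lambda y: y == 2),
    (3, lambda y: y == 6),
    (5, lambda y: y == 9),  # this case fails: 10 != 9
]
rate = pass_rate(toy_agent, cases)  # 2 of 3 checks pass
mode = allowed_autonomy(rate)       # below 0.80, so "suggestion-only"
```

The harness comes first and the autonomy dial second; scaling the dial without the harness is the failure mode the playbook warns against.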
The teams that win won’t be the ones that resist this shift or the ones that blindly automate everything. They will be the ones that learn to direct agents with precision.
Watch the Full Interview
If you want a single mental model for where software engineering is going, this interview is a strong place to start.