by karpathy on 6/18/2025, 9:40:16 PM
by pudiklubi on 6/18/2025, 5:17:02 PM
For context - I was in the audience when Karpathy gave this amazing talk on software 3.0. YC has said the official video will take a few weeks to release, by which time Karpathy himself said the talk will be deprecated.
by afiodorov on 6/18/2025, 7:15:08 PM
> So, it was really fascinating that I had the MenuGen demo basically working on my laptop in a few hours, and then it took me a week because I was trying to make it do it
Reminds me of work where I spend more time figuring out how to run repos than actually modifying code. A lot of my work is focused on figuring out the development environment and deployment process - all with very locked down permissions.
I do think LLMs are likely to change the industry considerably, as LLM-guided rewrites are sometimes easier than adding a new feature or fixing a bug - especially if the rewrite is into something more LLM-friendly (e.g., a popular framework). Each rewrite makes the code further Claude-codeable or Cursor-codeable; ready to iterate even faster.
by bcrosby95 on 6/18/2025, 8:15:14 PM
I might be wrong, but it seems like some people are misinterpreting what is being said here.
Software 3.0 isn't about using AI to write code. It's about using AI instead of code.
So not Human -> AI -> Create Code -> Compile Code -> Code Runs -> The Magic Happens. Instead, it's Human -> AI -> The Magic Happens.
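The contrast can be sketched in a few lines. This is a hypothetical illustration, not anything from the talk: `call_llm` is a stand-in for a real chat-completion API, stubbed here so the sketch runs offline.

```python
def extract_email_1_0(text: str) -> str:
    """Software 1.0: a human writes the explicit logic."""
    for token in text.split():
        if "@" in token and "." in token.split("@")[-1]:
            return token.strip(".,;")
    return ""

def call_llm(prompt: str) -> str:
    # Hypothetical stub standing in for a hosted model; a real call
    # would send the prompt to an LLM API and return its completion.
    text = prompt.split("TEXT:", 1)[1]
    return extract_email_1_0(text)

def extract_email_3_0(text: str) -> str:
    """Software 3.0: the 'program' is an English prompt; no parsing
    logic is written by the human at all."""
    prompt = f"Return only the email address found in TEXT: {text}"
    return call_llm(prompt)
```

The point is that in the 3.0 version the human never writes the extraction logic; the prompt *is* the program, and the model's forward pass replaces the compile-and-run step.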
by alganet on 6/18/2025, 7:36:30 PM
> imagine changing it and programming the computer's life
> imagine that the inputs for the car are on the bottom, and they're going through the software stack to produce the steering and acceleration
> imagine inspecting them, and it's got an autonomy slider
> imagine works as like this binary array of a different situation, of like what works and doesn't work
--
Software 3.0 is imaginary. All in your head.
I'm kidding, of course. He's hyping because he needs to.
Let's imagine together:
Imagine it can be proven to be safe.
Imagine it being reliable.
Imagine I can pre-train on my own cheap commodity hardware.
Imagine no one using it for war.
by no_wizard on 6/18/2025, 6:57:20 PM
Regarding a large contention of this essay (which I'm assuming the talk is based on, or is transcribed from, depending on order): I do think that open source models will eventually catch up to closed source ones, or at least be "good enough", and I also think you can already see how LLMs are augmenting knowledge work.
I don’t think it’s the 4th wave of pioneering a new dawn of civilization but it’s clear LLMs will remain useful when applied correctly.
by arkj on 6/18/2025, 7:54:07 PM
> Software 2.0 are the weights which program neural networks.

> I think it's a fundamental change, is that neural networks became programmable with large libraries... And in my mind, it's worth giving it the designation of a Software 3.0.
I think it's a bit early to change your mind here. We love your 2.0; let's wait some more time till the dust settles, so we can see clearly and up the revision number.
In fact I'm a bit confused about the numbering AK has in mind. Does anyone else know how he arrived at Software 2.0?
I remember a talk by professor Sussman where he suggest we don't know how to compute, yet[1].
I was thinking he meant this,
Software 0.1 - Machine Code/Assembly Code
Software 1.0 - HLLs with Compilers/Interpreters/Libraries
Software 2.0 - Language comprehension with LLMs
If we are calling weights 2.0 and NN with libraries as 3.0, then shouldn't we account for functional and oo programming in the numbering scheme?
by ath3nd on 6/18/2025, 9:47:11 PM
I find it hard to care for the marginal improvements in a glorified autocomplete that guzzles a shit ton of water and electricity (all of which could be used for more useful things than generating a picture of a cat with human hands or some lazy rando's essay assignment) and then ends up having to be coddled by a real engineer into a working solution.
Software 2.0? 3.0? Why stop there? Why not software 1911.1337? We went through crypto, NFTs, web3.0, now LLMs are hyped as if they are frigging AGI (spoiler, LLMs are not designed to be AGI, and even if they were, you sure as hell won't be the one to use them to your advantage, so why are you so irrationally happy about it?).
Man, this industry is so tiring! What is most tiring is the dog-like enthusiasm of the people who buy it EVERY.DAMN.TIME, as if it's gonna change the lives of most of them for the better. Sure, some of these are worse and much more useless than others (NFTs), but at the core of all of it is this cult-like awe we as a society have towards figures like the Karpathys, Musks and Altmans of this world.
How are LLMs gonna help society? How are they gonna help people work, create and connect with one another? They take away the joy of making art, the joy of writing, of learning how to play a musical instrument and sing, and now they are coming for software engineering. Sure, you might be 1-2% faster, but are you happier, are you smarter (probably not: https://www.mdpi.com/2076-3417/14/10/4115)?
by fenghorn on 6/18/2025, 6:44:06 PM
First time using NotebookLM and it blew my mind. I pasted the OP's transcription of the talk into NotebookLM and got this "podcast": https://notebooklm.google.com/notebook/5ec54d65-f512-4e6c-9c...
by amelius on 6/18/2025, 8:38:08 PM
Does it say anything about how this will affect wealth distribution?
by 1970-01-01 on 6/19/2025, 12:13:32 PM
Ironic how the shiny new paradigm in software (AI) was not leveraged for cleaning up the transcription. We are not baking a new kind of pie, we're simply proving that you can bake with the microwave. We're not heading into a new "3.0" era. We're growing new branches to the trunk. AI will be mistaken for causing a paradigm shift in coding right until it fails to overtake the trunk and high quality code is necessary again.
by agentultra on 6/19/2025, 5:13:37 PM
Buzzword soup. A lot of mixed analogies and metaphors. Very little justification for anything.
"We need to rewrite a lot of software," ok... why?
"AI is the new electricity" Really now... so I should expect a bill every month that always increases and to have my access cut off intermittently when there's a rolling AI power outage?
Interesting times indeed.
by msgodel on 6/18/2025, 7:10:39 PM
This is almost exactly what I've experienced with them. It's a great talk, I wish I could have seen it in person.
by meerab on 6/19/2025, 6:44:20 AM
Check out the transcription of Andrej Karpathy's keynote at AI Startup School in San Francisco.
by jimmy76615 on 6/18/2025, 6:49:47 PM
The talk is still not available on YouTube? What takes them so long?
by bredren on 6/18/2025, 8:21:47 PM
Anyone know what "oil bank" was in the actual talk?
by waynenilsen on 6/18/2025, 10:04:50 PM
I used https://app.readaloudto.me to listen; it is helpful.
by sensanaty on 6/18/2025, 9:26:54 PM
Christ I'm gonna be forced to listen to the moronic managers and C-suites repeat this "software 3.0" bullshit incessantly from now on aren't I...
by uncircle on 6/18/2025, 8:26:51 PM
AI sus talk. Kinda appropriate.
by yapyap on 6/18/2025, 9:14:05 PM
SUS talk
great name already
by romain_batlle on 6/18/2025, 8:22:56 PM
The analogy with the grid seems pretty good. The fab one seems bad tho.
by gooseus on 6/18/2025, 9:25:02 PM
But at what cost? And I don't mean the "human cost", I mean literally, how much will it cost to use an LLM as your "operating system"? Correct me if I'm wrong here, but isn't every useful LLM being operated at a loss?
by zkmon on 6/18/2025, 9:25:58 PM
I'm not an expert on the subject itself, but I can tell that the transcript, in its entirety, is missing a solid through-line. While the parts of this talk are great on their own, I feel they don't stitch the whole story together well. He himself might not be confident about the completeness and composition of his thoughts. What's the whole point? That should be answered in the first few minutes.
by kaladin-jasnah on 6/18/2025, 7:37:56 PM
Tangentially related, but it boggles my mind this guy was badmephisto, who made a quite famous cubing tutorial website that I spent plenty of time on in my childhood.
by swah on 6/18/2025, 6:10:25 PM
See also https://www.latent.space/p/s3
by iLoveOncall on 6/18/2025, 9:41:05 PM
Just a grifter grifting.
> The more reliance we have on these models, which already is, like, really dramatic
Please point me to a single critical component anywhere that is built on LLMs. There's absolutely no reliance on these models, and ChatGPT being down has absolutely no impact on anything besides teenagers not being able to cheat on their homework and LLM wrappers not being able to wrap.
by lvl155 on 6/18/2025, 7:10:36 PM
I soak up everything Andrej has to say.
by jacobgorm on 6/18/2025, 9:52:53 PM
[flagged]
by mattlangston on 6/18/2025, 8:24:53 PM
Very nice find @pudiklubi. Thank you.
by adamnemecek on 6/18/2025, 6:57:53 PM
AGI = approximating partition function. Everything else is just a poor substitute.
by yusina on 6/18/2025, 9:12:28 PM
> I think broadly speaking, software has not changed much at such a fundamental level for 70 years.
I love Andrej, but come on.
Writing essentially punch cards 70 years ago, writing C 40 years ago and writing Go or Typescript or Haskell 10 years ago, these are all very different activities.
by Aeroi on 6/18/2025, 7:49:15 PM
TL;DR: Karpathy says we’re in Software 3.0: big language models act like programmable building blocks where natural language is the new code. Don’t jump straight to fully autonomous “agents”—ship human-in-the-loop tools with an “autonomy slider,” tight generate → verify loops, and clear GUIs. Cloud LLMs still win on cost, but on-device is coming. To future-proof, expose clean APIs and docs so these models (and coming agents) can safely read, write, and act inside your product.
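The generate → verify loop with an autonomy slider can be sketched roughly like this. Everything here is a hypothetical illustration: `generate`, `verify`, and `human_approves` are stubs standing in for an LLM draft, an automated check (tests, lint, schema validation), and a human review step.

```python
def generate(task: str) -> str:
    # Stand-in for an LLM producing a draft; stubbed for the sketch.
    return f"draft for {task!r}"

def verify(draft: str) -> bool:
    # Cheap automated check before any human sees the draft.
    return draft.startswith("draft")

def human_approves(draft: str) -> bool:
    # Placeholder for the human-in-the-loop review step.
    return True

def run(task: str, autonomy: float, max_iters: int = 3) -> str:
    """autonomy in [0, 1]: low values always route drafts through a
    human; 1.0 auto-accepts anything that passes verification."""
    for _ in range(max_iters):
        draft = generate(task)
        if not verify(draft):
            continue  # failed checks: regenerate instead of escalating
        if autonomy >= 1.0 or human_approves(draft):
            return draft
    raise RuntimeError("no accepted draft")
```

Sliding `autonomy` up just removes the human gate while keeping the automated verification, which is the "don't jump straight to agents" point in miniature.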
by sammcgrail on 6/18/2025, 8:28:55 PM
You’ve got “two bars” instead of “two rs” in strawberry
by pera on 6/18/2025, 7:09:37 PM
Is "Software 3.0" somehow related to "Web 3.0"?
by snickell on 6/18/2025, 9:27:42 PM
If you want to try what Karpathy is describing live today, here's a demo I wrote a few months ago: https://universal.oroborus.org/
It takes mouse clicks, sends them to the LLM, and asks it to render static HTML+CSS of the output frame. HTML+CSS is basically a JPEG here; the original implementation WAS JPEG, but diffusion models can't do accurate enough text yet.
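The core loop is tiny. This is my own minimal sketch of the idea, not the demo's actual code: `render_frame` stands in for the real LLM call and is stubbed here to just count clicks.

```python
# Every UI event goes to the model, which returns the next full HTML
# frame. A real implementation would prompt an LLM with the current
# state and the event and ask for static HTML+CSS.

def render_frame(state: dict, event: dict) -> str:
    # Stub in place of the LLM call, so the sketch runs offline.
    state["clicks"] = state.get("clicks", 0) + 1
    return f"<html><body><p>clicks: {state['clicks']}</p></body></html>"

state = {}
frame = render_frame(state, {"type": "click", "x": 10, "y": 20})
```

There's no application code between the input and the rendered frame: the model *is* the program, which is the "direct computation" point below.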
My conclusions from doing this project and interacting with the result were: if LLMs keep scaling in performance and cost, programming languages are going to fade away. The long-term future won't be LLMs writing code, it'll be LLMs doing direct computation.
Btw I notice many pretty bad errors in this transcription of the talk. The actual video will be up soon I hope.