Will Gavin Baker's forecast that it'll be abjectly humiliating for everyone who is a Tesla FSD skeptic hold?
Ṁ5722
2026
6 months (27th Jan 2025): 26%
12 months (27th Aug 2025): 46%
18 months (27th Jan 2026): 50%

Gavin Baker, a famous investor, says it's going to be "abjectly humiliating" for everyone who is an FSD skeptic in the next 12 to 18 months, maybe in the next six months.

Here's the transcript.

Patrick: [00:57:50] Maybe we could talk about robotics. I had a really interesting conversation, call it, April of this year with an investor that has been investing in lots of these same things for long periods of time privately and publicly and has big positions in lots of the companies that we've been talking about today.

 

His observation to me was that the big underestimation that's happening over, let's say, 5 years, is the role that robotics and robots will have combined with all of this technology we've spent all of today talking about. And I would love to hear you riff on that because in the near term, it feels like a little bit -- quite a bit of frothiness, like some crazy funding rounds for these companies where you don't really know what they really are being designed to do, sort of general-purpose humanoid-type robots.

 

There's all sorts of interesting more specialized ones that are cool too. But what do you think about all this? Because it does seem kind of underdiscussed relative to just all the foundation model and semiconductor stuff.

 

Gavin: [00:58:44] I agree. I think it may end up being a bigger near-term disruption than what we were just discussing, the automation of a lot of white-collar labor. I think the first robot that's really going to impact the world is every Tesla car with what they call their AI 4 hardware. Because from my perspective, there's publicly sourced data on miles between disengagements.

 

So you have to remember, Tesla is going to get the same miles between disengagements everywhere. Like if you built a new city on Mars and it was populated by entirely different-looking cars and streets and everything, you could drop a Tesla in that city and it would have the same miles between disengagements that it gets in any other city, whereas something like Waymo is geo-fenced; they're really only able to use it in cities that have nice grids and good weather, et cetera, et cetera, et cetera.
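
For reference, "miles between disengagements" is simply miles driven divided by the number of times a human driver had to take over. Below is a minimal sketch of that arithmetic in Python, using made-up trip logs rather than any real crowdsourced FSD data.

```python
# Hypothetical crowdsourced trip logs: (miles driven, disengagement count).
# The numbers are illustrative only, not real FSD tracker data.
trips = [
    (120.0, 1),
    (45.5, 0),
    (300.2, 2),
    (88.0, 0),
]

total_miles = sum(miles for miles, _ in trips)
total_disengagements = sum(count for _, count in trips)

# Avoid division by zero when no disengagements were logged.
if total_disengagements == 0:
    print(f"{total_miles:.1f} miles with zero disengagements")
else:
    mpd = total_miles / total_disengagements
    print(f"Miles between disengagements: {mpd:.1f}")
```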

 

It is clear to me, looking at the crowdsourced data on miles between disengagements with different versions of FSD, that when they cut over to [12.3], which is effectively all deep learning and, I think, eliminated almost all human code, something dramatic changed in the rate of progress. And then when they cut over to 12.5, which runs best on AI 4 -- which used to be called HW4, it's just the local computer of the Tesla, and it is now rolling out to AI 3 -- that was another step function.

 

And this goes to that same scaling law: those step-function improvements were made with a fraction of the compute that Tesla is now publicly installing in their data center at the Gigafactory in Austin.
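
The "scaling law" Baker keeps invoking is usually framed as a power law: a model's error falls off as a power of training compute. The toy sketch below illustrates only that shape; the constants are invented and are not fitted to anything Tesla has published.

```python
# Toy power-law scaling: error ~ a * compute**(-alpha).
# 'a' and 'alpha' are made-up constants, purely for illustration.
a, alpha = 10.0, 0.3

def error(compute: float) -> float:
    return a * compute ** -alpha

# Suppose a new cluster provides 100x the training compute.
old_compute, new_compute = 1.0, 100.0
improvement = error(old_compute) / error(new_compute)
print(f"Error shrinks by roughly {improvement:.1f}x")  # 100**0.3 is about 4.0
```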

 

And they've actually filed -- sometimes, as an investor, I wish they wouldn't file so many of these patents, but they have filed some really innovative patents for data center cooling related to what they're doing with what -- I guess it's been publicly said -- is going to be over 50,000 H100s or H200s.

 

FSD is now on the same scaling law, and arguably on a faster scaling law, because they have a lot of catching up to do that GPT doesn't. So I think 12.5 is like GPT-3, and it could consistently drive me most places with no interventions. I'm a seasonal driver. I really only drive my Tesla in the summer, and actually, my wife, Becky, tends to do most of the driving because she likes it more than me.

 

So we kind of get like a seasonal look; it's almost like every May we check in. There was just always continuous progress. This year, when we turned on 12.3, it was like all of the progress over the last 10 years, from the first time I had that Tesla, was in that one release. And then we had that again when we went from 12.3 to 12.5, and that's probably like a GPT-2 level of compute. I think they're going to go really fast to GPT-4.5 compute, which means -- using these orders of magnitude -- you're going to get like a 100x improvement really fast. So I think all these people who have been skeptical, they're all in for abject humiliation, they just are.

 

And then, unlike GPT-2, only Tesla has access to a visual training dataset that is based on miles driven. We can argue whether it's 100x, 1,000x, 10,000x bigger than the second-biggest training dataset, which is Waymo's. So it's like people are, "Oh, how are they going to make money?" Well, in this case -- in the world of self-driving, from my perspective -- it's like they own YouTube, they own all of Meta's properties and the open Internet, and X, and then other people are trying to do it using...

 

Patrick: [01:02:40] Yahoo!.

 

Gavin: [01:02:43] Yes, using Yahoo! Like good luck, like who's going to win? Now obviously, that could change -- important to have humility. There may be an algorithmic breakthrough that reduces the importance of that training dataset. And for sure, Waymo is going to try and brute-force it. They'll just throw whatever amount of dollars they need to get the data to compete, and they have a different approach, using LiDAR, which Tesla doesn't. We'll see.

 

I don't think it's a foregone conclusion; nothing about the future is certain. But if I just look at how amazing 12.5 is on AI 4 hardware and think about the tiny amount of compute that it was trained on, and the mega cluster that they are standing up in Austin, using known techniques we're going to skip -- I think 12.5 is like GPT-2, and we're going to skip really quickly to GPT-4.

 

And then look, I'm sure Waymo will brute-force it. There may be algorithmic breakthroughs such that there are other people; we'll see. But then the other big thing is just using an LLM for FSD. One of the best follows on X is Dr. Jim Fan, who's NVIDIA's Head of Robotics, and he's had a lot of posts about how -- there's a fascinating exchange between him and Elon on X. It is amazing the extent to which AI happens on X.

 

The JAX team at Google and the PyTorch team at Meta got into this bitter fight over which framework was better; literally the heads of each lab had to step in publicly on X and make peace. But like, wow, you learned so much just following that fight. Like every AI researcher is active on X. AI happens on X, and it's such a great forum for it.

 

But Jim Fan had this fascinating exchange with Elon, where Jim Fan talked about how LLMs could massively improve FSD, and Elon replied, "Yes, the only two data sources that will scale infinitely are synthetic data and real-world video." And I thought that was interesting. That goes to, I think, maybe the biggest risk that this view I just described of Tesla's autonomous future is wrong. It's just whether synthetic video data can be used in the same way that synthetic written data can be -- we know that synthetic written data works.

 

We don't know if synthetic video data works. Nobody knows. And obviously, there's a very high bar for regulators. I think it's something like, whatever it is, 50,000 or 100,000 people die in car crashes every year globally -- it might even be 1 million. Obviously, we could take that down dramatically using AI, but we're much less willing to tolerate fatal traffic accidents from AIs than from humans.

 

That is what it is. So it's going to be heavily regulated. But Dr. Jim Fan posited that the reason LLMs were going to be able to really help FSD is the following -- this is the way my, relative to some of the people working on these problems, comparatively low-IQ brain conceptualizes it: anything that's been trained on real-world data just knows what to do -- what a really good human driver would do -- in that real-world situation.

 

If there's a novel situation, it may not know what to do. And that's where, from my perspective, the LLM could really help, because one of the emergent properties of GPT-4 -- and we could debate whether or not it actually is an emergent property or just in-context learning -- is that it has what's called a world model.

 

And that means -- I'm sure you know this, but if you ask GPT-3, "Hey, what happens if you stood a champagne bottle upside down and put, like, a basketball covered in soap on top of it?" GPT-3 has no idea. GPT-4 will often get questions like that right. I should actually see if it gets that exact question right. A three-year-old human will say, "That's going to fall, the champagne bottle is going to shatter."
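
The champagne-bottle question is essentially a physical-common-sense probe. Here is a hedged sketch of how one might run that kind of probe through the OpenAI Python client; the model name and the exact wording are placeholders, and this is not the comparison Baker describes, just the general idea.

```python
# pip install openai; expects OPENAI_API_KEY in the environment.
from openai import OpenAI

client = OpenAI()

# Placeholder wording of the physical-common-sense probe.
PROBE = (
    "What happens if you stand a champagne bottle upside down "
    "and balance a soap-covered basketball on top of it?"
)

# "gpt-4o" is a placeholder model name; swap in whichever model you want to test.
response = client.chat.completions.create(
    model="gpt-4o",
    messages=[{"role": "user", "content": PROBE}],
)
print(response.choices[0].message.content)
```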

 

It's really hard for GPT-3, and that goes to this jagged frontier that people talk about. So if you put a really speed-optimized small LLM in -- locally on each Tesla -- there might be just enough reasoning capability to unlock another step function in FSD capability.
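
Nothing public spells out how such a system would be wired together, but the idea Baker is gesturing at is a router: the learned driving policy handles familiar scenes, and a small local LLM is consulted only when the policy is unsure. The sketch below is purely hypothetical; every name in it is invented.

```python
from dataclasses import dataclass

@dataclass
class PolicyOutput:
    action: str        # e.g. "continue", "slow", "stop"
    confidence: float  # 0.0-1.0: how familiar the scene looks to the policy

def vision_policy(scene: str) -> PolicyOutput:
    # Stand-in for a learned end-to-end driving policy.
    known = {"clear highway": PolicyOutput("continue", 0.97)}
    return known.get(scene, PolicyOutput("slow", 0.40))

def small_local_llm(scene: str) -> str:
    # Stand-in for a speed-optimized on-device LLM asked for a high-level decision.
    return f"Novel scene '{scene}': slow down and yield until it resolves."

def decide(scene: str, threshold: float = 0.8) -> str:
    out = vision_policy(scene)
    if out.confidence >= threshold:
        return out.action
    # Low confidence: fall back to the LLM's broader, world-model-style reasoning.
    return small_local_llm(scene)

print(decide("clear highway"))
print(decide("mattress sliding off a truck"))
```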

 

Now look, Waymo will have that too, as will lots of other people, but they won't have Tesla Vision, all of that proprietary dataset. You'll just see -- this is going to be a reality in a way that is abjectly humiliating to everyone who is an FSD skeptic in the next 12 to 18 months, maybe in the next six months. And I have never been willing to make a prediction like that before.

 

So then you take that, and the same thing goes for humanoid robots. Google showed this with research called RT-2, where dropping an LLM into a humanoid robot with a world model that understood what things were and what to do just made it so much easier: instead of training that humanoid robot how to pick up a tennis ball and a basketball and a football, and how each one is different, it could reason.
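
RT-2's core idea is a vision-language model that maps an instruction (plus camera input) directly to robot actions, so the robot can generalize to objects it was never explicitly taught to grasp. The sketch below is a drastically simplified, hypothetical illustration of that interface; it is not Google's API, and the object priors are invented.

```python
from typing import NamedTuple

class GraspAction(NamedTuple):
    gripper_width_cm: float
    approach: str

# Stand-in for what a vision-language-action model "knows" about objects from
# pretraining rather than from per-object demonstrations. Values are invented.
OBJECT_PRIORS = {
    "tennis ball": GraspAction(7.0, "top-down pinch"),
    "basketball":  GraspAction(25.0, "two-handed cradle"),
    "football":    GraspAction(18.0, "grip along the long axis"),
}

def vla_policy(instruction: str) -> GraspAction:
    for name, action in OBJECT_PRIORS.items():
        if name in instruction:
            return action
    # Unseen object: fall back to a conservative default instead of a new demo.
    return GraspAction(10.0, "slow top-down grasp")

print(vla_policy("pick up the basketball"))
print(vla_policy("pick up the coffee mug"))
```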

 

And so this is why putting LLMs into these humanoid robots I think is going to be so transformational for the world and make a lot of blue-collar labor optional. I do think politicians and political systems are utterly unprepared for what may be coming. But the one thing I would say that Elon and Jensen profoundly agree on publicly is that humanoid robots are the future, not the specialized robots.

 

And the reason is just that, of course, a specialized robot could be better than a humanoid robot at any given task, but the humanoid robot can do any task that a human can, the world is optimized for humans, and there are massive scale efficiencies in manufacturing. And so, because you can make -- it's almost like humanoid robots are going to be to the field of robotics what GPT was to AI. GPT was a generalizable type of AI, and these humanoid robots are going to be a generalizable form of robotics.

 

And because of that, they're going to be manufactured at such a scale that they have a cost advantage, and then the physical world is going to start to be optimized for them, and that's why they're going to win. So good luck to all these non-humanoid start-up robot companies. I hope you get a Lycos or CMGI or MySpace type venture outcome, but I don't think any of them are going to be Google.

 

And in the same way that so much of GPT advantages incumbents, whether that's Meta, Google, X, xAI, Microsoft, these humanoid robots advantage incumbents, and the reason is that they have the raw ingredients of data, compute, and capital, which is what you need to effectively monetize these -- and that's why their ROIC is going up even as they ramp CapEx.

 

I do think that incumbent manufacturers who have expertise in battery design, actuators, and motors, with big datasets, are going to be advantaged. I'm reasonably bullish on Optimus. That's just such a giant market; there are going to be so many competitors. And in terms of how FSD evolves, I could see a world where there are just two or three companies -- maybe it's Tesla, Waymo, and some open-sourced variant -- or it could be that synthetic real-world data works and LLMs really improve the efficiency of those specialized visual algorithms.

 

And so there are, like, thousands of them. I think that's unlikely, but it's possible. I think robotics may end up advantaging incumbents in the same way that FSD and LLMs, GPT, have generally advantaged incumbents -- and then startups willing to take, like, a new AI-first approach. But I do think robotics is going to change the world. It's super exciting; I can't wait to have my own personal robot.

I'll consider the public opinion on long-term Tesla FSD Bears, the mainstream media coverage, Gavin Baker's own view on the matter, and material outcomes, like Tesla's weekly driverless rides.

I won't bet.

Comments:

There is a typo in the third option "18-months (27th Jan 20260"

ty

The earlier dates resolving YES will also imply the later dates resolving YES, or no?

Yes, if the earlier dates resolve YES, later dates resolve YES.

I don't get it. Like most FSD skeptics (including me) don't make their skepticism of FSD their core identity. How could being wrong about a tangential thing ever be "abjectly humiliating"? Like I chose the restaurant for dinner and it turns out it wasn't good. Have I been "abjectly humiliated"?

As a Tesla bull myself, I have to say, your observation is keen here. I think a lot of people are projecting here, because their position on TSLA has become a part of their core identity!