Will we conclude Tesla launched level 4 robotaxis in summer 2025?

Elon Musk has been very explicit in promising a robotaxi launch in Austin in June with unsupervised full self-driving (FSD). We'll give him some leeway on the timing and say this counts as a YES if it happens by the end of August.

As of April 2025, Tesla seems to be testing this with employees and with supervised FSD and doubling down on the public Austin launch.

PS: A big monkey wrench no one anticipated when we created this market is how to treat the passenger-seat safety monitors. See FAQ9 for how we're trying to handle that in a principled way. Tesla is very polarizing and I know it's "obvious" to one side that safety monitors = "supervised" and that it's equally obvious to the other side that the driver's seat being empty is what matters. I can't emphasize enough how not obvious any of this is. At least so far, speaking now in August 2025.

FAQ

1. Does it have to be a public launch?

Yes, but we won't quibble about waitlists. As long as even 10 non-handpicked members of the public have used the service by the end of August, that's a YES. Also if there's a waitlist, anyone has to be able to get on it and there has to be intent to scale up. In other words, Tesla robotaxis have to be actually becoming a thing, with summer 2025 as when it started.

If it's invite-only and Tesla is hand-picking people, that's not a public launch. If it's viral-style invites with exponential growth from the start, that's likely to be within the spirit of a public launch.

A potential litmus test is whether serious journalists and Tesla haters end up able to try the service.

UPDATE: We're deeming this to be satisfied.

2. What if there's a human backup driver in the driver's seat?

This importantly does not count. That's supervised FSD.

3. But what if the backup driver never actually intervenes?

Compare to Waymo, which goes millions of miles between [injury-causing] incidents. If there's a backup driver we're going to presume that it's because interventions are still needed, even if rarely.

4. What if it's only available for certain fixed routes?

That would resolve NO. It has to be available on unrestricted public roads [restrictions like no highways are ok] and you have to be able to choose an arbitrary destination. I.e., it has to count as a taxi service.

5. What if it's only available in a certain neighborhood?

This we'll allow. It just has to be a big enough neighborhood that it makes sense to use a taxi. Basically anything that isn't a drastic restriction of the environment.

6. What if they drop the robotaxi part but roll out unsupervised FSD to Tesla owners?

This is unlikely but if this were level 4+ autonomy where you could send your car by itself to pick up a friend, we'd call that a YES per the spirit of the question.

7. What about level 3 autonomy?

Level 3 means you don't have to actively supervise the driving (like you can read a book in the driver's seat) as long as you're available to immediately take over when the car beeps at you. This would be tantalizingly close and a very big deal but is ultimately a NO. My reason to be picky about this is that a big part of the spirit of the question is whether Tesla will catch up to Waymo, technologically if not in scale at first.

8. What about tele-operation?

The short answer is that that's not level 4 autonomy so that would resolve NO for this market. This is a common misconception about Waymo's phone-a-human feature. It's not remotely (ha) like a human with a VR headset steering and braking. If that ever happened it would count as a disengagement and have to be reported. See Waymo's blog post with examples and screencaps of the cars needing remote assistance.

To get technical about the boundary between a remote human giving guidance to the car vs remotely operating it, grep "remote assistance" in Waymo's advice letter filed with the California Public Utilities Commission last month. Excerpt:

The Waymo AV [autonomous vehicle] sometimes reaches out to Waymo Remote Assistance for additional information to contextualize its environment. The Waymo Remote Assistance team supports the Waymo AV with information and suggestions [...] Assistance is designed to be provided quickly - in a matter of seconds - to help get the Waymo AV on its way with minimal delay. For a majority of requests that the Waymo AV makes during everyday driving, the Waymo AV is able to proceed driving autonomously on its own. In very limited circumstances such as to facilitate movement of the AV out of a freeway lane onto an adjacent shoulder, if possible, our Event Response agents are able to remotely move the Waymo AV under strict parameters, including at a very low speed over a very short distance.

Tentatively, Tesla needs to meet the bar for autonomy that Waymo has set. But if there are edge cases where Tesla is close enough in spirit, we can debate that in the comments.

9. What about human safety monitors in the passenger seat?

Oh geez, it's like Elon Musk is trolling us to maximize the ambiguity of these market resolutions. Tentatively (we'll keep discussing in the comments) my verdict on this question depends on whether the human safety monitor has to be eyes-on-the-road the whole time with their finger on a kill switch or emergency brake. If so, I believe that's still level 2 autonomy. Or sub-4 in any case.

See also FAQ3 for why this matters even if a kill switch is never actually used. We need there not only to be no actual disengagements but no counterfactual disengagements. Like imagine that these robotaxis would totally mow down a kid who ran into the road. That would mean a safety monitor with an emergency brake is necessary, even if no kids happen to jump in front of any robotaxis before this market closes. Waymo, per the definition of level 4 autonomy, does not have that kind of supervised self-driving.

10. Will we ultimately trust Tesla if it reports it's genuinely level 4?

I want to avoid this since I don't think Tesla has exactly earned our trust on this. I believe the truth will come out if we wait long enough, so that's what I'll be inclined to do. If the truth seems impossible for us to ascertain, we can consider resolve-to-PROB.

11. Will we trust government certification that it's level 4?

Yes, I think this is the right standard. Elon Musk said on 2025-07-09 that Tesla was waiting on regulatory approval for robotaxis in California and expected to launch in the Bay Area "in a month or two". I'm not sure what such approval implies about autonomy level but I expect it to be evidence in favor. (And if it starts to look like Musk was bullshitting, that would be evidence against.)

12. What if it's still ambiguous on August 31?

Then we'll extend the market close. The deadline for Tesla to meet the criteria for a launch is August 31 regardless. We just may need more time to determine, in retrospect, whether it counted by then. I suspect that with enough hindsight the ambiguity will resolve. Note in particular FAQ1, which says that Tesla robotaxis have to be becoming a thing (what counts as "a thing" is TBD but something about ubiquity and availability) with summer 2025 as when it started. Basically, we may need to look back on summer 2025 and decide whether that was a controlled demo, done before they actually had level 4 autonomy, or whether they had it and were just scaling up slowly and cautiously at first.

13. If safety monitors are still present, say, a year later, is there any way for this to resolve YES?

No, that's well past the point of presuming that Tesla had not achieved level 4 autonomy in summer 2025.

14. What if they ditch the safety monitors after August 31st but tele-operation is still a question mark?

We'll also need transparency about tele-operation and disengagements. If that doesn't happen soon after August 31 (definition of "soon" to be determined) then that too is a presumed NO.


Ask more clarifying questions! I'll be super transparent about my thinking and will make sure the resolution is fair if I have a conflict of interest due to my position in this market.

[Ignore any auto-generated clarifications below this line. I'll add to the FAQ as needed.]


Tesla is now licensed in Arizona to test Robotaxi (with safety drivers/monitors initially)

@dreev
"We'll give him some leeway on the timing and say this counts as a YES if it happens by the end of August."

Was close date of 2 Sep 2026 meant to be 2 Sept 2025? 2 days to assess latest situation might be a bit short but a year seems excessive.

Have there been any taxi rides without a safety monitor? There was, to my knowledge, only one car delivery to a customer but that doesn't seem like a robotaxi service.

Can this resolve no? If not, what are we waiting for and how long is that going to take?

@ChristopherRandles Yeah, I added a year as an upper bound. Hopefully it won't take that long. But it could take till April-ish, I think, when new Texas laws go fully into effect, which may yield more transparency about disengagements.

@dreev
"this question depends on whether the human safety monitor has to be eyes-on-the-road the whole time with their finger on a kill switch or emergency brake. If so, I believe that's still level 2 autonomy. Or sub-4 in any case.
...
We need there not only to be no actual disengagements but no counterfactual disengagements."

We have video of at least one disengagement: a UPS truck trying to reverse into a parking space.
There were 3 accidents in July. They might be other drivers' faults, but it seems like a lot of incidents for such a small amount of driving, though we don't know for sure. Driving the wrong way down a street and excessive braking near police cars were also reported, even if we don't know whether these led to disengagement(s).

Seems clearly sub-level-4 from just one disengagement? Or, if a new software version (minor decimal or full version?) went live after this disengagement and on or before 31 August, and had no disengagements of its own before or after 31 Aug, perhaps that version has a chance to qualify? Do we have any version numbers with deployment dates to help clarify?

I am trying to see whether we need to wait for disengagement data or whether we already have enough to say "the safety monitor has to be eyes-on-the-road the whole time" and/or can resolve because of an actual disengagement.

@dreev The new law does not require reporting of disengagements, so I hope that's not what you're waiting for.

@WrongoPhD Hmm, yeah, I'm fuzzy on exactly what's going to be required in Texas by when. California does have such requirements, right? So if they're operating there without safety drivers then there could still be a narrow path to a YES resolution here. That will also mean answering at least all of the following in the negative:

  1. Does the touchscreen disengagement with the UPS truck count as supervision?

  2. Was the passenger door button being used as a real-time kill switch (actually or counterfactually) as of August 31?

  3. Was driving-speed tele-operation in use? (Human assistance after an autonomous MRM [minimal risk maneuver] is ok.)

What's going to be agonizing is if Tesla keeps finding excuses to keep the safety monitors, not get licensed in California, and otherwise act in a way consistent with being sub-level-4 while still claiming to be level 4. See FAQ10 for more on this.

@dreev They are operating with a safety driver in the driver's seat in California and don't yet have a licence for fully driverless operation there. 31 Aug has passed, so I don't see any narrow path here.

Does the touchscreen disengagement with the UPS truck count as supervision?

How can you answer that as anything other than yes? The safety monitor saw a situation developing and reacted to it by pressing the stop in lane button on the main screen. If that is not both supervision and intervention, then I don't know what would count.

Is there a narrow path here? Maybe, if there was a major software update after that disengagement and before 31 Aug. Evidence of a disengagement/supervision intervention after 31 August might be useful to rule out that small remaining path?

Maybe this helps:

https://teslafsdtracker.com/Main
% of drives with no critical DE: 96.8%
Drives with no DE: 72.5%
Miles to critical DE: 465 (was 445 on v13.2.x)

Not sure of the accuracy of these numbers - seems to suggest >2k miles by 10+ testers.
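
To get a feel for what a "miles to critical DE: 465" figure would imply per ride, here's a rough back-of-the-envelope sketch in Python. It assumes critical disengagements arrive at a roughly constant per-mile rate (a simplification, not necessarily how the tracker computes its stats), and the 5-mile ride length is just an illustrative guess, not anything Tesla has reported.

```python
import math

# Rough illustration only: assume critical disengagements (DE) arrive at a
# constant per-mile rate, so a ride with no critical DE follows an
# exponential survival model. The 465 miles-to-critical-DE figure is the
# tracker number quoted above (v12/v13 data); the 5-mile ride is a made-up
# example.
miles_per_critical_de = 465
ride_miles = 5

p_clean_ride = math.exp(-ride_miles / miles_per_critical_de)
print(f"P(no critical DE on a {ride_miles}-mile ride) ~ {p_clean_ride:.1%}")
# => ~98.9%, i.e. roughly one critical DE per ~93 such rides --
# far from Waymo-style miles between incidents.
```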

Edit: This is largely v13 and v12 data, and I believe the robotaxis are using v14, so it's probably not much help, sorry.

@ChristopherRandles @dreev Tesla is specifically not pursuing a California robotaxi license to avoid reporting requirements. Texas will eventually require reporting, but only of accidents, which the NHTSA already requires. You are going to have to resolve this market without disengagement data.

I'm sure you mean well by hoping to wait for even Elon himself to agree, but it's time for you to resolve this by trusting your own carefully crafted criteria.

"If humans, remote or in person, are monitoring in real-time and can intervene in real time (even if just hitting a button on the passenger door) then I'd interpret that as supervision."

"this question depends on whether the human safety monitor has to be eyes-on-the-road the whole time with their finger on a kill switch or emergency brake. If so, I believe that's still level 2 autonomy. Or sub-4 in any case."

"We need there not only to be no actual disengagements..."

That last statement alone is all you need. You said you need there to be no actual disengagements, and instead there were many.

Please resolve

@WrongoPhD A single disengagement by a safety monitor in June does not mean Tesla was not launching a Waymo-level-of-autonomy service on August 31st :)

@MarkosGiannopoulos But it is not level 4 if you have a safety monitor supervising.

They simply didn't launch it (lvl 4) before September with no safety monitor / driver.

@ChristopherRandles This has already been discussed extensively before. In short, in my view, just the presence of the safety monitor does not end the Level 4 discussion. Launching an AV service includes testing with safety precautions. Waymo did it too.

@MarkosGiannopoulos That might be an arguable position if the FAQ questions didn't say things like

"9. What about human safety monitors in the passenger seat?
my verdict on this question depends on whether the human safety monitor has to be eyes-on-the-road the whole time with their finger on a kill switch or emergency brake. If so, I believe that's still level 2 autonomy. Or sub-4 in any case.

We need there not only to be no actual disengagements but no counterfactual disengagements. Like imagine that these robotaxis would totally mow down a kid who ran into the road. That would mean a safety monitor with an emergency brake is necessary, even if no kids happen to jump in front of any robotaxis before this market closes. Waymo, per the definition of level 4 autonomy, does not have that kind of supervised self-driving."

If I were to take your view of launching (which I don't), and there is supervision from Aug 2025 with v14 and safety monitors are removed in early 2026 with v16, would you say level 4 launched in August 2025 or in early 2026? Presumably there would be some point at which you would say, ok, they didn't really launch level 4 in Aug 2025, there was more development work to do. Where is that line defined? It seems very subjective.

Instead, taking the line that we look at things like "We need there not only to be no actual disengagements", as FAQ 9 does, gives a more objective determination of when you can say it has or has not happened. This more objective approach is much to be preferred over a very subjective opinion of when you say it is launched.

@ChristopherRandles Great point about avoiding subjectivity.

Part of what I'm kinda hoping, in terms of being able to fairly resolve this market, is that we'll see a clear case of a kill switch in use after August 31, or a similar smoking gun with tele-operation. Something that makes this an unambiguous NO. At the other extreme, imagine Tesla drops all the safety monitors and shoots past Waymo and complies with California reporting requirements, etc., all before the year is out. Even then there'd still be some debate to be had about whether Tesla had truly hit level 4 by August 31, but it might feel like quibbling and that it was a YES in spirit. I'd love to see more markets about this. My guess is that the second extreme is unlikely, and we're on track for a presumed NO if things don't change fairly drastically by early 2026, in addition to no smoking guns in terms of supervision on or after Aug 31.

What do you think? I can add something like above to the FAQ if it's sounding fair to people. The fundamental difficulty is how in-the-dark we are about what Tesla actually has here. But the longer we wait, the less in the dark we'll be, if for no other reason than the scaling up or lack thereof that we're about to see will itself be evidence. Unfortunately that leaves a large potential gray area where it will be hard to avoid a subjective resolution. I figure we'll cross that bridge when we come to it, but I'm mindful of my conflict of interest and won't just resolve per my own judgment call, should we land in that gray area. Again, my hope is we end up landing in a non-gray area. I do stand by everything in the FAQ so far and will wait for more opinions before augmenting it with anything in this comment.

I'm curious what @dreev and @MarkosGiannopoulos think about Tesla reporting 3 robotaxi accidents in July despite the meager number of miles and the in-car safety monitor. As always, Tesla refuses to release a narrative of the accidents the way every other company testing autonomous driving does, but at least one of these accidents seems to have been Tesla hitting a stationary object.

https://electrek.co/2025/09/17/tesla-hide-3-robotaxi-accidents/


@WrongoPhD You can look at the incidents of Waymo et al. here https://static.nhtsa.gov/odi/ffdd/sgo-2021-01/SGO-2021-01_Incident_Reports_ADS.csv
Are these normal and expected?

@MarkosGiannopoulos Waymo drives more than 8 million miles a month, so I do think their incidents are normal and expected. Tesla had 3 incidents in a minuscule number of miles, which to me seems very, very bad. It's not totally clear how many miles the Tesla robotaxis drove in July, but during their earnings call in late July they reported only 7k miles in their first month.
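
For a rough sense of scale, here's a quick sketch using the numbers cited in this thread: 3 reported incidents and ~7k robotaxi miles in the first month, versus ~8 million Waymo miles per month. These are the thread's own rough figures, not audited data, so treat the output as illustrative only.

```python
# Back-of-the-envelope incident-rate comparison using the rough numbers
# cited in this thread (unaudited; illustrative only).
tesla_incidents = 3                 # reported robotaxi incidents in July (per the Electrek story)
tesla_miles = 7_000                 # robotaxi miles in the first month (per the earnings call, as cited above)
waymo_miles_per_month = 8_000_000   # approximate Waymo monthly mileage cited above

tesla_rate_per_million = tesla_incidents / tesla_miles * 1_000_000
print(f"Tesla: ~{tesla_rate_per_million:.0f} incidents per million miles")  # ~429

# If Waymo had the same per-mile rate, its monthly mileage would imply:
implied_monthly_incidents = tesla_rate_per_million * waymo_miles_per_month / 1_000_000
print(f"Same rate over Waymo's monthly miles: ~{implied_monthly_incidents:.0f} incidents/month")  # ~3429
```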

@WrongoPhD Also, an occasional level of incidents may well be acceptable for Waymo when there is no human present, because driver wages are being saved. With Tesla, they aren't saving on driver wages, so incidents are painful extra costs rather than an expense offset by savings on driver wages. However, it is early days for Tesla and some extra testing costs are not going to put Tesla off continuing to try.

My reaction is more about Musk's end-of-year estimate for monitor removal, which was already a slip from "a month or two" and is likely to slip further. Though that was probably already likely, because Musk's timeframes are often optimistic.

Austin hours of service expanded


@dreev "Anyway, I think the big question mark we're waiting to resolve (before resolving this market) is whether what Tesla launched this summer in Austin counts as unsupervised, by which we mean level 4. If humans, remote or in person, are monitoring in real-time and can intervene in real time (even if just hitting a button on the passenger door) then I'd interpret

that as supervision."

I'm not sure why you don't think the several videos and reports of the safety monitor hitting the emergency stop button meet this threshold. Confirmation that the door-open button was used as an emergency stop remains unrebutted and to me qualifies as real-time monitoring and intervention. I'd suggest, though, if you do feel it's necessary to keep this market open, that you make a decision either way after the Q3 Tesla meeting. We're unlikely to get any helpful government disclosures about interventions soon enough to be useful for this market beyond that.

@WrongoPhD I'm loath to commit to resolving that soon but am happy to commit to a deadline if there's a consensus on when it should be. @MarkosGiannopoulos makes a case for April in a recent comment.

PS: Does everyone know about Manifold's loan feature, so your mana (mostly) isn't tied up in this market while waiting for it to resolve?

@dreev My understanding is that Texas still won't require reporting interventions in April, so I'm not sure what waiting until April gets you. Again, what is your argument against concluding that the safety monitor interventions we already know about qualify as "humans, remote or in person, are monitoring in real-time and can intervene in real time (even if just hitting a button on the passenger door) then I'd interpret that as supervision."

@WrongoPhD Do we have a smoking gun on the passenger door button? I saw a video where a YouTuber said he heard the door open/shut, but (a) I couldn't tell from the video myself, and (b) it was a contrived situation and I'm not totally sure whether it would count.

I do think we have a lot of circumstantial evidence for NO but I'd feel better waiting to be sure.
