Elon Musk has been very explicit in promising a robotaxi launch in Austin in June with unsupervised full self-driving (FSD). We'll give him some leeway on the timing and say this counts as a YES if it happens by the end of August.
As of April 2025, Tesla seems to be testing this with employees and with supervised FSD and doubling down on the public Austin launch.
PS: A big monkey wrench no one anticipated when we created this market is how to treat the passenger-seat safety monitors. See FAQ9 for how we're trying to handle that in a principled way. Tesla is very polarizing and I know it's "obvious" to one side that safety monitors = "supervised" and that it's equally obvious to the other side that the driver's seat being empty is what matters. I can't emphasize enough how not obvious any of this is. At least so far, speaking now in August 2025.
FAQ
1. Does it have to be a public launch?
Yes, but we won't quibble about waitlists. As long as even 10 non-handpicked members of the public have used the service by the end of August, that's a YES. Also if there's a waitlist, anyone has to be able to get on it and there has to be intent to scale up. In other words, Tesla robotaxis have to be actually becoming a thing, with summer 2025 as when it started.
If it's invite-only and Tesla is hand-picking people, that's not a public launch. If it's viral-style invites with exponential growth from the start, that's likely to be within the spirit of a public launch.
A potential litmus test is whether serious journalists and Tesla haters end up able to try the service.
UPDATE: We're deeming this to be satisfied.
2. What if there's a human backup driver in the driver's seat?
This importantly does not count. That's supervised FSD.
3. But what if the backup driver never actually intervenes?
Compare to Waymo, which goes millions of miles between [injury-causing] incidents. If there's a backup driver we're going to presume that it's because interventions are still needed, even if rarely.
4. What if it's only available for certain fixed routes?
That would resolve NO. It has to be available on unrestricted public roads [restrictions like no highways are OK] and you have to be able to choose an arbitrary destination. I.e., it has to count as a taxi service.
5. What if it's only available in a certain neighborhood?
This we'll allow. It just has to be a big enough neighborhood that it makes sense to use a taxi. Basically anything that isn't a drastic restriction of the environment.
6. What if they drop the robotaxi part but roll out unsupervised FSD to Tesla owners?
This is unlikely but if this were level 4+ autonomy where you could send your car by itself to pick up a friend, we'd call that a YES per the spirit of the question.
7. What about level 3 autonomy?
Level 3 means you don't have to actively supervise the driving (like you can read a book in the driver's seat) as long as you're available to immediately take over when the car beeps at you. This would be tantalizingly close and a very big deal but is ultimately a NO. My reason to be picky about this is that a big part of the spirit of the question is whether Tesla will catch up to Waymo, technologically if not in scale at first.
8. What about tele-operation?
The short answer is that that's not level 4 autonomy so that would resolve NO for this market. This is a common misconception about Waymo's phone-a-human feature. It's not remotely (ha) like a human with a VR headset steering and braking. If that ever happened it would count as a disengagement and have to be reported. See Waymo's blog post with examples and screencaps of the cars needing remote assistance.
To get technical about the boundary between a remote human giving guidance to the car vs remotely operating it, grep "remote assistance" in Waymo's advice letter filed with the California Public Utilities Commission last month. Excerpt:
The Waymo AV [autonomous vehicle] sometimes reaches out to Waymo Remote Assistance for additional information to contextualize its environment. The Waymo Remote Assistance team supports the Waymo AV with information and suggestions [...] Assistance is designed to be provided quickly - in a matter of seconds - to help get the Waymo AV on its way with minimal delay. For a majority of requests that the Waymo AV makes during everyday driving, the Waymo AV is able to proceed driving autonomously on its own. In very limited circumstances such as to facilitate movement of the AV out of a freeway lane onto an adjacent shoulder, if possible, our Event Response agents are able to remotely move the Waymo AV under strict parameters, including at a very low speed over a very short distance.
Tentatively, Tesla needs to meet the bar for autonomy that Waymo has set. But if there are edge cases where Tesla is close enough in spirit, we can debate that in the comments.
9. What about human safety monitors in the passenger seat?
Oh geez, it's like Elon Musk is trolling us to maximize the ambiguity of these market resolutions. Tentatively (we'll keep discussing in the comments) my verdict on this question depends on whether the human safety monitor has to be eyes-on-the-road the whole time with their finger on a kill switch or emergency brake. If so, I believe that's still level 2 autonomy. Or sub-4 in any case.
See also FAQ3 for why this matters even if a kill switch is never actually used. We need there not only to be no actual disengagements but no counterfactual disengagements. Like imagine that these robotaxis would totally mow down a kid who ran into the road. That would mean a safety monitor with an emergency brake is necessary, even if no kids happen to jump in front of any robotaxis before this market closes. Waymo, per the definition of level 4 autonomy, does not have that kind of supervised self-driving.
10. Will we ultimately trust Tesla if it reports it's genuinely level 4?
I want to avoid this since I don't think Tesla has exactly earned our trust on this. I believe the truth will come out if we wait long enough, so that's what I'll be inclined to do. If the truth seems impossible for us to ascertain, we can consider resolve-to-PROB.
11. Will we trust government certification that it's level 4?
Yes, I think this is the right standard. Elon Musk said on 2025-07-09 that Tesla was waiting on regulatory approval for robotaxis in California and expected to launch in the Bay Area "in a month or two". I'm not sure what such approval implies about autonomy level but I expect it to be evidence in favor. (And if it starts to look like Musk was bullshitting, that would be evidence against.)
12. What if it's still ambiguous on August 31?
Then we'll extend the market close. The deadline for Tesla to meet the criteria for a launch is August 31 regardless; we just may need more time to determine, in retrospect, whether it counted by then. I suspect that with enough hindsight the ambiguity will resolve. Note in particular FAQ1, which says that Tesla robotaxis have to be becoming a thing (what "a thing" is is TBD but something about ubiquity and availability) with summer 2025 as when it started. Basically, we may need to look back on summer 2025 and decide whether that was a controlled demo, done before they actually had level 4 autonomy, or whether they had it and were just scaling up slowly and cautiously at first.
13. If safety monitors are still present, say, a year later, is there any way for this to resolve YES?
No, that's well past the point of presuming that Tesla had not achieved level 4 autonomy in summer 2025.
14. What if they ditch the safety monitors after August 31st but tele-operation is still a question mark?
We'll also need transparency about tele-operation and disengagements. If that doesn't happen soon after August 31 (definition of "soon" to be determined) then that too is a presumed NO.
Ask more clarifying questions! I'll be super transparent about my thinking and will make sure the resolution is fair if I have a conflict of interest due to my position in this market.
[Ignore any auto-generated clarifications below this line. I'll add to the FAQ as needed.]
Update 2025-11-01 (PST) (AI summary of creator comment): The creator is [tentatively] proposing a new necessary condition for YES resolution: the graph of driver-out miles (miles without a safety driver in the driver's seat) should go roughly exponential in the year following the initial launch. If the graph is flat or going down (as it may have done in October 2025), that would be a sufficient condition for NO resolution.
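For concreteness, "roughly exponential" could be operationalized as a minimum month-over-month growth ratio in driver-out miles. Here's a toy sketch; the threshold and the numbers are made up for illustration and are not part of the resolution criteria:

```python
def looks_exponential(monthly_miles, min_ratio=1.2):
    """Crude check: does each month grow by at least `min_ratio` over the last?
    A flat or declining series fails. The 1.2 threshold is a hypothetical
    stand-in for 'roughly exponential', not an official criterion."""
    ratios = [b / a for a, b in zip(monthly_miles, monthly_miles[1:]) if a > 0]
    return bool(ratios) and all(r >= min_ratio for r in ratios)

# Hypothetical monthly driver-out mileage series:
print(looks_exponential([10, 15, 23, 35]))  # steady ~50% growth -> True
print(looks_exponential([10, 11, 10, 9]))   # flat, then declining -> False
```

The point of a ratio test rather than an absolute-miles test is that it captures "becoming a thing": a fleet that doubles every few months passes even while small, while a static demo fleet fails no matter its size.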
Update 2025-11-06 (PST) (AI summary of creator comment): The creator outlined how they would update their probability assessment based on three possible scenarios by January 1st:
World 1: Tesla misses deadline, safety riders still present, no expansion → probability should drop
World 2: Tesla technically hits deadline with some confirmed rides without safety riders, but no scaling up → creator would be suspicious this is a publicity stunt/controlled demo rather than meaningful evidence of level 4 autonomy
World 3: Safety riders gone in Austin plus meaningful expansion → would be a meaningful update on probability Tesla has cracked level 4 autonomy
Note: All three scenarios would still leave question marks about whether Tesla achieved level 4 by the August 31, 2025 deadline.
Update 2025-12-10 (PST) (AI summary of creator comment): The creator has indicated that Elon Musk's November 6th, 2025 statement ("Now that we believe we have full self-driving / autonomy solved, or within a few months of having unsupervised autonomy solved... We're on the cusp of that") appears to be an admission that the cars weren't level 4 in August 2025. The creator is open to counterarguments but views this as evidence against YES resolution.
Update 2025-12-10 (PST) (AI summary of creator comment): The creator clarified that presence of safety monitors alone is not dispositive for determining if the service meets level 4 autonomy. What matters is whether the safety monitor is necessary for safety (e.g., having their finger on a kill switch).
Additionally, if Tesla doesn't remove safety monitors until deploying a markedly bigger AI model, that would be evidence the previous AI model was not level 4 autonomous.
Update 2025-12-22 (PST) (AI summary of creator comment): The creator clarified that customer trips with only the customer in the car (no Tesla employees) are needed to check off the "removing safety monitors" milestone. The creator also reiterated that Elon Musk's November statement (that Tesla had not cracked unsupervised self-driving as of then) is considered strong evidence for NO resolution, regardless of whether Tesla achieves it in late 2025 or early 2026.
Update 2025-12-24 (PST) (AI summary of creator comment): The creator is waiting on several factors to determine fair resolution:
Verdict on physical kill switches - whether safety monitors have real-time kill switches
Degree of tele-operation - how much remote human control is involved
Safety record with confidence intervals - statistical evidence of safety performance
What FSD version the robotaxis had on August 31 - which software version was running at the deadline
Transparency about disengagements - part of California driver-out permitting requirements
What Musk meant by calling unsupervised autonomy "close" in November - interpretation of his November statements
The creator emphasizes that the relevant definition of "supervised" is not just about a human watching, but about being ready to intervene in real time in order to be safe enough. The creator expects the situation to become less murky with more hindsight.
Update 2025-12-24 (PST) (AI summary of creator comment): The creator clarified the standard for level 4 autonomy regarding remote assistance:
If a car gets confused by unusual situations, stops, and phones a human to ask what it should do, and the human merely tells it the answer without taking control - that still counts as level 4
The bar is: no real-time control at driving speeds and no monitoring with the ability to disengage in real time
The creator also noted evidence that Tesla robotaxis in San Francisco sometimes needed the human backup driver to take over through dead intersections during a power outage, contrasting with Waymo's ability to navigate autonomously in the same conditions.
Update 2025-12-27 (PST) (AI summary of creator comment): The creator clarified that FAQ3 (presuming interventions are needed if there's a backup driver) specifically refers to backup drivers in the driver's seat, not passenger seat safety monitors. A backup driver in the driver's seat can take over driving in real time if the car messes up.
The creator emphasized that passenger seat safety monitors are a "highly confusing and ambiguous middle ground" between level 2 and level 4. Being able to intervene via a touchscreen is "quite slow and limited" compared to a driver's seat backup driver.
The creator stated there are still unanswered questions about kill switches and tele-operation before giving a definitive verdict on whether the current setup counts as supervision.
The creator also noted that supervision, in the context of self-driving autonomy levels, means a human watching in real time with the ability to intervene.
@JDTurk It's obvious enough Tesla is not at Waymo's level but this market doesn't require that. There's nothing obvious about how to resolve this market. Hopefully the FAQ has things pinned down enough that with more time it will become obvious.
(For those just tuning in, I actually want to taboo words like "obvious". I'd like us all to demonstrate epistemic humility and a spirit of cooperation in figuring out what's most true and most fair.)
Tesla now has a California-like (e.g., human in the driver's seat) pilot in Germany (Eifelkreis Bitburg-Prüm, a district of about 100K people).
https://www.linkedin.com/posts/mwvlw-rlp_frohe-botschaft-zum-jahresende-aus-der-ugcPost-7409248393060958209-y_NN/
An updated homepage for Robotaxi https://www.tesla.com/robotaxi
We’re bringing autonomous rides to you today—starting with Model Y. Autonomous Robotaxi rides are currently being offered in Austin, Texas. To get started, download the Robotaxi app.
Unsupervised FSD testing is now carrying a person in the backseat (but a Tesla employee, for now) https://x.com/philduan/status/2002843697273262468

@MarkosGiannopoulos I'd say we should wait for customer trips with only the customer in the car to check off that milestone. I'm also forgetting now what we decided about the significance of that milestone. I'm still thinking that Elon Musk's seeming admission in November that Tesla had not cracked unsupervised self-driving as of then is pretty strong evidence for NO for this market, whether or not Tesla does crack it this month or early in 2026.
@dreev In regard to the "admission" thing, see past discussion https://manifold.markets/dreev/will-tesla-count-as-a-waymo-competi#d5huzbcese6
In any case, Musk's comments are immaterial. The hard facts are
- Robotaxi in Austin in June was running an unreleased version of v14
- Robotaxi in June had paying customers in cars that drove themselves without a person in the driver's seat
- v14 is now public and shown to be a significant improvement over v13
- The June launch was not a one-off event; Tesla has expanded the service area and the number of cars (now 30 vs 200 of Waymo - a company several years ahead), and also has a California operation (130 cars)
- Tesla is about to remove the safety monitor (already doing rider-only tests with employees in the backseat). This means achieving in 6 months what took Waymo 2.5 years.
@MarkosGiannopoulos
- Robotaxi in Austin in June was running an unreleased version of v14, but Tesla owners have since had several releases: 14.1, 14.1.1 through 14.1.7, 14.2, 14.2.1, and 14.2.1.25. Given the timeline over which these came out, it makes little sense to assume all of them were available to Tesla at the end of August. The driverless tests could easily be running yet another version, like the order-of-magnitude-larger v15, or v14.3.
- The safety monitor in the front passenger seat has remained there for nearly 4 months now. It is possible to put safety monitors in a level 4 system, in which case you wouldn't expect many interventions and it would be easy to remove them quickly. A 4-month delay suggests there are issues and excessive interventions that they are still ironing out. This suggests the system at the end of August was not quite at level 4 ability. And even if you believe they had the ability at the end of August, Tesla wasn't feeling confident enough to remove the safety monitors, so they hadn't launched it by the end of August.
- Yes, v14 is better than v13, but this doesn't mean the v14.0.x (or whatever) that was available at the end of August was good enough. The long delay before safety monitors are removed, and the several later versions in this period, tend to suggest that what was available at the end of August was not good enough, and Tesla still has not launched without safety monitors.
- June launch was not a one-off event, fair enough there has been expansion to SF bay area. However I would question: when does the exponential growth start if there are still zero passenger rides without safety monitor? Is the launch when the exponential growth pattern of rides & taxis without safety monitors starts?
- Maybe they are about to remove them, or maybe v15 testing continues for a month or two before the safety monitors come out. We surely now want to wait to at least see what version is first used without safety monitors? As for achieving in 6 months what took Waymo 2.5 years: that suggests lots of fast progress, and if they still haven't launched, it seems more reasonable to deduce they weren't ready for level 4 at the end of August than that they were ready but are being very cautious and slow with the roll-out while not needing to make further refinements.
The length of the delay before removing safety monitors, the number of different versions, reports of erratic driving sufficient to start regulatory investigations, and a few accidents with very few cars operating: AFAICS this all points to not quite level-4 capable at the end of August 2025, as well as not launched in time at the level required.
@ChristopherRandles "when does the exponential growth start if there are still zero passenger rides without safety monitor? Is the launch when the exponential growth pattern of rides & taxis without safety monitors starts?"
The launch was in June when you had the first paying customer in a car that drove itself. It's as simple as that. This market is not about "exponential growth". This market is about the launch of the service.
The safety monitor has now been removed for Tesla employees. My estimate is they will have non-Tesla customers doing rider-only this week to keep Musk's promise.
@MarkosGiannopoulos It had to be "level 4" robotaxis which don't need near-constant supervision. They weren't at that level in June. Maybe they had improved a bit by the end of August, but several different versions since, with monitors still not removed, does not give a good impression that a level 4 service has been reached yet. In June, they launched a level 2 service.
>The safety monitor has now been removed for Tesla employees.
You mean there is more testing that has recently started (not offering rides). Maybe this indicates a major new version that they want tested before allowing safety monitors to be removed. The start date of this testing might indicate when they actually have a system they trust to be level-4 capable. The launch of the service surely has to come after the capability.
@ChristopherRandles "It had to be "level 4" robotaxis which don't need near constant supervision. They weren't at that level at June." ... "The launch of the service surely has to come after the capability."
If your expectation was that there would be zero safety personnel in the car with customers on board on day 1, that's not based on any past experience of such programs (Waymo, etc.). The service was level 4 in June, and they also needed to have some kind of safety monitor, as this was an initial launch with paying customers.
@MarkosGiannopoulos
"My expectation", what does my expectation matter?
Elon Musk repeatedly stated, prior to June 2025, that Tesla would launch a robotaxi service in Austin with "no one in the car" as a fully unsupervised, paid service. This claim was made during various public appearances and earnings calls in early 2025.
https://www.fool.com/earnings/call-transcripts/2025/01/29/tesla-tsla-q4-2024-earnings-call-transcript/
"So, we're going to be launching unsupervised full self-driving as a paid service in Austin in June. So, I talked to the team. We feel confident in being able to do an initial launch of unsupervised, no one in the car, full self-driving in Austin in June."
This is the background to the question.
They didn't do that, they weren't ready to do it, so they launched a level 2 service while they built up not only a bit more confidence in the system under operational conditions but also improvements to the system through a series of software updates.
Back then I hoped they would succeed without too much delay, but they clearly have delayed doing it.
Guys guys guys, please demonstrate epistemic humility. A car with no one in the driver's seat is probably more than level 2. But the lack of scale and the possibility of safety monitors with real-time kill switches and unknown amounts of tele-operation means we can't act like we know it's level 4 either.
The research y'all are doing is insanely helpful and I'm very grateful and you've shifted my own probability in both directions at various times. Let's just maintain a fully collaborative, truth-seeking tone. Tendentiousness backfires. When readers can tell what you want to be true, they take the evidence you're offering with lots of salt.
Some of the things, off the top of my head, that we're waiting on in order to feel more confident about the fairest resolution:
Verdict on physical kill switches
Degree of tele-operation
Safety record, with confidence intervals (e.g., zero deaths in a million miles is weak evidence of superhuman safety)
What FSD version the robotaxis had on August 31
Transparency about disengagements (part of driver-out permitting in California?)
What Musk meant by calling unsupervised autonomy "close" in November
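On the confidence-interval point: with zero fatal events observed, the standard "rule of three" bounds how good the true safety rate could actually be. A minimal sketch; the mileage figure and the human-baseline rate below are illustrative assumptions, not Tesla or NHTSA data:

```python
import math

def poisson_upper_bound(events: int, miles: float, conf: float = 0.95) -> float:
    """One-sided upper confidence bound on the per-mile event rate.
    With zero observed events, P(no events) = exp(-rate * miles), so the
    bound is -ln(1 - conf) / miles, i.e. roughly 3/miles at 95% confidence
    (the 'rule of three'). Zero-event case only, for illustration."""
    if events == 0:
        return -math.log(1 - conf) / miles
    raise NotImplementedError("sketch handles only the zero-event case")

# Hypothetical numbers: zero deaths over 1M robotaxi miles.
ub = poisson_upper_bound(0, 1_000_000)  # ~3e-6 deaths per mile
human_rate = 1.3 / 100_000_000          # rough ballpark human fatality rate per mile
print(ub, human_rate, ub / human_rate)
```

The ratio comes out to roughly 230: the data are consistent with a fatality rate two orders of magnitude worse than a typical human driver, which is why a clean million miles is only weak evidence of superhuman safety.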
I continue to expect this to get less murky with the benefit of more hindsight.
Finally, I started out asking for more epistemic humility but everyone here is 1000% better than what I've seen in other markets, where people just jawbone each other about how "obviously" passenger seat safety monitors do / don't count as supervision.
In my opinion, the relevant definition of "supervised" is not just about a human literally watching but rather about being ready to intervene in real time in order to be safe enough. That's what we're waiting to be sure about.
@dreev "the relevant definition of "supervised" is not just about a human literally watching but rather about being ready to intervene in real time in order to be safe enough."
Well, the established player, Waymo, just reminded us that they rely heavily on remote monitoring
Navigating an event of this magnitude presented a unique challenge for autonomous technology. While the Waymo Driver is designed to handle dark traffic signals as four-way stops, it may occasionally request a confirmation check to ensure it makes the safest choice. While we successfully traversed more than 7,000 dark signals on Saturday, the outage created a concentrated spike in these requests. This created a backlog that, in some cases, led to response delays contributing to congestion on already-overwhelmed streets. We established these confirmation protocols out of an abundance of caution during our early deployment, and we are now refining them to match our current scale. While this strategy was effective during smaller outages, we are now implementing fleet-wide updates that provide the Driver with specific power outage context, allowing it to navigate more decisively.
https://waymo.com/blog/2025/12/autonomously-navigating-the-real-world
So, at what point do we accept that some remote monitoring is ok and has little effect on whether the car does in fact drive itself? :)
@MarkosGiannopoulos Is that a genuine question? The excerpt you quoted answers it beautifully. Namely, it illustrates exactly where we've set the bar for counting as level 4. If the car can get confused by unusual situations, stop, and phone a human to ask what it should do, and, importantly, the human merely tells it the answer and does not take control -- that's still level 4.
I wrote the following post to answer all the Waymo whataboutism, based on previous discussion in this market: https://agifriday.substack.com/p/waymo
I'd summarize it as no real-time control at driving speeds or monitoring with the ability to disengage in real time.
PS: I found some evidence that Tesla robotaxis in San Francisco at least sometimes needed the human backup driver to take over through dead intersections. A nice irony at 0m49s in that video: a Waymo autonomously navigating a dead intersection while the Tesla's backup driver is manually navigating it. And something confusing in the video: the display that normally shows the car's path and other traffic and pedestrians shows only the Tesla. Did the power outage somehow spoil its ability to detect other cars? Only at one point (right around when the Waymo appears) does it register another car as an obstacle on the display.
@dreev Waymo cars also sometimes end up needing a human to take over. Example https://x.com/Cyber_Trailer/status/2004671911847231904
@MarkosGiannopoulos I can't tell if we're missing each other on this question. If the car is confused or has a problem and it stops autonomously and waits for human assistance, that's allowed for level 4 autonomy. Level 5 is the one where the car has to be able to handle everything a human can handle. If a human is ever needed in real time while the car is driving, that's at most level 3.
Review of things we know:
- Waymo is level 4
- Tesla is way more than level 2 (in spirit even if not technically)
- Tesla has not yet driven a customer without human "supervision"
Review of things we don't know for sure:
1. Whether Tesla is level 3 (let alone level 4)
2. Whether Tesla's human supervision is necessary
3. The role of tele-operation for Tesla robotaxis
4. Whether the safety monitors have their finger on a kill switch
5. Whether Tesla can scale up to hundreds of unsupervised cars
6. How far we are from vision-only level 4 autonomy for the masses
>"4 Whether the safety monitors have their finger on a kill switch"
If they don't have a finger on a kill switch but can intervene via the main screen (as seen), and are required to be eyes-on-road / paying attention while the car is driving, is this the same as having a finger on a kill switch, or at least supervision that restricts what is going on to level 2?
This seems extremely likely to me, given the rule requesting that passengers not speak to the monitor while the ride is underway, and the likelihood that this is required under the level 2 permitting used.
Note also FAQ 3: If there's a backup driver we're going to presume that it's because interventions are still needed, even if rarely.
So even if there is no kill switch, I still think it is supervised, not unsupervised. If the monitors had been removed within a couple of months, using the same software version as was in use at the end of August, then I could see an argument that the software was ready and launched even if not operated in a level 4 way by the deadline: level 4 ability run in a level 2 way, out of caution, before a larger-scale roll-out of a level 4 service. But with 11? software versions coming out and monitors still not removed nearly 4 months after the end of August, it is still supervised.
So on your don't know for sure list
1. Tesla is operating and regulated as level 2. They may be close to, or even at, level 4 capability now, but they were not at the end of August 2025.
2. Supervision is presumed necessary for this question by the presence of monitors, per FAQ3. While it might well not be necessary now under the latest software versions, there were enough interventions, erratic-driving reports, and official investigations to say supervision was necessary back in June 2025, and fixes aren't likely to be immediate, so it was almost certainly still necessary in July and August.
3-6. We don't know these for sure, but I don't see the need to know them in order to resolve the question NO.
@ChristopherRandles Let me start by reemphasizing that I personally don't see this market getting to a YES at this point. But it's not that easy to declare a NO at this point either. For starters, FAQ3 is specifically referring to backup drivers in the driver's seat, right? As in someone who can take over the driving in real time if the car messes up. (If you also read FAQ2, it's made more explicit.) The passenger seat safety monitors are a highly confusing and ambiguous middle ground between level 2 and level 4. Being able to intervene via a touchscreen is quite slow and limited. I don't think we have any footage of that happening at normal driving speed, do we?
Supervision, in the context of self-driving autonomy levels, means a human watching in real time with the ability to intervene. I think there are unanswered questions about kill switches and tele-operation before we can give a definitive verdict on that.
(Quick review of the levels: 2 = human monitoring, ready to intervene in real time if car messes up; 3 = human not monitoring but ready to take over in real time if the car beeps; 4 = human never needed in real time; 5 = human never needed ever.)
To also reemphasize, I know people have very strong intuitions in both directions about the safety monitors. To some it feels painfully obvious that that's supervision and not fundamentally different from a backup driver in the driver's seat. To others it feels night-and-day different. If you can't take over the actual driving at a moment's notice then, the Tesla proponents argue, the AI is meaningfully the one in control. In that framing, the human's ability to give the AI a stop order is only supervision in the most literal way.
So just remember, whatever arguments you make, show respect and sympathy for those with opposite intuitions.
(Not to say I'm accusing you of failing to do that, @ChristopherRandles, and I think some of your arguments -- like how many new FSD versions have come out between August 31 and whenever the safety monitors are finally removed -- are persuasive.)
@dreev On your open questions
1. Whether Tesla is level 3 (let alone level 4)
Level 3 requires a fallback driver ready to take over. This was never the case in Austin because the safety monitor was on the passenger seat. I guess we can call Level 3 what Tesla is doing in California. In Austin, Tesla is now further demoing Level 4 (but they need to go beyond a couple of demo videos on Twitter) https://x.com/philduan/status/2002843697273262468
2. Whether Tesla's human supervision is necessary
Since June, Tesla has been confident enough not to have someone in the driver's seat. They will have customers doing rider-only rides pretty soon, within 6-7 months of the June launch. What further criteria do you have for this?
3. The role of tele-operation for Tesla robotaxis
This is unlikely to prove one way or the other, especially if you want to focus on the summer period. No special hardware has been noticed in the cars that would indicate they are teleoperated.
4. Whether the safety monitors have their finger on a kill switch
Will that still be a point of discussion if, in a couple of weeks, rider-only is launched for customers?
5. Whether Tesla can scale up to hundreds of unsupervised cars
They went from 10 to 160 if you count California as well. This is within the "launch" description of the market, I think.
6. How far we are from vision-only level 4 autonomy for the masses
Why is this part of resolving the market? The question is whether Tesla launched a robotaxi service.
(/Attempting to understand the other side's view)
I think we need to break this down:
>"To also reemphasize, I know people have very strong intuitions in both directions about the safety monitors. To some it feels painfully obvious that that's supervision and not fundamentally different from a backup driver in the driver's seat. To others it feels night-and-day different. If you can't take over the actual driving at a moment's notice, the Tesla proponents argue, then the AI is meaningfully the one in control. In that framing, the human's ability to give the AI a stop order is only supervision in the most literal sense."
I am not sure about the "Tesla proponents" label. I think I am a Tesla supporter but in this case I accept they haven't done it in time. What I see you arguing here, on behalf of the 'yes resolution' camp, is that what matters is "driving control" of the vehicle, and that if the interventions are more like changing the destination than taking over the driving, then that should be considered good enough for a yes resolution. I do not agree with that point of view. The criterion in the question title is level 4, and you said "4 = human never needed in real time", so humans needing to watch practically every second, if not 100% of the time, is not level 4.
We can perhaps try to break this down into various standards of 'humans needed' and 'vehicle control' to see what we think is the more important viewpoint.
1. No humans in car or remotely watching unless car reports it is unsure. I think Waymo are at this level and it is level 4 even if there are very occasional accidents.
2. Human in car is supposed to watch all the time but never intervenes.
3. Human in car but in entirely an air hostess role rather than a pilot role. Human doesn't need eyes on the road, so this qualifies as level 4.
4. Human in car but in mainly an air hostess role rather than a pilot role. Human doesn't need eyes on the road most of the time but might be called to give advice after the car stops, if the car is uncertain of what to do.
5. Human in car but in mainly an air hostess role rather than a pilot role. Human doesn't need eyes on the road most of the time but might be called to give advice either while approaching a situation or after the car stops, if the car is uncertain or suspects it might become uncertain of what to do. (If it doesn't get advice in time then it tries to take the safer option, like stopping and waiting for advice.)
6. Human in car but in mainly an air hostess role rather than a pilot role. Human doesn't need eyes on the road most of the time but might be called to take over driving with x seconds notice.
My view on these is that
6 would be level 3, not level 4, so should resolve no.
4 seems to be level 4 like Waymo, so the question could be resolved yes.
5 is arguably a better system than 4, so while the human might occasionally take driving decisions while the vehicle is still moving, I could see a yes resolution being more appropriate than no.
3 qualifies as level 4 and should resolve yes.
2 is level-4-capable software being used in a level 2 manner. I don't believe the software was at this capability level in June, and almost certainly not in July or August.
1 seems a clear level 4, resolve-yes situation.
My suggested resolutions for these situations don't completely align with "humans needing eyes on the road pretty much the whole time", but it is fairly close. (Supposed to keep eyes on the road but never intervening can be treated as not 'needing' to, but 5 is a somewhat awkward case.)
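For what it's worth, the six scenarios and suggested resolutions above condense into a small decision table. This is purely an illustrative sketch of my reading of the taxonomy -- the short descriptions and mappings are paraphrases, not any official market rubric:

```python
# Illustrative only: the six hypothetical scenarios above as a lookup table.
# Descriptions and mappings paraphrase the comment, not official market rules.
SCENARIOS = {
    1: ("no human watching unless the car reports it is unsure",
        "level 4", "yes"),
    2: ("human watches constantly but never intervenes",
        "level 4 software used in a level 2 manner",
        "yes in principle (capability disputed)"),
    3: ("human aboard in a purely air-hostess role",
        "level 4", "yes"),
    4: ("human advises only after the car stops when uncertain",
        "level 4", "yes"),
    5: ("human may advise while the car is still moving",
        "arguably level 4", "yes (leaning)"),
    6: ("human may need to take over driving with x seconds notice",
        "level 3", "no"),
}

def suggested_resolution(scenario: int) -> str:
    """Look up the suggested market resolution for a numbered scenario."""
    _description, _sae_level, resolution = SCENARIOS[scenario]
    return resolution
```

The point of laying it out this way is that the dividing line falls between "human as driver-in-waiting" (scenario 6, resolve no) and "human as advisor the car can stop and ask" (scenarios 3-5, resolve yes), with scenario 2 awkward because it hinges on an empirical capability claim.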
As the descriptor in the title is level 4 and not about "driving control", I don't see any reason to morph the question into being about "driving control" rather than general understanding of what level 4 means which you have said is "human never needed in real time".
Coming back to the question "Are interventions more like changing the destination than taking over the driving?", I would argue that if the human needs eyes on the road to know when to intervene, then this is supervision, not merely an instruction to change the destination. If they never need to intervene, then I would concede that the supervision isn't necessary and it is level 4 being used in a level 2 way. This 'never need to intervene' standard brings us back to only needing one accident or intervention or one critical disengagement for a no resolution (as indicated in FAQ9), and we know we have had some of these since August 2025.
The Robotaxi Tracker site now includes data on crashes from the NHTSA https://www.teslarobotaxitracker.com/nhtsa?provider=tesla
We have yet to see any catastrophic headlines like this for Tesla
“Waymo said Saturday that it was stopping service across San Francisco after numerous online videos showed its autonomous vehicles snarling traffic during the citywide blackout.”
“Many of the videos and images showed Waymos stuck one behind the other, with human drivers passing them by.”
https://missionlocal.org/2025/12/sf-waymo-halts-service-blackout/
@MarkosGiannopoulos Catastrophic? It makes sense to me that supervised self-driving would muddle through, trusting the human supervision, while unsupervised self-driving would be ultra-conservative and stop driving when it sees an environment very different from training and with traffic lights gone dark. "Better safe than sorry" is basically Waymo's motto.
I was bracing for something very different when you said "catastrophic headlines"! (And even so, Waymo will hit a quarter billion rider-only miles in the new year so at some point we really should be ready to learn of a Waymo literally killing someone and still correctly view it as life-saving technology.)
@dreev well, multiple cars freezing in the middle of the road one behind the other and blocking traffic in multiple cases is not something that has happened with Tesla.
@MarkosGiannopoulos True, but that might be because the supervising human takes over in such situations, right? There are so many ways it's so tricky trying to compare Teslas and Waymos. The orders of magnitude difference in scale, for starters. (In which direction? Depends if we're talking about unsupervised or not! Way more FSD miles in private Teslas; way more Waymo miles with empty driver's seats. The scale is vastly skewed and we can't even agree in which direction.) And the cherry-picking media coverage is just an epistemic hellscape. Everyone has an agenda.
I've been bearish on Tesla and predicted they wouldn't launch robotaxis at all this past summer. I was wrong. But the bulls expected a scale-up to millions of cars by now, so they were wrong too. Prediction is hard. Especially about Tesla.
But I think it's going to happen. We've gone from a year away in Musk-reality to a month away. Maybe a Musk-month is about a year? I'm not betting NO in the 2026 robotaxi markets. (Or maybe I am, depending on the odds, who knows.)
I did originally, back in April, predict it wouldn't happen for Tesla till Musk relented on lidar. I think in retrospect I ought to have said "either lidar or another year-ish of AI progress".


