
The impetus for this market is the question of whether Elon Musk is correct to reject lidar sensors for self-driving.
FAQ
1. Did the Tesla robotaxis in Austin in summer 2025 count?
No. Even if we decide those were level 4, deployment needs to be wider than that (see the next FAQ item) to count for this market.
2. How wide is "widely deployed"?
We're currently discussing this in the comments. We could pick a threshold in terms of number of cars or number of autonomous miles, or some other operationalization.
3. Is this specific to Tesla?
No, but I don't know of anyone else attempting vision-only level 4 self-driving, so effectively yes, so far. Just in case, though: in the very unlikely event that Waymo drops all its fancy sensors before Tesla makes this work, that could still resolve YES.
[ignore all the AI-generated clarifications below this line; nothing is official until added to the FAQ above]
@AlanTennant You could probably get a human to drive into a painted wall if you painted it just right. And the AI just has to be better than humans. So this is perfectly possible in principle. But lidar is getting cheap, so I agree it's dumb not to use it at this point. And it's not like we stop caring about additional safety once we hit human-level. (See also my Musk vs McGurk post.)
And do @traders think we need a stipulation that it be safely deployed? I'm thinking it's hard to reach our threshold for "widely" if the cars are killing people more often than human drivers do. So maybe the resolution criteria can ignore the safety question.
What do @traders think of using 100M autonomous miles as the threshold for "widely deployed"? Waymo passed that recently, and it's roughly the number of miles between fatalities for human drivers. So if a self-driving system has a perfect safety record but far fewer than that many miles, we only have weak evidence that it has human-level driving ability, safety-wise.
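To put the "weak evidence" point in numbers, here's a minimal sketch. It assumes fatalities arrive as a Poisson process at the rough human rate used above, 1 per 100M miles (real estimates are in that ballpark but not exact), and computes the chance that a system merely as safe as a human driver would nonetheless show a perfect record over a given mileage:

```python
import math

# Rough human fatality rate from the comment above: ~1 fatality per 100M miles
HUMAN_FATALITY_RATE = 1 / 100_000_000

def p_zero_fatalities(miles: float, rate: float = HUMAN_FATALITY_RATE) -> float:
    """Poisson probability of seeing zero fatalities over `miles`
    if the true per-mile fatality rate is `rate`."""
    return math.exp(-rate * miles)

for miles in (10e6, 50e6, 100e6, 300e6):
    p = p_zero_fatalities(miles)
    print(f"{miles / 1e6:>5.0f}M miles: P(perfect record | human-level rate) = {p:.0%}")
```

This prints roughly 90% at 10M miles, 61% at 50M, 37% at 100M, and 5% at 300M. So a human-level system has about a one-in-three chance of a spotless record even at 100M miles, meaning a clean record at that mileage only modestly favors at-least-human safety, and a clean record at much lower mileage says almost nothing.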