Will the first self-driving car company to deploy in 50 cities use an end-to-end approach?

As this article (https://www.technologyreview.com/2022/05/27/1052826/ai-reinforcement-learning-self-driving-cars-autonomous-vehicles-wayve-waabi-cruise/) describes, first-generation self-driving companies like Waymo and Cruise have (seemingly) doubled down on a modular interface between perception, planning, and control, and are continuing to use HD maps combined with LIDAR for driving. On the other hand, companies like Wayve and comma.ai are shooting for an end-to-end learned approach, where perception translates directly into control using RL or something similar. Tesla falls somewhere in the middle but currently seems closer to the former category.
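
To make the distinction concrete, here's a minimal illustrative sketch (my own simplification, not any company's actual stack): the modular approach keeps hand-defined interfaces between perception, planning, and control, while the end-to-end approach learns a single mapping from sensor input to control commands.

```python
# Illustrative only; module names, shapes, and interfaces are invented for this sketch.
import torch
import torch.nn as nn

# Modular approach: explicit, hand-defined interfaces between stages.
class ModularStack:
    def perceive(self, camera, lidar, hd_map):
        # Outputs a hand-specified world state (detected objects, ego pose, etc.).
        return {"objects": [], "ego_pose": None}

    def plan(self, world_state):
        # Rule- or optimization-based planner producing a trajectory.
        return []  # list of waypoints

    def control(self, trajectory):
        # Low-level controller tracking the planned trajectory.
        return {"steer": 0.0, "throttle": 0.0}

# End-to-end approach: one learned model maps sensor features directly to controls.
end_to_end_policy = nn.Sequential(
    nn.Linear(512, 256),  # stand-in for a vision backbone's feature vector
    nn.ReLU(),
    nn.Linear(256, 2),    # outputs [steer, throttle] directly
)

controls = end_to_end_policy(torch.randn(1, 512))
```

The question is essentially about which of these two shapes the first 50-city deployment will have.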

This question will resolve to "Yes" if a company clearly using the latter approach is the first to deploy self-driving cars in 50 cities. It will also resolve to "Yes" if one of the companies in the former group pivots and uses mostly or entirely end-to-end approaches in its deployed production systems. It resolves to "No" if a member of the former category reaches 50 cities without changing its approach. There's also a large space of scenarios in which the resolution is uncertain; in that case it will resolve to a probability when the first self-driving company reaches 50 cities. I'll use my own discretion, combined with helpful comments, to determine the resolution.

I've set the resolution date to 2035, but that's mostly just a date far enough out that I can extend it if this still hasn't happened by then.

bought Ṁ20 YES

If Waymo makes it to 50 cities first, but hasn't published any updates on how their model works, would you just automatically resolve 'no' or wait for some confirmation of how their system is structured?

Probably wait for a short period of time (a month or so) and then resolve "No".

Imagine spending $3B on lidars and being (half-)decades behind dozen-person startups, and probably some that haven't even been formed yet.

Can’t imagine.

RIP hand-coding, mapping, and LiDAR

Long live AI—input -> model -> output

Rich Sutton, March 13, 2019:

The biggest lesson that can be read from 70 years of AI research is that general methods that leverage computation are ultimately the most effective, and by a large margin….

…. Seeking an improvement that makes a difference in the shorter term, researchers seek to leverage their human knowledge of the domain, but the only thing that matters in the long run is the leveraging of computation.

… Time spent on one is time not spent on the other. …. And the human-knowledge approach tends to complicate methods in ways that make them less suited to taking advantage of general methods leveraging computation. 

… This is a big lesson. As a field, we still have not thoroughly learned it, as we are continuing to make the same kind of mistakes.

… We have to learn the bitter lesson that building in how we think we think does not work in the long run. The bitter lesson is based on the historical observations that 1) AI researchers have often tried to build knowledge into their agents, 2) this always helps in the short term, and is personally satisfying to the researcher, but 3) in the long run it plateaus and even inhibits further progress, and 4) breakthrough progress eventually arrives by an opposing approach based on scaling computation by search and learning.

… The second general point to be learned from the bitter lesson is that the actual contents of minds are tremendously, irredeemably complex; we should stop trying to find simple ways to think about the contents of minds, such as simple ways to think about space, objects, multiple agents, or symmetries. All these are part of the arbitrary, intrinsically-complex, outside world. They are not what should be built in, as their complexity is endless; instead we should build in only the meta-methods that can find and capture this arbitrary complexity.

http://www.incompleteideas.net/IncIdeas/BitterLesson.html

predicts NO

@Gigacasting I don't think that a modular AI system is the same as "building human knowledge into the AI". For example, AlphaGo (at least AlphaGo Zero) doesn't use built-in human knowledge, but it does have a couple of separate modular components: one model for evaluating states and another for picking out candidate moves, which isn't all that different from having separate subsystems for perception and planning, I think. AlphaGo might still count as end-to-end, I dunno, but my point is that having modules with distinct dedicated purposes is a different story from self-driving AI systems that have lots of hand-written rules/knowledge like you're talking about.
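
To illustrate the point (a toy sketch with invented layer sizes, not AlphaGo's real architecture or code): both modules below are learned purely from data, so the system is modular without any hand-written Go knowledge baked in.

```python
# Toy illustration: two separately-purposed *learned* modules, no hand-coded heuristics.
import torch
import torch.nn as nn

BOARD_FEATURES = 361  # flattened 19x19 board encoding (illustrative)

# Module 1: policy network proposes candidate moves.
policy_net = nn.Sequential(
    nn.Linear(BOARD_FEATURES, 128), nn.ReLU(),
    nn.Linear(128, BOARD_FEATURES),  # one logit per board position
)

# Module 2: value network evaluates how good a position is.
value_net = nn.Sequential(
    nn.Linear(BOARD_FEATURES, 128), nn.ReLU(),
    nn.Linear(128, 1), nn.Tanh(),  # in [-1, 1]: likely loss vs. likely win
)

state = torch.randn(1, BOARD_FEATURES)
move_logits = policy_net(state)    # which moves look promising
position_value = value_net(state)  # who is likely winning

# The modularity here is architectural; both parts are trained from (self-)play data,
# which is a different thing from engineers hand-writing domain rules.
```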

It seems more and more likely that Karpathy left because Tesla is taking the wrong path (hand-engineered, hard silos between perception and planning, coasting on compute and labeling budget instead of going end-to-end).

https://m.youtube.com/watch?v=lSXwIzww6Us

A couple years old, but it shows Cruise is likely doing solid end-to-end work, and that Comma (<$20mm raised at the time) was way ahead, and remains way ahead of everybody except Tesla, which spends billions and has a 2-3 orders of magnitude larger training budget and an ~1 OOM more powerful on-device chip.

Yes—

bought Ṁ25 of NO

The companies that are furthest along seem to be in the non-end-to-end category; I think they will most likely get there first.
