Grok & Tesla: what will be true by 2025?
  • Grok will be integrated in Tesla vehicles: 84%
  • Grok will be able to control navigation: 83%
  • Grok will enable voice control of vehicles: 23%
  • Grok & FSD will merge into one neural network: 15%

Explanations:

  • Grok will be integrated in Tesla vehicles. Resolves true if Tesla vehicles have an LLM/LMM set up.

  • Grok will be able to control navigation. Resolves true if the smart assistant can control the car's route, for example by setting up the navigation.

  • Grok will enable voice control of vehicles. Resolves true if we'll be able to control FSD by voice (for example, if you say "turn left", the car should turn left when allowed).

  • Grok & FSD will merge into one neural network. This means that the LMM should be trained natively on the FSD data (and not merely have access to it through some kind of API or interface); see the sketch at the end of this description.

As you can tell from the descriptions, it doesn't matter whether the LLM is actually called Grok once it's integrated into the Tesla vehicles.

Also, it's enough that "some" Tesla vehicles have those features (not necessarily all).

Hope the descriptions make sense; I may edit them if someone points out the need in the comments.
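To make the last three options concrete, here is a minimal Python sketch. Everything in it is hypothetical (the class and method names are mine, not Tesla's or xAI's): an assistant that drives the car through a narrow API could satisfy the navigation and voice-control options, while only a single jointly trained network would satisfy the merge option.

```python
# Hypothetical sketch of the distinction between the options above.
# None of these names correspond to a real Tesla or xAI API.

class FsdInterface:
    """Stands in for a driving stack exposed to an assistant via an API."""

    def set_destination(self, place: str) -> None:
        print(f"Routing to {place}")  # would satisfy "control navigation"

    def request_maneuver(self, maneuver: str) -> None:
        print(f"Executing {maneuver} when allowed")  # "voice control"


class AssistantOverApi:
    """Grok and FSD remain separate models; the LLM only calls the API.

    This setup could resolve the navigation and voice-control options YES,
    but would NOT count as "merged into one neural network".
    """

    def __init__(self, fsd: FsdInterface) -> None:
        self.fsd = fsd

    def handle(self, utterance: str) -> None:
        # A real assistant would parse intent with the LLM; hardcoded here.
        if utterance.startswith("navigate to "):
            self.fsd.set_destination(utterance.removeprefix("navigate to "))
        elif utterance == "turn left":
            self.fsd.request_maneuver("a left turn")


class MergedModel:
    """What the merge option asks for: one set of weights trained natively
    on both language and FSD driving data, not two models behind an API."""

    def forward(self, text_tokens, camera_frames):
        ...  # a single network consuming both modalities end to end


assistant = AssistantOverApi(FsdInterface())
assistant.handle("navigate to the nearest Supercharger")
assistant.handle("turn left")
```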


The latest update about Grok is quite impressive considering how young and small the team is.

https://youtu.be/4Ot5HLKhyVw?si=Oc_ZB5Zl0GN7qgFT

Grok & FSD will merge into one neural network

This requires that a version of Grok be trained on the data, but not necessarily that the resulting model is the one that controls FSD in commercial Tesla vehicles, correct?

@HarrisonNathan if I understand correctly, yes: there will be a third model that has its roots in the FSD and Grok data


@SimoneRomeo I think they will do that.

@HarrisonNathan I think it's a no-brainer. Besides making FSD more user friendly, it would make it so much better once it learns abstract concepts about humans and society (or even just traffic rules 😂😂). Let's see if they'll make it by the end of next year. Hope so

@SimoneRomeo It does lend itself to "rogue AI jams every major road on Earth as part of robot takeover" fantasies, though.

@HarrisonNathan ahaha, right

@HarrisonNathan I mean, in terms of intelligence I don't know if it should be much different. The model should be smart based on its parameter count and how much compute was used to train it, regardless of the amount of data or whether it was trained only on camera footage or also on internet text. It makes me wonder what a conscious FSD would be like if it couldn't talk. Would it try to communicate with us somehow? Ahah, weird thoughts
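For context on the parameters-and-compute claim above, the standard reference point is the Chinchilla scaling fit (Hoffmann et al., 2022), which predicts loss from parameter count and training tokens. The constants below are the fit published for that paper's setup; treat this as an illustrative sketch, not a claim about Grok or FSD specifically.

```python
def chinchilla_loss(n_params: float, n_tokens: float) -> float:
    """Predicted loss L(N, D) = E + A / N**alpha + B / D**beta.

    Constants are the fit reported by Hoffmann et al. (2022); they are
    specific to that training setup and used here only for illustration.
    """
    E, A, B = 1.69, 406.4, 410.7   # irreducible loss and fit coefficients
    alpha, beta = 0.34, 0.28       # diminishing returns in params and data
    return E + A / n_params**alpha + B / n_tokens**beta


# Chinchilla itself: 70B parameters trained on 1.4T tokens -> ~1.94
print(round(chinchilla_loss(70e9, 1.4e12), 2))
```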

@SimoneRomeo I don't think any current model actually "tries" to communicate. That is, there is no decision making involved, and it doesn't have any motive to convey information. Probably the number one mistake I see people making when they talk about what an AI "knows" is to assume that its output represents an effort to tell them something, when really it has no reason to care about them or what they think of it. I'm unable to see how general goal-oriented behavior would spontaneously emerge, but I won't rule it out entirely.

@HarrisonNathan yeah, I'm not saying current AI models try to communicate. I mean, we are totally different as living beings from current AI models because we have one major goal - staying alive. This goal has shaped our evolution and helped us evolve the brains we have - to survive. That being said, it's crazy that our evolution led us to ponder the meaning of life. And somehow this search for meaning drives us as much as other more basic needs. Does it give us a competitive advantage? Was it obvious that it would emerge because of the brains we have? Could such a feature evolve in AI models? Could it evolve spontaneously from an FSD neural network once it gets a brain big enough? And if so, what would it do? I'd expect it'd try to communicate with us, as we created it. I think these thoughts are as fascinating as they are spooky

@SimoneRomeo My naive guess is that it probably requires some more complex architecture that allows the model to make choices about what it thinks about.

@HarrisonNathan maybe. But what does that mean exactly? What's it made of, if it's not part of our neural network? And how did this structure evolve? Or when? Do fish have it? What about whales, elephants or monkeys? Or what about Homo erectus? Or hunter-gatherers?

@SimoneRomeo I'm of a mind that all vertebrates likely have a subjective experience very similar to our own, and the only thing that sets us apart is an instinctive capacity for complex language. We won't be able to prove this for some time, but objective measures of animal cognition only keep showing greater capabilities. As to why we have goal-oriented behavior, I think it's probably fundamental to why we have complex brains in the first place. We have to do a lot of complicated things in a complicated environment, and this goes way back, perhaps to the Cambrian.
