How long until someone vibe codes a robot that accidentally kills them?
August 12, 2029
Before 2026: 4%
Before 2027: 8%
Before 2028: 15%
Before 2029: 22%
Before 2030: 41%

Inspired by this tweet.

  • Update 2025-05-06 (PST) (AI summary of creator comment): For resolution, it must be known or somehow evident that the robot involved was vibe coded. The act of vibe coding should be the culprit in the accidental death.


If the robot in the video had killed someone, would that count, or do we need more proof of the vibe it was coded with?

@NivlacM The question implies that the act of vibe coding should be the culprit. So even though the robot in the video could just as well have killed someone, I doubt it was vibe coded, and I wouldn't count that. We would need to know the robot was vibe coded, or it should be somehow evident that it was.

I hate the term "vibe coding". This shit is neither vibes nor coding. It's "randomly generating code with language models that aren't built for code" and that's what I'll call it forever.

@Gameknight Brains are not built for generating code either.

@Gameknight But yeah, vibe coding is bad.

@Ersagun Brains are literally built for adaptability. We were never "built" to be good at "logic" or "math" or "art", but we did it anyways.

LLMs are probabilistic word generators. They will confidently bullshit without a thought or a clue. This is what makes them unsuited for coding, which requires logic.

o1 can maybe be considered true AI on the strength of its reasoning ability, but it can still fail, especially on complex tasks that require even a decent memory.
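
(A minimal sketch of what "probabilistic word generator" means mechanically. The two-context lookup table and all the probabilities below are invented stand-ins for the neural network; only the weighted-sampling step is the actual decoding idea.)

```python
import random

# A toy "language model": a hand-made lookup table of next-token
# probabilities. A real LLM computes this distribution with a neural
# network over a vocabulary of ~100k tokens, but the decoding step is
# the same idea: sample the next token, weighted by probability.
next_token_probs = {
    "the robot": {"moved": 0.5, "stopped": 0.3, "exploded": 0.2},
    "robot moved": {"forward": 0.6, "quickly": 0.4},
}

def sample_next(context: str) -> str:
    """Pick one continuation at random, weighted by predicted probability."""
    probs = next_token_probs[context]
    tokens = list(probs.keys())
    weights = list(probs.values())
    return random.choices(tokens, weights=weights, k=1)[0]

print(sample_next("the robot"))  # "moved" about half the time
```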

@Gameknight Every organ in every animal is built for adapting to its environment, if you want to frame it that way, but evolution is aimless. And most animal brains aren't this teachable; they are adaptive enough for the animal's environment. Cat brains are perfectly adequate for a cat's environment: cats can learn a few things, but they cannot learn math. We just have a bigger, more teachable brain for some reason.

So for most animals, a brain's runtime adaptability (learning) doesn't include being able to code. And its compile-time adaptability (evolution) is aimless in general, so "built for adaptability" doesn't make much sense.

Also, the fact that current language models are optimized for probabilistic word generation doesn't make them more or less capable. (Remember, we are also just built to pass on our genes; other things are happy accidents. Repeat: evolution is aimless.) The architecture gives us clues about why they fail when they do, but when they work, they just work. They are less capable than most technical people right now; they might be better than most people in a few years, while still using the same architecture (or a weirder one that doesn't directly optimize for "general problem solving").

@Ersagun the human brain has language centers and the capacity to reason. Programming is translating reason into a specific language. Maybe we’re not built for programming but we built programming to conform to us.

@LiamZ For LLMs (and most other neural nets), the GPU typically does most of the work. So if you didn't know how computers are designed, you might think the GPU, and specifically the parts of a GPU that do very fast matrix multiplications, were designed for language, or "intelligence". But they were not: GPUs were initially designed for graphics work, gradually gained more general-purpose computing capabilities, and it turns out the massive parallelism they provide is very useful for ML applications. So "the brain has language centers" is a similar claim to saying "the computer has language centers" and pointing at the GPU. The brain can do language, yes, and languages evolved to suit the capabilities of brains. If every brain had the language capabilities of (say) Tolkien, our languages would look and sound very different today. So brains are not made for languages; languages are made to suit brains.
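
(A minimal NumPy sketch of the "mostly matrix multiplications" point: one transformer feed-forward sublayer at toy sizes with random weights. The dimensions and weights are invented for illustration; the point is that the computation is generic dense linear algebra, not anything language-specific.)

```python
import numpy as np

# Toy dimensions; real models use d_model in the thousands.
d_model, d_ff, seq_len = 8, 32, 4
rng = np.random.default_rng(0)

x = rng.standard_normal((seq_len, d_model))   # token embeddings
W1 = rng.standard_normal((d_model, d_ff))     # learned weight matrices
W2 = rng.standard_normal((d_ff, d_model))

# One feed-forward sublayer of a transformer:
# matrix multiply -> nonlinearity -> matrix multiply.
# Nothing here is language-specific; it is the same dense linear
# algebra GPUs were already doing for graphics.
h = np.maximum(x @ W1, 0.0)   # ReLU
y = h @ W2
print(y.shape)                # (4, 8)
```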

@Ersagun specialized region (similar to our in-built facial recognition), in case that's not clear.

@LiamZ I didn't say you cannot identify a brain region that is critical for language. I said it doesn't necessarily mean the region is designed for language.

@Ersagun I mean we shouldn’t use the term “designed for” at all when talking about evolution but we do have a region of the brain with a biological mapping that does specialize in language from evolutionary pressure.

@Ersagun similarly birds have a region for song.

@LiamZ So?

@LiamZ I think you get my point here

@Ersagun so LLMs are approximating our reasoning and language functions through statistics based on a large corpus of output from human brains. That approximation can work very well in many applications, but it's more true that code is designed for us and LLMs are designed to approximate our output.
