After this article dropped:
https://time.com/6266923/ai-eliezer-yudkowsky-open-letter-not-enough/
I started seeing a bunch of people use the term "Yuddite." Will this become the established epithet for people who want to slow down or stop AI capabilities before this year is out? If so, this market resolves YES.
🏅 Top traders
# | Name | Total profit
---|---|---
1 | | Ṁ1,393
2 | | Ṁ84
3 | | Ṁ59
4 | | Ṁ57
5 | | Ṁ43
Established term: it is enough that it made Urban Dictionary.
Missed the part where humans annihilated all primates because we are smarter
Or when the dolphins holocausted the ocean and elephants took over the savanna
Somehow it's still remarkable that anyone would look to the only person in 20 years who failed to build anything resembling useful silicon intelligence (every other branch of ML worked, except for the "Seed AI programmer" nonsense)
I hope not. Yudkowsky seems to have been a relatively late endorser of this strategy, and doesn't expect it to work. I expect a better figurehead to arise in the worlds where it is successful.
@MartinRandall That was a great post. I’m not sure where I fall on the AI risk question but the behavior of the Doomers given the stakes they claim has always struck me as strange
Yuddy succeeded in starting a doomsday cult and now his personal stock is rising in proportion to how worked-up he can get others on the question of AI safety. Funny all the talk about bad alignment of AIs, while no one questions the incentives Yuddy faces in perpetuating his doomerism. I suggest we start worrying about Yuddy safety; how many people will suffer and die as Yuddy chucks his wooden codpiece into the path of progress?
@AlQuinn you make an interesting point. One quick thing though, could you tell me what words are in the image below?
@AlQuinn Is this where you wanted me to respond? The words above are “overlooks” and “inquiry”. Hopefully this fulfills your request, and happy tasking!
Am much enjoying listening to Yuddy have minor orgasms while talking of humanity's impending annihilation on Lex Fridman. He's kinda shook by GPT4, which your fearless correspondent, Al Quinn, found to be absolutely gormless.
Why is GPT4 gormless? Because all LLMs trained as singular, isolated systems will be gormless. They will not meaningfully behave as agents, even if they are intelligent. It's bizarre that those who believe in orthogonality make the mistake of anthropomorphizing AI: they see agency in the organisms all around them, which arose from billions of years of evolutionary competition, and suppose that AI will necessarily be born with the same predilections. AIs are currently trained as hothouse flowers that do not viscerally know or care about the struggle for survival that has shaped the minds of animals, and in particular, humans.
Want to prevent homicidal AI? Just don't ever allow multi-agent training regimes featuring Darwinian selection. Evolutionary pressure is what could sculpt the elements of intelligence into a vile disposition.