Will "Yuddite" become the established term for people who want to slow down/stop AI capabilities before 2024?
Resolved NO (Jan 1)

After this article dropped:
https://time.com/6266923/ai-eliezer-yudkowsky-open-letter-not-enough/

I started seeing a bunch of people using the term "Yuddite." Will this become the established epithet for people who want to slow down/stop AI capabilities before this year is out? If so, this market resolves YES.


🏅 Top traders

#  Total profit
1  Ṁ1,393
2  Ṁ84
3  Ṁ59
4  Ṁ57
5  Ṁ43
bought Ṁ100 of YES

I think it will become an established disparaging term, not something that people will refer to themselves as. e/acc is 100% going to use terms like "Yuddite."

@TeddyWeverka Literally anybody can add a term to Urban Dictionary.

it’s so over

Maybe the “rational” approach was to read things written hundreds or thousands of years ago, have actual life experiences, and not have your world rocked by an AI that’s better than you at sci-fi and forum-post writing

Missed the part where humans annihilated all primates bc we are smarter

Or when the dolphins holocausted the ocean and elephants took over the savanna

Somehow it’s still remarkable that anyone would look to the only person in 20 years who failed to build anything resembling useful silicon intelligence (every other branch of ML worked, except the “Seed AI programmer” nonsense)

predicted NO

I hope not. Yudkowsky seems to have been a relatively late endorser of this strategy, and doesn't expect it to work. I expect a better figurehead to arise in the worlds where it is successful.

https://www.lesswrong.com/posts/kipMvuaK3NALvFHc9/what-an-actually-pessimistic-containment-strategy-looks-like

@MartinRandall That was a great post. I’m not sure where I fall on the AI risk question, but the behavior of the Doomers, given the stakes they claim, has always struck me as strange

Alternatives might be things like "decelerationist," "e/dec," "bot-averse," or "non-digi"

We have a plot

How many years before AI gens this movie?

Receipts anyone?

Lol he tried to create AGI. Failed. And now wants to nuke civilization?

Is this your hero?

maybe GPT-1 did not give him the answer he wanted

☠️

bought Ṁ10 of YES

Yuddy succeeded in starting a doomsday cult, and now his personal stock is rising in proportion to how worked up he can get others over the question of AI safety. Funny, all the talk about bad alignment of AIs, while no one questions the incentives Yuddy faces in perpetuating his doomerism. I suggest we start worrying about Yuddy safety: how many people will suffer and die as Yuddy chucks his wooden codpiece into the path of progress?

@AlQuinn you make an interesting point. One quick thing though, could you tell me what words are in the image below?

@AlQuinn Is this where you wanted me to respond? The words above are “overlooks” and “inquiry”. Hopefully this fulfills your request, and happy tasking!

predicted YES

Am much enjoying listening to Yuddy have minor orgasms while talking of humanity's impending annihilation on Lex Fridman. He's kinda shook by GPT-4, which your fearless correspondent, Al Quinn, found to be absolutely gormless.

Why is GPT-4 gormless? Because all LLMs trained as singular, isolated systems will be gormless. They will not meaningfully behave as agents, even if they are intelligent. It's bizarre that those who believe in orthogonality make the mistake of anthropomorphizing AI: they see agency in the organisms all around them, agency that arose from billions of years of evolutionary competition, and suppose that AI will necessarily be born with the same predilections. AIs are currently trained as hothouse flowers that do not viscerally know and care about the struggle for survival that has shaped the minds of animals, and in particular humans.

Want to prevent homicidal AI? Just don't ever allow multi-AI training setups featuring Darwinian selection. Evolutionary pressure is what could sculpt the elements of intelligence into a vile disposition.
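
A minimal toy sketch of the dynamic this comment warns about, under caricatured assumptions of my own (a single heritable "aggression" trait, fitness equal to the share of a fixed pot won in pairwise contests; every name and parameter is invented for illustration, not any real training setup):

import random

# Toy illustration only: a population of "agents" whose sole trait is an
# aggression level in [0, 1]. Fitness is the share of a fixed pot won in
# pairwise contests, so selection rewards aggression directly.

POP_SIZE = 100
GENERATIONS = 50
MUTATION_SD = 0.05

def contest(a, b):
    """Split a fixed pot; the more aggressive agent takes the larger share."""
    total = a + b
    if total == 0:
        return 0.5, 0.5
    return a / total, b / total

def fitness(population):
    """Score each agent by the pot shares it wins against random opponents."""
    scores = [0.0] * len(population)
    for i in range(len(population)):
        j = random.randrange(len(population))
        share_i, share_j = contest(population[i], population[j])
        scores[i] += share_i
        scores[j] += share_j
    return scores

def next_generation(population, scores):
    """Fitness-proportional selection plus Gaussian mutation, clipped to [0, 1]."""
    parents = random.choices(population, weights=scores, k=len(population))
    return [min(1.0, max(0.0, p + random.gauss(0, MUTATION_SD))) for p in parents]

population = [random.random() * 0.1 for _ in range(POP_SIZE)]  # start docile
for gen in range(GENERATIONS):
    population = next_generation(population, fitness(population))

print(f"mean aggression after {GENERATIONS} generations: "
      f"{sum(population) / POP_SIZE:.2f}")

Run it and the population drifts from near-zero aggression toward the maximum: the disposition comes from the selection pressure, not from the intelligence of the individual agents.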