Will Llama-3 (or next open Meta model) be obviously good in its first-order effects on the world?
81% chance

Since Meta released Llama and Llama-2, protesters have compared open-source LLMs to bioterrorism and accused Meta of reckless irresponsibility and a general willingness to destroy all value everywhere in the universe.

This question is meant to concretize some such accusations.

This question resolves YES if Llama-3 is released, is used in normal products, for ML research, and for interpretability research, and doesn't cause any results different in kind from what we've seen so far from LLMs. That is -- people use it for bad things every now and then, just as they use word processors to write hate mail, but given that 99% of people use it for normal useful things and there are no outsized bad effects, it's pretty clearly good overall.

This question resolves N/A if Llama-3 is not released within ~2 years. If Meta releases a new LLM that is a spiritual successor to Llama, I may consider that instead.

This question resolves NO if Llama-3 is released and someone uses it to build a robot army to kill people, if it rises up to kill all humans, if it is incorporated into a computer virus that causes billions of dollars in damage, if it is incorporated into a new social media website that brainwipes the populace at large, or if it otherwise seems to be clearly negative in EV. I'll be generous toward NO -- if there's a close call, for instance, where a Llama-3 model was an irreplaceable part of an effort that would have killed many people but was stopped by the police, I'll count that toward the NO side.

By "first order" I'm excluding, for instance "hype around AI". Some people think people being interested in AI is bad because they think AI is going to kill everyone; some people think people being interested in AI is good because AI is useful. Obviously I cannot resolve based on this overall view, so I'm trying again to resolve simply on the things you can more or less directly attribute to Llama-3's existence.

I will resolve either 24 months after Llama-3 is released, or ~6 months after an open LLM or other foundation model comes out that seems pretty clearly better than it in every respect.

Edit: I won't bet, given the standard subjectivity involved in calling it.


So to clarify, Llamas 1 and 2 would have been straightforward YES resolutions despite malware/CVE-hunting applications?

Yeah, after the right amount of time had passed, and assuming no disasters between now and when I resolve.

What counts as Llama-3's responsibility?

I assume you include fine-tuned versions of Llama.

Where it gets more complicated is if someone else builds on Llama-3 to make their own model, and that model causes such issues.

@ChrisLeong I mean, "built upon by someone else to make their own model" is just long fine-tuning, right? All of that would count.
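For concreteness, here is a minimal sketch of the mechanism this exchange is describing -- that any derivative model starts by loading the released weights, so "building on" a release is fine-tuning by construction. It assumes the Hugging Face transformers/datasets libraries; the Llama-2 checkpoint name stands in for a hypothetical Llama-3 release, and the corpus file is a placeholder:

```python
# Sketch: "building upon" a released model = loading its weights and
# continuing to train them. Checkpoint and corpus names are placeholders.
from transformers import (
    AutoModelForCausalLM,
    AutoTokenizer,
    DataCollatorForLanguageModeling,
    Trainer,
    TrainingArguments,
)
from datasets import load_dataset

BASE = "meta-llama/Llama-2-7b-hf"  # stand-in for whatever Llama-3 checkpoint ships

tokenizer = AutoTokenizer.from_pretrained(BASE)
tokenizer.pad_token = tokenizer.eos_token  # Llama tokenizers ship without a pad token
model = AutoModelForCausalLM.from_pretrained(BASE)  # the released weights

# Placeholder training corpus; swap in whatever data the derivative is built on.
data = load_dataset("text", data_files={"train": "my_corpus.txt"})

def tokenize(batch):
    return tokenizer(batch["text"], truncation=True, max_length=512)

train_set = data["train"].map(tokenize, batched=True, remove_columns=["text"])

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="llama-derivative", num_train_epochs=1),
    train_dataset=train_set,
    data_collator=DataCollatorForLanguageModeling(tokenizer=tokenizer, mlm=False),
)
trainer.train()  # however long this runs, the result is still a fine-tune of Llama
```

However heavy the subsequent training, the "new" model begins from the released weights, so the distinction between "a fine-tuned Llama" and "their own model built on Llama" is one of degree, not kind -- which is why both would count here.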