
It should come from a credible source; if Wikipedia would accept it, it's fine.
Update 2025-02-01 (PST) (AI summary of creator comment): Clarification on Resolution Criteria:
Providing a credible source, such as the OpenAI article, that confirms an accusation of a large language model being used for disinformation during the 2024 US Presidential Election will resolve the market to yes.
🏅 Top traders
# | Name | Total profit
---|---|---
1 | | Ṁ50
2 | | Ṁ24
3 | | Ṁ11
4 | | Ṁ5
5 | | Ṁ5
@EMcNeill Or since all that’s needed is an accusation, would it be enough to read: “some users on Twitter suggested the troll may have been using AI to write some of his comments”?
@EMcNeill For scale, it needs to be reasonable to refer to it as a large language model; anything above 2B parameters trained on at least 10B tokens is definitely in. Below that, we would need it to be referred to as a large language model by another source.
For the accusation question, assuming the article does not refute the claim, it would resolve YES.
@dmayhem93 Oh, I meant “scale” in terms of the number of people involved. Like, would it count if there was just some lone wolf accused of disinformation rather than, say, a big coordinated campaign. But it sounds like the answer is “yes”, and it’s good to know the info about LLM scale too!
@EMcNeill Got it, I thought you were referring to the original GPT 😅. Yeah if a single troll makes it into a reputable source I would resolve as YES.