Resolves based on general vibes and agreement. The marginalization doesn't have to be universal; marginalization within AI capabilities groups is enough.
For instance, currently some AI capabilities groups have published statements saying that maybe there's some existential risk and we vaguely need to be careful. If they retract those statements and argue that alignment people are crazy or incompetent, this question resolves YES.
Some capabilities people have already argued that alignment people are crazy or incompetent, so to resolve YES, they would have to start doing so to a greater degree, without also starting to defend alignment people to a greater degree.
I won't be betting in this market for objectivity reasons, but I suspect it will resolve YES, for the following reasons:
I call the relevant phenomenon "high-energy memes". I assume that people here are familiar with the concept of a meme: an idea that can be shared from person to person and spread throughout society. By "high-energy", I mean a meme that in some sense demands a lot of action, or shifts the political landscape a lot, or similar. For instance, one high-energy meme is "AGI will most likely destroy civilization soon"; taken seriously, it demands strong interventions on AGI development, and if such interventions are not taken, it recommends strong differences in life choices (e.g. less long-term planning, more enjoying the little time we have left).
One can create lots of high-energy memes, and most conceivable high-energy memes are false and harmful. (E.g. "if you masturbate then you will burn in hell unless you repent and strongly act to support our religion".) Furthermore, even if a high-energy meme originates from a source that is accurate and honest, it may be distorted as it spreads, and the original source may not be available, which can make it less constructive in practice.
Since high-energy memes tend to be bad, lots of social circles have developed protections to suppress them. But these protections also suppress important high-energy memes such as AGI risk. They also tend to be irrational and exploitable, and to shield the people in power from being held accountable.
(This model was originally written for a different context than AI safety, but it is partly inspired by AI safety.)