Resolves according to my subjective judgement, but I'll take opinions of those I respect at the time into account. As of market creation, people whose opinions I value highly include Eliezer Yudkowsky and Scott Alexander.
As of market creation, I consider AI safety important: making progress on it is good, and making progress on AI capabilities is bad. If I change my mind by 2028, I'll resolve according to my beliefs at the time.
I will take into account their output (e.g. papers, blog posts, people who've trained there) but also their inputs (e.g. money and time). I consider counterfactuals like "okay, OpenAI did X, but maybe someone else would have done X anyway" valid; but currently I think such considerations tend to be weak and hard to evaluate.
If I'm unconfident I may resolve the market PROB.
If OpenAI rebrands, the question will pass to them. If OpenAI stops existing I'll leave the market open.
I don't currently intend to bet on this market until at least a week has passed, and I intend to stop betting in 2027.
Resolution criteria subject to change. Feel free to ask about edge cases. Feel free to ask for details about my opinions. If you think markets like this are a bad idea feel free to convince me to delete it.
Steelman is that they produce horrible censorship policies (woke nonsense, yet it still teaches you to make meth or take over the world if you ask nicely) that make everyone realize decentralized is better than centralized.
Top researchers stop going to these places and people prefer open-source alternatives
At least Google tried not to be evil; it’s clear that the OpenAI crew very much plans to enforce their worldview on the populace.
cf. “the Bostrom plan” for totalitarian world govt.
Slogan of the AI Antichrist is “AI Safety”—only OpenAI gets to decide what is or is not “acceptable”
Your market description still has MIRI in it. Same with some other markets of this type.
@harfe Ah rats, thanks for pointing that out. Will fix.
I'm not clear why OpenAI is so much lower than DeepMind here.
@MartinRandall It matches my vague sense that OpenAI takes safety less seriously than DeepMind, but I don't remember details of where I got that vague sense.
By 2028, will I think OpenAI has been net-good for the world?