Resolves according to my subjective judgement, though I'll take into account the opinions of people I respect at the time. As of market creation, people whose opinions I value highly include Eliezer Yudkowsky and Scott Alexander.
As of market creation, I believe AI safety is important: making progress on it is good, and making progress on AI capabilities is bad. If I change my mind by 2028, I'll resolve according to my beliefs at the time.
I will take into account their outputs (e.g. papers, blog posts, people who've trained at them) but also their inputs (e.g. money and time). I consider counterfactuals valid, like "okay, OpenAI did X, but maybe someone else would have done X anyway"; but currently I think those considerations tend to be weak and hard to evaluate.
If I'm unconfident I may resolve the market PROB.
If OpenAI rebrands, the question will pass to them. If OpenAI stops existing I'll leave the market open.
I don't currently intend to bet on this market until at least a week has passed, and I intend to stop betting in 2027.
Resolution criteria subject to change. Feel free to ask about edge cases. Feel free to ask for details about my opinions. If you think markets like this are a bad idea feel free to convince me to delete it.
Similar markets:
https://manifold.markets/philh/by-2028-will-i-think-miri-has-been
https://manifold.markets/philh/by-2028-will-i-think-deepmind-has-b
https://manifold.markets/philh/by-2028-will-i-think-conjecture-has
https://manifold.markets/philh/by-2028-will-i-think-anthropic-has
https://manifold.markets/philh/by-2028-will-i-think-redwood-resear
The steel man is that they produce horrible censorship policies (woke nonsense, plus the model still teaches you to make meth or take over the world if you ask nicely) that make everyone realize decentralized is better than centralized.
Top researchers stop going to these places, and people prefer open-source alternatives.
At least Google tried not to be evil; it's clear that the OpenAI crew very much plans to enforce their worldview on the populace.
@MartinRandall It matches my vague sense that OpenAI takes safety less seriously than DeepMind, but I don't remember details of where I got that vague sense.