Resolves to whichever of these is closest. If there are multiple separate reasons, I may resolve to % on several but strongly disfavour this.
A group of mods are going to resolve this, because it's pretty subjective. Currently the mood of mods is that if it were to resolve today (2023 Nov 30) it would resolve to "we don't know". I don't intend to trade on this market any more.
A month ago I made a version of this question that resolves via public Manifold polls rather than mod polls, with independent options instead of resolving to one. The polls for it are now live, if anyone would like to vote and help us see Manifold's overall consensus so far on which factors were significant.
One big possibility being proposed is that Altman tried to oust another board member, or some similar power struggle, and it backfired. That could be interpreted as misalignment between profit and nonprofit, if Altman was trying to oust members who were less focused on near-term funding for scaling research, or some such. Would it resolve to that, or to 'other', in the sense that a failed ousting is far enough from the current options that it resolves 'other'?
@TheBayesian But why was he trying to oust Helen? Because of safety disagreements (and her publishing about them). No safety disagreements, no board action.
@Joroth It doesn't make any difference, if the reason the board acted was not because of the safety disagreements, but because of perceived bad behavior on Sam's part.
Then the reason is the bad behavior, not safety disagreements.
@DavidBolin Depends on how credulous you are about stated reasons, I suppose.
Bad behavior is fairly subjective; the winning choice right now describes a reason not covered by another answer. If not for safety concerns, the behavior wouldn't be perceived as "bad".
On a more traditional board, Sam being really timid about pushing the limits of AI models would be considered bad behavior. It would seem odd if the market resolved this way, but I'm still a newbie 🤷‍♂️
@jacksonpolack I think Nathan’s planning on picking just one though.
https://manifold.markets/NathanpmYoung/why-was-sam-altman-fired-mod-free-r#qg3QGszz5OEMFDeZs3wL
This at least gives me some confidence that if @NathanpmYoung and I have the same conception of the facts (and nothing changes by the time this resolves) then it would resolve AI Safety.
If he was not consistently candid, I'm understanding that to resolve to "other". (Like if it wasn't about a specific dishonesty.) I guess unless the dishonesty seems to be clearly forwarding profit motives. Curious if that's wrong.
@KatjaGrace Yeah I guess, but like we have to be sure it was just that. Which I dunno, is pretty wild.
https://twitter.com/eshear/status/1726526112019382275
"PPS: Before I took the job, I checked on the reasoning behind the change. The board did *not* remove Sam over any specific disagreement on safety, their reasoning was completely different from that. I'm not crazy enough to take this job without board support for commercializing our awesome models."
Saw this interview with Emmet Shear (new interim CEO of OpenAI) and he's a full-on doomer, so now I buy that it was about safety
https://x.com/eshear/status/1673016996555014147?s=20
Why option semantics can be tricky.
Let's say (hypothetically) I want to bet on the possibility that the conflict was about the distribution of strategic focus between productionization/monetization vs deep research. What is the option I should select?
Some current trends exogenous to this question would tag that as an AI safety issue -- that Sam was more focused on profit or products, and not enough on risks. However, if I believe that this tagging is based on a poor assumption or some contingent facts, then it is less clear whether I should bet on the AI safety option. (As an example, it's possible that OpenAI focusing less on products and more on research is more likely to scale their models faster without early feedback from reality, increasing the chances of sharper emergence of takeover-worthy artificial agency.)
@TJ I think that depends on the deep research. Most situations like that sound like AI safety. But if it was capabilities research, then it would probably resolve to misalignment.
Lol... see this is the problem with a market where the creator:
-Controls the options
-Is the only one who can add options
-Bets in their own market
AI safety was the "closest option" (at least judging by the near-consensus in the larger market), until you created a more specific option which was closer. Conveniently you're holding a large NO share on AI safety.
Don't think I'll be giving a high review on this market, and I'm surprised to see this from a moderator.
@benshindel there are big advantages to the creator controlling the options though, like there are fewer silly options and fewer will resolve N/A (making the opportunity cost of bets higher). it is unfortunate that the (currently) winning option got added after all the others tho
@TheBayesian It implies slowing down or stopping current capabilities work, which isn't inherent to an "alignment coup" at all.
@benshindel That's quite the accusation. Have you even read it?
"Misalignment between profit and non-profit goals that isn't covered by another explicit answer"
If it's misalignment related to AI safety then it resolves AI safety.
@NathanpmYoung Your bio says “tell me if I have done something you think is a bad norm.” This is a bad norm.
@benshindel I am not paid to do this, nor do I get much benefit at all, other than closing markets sooner. I felt attacked and still don't understand what your issue is. I don't feel obliged to explain myself to you if our interactions are like this. I am a person, just like you, who doesn't like being accused of stuff. If you want to give me a bad rating, do so. I doubt I'll respond to you in future if our interactions keep on like this.
The other mods will choose the resolution to this market. If you can make a clear case, I'll happily dump my shares somehow, but that case should come at some cost to you, not just aspersions I don't understand.
@NathanpmYoung You’re welcome to dm if you don’t understand my issue with how you’ve run this market. I don’t personally care about profit (in fact, I’ve gained mana in this market) but your own bio explicitly says to raise situations like this to your attention. I probably won’t give you a bad review, tbh, because I think you’re acting in good faith. But you’re a moderator on this site so I think you should be held to a high standard when running markets.
@benshindel I agree that Nathan should be held to higher standards and often falls short, but I’m failing to see the issue here.