Will it be revealed by 2030 that Bing Sydney's release was partially a way to promote AI safety?
6% chance

Microsoft Bing gained some spooky behavior after generative AI was added. Will it be revealed by the end of 2030 that at least one person involved (1) in Sydney's development or (2) in the decision to release Sydney was influenced by the fact that Sydney's behavior would help promote the concept of AI safety?

bought Ṁ0 of NO

This is Microsoft we're talking about; they can barely manage to maintain a working volume slider.

bought Ṁ20 of YES

"That said, at this point, instead of just saying stop, I would say we should speed up the work that needs to be done to create these alignments. We did not launch Sydney with GPT-4 the first day I saw it, because we had to do a lot of work to build a safety harness. But we also knew we couldn't do all the alignment in the lab. To align an AI model with the world, you have to align it in the world and not in some simulation."

From https://www.wired.com/story/microsofts-satya-nadella-is-betting-everything-on-ai/

(I won't count this toward resolution, since they wanted to learn more about safety while my criterion is promoting safety, but I thought it was relevant to share.)

Also see https://arstechnica.com/tech-policy/2023/06/report-microsoft-launched-bing-chatbot-despite-openai-warning-it-wasnt-ready/

predicts NO

@ChristopherKing This really seems like evidence against them having some vaguely principled pro-safety position.