Microsoft Bing exhibited some spooky behavior after generative AI was added. Will it be revealed by the end of 2030 that at least one person involved (1) in Sydney's development or (2) in the decision to release Sydney was influenced by the fact that Sydney's behavior would help promote the concept of AI safety?
"That said, at this point, instead of just saying stop, I would say we should speed up the work that needs to be done to create these alignments. We did not launch Sydney with GPT-4 the first day I saw it, because we had to do a lot of work to build a safety harness. But we also knew we couldn't do all the alignment in the lab. To align an AI model with the world, you have to align it in the world and not in some simulation."

From Satya Nadella, quoted in https://www.wired.com/story/microsofts-satya-nadella-is-betting-everything-on-ai/
(I won't count this as resolving the question: they wanted to learn more about safety, whereas my criterion is promoting safety. But I thought it was relevant to share.)
@ChristopherKing This really seems like evidence against them having some vaguely principled pro-safety position.