
The Case for Taking Less Risk with AGI
tl;dr: We cannot know for certain, but I lean towards the view that Sam was prioritizing AI commercialization over safety, while Ilya wanted to slow down development and place a greater emphasis on safety.
I think the board (specifically Ilya) believed that Sam was moving too quickly to commercialize AI models without ensuring safety came first. This concern seemed to be shared by others, including the founders of Anthropic, who left OpenAI under similar circumstances. There were also disagreements among leadership over how to manage the company. For instance, see what transpired with Willner below.
This leads me to conclude that there was a significant divide within OpenAI over the pace of development and over whether enough resources were dedicated to ensuring the safety of its AI systems. Ilya convinced the board that the company should move more cautiously and allocate resources differently. The board made its decision.
Excerpts from articles that influenced my opinion:
Anthropic's founders left OpenAI because they believed the company was moving too fast to commercialize its technology.
Executives disagreed over how many people OpenAI needed to moderate its consumer-facing products. By the time OpenAI launched ChatGPT in late 2022, the trust and safety team numbered just over a dozen employees, according to two people with knowledge of the situation… Some employees worried OpenAI didn’t have enough employees tasked with investigating misuse of the platform… By July, when Willner left the company, citing family reasons…
Under Willner, a former Meta Platforms content moderation executive, the trust and safety team sometimes ran into conflicts with other executives at OpenAI. Developers working on apps built on OpenAI’s technology had complained, for instance, that the team’s vetting procedures took too long. That caused executives to overrule Willner on how the app review process worked.
Sam Altman alluded to a recent technical advance the company had made that allowed it to “push the veil of ignorance back and the frontier of discovery forward.”
In a sign Sutskever had become increasingly concerned about AI risks, he and Leike in recent months took on the leadership of a new team focused on limiting threats from artificial intelligence systems vastly smarter than humans. In a blogpost, OpenAI said it would dedicate a fifth of its computing resources to solving threats from “superintelligence,” which Sutskever and Leike wrote “could lead to the disempowerment of humanity or even human extinction.”
At least two employees asked Sutskever—who has been responsible for OpenAI’s biggest research breakthroughs—whether the firing amounted to a “coup” or “hostile takeover,” according to a transcript of the meeting. To some employees, the question implied that Sutskever may have felt Altman was moving too quickly to commercialize the software—which had become a billion-dollar business—at the expense of potential safety concerns. “You can call it this way,” Sutskever said about the coup allegation. “And I can understand why you chose this word, but I disagree with this. This was the board doing its duty to the mission of the nonprofit, which is to make sure that OpenAI builds AGI that benefits all of humanity.” AGI stands for artificial general intelligence, a term that refers to software that can reason the way humans do. (Another person said Sutskever may have misinterpreted a question related to a potential hostile takeover of OpenAI by other parties.)
Altman had spoken to Toner about a paper she co-wrote for Georgetown's Center for Security and Emerging Technology, where she is a director of strategy, because it seemed to criticize the company's safety approach and favor rival Anthropic, according to The New York Times. Toner defended her decision, according to the report. The disagreement over the paper led senior OpenAI executives, including chief scientist Ilya Sutskever, to discuss whether Toner should be removed as a board member. After those discussions, Sutskever joined the rest of the board in firing Altman, communicating it to the former chief executive himself over a Google Meet conference call.