Will OpenAI or DeepMind make a public announcement that they intend to delay some work that would improve AI capabilities due to safety/alignment concerns?
Resolved NO (Aug 25)

Some believe it would be desirable for research that makes AI more powerful to advance more slowly, so that work focused on making machine learning systems safer, more reliable, more predictable, and more consistently aligned with human goals has time to catch up.

This market will resolve YES if, before the end of June 2023, OpenAI or DeepMind publicly says that with this general issue in mind they intend to slow down or delay some of their AI research.

See some discussion of this issue here.

predicted NO

Does anyone know of a statement suggesting that this should be resolved YES? If not, I'll resolve NO.

They've already said this about GPT-4, isn't this supposed to resolve YES?

predicted NO

@YoavTzfati Can you link to the statement?

predicted NO

@RobertWiblin No time to find it, but from the discussion below it seems like it wouldn't count anyway (delaying release, not training).

OpenAI made no comment to journalists about the six-month moratorium proposed in the open letter, or about the more radical measures proposed in Eliezer Yudkowsky's article in Time magazine. They then released a safety policy that made no mention of Yudkowsky's existential-risk concerns about AGI. While a "no comment" is somewhat hard to interpret, if they had plans to do what this market describes, they probably would not have done these things.

Sam Altman says they already hold onto models for months to make sure they are safe. This market should be resolved YES already, or we should define what "slow down" means.

predicted NO

@DouglasSchonholtz That's just slowing down the public release of the models, which is helpful for safety, but it's not the same as slowing down (internal) research, which I believe is what this market is referring to.

I could see it happening eventually, but by June? That's pretty soon; it seems quite unlikely.

predicted NO

https://time.com/6246119/demis-hassabis-deepmind-interview/
"DeepMind’s CEO Helped Take AI Mainstream. Now He’s Urging Caution"
It is in this uncertain climate that Hassabis agrees to a rare interview, to issue a stark warning about his growing concerns. “I would advocate not moving fast and breaking things,” he says, referring to an old Facebook motto that encouraged engineers to release their technologies into the world first and fix any problems that arose later. The phrase has since become synonymous with disruption. That culture, subsequently emulated by a generation of startups, helped Facebook rocket to 3 billion users. But it also left the company entirely unprepared when disinformation, hate speech, and even incitement to genocide began appearing on its platform. Hassabis sees a similarly worrying trend developing with AI. He says AI is now “on the cusp” of being able to make tools that could be deeply damaging to human civilization, and urges his competitors to proceed with more caution than before. “When it comes to very powerful technologies—and obviously AI is going to be one of the most powerful ever—we need to be careful,” he says. “Not everybody is thinking about those things. It’s like experimentalists, many of whom don’t realize they’re holding dangerous material.” Worse still, Hassabis points out, we are the guinea pigs.


https://archive.ph/uvBUI
"Google Calls In Help From Larry Page and Sergey Brin for A.I. Fight

A rival chatbot has shaken Google out of its routine, with the founders who left three years ago re-engaging and more than 20 A.I. projects in the works."

Mr. Pichai has tried to accelerate product approval reviews, according to the presentation reviewed by The Times. The company established a fast-track review process called the “Green Lane” initiative, pushing groups of employees who try to ensure that technology is fair and ethical to more quickly approve its upcoming A.I. technology.

Mixed signals from Google/DeepMind.

@MaxG I think it's less mixed signals and more "DeepMind, and more specifically Hassabis, wants to slow down, but Google doesn't."

Haven't they already basically said this? E.g., isn't this part of the reason for not releasing models publicly?

predicted YES

@AndyMcKenzie If anything, with stability.ai and other competitors, I think they would want more explanation for why they are not releasing models publicly.

predicted NO

@AndyMcKenzie Delaying work isn't the same as delaying release, though.

I think they don't release models because people could misuse them, like using GPT-3 to generate spam or misinformation, which I wouldn't classify as a safety/alignment concern.

Also, they want to monetize.

I don't think it would make sense for any company that takes alignment seriously to slow down, since there are always companies that don't take it seriously and will not slow down.

Stag hunt problem all over again.

Probably no, although OpenAI does like to push the Overton window. Buying NO partly to incentivize YES :D

I don’t see what incentive they would have to do this. Furthermore, I believe public pressure and concern about fast AI progress won’t be high enough by next year.

The smarter the person and the more they know about machine learning, the less seriously they take the babble about “alignment.”

https://twitter.com/ID_AA_Carmack/status/1368255824192278529?s=20&t=wmB6TvHbRh6EI9vJISjuig

Either way, 2023 is way too early, and almost all of this is irrelevant until the cost of silicon compute is cheaper than human compute (~2030s), robotics catches up (???), or silicon compute on Earth exceeds human compute (~2040s-2050s).

As is, the field has no content or substance; that said, OpenAI regularly slows down launches to achieve its own goals (avoiding memes, pretending races and biological sexes do not differ, etc.), so this could arguably already resolve YES for the DALL-E rollout.

@Gigacasting Is Mr. Carmack your example of the smartest person you know? Stephen Hawking did take it seriously.

No? This is clearly not true, unless you want to say people like Stuart Russell don't know ML.
Most people in ML don't think much about alignment, so of course you are going to find lots of smart people who know a lot about ML but don't talk about it.
But I don't think people stop talking about alignment as they learn more about ML (in some cases, like Connor Leahy, they started caring more, especially after GPT-3), and there are clearly also smart people who are worried about alignment.
Plus, Anthropic exists and is an alignment org, sort of.
And people like Demis Hassabis and Sam Altman (who, being the CEOs of the companies in question, are clearly relevant) do care about alignment; it's just that Demis wants to slow things down later, at some unspecified future point where it looks to him like we are near AGI, and I'm not sure what Sam thinks about alignment exactly, but he seems to expect it to be easy or something like that.
(Also, this is more of a nitpick, but it's not obvious that the cost of silicon compute isn't already cheaper than human compute; estimates of human compute have very wide error bars.)
