Will OpenAI announce a dedicated grant program for external AI alignment research projects? (2024)
Resolved YES (Dec 14)

This market predicts whether OpenAI will announce a dedicated grant program specifically for external AI alignment research projects by December 31, 2024.

Resolves YES if:

  • OpenAI officially announces or confirms the establishment of a dedicated grant program for external AI alignment research projects on or before December 31, 2024.

Resolves NO if:

  • OpenAI does not announce a dedicated grant program for external AI alignment research projects by December 31, 2024.

Resolves as NA if:

  • OpenAI ceases to exist, undergoes significant restructuring, or experiences any other event that renders the original intent of the market unclear or irrelevant.

Definitions:

  • "Dedicated grant program" refers to a funding initiative, formally announced and organized by OpenAI, with the specific aim of supporting external research projects that focus on AI alignment. This program should have clear guidelines, application processes, and funding allocation criteria. The program must not have been announced before this market's creation.

  • "External AI alignment research projects" are research projects conducted by individuals or organizations outside of OpenAI, focusing on understanding and solving the alignment problem in AI, ensuring that AI systems reliably learn and follow human values and intentions.

  • "OpenAI" mainly refers to OpenAI as an organization but also includes announcements made by executives or heavy investors. If OpenAI is acquired, the term refers to the acquiring entity. If OpenAI dissolves, this market resolves NA.


🏅 Top traders

1. Ṁ2,394
2. Ṁ74
3. Ṁ50
4. Ṁ47
5. Ṁ43
bought Ṁ2,500 of YES

@Mira I'm pretty sure this fulfils all the requirements of the market

https://openai.com/blog/frontier-model-forum-updates

@firstuserhere I'll close the market while I read this.

@firstuserhere

Resolves YES if: OpenAI officially announces or confirms the establishment of a dedicated grant program for external AI alignment research projects on or before December 31, 2024.

It's clearly an announcement. It occurred before December 31 and after the creation of this market. Is it a dedicated grant program for external AI alignment research projects?

Today Forum members, in collaboration with philanthropic partners, the Patrick J. McGovern Foundation, the David and Lucile Packard Foundation, Eric Schmidt, and Jaan Tallinn commit over $10 million for a new AI Safety Fund to advance research into the ongoing development of the tools for society to effectively test and evaluate the most capable AI models.

It's a funding initiative formally announced by OpenAI. The post mentions it will issue its first "call for proposals" in a few months, which presumably will come with guidelines, application processes, and funding allocation criteria.

One could argue it doesn't yet have these, so it doesn't yet constitute a grant program. But this market only requires an announcement, and we have that. And they make it pretty clear what they want: model testing, evaluation, and red teaming. So the funding allocation is going to be for testing models.

So I think it counts as a "dedicated grant program" as defined above.

Over the past year, industry has driven significant advances in the capabilities of AI. As those advances have accelerated, new academic research into AI safety is required. To address this gap, the Forum and philanthropic partners are creating a new AI Safety Fund, which will support independent researchers from around the world affiliated with academic institutions, research institutions, and startups.

It satisfies the "external", "research", and "projects" parts of "external AI alignment research projects".

One could argue: "AI safety is a different thing from 'The Alignment Problem'. They're going to test models for naughty language, or finding porn; but they're likely going to ignore deception, or agents, or all sorts of other issues. So this doesn't constitute Alignment Research."

The Frontier Model Forum writes in a July post:

https://openai.com/blog/frontier-model-forum

Advancing AI safety research: Support the AI safety ecosystem by identifying the most important open research questions on AI safety. The Forum will coordinate research to progress these efforts in areas such as adversarial robustness, mechanistic interpretability, scalable oversight, independent research access, emergent behaviors and anomaly detection. There will be a strong focus initially on developing and sharing a public library of technical evaluations and benchmarks for frontier AI models.

Scalable oversight and mechanistic interpretability both appear in a taxonomy of AI alignment agendas on the AI Alignment Forum:

https://www.alignmentforum.org/posts/zaaGsFBeDTpCsYHef/shallow-review-of-live-agendas-in-alignment-and-safety

So I think their definition of AI safety intersects with AI alignment.

So we have "External AI alignment research projects".

So this market resolves YES.

predicted YES

@firstuserhere yes, that's the one I was thinking of when I made this market. The AI Safety Fund technically qualified, but I was also expecting Superalignment to have a program. I'll have to be more careful about defining it next time...

bought Ṁ340 of NO

Who's gonna fund it?
