Will mitigation of AI propaganda & "botpocalypse" concerns get $10m+ from EA funders before 2030?

In this question I am looking for grants that are SPECIFICALLY focused on helping society adapt to / mitigate the kinds of concerns mentioned in the ACX blog post "Mostly Skeptical Thoughts On The Chatbot Propaganda Apocalypse", and the recurring section called "Deepfaketown and Botpocalypse Soon" in Zvi's AI newsletter.

This will inevitably be a fuzzy category, but here's some stuff that doesn't count:
- any kind of technical work or lobbying/advocacy effort that's directly motivated by X-risk / superintelligence alignment / "AI notkilleveryoneism"
- anything primarily motivated by "AI not-say-bad-wordism" and harms like algorithmic bias, including using tools like RLHF to make AIs less likely to generate "misinformation"
- anything primarily about labor market impacts / economic disruption, or about the balance of military/strategic capabilities between competing nations
- attempts to mitigate the impact of far-superhuman persuasion abilities (i.e., trying to make a research lab more robust against "break out of the box"-style persuasion that would reliably work on even the smartest & most competent AI researcher, as opposed to the societal-level concerns below, which would be a big problem even if just 10% of people were susceptible)

Stuff that does count:
- efforts to address problems of humans being overly influenced by AI on a personal level (whether from ideological propaganda, or from virtual boyfriends/girlfriends, or from AIs scamming people, etc.)
- efforts to help human institutions adapt to an influx of bots (i.e., helping social media platforms ensure that users are really human, or helping government / legal systems resist being gummed up by bot-generated paperwork spam / vexatious litigation)
- attempts to certify / label / watermark AI-generated content as such, or otherwise come up with a systemic solution to deepfake-related concerns

Anyways, to resolve this question, in 2030 I will look at publicly available grantmaking documentation (like the OpenPhil grants database, for example), adding up all the grants between October 2023 and January 2030 that support "botpocalypse" mitigation, and resolve YES if the grand total exceeds ten million US dollars.
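To make the resolution arithmetic concrete, here's a minimal sketch of the check I'll be doing. The grants, funders, amounts, and "qualifies" labels below are entirely hypothetical examples (in practice I'll be eyeballing grant databases by hand and making judgment calls per the criteria above):

```python
from datetime import date

# Hypothetical example grants: (funder, date, USD amount, counts as "botpocalypse" mitigation?)
grants = [
    ("OpenPhil", date(2024, 6, 1), 3_000_000, True),   # e.g. AI-content watermarking work
    ("LTFF",     date(2025, 3, 15),  500_000, True),   # e.g. proof-of-humanity research
    ("SFF",      date(2026, 9, 1), 8_000_000, False),  # e.g. alignment work -- excluded above
]

WINDOW_START, WINDOW_END = date(2023, 10, 1), date(2030, 1, 31)
THRESHOLD = 10_000_000  # ten million US dollars

# Sum only qualifying grants made inside the resolution window.
total = sum(
    amount
    for funder, when, amount, qualifies in grants
    if qualifies and WINDOW_START <= when <= WINDOW_END
)

print("Resolves", "YES" if total > THRESHOLD else "NO (so far)", f"-- total: ${total:,}")
```

With these made-up numbers the qualifying total is $3.5m, so the market would (so far) resolve NO.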

"EA funders" means places like OpenPhil, LTFF, SFF, Longview Philanthropy, Founders Fund, GiveWell, ACX Grants, etc. Some example "EA-adjacent" funding sources that wouldn't count, even if their money goes directly to this cause area: Patrick Collison, Yuri Milner, the Bill & Melinda Gates Foundation, Elon Musk, Vitalik Buterin, Peter Thiel. This is obviously a fuzzy distinction (what if one of the aforementioned billionares becomes noticeably more EA-influenced by 2030? etc), but I'll try my best to resolve the question in the spirit of reflecting how the EA community has grown over time.

For markets about other cause-area-candidates (like PauseAI-style protests and human intelligence augmentation!), check out the "New EA Cause Area?" tag!
