How much will AI advances impact EA research effectiveness, by 2030?
Answer options and current probabilities:
- Dramatic Loss (<-100%): 1.4%
- Medium Loss (-100% to -20%): 1.4%
- Roughly Neutral (-20% to +20%): 16%
- Medium Gain (+20% to +100%): 23%
- Dramatic Gain (+100% to +1000%): 50%
- Enormous Gain (>+1000%): 8%

In 2030, how much will I think that AI advances since Jan 2023 have made EA researchers more productive?

AI advancements are happening quickly. This might be very good for our epistemics, very bad, or somewhere in between.

"% Effectiveness" here means roughly, "For every hour of research done by the EA community, will this produce X% more or less results?"

Effectiveness could decline by more than 100% if work became actively harmful. For example, maybe EAs become brain-hacked by some conspiracy-promoting AIs.
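As a rough sketch of the arithmetic (my own framing, not a formal resolution formula; E is the percentage change in effectiveness):

$$
\text{output per research hour (2030, with AI)} = \left(1 + \tfrac{E}{100}\right) \times \text{output per research hour (no post-2023 AI)}
$$

So E = +100 means each hour yields roughly twice the results, E = 0 means no change, E = -100 means the hour yields nothing of net value, and E < -100 means the hour is net harmful.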

If transformative AI arrives and all human research activity becomes effectively pointless, this question will resolve as ambiguous.

"EA Researchers" here refers to people developing strategy and writing blog posts. It does not include ML researchers using AI for direct ML experiments, as that is a very separate use case. Here, we're mainly interested in broad epistemic changes.
