In 2030, how much will I think that AI advances since Jan 2023 have made EA researchers more productive?
AI advancements are happening quickly. This might be very good for our epistemics, very bad, or somewhere in between.
"% Effectiveness" here means, roughly: "For every hour of research done by the EA community, does that hour produce X% more or fewer results than it would have without AI advances since Jan 2023?"
Effectiveness could decline by more than 100% if research became actively harmful. For example, EAs might be brain-hacked by conspiracy-promoting AIs.
If transformative AI arrives and all human research activities become effectively pointless, this question would resolve as ambiguous.
"EA researchers" here refers to people developing strategy and writing blog posts. It does not include ML researchers using AI to run ML experiments directly, which is a very different use case; here, we're mainly interested in broad epistemic changes.