Will I focus on the AI alignment problem for the rest of my life?
62% chance

Background:

I have spent about 2500 to 3800 hours on AI alignment since Feb 2022. This is a rough 75% confidence interval derived from a cursory inspection of activity data I have collected in that period (mostly browser and conversation history).

This works out to around 5.0 to 7.6 hours per day, which seems a little high to me, but I cast a wide net for what counts (anything done with the intention of reducing AI risk: thinking, reading, planning, talking, executing subgoals), so I'm not surprised.
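The per-day figures follow from dividing the hour estimates by the elapsed days. As a sanity check, here is a minimal sketch; the ~500-day window is inferred from the stated numbers (2500 / 5.0 = 500), not given directly in the text.

```python
# Back-of-the-envelope check of the hours-per-day range.
# elapsed_days is an assumption inferred from the stated figures
# (roughly Feb 2022 to mid-2023), not a value given in the post.
elapsed_days = 500

lo_hours, hi_hours = 2500, 3800
lo_per_day = lo_hours / elapsed_days
hi_per_day = hi_hours / elapsed_days

print(f"{lo_per_day:.1f} to {hi_per_day:.1f} hours/day")
```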

If this seems off to you, let me know whether you believe it would be worthwhile to perform a more rigorous analysis, and what exactly that would involve.

Resolution criteria:

  • Resolves YES if I remain committed to AGI notkilleveryoneism (whether directly or indirectly) until the end of my natural lifespan. Examples:

    • Doing technical research, engineering, advocacy, field-building, etc.

    • Broadly, anything aimed towards increasing "dignity points" counts.

  • Resolves NO if, for the rest of my life, I stop treating "what is going to reduce AI-related x-risk?" as a motivating factor in major career decisions.

    • This would be the case if I no longer consider AI risk a top personal priority.

    • Broadly, "losing will": whether due to lack of hope, interest, money, etc.

  • Resolves Ambiguous if I am alive but rendered incapable of contributing.

    • This probably won't happen, but to be prepared for the worst, I have notified the beneficiaries of my insurance policies of my wishes: to distribute my assets as they see fit towards meeting my family's needs, and to allocate the rest towards funding efforts to mitigate AI x-risk.

Please let me know if these criteria seem unclear or vague and I'll update them with the help of your suggestions. In particular, "focus on" is hard to judge (what if I'm doing something adjacent or only tangentially linked? what if I'm burnt out and do something unrelated for a few weeks to recharge? what is the upper bound on compromise against competing goals? what if I retire but stay casually involved on an ad-hoc basis?), so I'm accepting input on how flexible a definition to use for the purposes of this market.



Comments:

  • (1y) I think you should keep the market open much longer.

    • (predicted YES, 1y) @NicoDelon thanks, extended

  • (1y) How will you ensure someone resolves YES if you die while working on the problem?

  • (2y) End of history illusion. Betting NO.

  • (2y) What if AI alignment is solved in your lifetime?

    • (predicted YES, 2y) @ampdot Resolves YEEEES

    • (predicted YES, 2y) @ampdot Btw how do you define "AI alignment solved"? Have you written a post anywhere?
