Is a solution to the AGI alignment problem possible by 2100?
Options:
Yes
No
Don't know / care. Show me the answers.

This is a poll to gauge what Manifold thinks the answer to the question in the title is.

Definitions for this question:

AGI: an artificially intelligent agent that is significantly better than humans at decision making and acting in its environment across a broad set of domains.

Alignment problem: getting such an agent to act in alignment with the goals of a group of humans (hopefully humanity as a whole).

"By 2100" is meant to limit resource input (no brute-forcing over an infinite time span), but it isn't intended as an exact cutoff.

This poll is in part inspired by this market: https://manifold.markets/Joshua/is-risk-of-extinction-from-ai-1-in?r=VG9ieTk2

Feel free to argue for or against in the comments. I may make and link derivative markets later.

Comments:

We need more time to solve the deep philosophical problems linked to AGI alignment.

Is there a sequence of actions available to us that would solve alignment? I think so.

Will we find out in time what this sequence of actions is? I dunno.

I guess this would require humans (possibly globally) to solve the problem of deciding what alignment should look like in the first place...
