If a huge alignment effort is part of the reason for ASI having an OK outcome, will it involve a new AI paradigm?
45% chance · closes 2100

This resolves N/A if ASI does not have an OK outcome, or if a huge alignment effort is not part of the reason it does. Otherwise, it resolves YES if a new AI paradigm is part of the reason for ASI having an OK outcome, and NO if it is not.

Definition of OK outcome:

An outcome is "okay" if it gets at least 20% of the maximum attainable cosmopolitan value that could've been attained by a positive Singularity (a la full Coherent Extrapolated Volition done correctly), and existing humans don't suffer death or any other awful fates.

A huge alignment effort is an unusual amount of effort put specifically into alignment, as a separate project blocking capabilities, above business-as-usual, corresponding to a willingness to give up two years of capabilities lead time in favour of alignment; "making alignment your top priority and working really hard to over-engineer your system for safety".

If MIRI's agent foundations had worked perfectly, that would count; if there was a global ASI Project that took as long as it needed, that would count; if Anthropic RLHF'd Opus to support CEV, that would not count (the alignment is institutionally an optional extra on a capabilities project, aiming for an acceptable threshold of safety, rather than overengineering it).

A new AI paradigm is any AI paradigm that is not deep learning. This can include a paradigm already known to science; there would still be novelty in making it ASI-capable, since at present only deep learning appears likely to reach ASI first.
