Will Yann LeCun *publicly* endorse AI alignment research in 2023?
Basic
19
Ṁ2274
resolved Jan 9
Resolved
NO

⚠Creator became partially active again by liking a resolution I had made, but still ignored pings to resolve the others.

📢Resolved to NO


@NicholasKross Please resolve. Thank you.

@YoavTzfati The original market I based this on depends on the market-maker's subjective judgement. My subjective judgement says this isn't the kind of about-face I made the market for, so no.

(To add a bit more precision, based on the clip: he claims or implies that existing AI alignment methods are useful and that the remaining challenges are engineering-based, so further research into better alignment methods is implied to be superfluous and unneeded.)

predicted YES

@NicholasKross I disagree with your interpretation; I think he's saying that alignment itself is an engineering challenge. Admittedly he assigns almost no probability to us messing it up, but that's not because he thinks it's easy - it's because he thinks "no one will be stupid enough to build a superintelligence before they know how to align it".

Regarding the market you're basing this one on - I think its meaning is very different. You can endorse alignment research without thinking there's an existential risk from AI, and vice versa.

If you're going for the original market's meaning I suggest you change this one's wording :)

@NicholasKross Can you clarify what "alignment research" means? Do RLHF-style things count? Because if so, it would be kind of funny if he didn't endorse it (it's clearly economically useful right now).
