Similar to https://manifold.markets/PeterWildeford/will-yann-lecun-change-his-mind-abo , but with a shorter timespan. Has to be public (e.g. Twitter), not in a private conversation. I'll subjectively judge edge cases (e.g. he shows up at an EAG or something).
⚠Creator became partially active by liking a resolution I had done, but still ignored pings to resolve the others.
📢Resolved to NO
I think this is an endorsement of alignment research: https://youtube.com/clip/Ugkx53svuXfIHa20spwBXzHZ6uK5emc-TFzT
@YoavTzfati The original market I based this on depends on the market-maker's subjective judgement. My subjective judgement says this isn't the kind of about-face I made the market for, so no.
(To add a bit more precision, based on the clip: he claims or implies that existing AI alignment methods are useful and that the remaining challenges are engineering problems, so further research on better alignment methods is implied to be superfluous.)
@NicholasKross I disagree with your interpretation; I think he's saying that alignment itself is an engineering challenge. Admittedly he gives almost no probability to us messing it up, but that's not because he thinks it's easy - it's because he thinks "no one will be stupid enough to build a superintelligence before they know how to align it"
Regarding the market you're basing this one on - I think its meaning is very different. You can endorse alignment research without thinking there's an existential risk from AI, and vice versa.
If you're going for the original market's meaning, I suggest you change this one's wording :)
@NicholasKross Can you clarify what 'alignment research' means? Do RLHF-style things count? Because if so, it would be kind of funny if he didn't endorse it (it's clearly economically useful right now).