Will "Summary of our Workshop on Post-AGI Outcomes" make the top fifty posts in LessWrong's 2025 Annual Review?
11% chance

As part of LessWrong's Annual Review, the community nominates, writes reviews, and votes on the most valuable posts. Posts are reviewable once they have been up for at least 12 months, and the 2025 Review resolves in February 2027.

This market will resolve to 100% if the post Summary of our Workshop on Post-AGI Outcomes is one of the top fifty posts of the 2025 Review, and 0% otherwise. The market was initialized to 14%.


@Krantz One of these talks is for you.

Ryan Lowe of the Meaning Alignment Institute spoke on "Co-Aligning AI and Institutions". Their Full-Stack Alignment work argues that alignment strategy needs to consider the institutions in which AI is developed and deployed. He also discussed "Thick Models of Value", outlining the practical problems with specifying values through unstructured text (e.g. system prompts or constitutions) or preference orderings.

https://www.youtube.com/watch?v=8AUDmo4F3_A

@Chumchulum I appreciate you thinking about me.

If you think there's a lecture I'll find important, you should add it to this list to see how it compares to the other videos I'm trying to draw attention towards.

https://manifold.markets/Krantz/what-will-i-believe-is-the-most-imp?r=S3JhbnR6
