I'm mildly interested in having someone distill rationality lessons from planecrash and such onto LessWrong; see here for why. I just added a $300 bounty for someone doing this in a way that gets 100 karma.
For a post to qualify for this market and the bounty:
- The post MUST be largely about a rationality lesson or AI alignment insight.
- The ideas in the post MUST explicitly come from something originally posted on glowfic.com. At least 35% of the text should be quoted material or thoughts directly derived from it.
- The post MUST NOT be by Eliezer Yudkowsky. Authors of the source material can be anyone.
- The post will likely be tagged dath ilan or Fiction (topic), but this is not required.
- The post MUST have been posted after market creation.
The market resolves YES as soon as I become aware that a qualifying post has accumulated 100 karma, or NO if I check LessWrong and this market's comments after the close date and see no qualifying posts. I may ask the LW team to check for suspicious voting behavior and disqualify posts accordingly.
Examples of "on-topic" posts that didn't get 100 karma:
- https://www.lesswrong.com/posts/eS7LbJizE5ucirj7a/dath-ilan-s-views-on-stopgap-corrigibility
- https://www.lesswrong.com/posts/AvANsxR88iiZziKPt/how-dath-ilan-coordinates-around-solving-alignment