In 2023, will OpenPhilanthropy post a blog post/research update describing concrete progress on AI alignment due to one of their grants?
Resolved NO (Jan 1)

Based on links posted here: https://www.openphilanthropy.org/research/

Concrete progress will count as any claim that a grant recipient has significantly furthered our understanding of one of Holden's important and actionable research questions, plus an explanation of how the grant recipient furthered our understanding relative to previous work: https://forum.effectivealtruism.org/posts/zGiD94SHwQ9MwPyfW/important-actionable-research-questions-for-the-most#A_high_level_list_of_important__actionable_questions_for_the_most_important_century

As far as I can tell, OpenPhil has never posted anything of this nature for the 'Potential risks of advanced AI' cause area, though it has done so for other areas such as farm animal welfare.

OpenPhil has made 134 grants worth $272 million in the 'Potential risks of advanced AI' cause area.


🏅 Top traders

#  Total profit
1  Ṁ76
2  Ṁ45
3  Ṁ36
4  Ṁ27
5  Ṁ26
@RobertCousineau bought Ṁ141 of NO

Betting against because of the phrasing of this question, not because I don't think OP's grants are useful.

Furthering progress on one of Holden's questions is a very narrow conception of useful alignment work.

@RobertCousineau Hey, do you think any of these qualify? Or is this just sharing relevant information?

predicted NO

@DismalScientist Just sharing, lazily, that nothing has counted yet.
