
I don't really trust people to self-evaluate on this, especially not in public, where it might affect their future funding.
Since they have not really produced anything yet, I am not sure I will even agree with them on what constitutes meaningful progress.
Alignment research I consider meaningful includes most of Anthropic's work, ARC's work, and a lot of the work coming out of the OpenAI and DeepMind safety teams. I am generally pretty negative on MIRI-style alignment work, which is unlikely to be relevant to this market.
I will be fairly generous with "meaningful progress": a paper introducing a novel algorithm, a new interpretability result, or any kind of theoretical result (as long as it's definitely relevant; I won't be generous about that). A paper making incremental improvements on top of existing work (e.g. a better version of constitutional AI) might count, but it will be subject to my inscrutable whims.