
This question resolves YES if OpenAI gives the broader AI alignment[1] community access to a model that is intended to be useful for alignment research before 2025-01-01, and NO otherwise.
[Context](https://openai.com/blog/our-approach-to-alignment-research):
> Future versions of WebGPT, InstructGPT, and Codex can provide a foundation as alignment research assistants, but they aren’t sufficiently capable yet. While we don’t know when our models will be capable enough to meaningfully contribute to alignment research, we think it’s important to get started ahead of time. Once we train a model that could be useful, we plan to make it accessible to the external alignment research community.
[1]: Access must be given to at least 100 researchers, including at least one of the following organisations: Anthropic, Redwood Research, Alignment Research Center, Center for Human Compatible AI, Machine Intelligence Research Institute, Conjecture.
🏅 Top traders
# | Total profit
---|---
1 | Ṁ190
2 | Ṁ42
3 | Ṁ38
4 | Ṁ21
5 | Ṁ10