On July 5, 2023, OpenAI announced their "Superalignment" initiative:
https://openai.com/blog/introducing-superalignment
"Our goal is to solve the core technical challenges of superintelligence alignment in four years. While this is an incredibly ambitious goal and we're not guaranteed to succeed, we are optimistic that a focused, concerted effort can solve this problem."
In a footnote to this passage, OpenAI states:
"If we fail to have a very high level of confidence in our solutions, we hope our findings let us and the community plan appropriately."
This question resolves YES if OpenAI publicly states that it has a "very high level of confidence" in its "Superalignment" solutions within the next four years (at or before July 5, 2027). The wording does not have to match exactly, but a mere "we are confident" or "confidence level: high" would not suffice. A wording at least as strong as the original, such as "we are certain" or "beyond any reasonable doubt", would suffice.
The solutions need to address the "core technical challenges of superintelligence alignment", and the "very high level of confidence" needs to refer to that "incredibly ambitious goal". A "very high level of confidence" in partial solutions, in solutions to subproblems, or in solutions that only apply to a certain model or type of model (like an LLM) will not suffice.
Suggestions on how to improve those resolution criteria are welcome!
I consider this question identical in spirit to the Metaculus question https://www.metaculus.com/questions/17728/openai-solves-alignment-before-june-30-2027/ and will defer to its resolution (apart from the slightly different close date), provided there are no relevant changes to its criteria before the close date.