EDIT
Let's operationalize it this way:
"If Eliezer either expresses that the model may be very helpful for alignment research, or Eliezer strongly implies that he feels this way (eg. by indicating that it is more useful than an additional MIRI-level researcher), then we consider this market resolved to YES."
I think this may be able to resolve yes.
Eliezer Yudkowsky was impressed by the work OpenAI published in "Language models can explain neurons in language models": https://openai.com/research/language-models-can-explain-neurons-in-language-models
https://twitter.com/ESYudkowsky/status/1656146336687828992
I'm quite confident he's said something about being impressed by GPT-4 in the past.
@RobertCousineau He's encouraged that people at OpenAI are trying. That's different from thinking their work or the models they are using are effective.
@MartinRandall He does not behave as if the chances were 0%. Are you rounding? I have trouble imagining Yudkowsky assigning a pure 100% or 0% even to a mathematical theorem being correct or incorrect.
@MikhailDoroshenko OK, lol. What percentage of the nonsense on this site is a joke, and what is serious?
@askdf My read of the linked article was and is that it is serious, and I continue to be amazed that others read it differently.
@MartinRandall The date gave him an opportunity to say some things he knew would not be received well. It is good to mix those with silly jokes, so that the whole text can be interpreted as humor.
@EzraSchott Thank you. I apologize for being rude back. I may be wrong about why he did it, but given what he preached in the Sequences, he is unlikely to claim he is infallible.
@askdf If a specific large language model doubles our hopes of solving alignment according to Eliezer, then this market would almost certainly resolve Yes.
That’s definitely not necessary, though.
@agentofuser Just helpful.
Would Copilot/ChatGPT helping researchers run experiments or communicate results faster, etc. count?
Only if the speedup is significant.