16% chance

The New York Times, in its lawsuit against OpenAI, seeks the deletion of all language models trained on data that includes Times articles. Should that happen, it would likely set a precedent preventing other companies from including articles from any newspaper in their training data.

At the same time, such a provision would be impossible to enforce against open source models, some of which will have anonymous authors. Effective altruists such as @EliezerYudkowsky have often criticized open source AI as dangerous and argue that limits on the distribution of model architectures and weights are preferable.

This market resolves to YES if all of the following conditions are met:

  • The lawsuit is decided in the New York Times' favor, either by judgment or by settlement

  • The judgment or settlement requires the deletion of models, the degradation of training data quality, or software additions or changes that degrade model output

  • An open source model takes the worldwide lead across all models, as measured by the metrics reported on the Hugging Face LLM Leaderboard at the time of measurement, for at least one full day during the six months after the final appeal has been exhausted or declined

If the judgment or settlement is in OpenAI's favor and no further appeals are available or pursued, the market resolves to NO. It also resolves to NO if closed-source models retain the lead for the six months after appeals are exhausted.
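
For illustration only, here is a minimal Python sketch of how the leaderboard condition might be checked against a daily snapshot. The snapshot file, its column names ("model", "license", "average_score"), and the license list are all hypothetical assumptions for this sketch, not the actual Hugging Face leaderboard schema.

```python
# Hypothetical sketch: decide whether the top-scoring model in a
# leaderboard snapshot is open source. The CSV layout and the
# license classification below are assumptions, not the real schema.
import csv

# Illustrative set of licenses counted as "open source" for this sketch.
OPEN_LICENSES = {"apache-2.0", "mit"}

def open_model_leads(snapshot_path: str) -> bool:
    """Return True if the highest-scoring model in the snapshot
    carries a license from OPEN_LICENSES."""
    with open(snapshot_path, newline="") as f:
        rows = list(csv.DictReader(f))
    # Pick the model with the best reported average score.
    top = max(rows, key=lambda r: float(r["average_score"]))
    return top["license"].lower() in OPEN_LICENSES

# The criterion requires the lead to hold for at least one full day,
# so a real check would compare consecutive daily snapshots:
# print(open_model_leads("leaderboard_snapshot.csv"))
```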

Comments:

Are the conditions an "AND" or an "OR"?

Unless I'm missing something, there's a discrepancy between the title and the resolution criteria. The title assumes that "an open source model takes the worldwide lead across all models..." is equivalent to "a setback for AI safety". Regardless of what Yudkowsky believes, this is not established, and many argue that transparent, open source models are better for AI safety than opaque, closed source ones.