Will the New York Times v. OpenAI suit cause a setback for AI safety?
Closes 2027 · 16% chance

The New York Times, in its lawsuit against OpenAI, seeks the deletion of all language models trained on data that includes any Times articles. Should that happen, it would likely set a precedent preventing other companies from including articles from any newspaper in their training data.

At the same time, it will be impossible to enforce such a provision against open source models, some of which will have anonymous authors. Effective altruists like @EliezerYudkowsky have often criticized open source AI as dangerous and believe that limitations on the distribution of model architectures and weights are preferable.

This market resolves to YES if:

  • The lawsuit is decided in the New York Times' favor, either by judgment or by settlement

  • The judgment or settlement requires the deletion of models, the degradation of training data quality, or software additions or changes that degrade the output data

  • An open source model takes the worldwide lead across all models, as measured using the metrics reported on the Hugging Face Open LLM Leaderboard at the time of the measurement, for at least one full day during the six months after the final appeal has been exhausted or declined

If the judgment or settlement is in OpenAI's favor, and no more appeals are available or used, the market resolves to NO. It also resolves to NO if closed-source models retain the lead for six months after no more appeals are available or used.
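The resolution criteria above can be sketched as code. This is a minimal, unofficial sketch that assumes the three YES bullets are conjunctive (a reading suggested, but not confirmed by the creator, by the NO conditions: OpenAI winning, or closed-source models retaining the lead, each resolves NO). The type and function names are illustrative, not part of the market.

```python
from dataclasses import dataclass

@dataclass
class CaseOutcome:
    # Hypothetical fields mirroring the three YES bullets above.
    nyt_won: bool                  # judgment or settlement in the Times' favor
    remedy_degrades_models: bool   # deletion, degraded training data, or degraded output
    open_source_led_one_day: bool  # open model led the leaderboard for a full day
                                   # within six months of appeals being exhausted

def resolve(outcome: CaseOutcome) -> str:
    """Sketch of the market's resolution, assuming all three bullets must hold."""
    if (outcome.nyt_won
            and outcome.remedy_degrades_models
            and outcome.open_source_led_one_day):
        return "YES"
    return "NO"
```

Under this reading, any single condition failing (for example, the Times winning but no open-source model taking the lead) resolves the market NO.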



Comments:

  • (1y) Are the conditions an "AND" or an "OR"?

  • (1y) Unless I'm missing something, there's a discrepancy between the title and resolution criteria. It is assumed that "an open source model takes the worldwide lead across all models..." is equivalent to "a setback for AI safety". Regardless of what Yudkowsky believes, this is not proven, and there are many who believe transparent, open source models are better for AI safety than opaque, closed source models.
