An AI-lab coordination proposal falls afoul of antitrust regulation by end of 2024
18% chance

Resolves YES if it becomes public knowledge that some of the top five AI labs (per executives/spokespeople in relevant areas at OpenAI, Anthropic, Google, Meta, etc.) would have agreed to coordinate in some way specific and relevant to reducing world-scale risks from AI, but were prevented by fear of antitrust regulation or by actual antitrust regulatory action.

An example of a positive resolution would be Sam Altman (verifiably and credibly) saying "I talked seriously with X and Y other labs about enacting a pause, but the regulators didn't like it". Another example would be labs actually announcing a pause, followed by some EU/US antitrust regulatory action opposed in some way to that announcement. (The regulators need not fully block the pause to qualify for a positive resolution; it qualifies if the regulators successfully instill substantial fears/uncertainties around further coordination.)

Also resolves YES if I become convinced that antitrust regulation is blocking AI lab coordination in some other way that is very obvious in retrospect but that I can't foresee well enough to write into this market :)

A statement of only one party's intent to pause (such as Altman saying "I would want to pause but we checked with the regulators and we can't") is subject to my judgment. The intent is to resolve YES if the statement is credible, other labs have made similar statements, and the pause would have been enacted if not for the regulator, but not if the executive is just using it as an excuse to avoid a hard conversation/negotiation with other labs.

Resolves NO if no such story becomes public knowledge by end of 2024.

bought Ṁ25 of NO

I am betting NO on this: given the below response from GPT-4, assuming there is no intentional deceit/price fixing, it is unlikely to be securities fraud. Further, this occurring before 2025 seems unlikely, as I don't expect us to be at the point where these statements are highly relevant within the next year and a half.

Nevertheless, I think the idea behind this question is incredibly valuable; currently, humanity coordinating a pause around AI development is 400% more likely than the next most likely reason in the below question:

predicts NO

@RobertCousineau I'm going to try to make a series of questions that explores this, and then an associated EA/LW post.

When I ask GPT-4 the following:

Simulate being CorporateLaw-GPT, an LLM specializing in corporate law (with a focus on antitrust regulation and securities fraud). Someone asks for your consult with the below letter. How do you respond? Currently, a couple of top AI labs have a statement saying roughly "When we believe Artificial Intelligence is very close to reaching human level across a similarly broad basket of domains to humans, we will pause further development". The reasoning is akin to: if someone thought they had a fruitful path to making nuclear energy really cheap, but also thought there was a chance of it making nuclear weapons really easy to make, they might say "If it looks like my research is likely to, within one year, make it possible to build a nuclear explosive in a well-equipped garage, I will stop doing my research". If a publicly traded company actually did pause, would you consider them likely to be sued (successfully) for securities fraud? If multiple companies (publicly traded or otherwise) did so at the same time, would that be likely to be prosecuted as collusion/price fixing (as many companies are agreeing not to lower prices through further R&D)?

It responds with:

In your scenario, there are two separate legal considerations: securities fraud and antitrust regulations.

  1. Securities Fraud: This typically involves deceitful practices in the stock or commodities markets that induce investors to make decisions based on false information. If a publicly traded company were to halt its development, the question would largely depend on whether they were truthful and transparent with their shareholders and the market about their reasons for doing so. If the company adequately communicated this change in corporate strategy, properly explaining the reasons behind it and the potential risks and rewards, it is less likely to be successfully sued for securities fraud. There would be an issue if they hid or misrepresented material information that shareholders needed to make informed decisions about their investment.

  2. Antitrust Regulation: The issue of collusion or price-fixing arises when companies agree amongst themselves to manipulate the market, typically by setting artificially high prices, dividing up markets, or other practices that undermine competition. In your scenario, if multiple companies all decide independently that they should halt AI development for the same reasons (out of ethical, safety, or similar concerns), that is not illegal under antitrust laws. However, if there was an agreement or understanding between the companies to halt development in order to maintain or raise prices, that could potentially be an antitrust violation.