Assuming SB 1047 is passed, will the "compute threshold" of 10^26 flop be raised before 2030?
56% chance

California's new AI safety legislation includes a threshold of 10^26 FLOP (or equivalent capability) above which developers training AI foundation models must meet certain reporting requirements.

Currently no models exist above this limit (Llama-3-400B is the closest, with an estimated training compute of 5.4e25 FLOP). However, with the inevitable march of Moore's Law (as well as algorithmic progress), we can expect the cost of training such a model to fall by roughly 4x every 2 years.

Thus, while the bill claims such a model "would cost over $100,000,000 to train", by 2030 this cost would be more like $6,250,000 (a 16x reduction at that rate). Such a cost would be well within the budgets of larger open-source organizations; Stability AI, for example, has reportedly spent $99M training open-source models.
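
For concreteness, here is a rough sketch of that projection, using the assumptions above. The 2026 baseline year (roughly when the bill's requirements would start to bite) is an illustrative assumption of mine, not something taken from the bill.

```python
# Sketch of the cost projection above, using the question's own assumptions:
# a 10^26 FLOP training run costs ~$100M at a baseline year, and training
# costs fall ~4x every 2 years (hardware plus algorithmic progress).
# BASELINE_YEAR is an illustrative assumption, not a figure from the bill.

BASELINE_YEAR = 2026      # assumed year at which a 10^26 FLOP run costs ~$100M
BASELINE_COST = 100e6     # dollars
DECAY_PER_2_YEARS = 4.0   # cost falls ~4x every 2 years

def projected_cost(year: int) -> float:
    """Projected cost (USD) of a 10^26 FLOP training run in a given year."""
    periods = (year - BASELINE_YEAR) / 2
    return BASELINE_COST / DECAY_PER_2_YEARS ** periods

for year in (2026, 2028, 2030):
    print(f"{year}: ~${projected_cost(year):,.0f}")
# 2026: ~$100,000,000
# 2028: ~$25,000,000
# 2030: ~$6,250,000  <- the 16x reduction quoted above
```

Shifting the assumed baseline year shifts the 2030 figure accordingly; counting from 2024 at the same rate would give roughly $1.6M instead.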

This question asks: "Assuming SB 1047 is passed, will the 'compute threshold' of 10^26 flop be raised before 2030?"

Note that a higher threshold is less strict, whereas a lower threshold is more strict. So another way of phrasing this question is: will AI regulation be less strict in 2030?

If SB 1047 is not passed (or is passed in a form modified enough that the compute threshold does not exist), this question will resolve N/A.

If SB1047 is passed with a different compute threshold, this question will be defined in terms of the actual legal threshold (and I will edit it to reflect as much).

If SB 1047 is repealed, amended, or replaced in such a way that the reporting threshold is higher on 1 June 2030, this question resolves positive.

If SB 1047 remains the law of the land, or is repealed, amended, or replaced in such a way that the threshold is lower on 1 June 2030, this question resolves negative.

If the threshold remains the same but the requirements become stricter, this question resolves negative. For example, if a new law were passed banning models trained with more than 10^26 FLOP, this question would resolve negative.

If SB 1047 is replaced by a Federal framework or other law with substantially the same effect (with the same or lower compute threshold), this question resolves negative, even if SB 1047 itself is repealed.

If AI wipes out humanity before 2030, or the government of California ceases to function for some other reason, this question will resolve N/A.


This law was amended today to require both this compute threshold and a $100 million training cost. How does that impact the resolution?

@SaviorofPlant "If SB1047 is passed with a different compute threshold, this question will be defined in terms of the actual legal threshold (and I will edit it to reflect as much)."

So the question will resolve positive if either the 10^26 FLOP or $100M threshold is raised before 2030.

Note that the $100M threshold will almost certainly result in a higher effective threshold than 10^26 FLOP before 2030, but the question will only result in a positive resolution if the actual legal text of the law is modified after passage.
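
To make the "higher effective threshold" point concrete, here is a small sketch under the same assumptions as the description (a 10^26 FLOP run costs about $100M at an assumed 2026 baseline, with costs falling ~4x every 2 years). The baseline year and cost figures are illustrative assumptions, not values from the bill.

```python
# Under the amended bill a model is covered only if it exceeds BOTH the
# 10^26 FLOP condition and the $100M training-cost condition, so the
# effective cutoff (expressed in FLOP) is the larger of the two.
# Baseline year and cost-per-FLOP below are assumptions for illustration.

FLOP_THRESHOLD = 1e26
DOLLAR_THRESHOLD = 100e6
BASELINE_YEAR = 2026                                           # assumed
COST_PER_FLOP_AT_BASELINE = DOLLAR_THRESHOLD / FLOP_THRESHOLD  # ~$1e-18 per FLOP

def effective_flop_threshold(year: int) -> float:
    """FLOP needed to exceed both the compute and the dollar conditions."""
    cost_per_flop = COST_PER_FLOP_AT_BASELINE / 4 ** ((year - BASELINE_YEAR) / 2)
    flop_costing_100m = DOLLAR_THRESHOLD / cost_per_flop
    return max(FLOP_THRESHOLD, flop_costing_100m)

for year in (2026, 2028, 2030):
    print(f"{year}: effective threshold ~{effective_flop_threshold(year):.1e} FLOP")
# 2026: ~1.0e+26 FLOP (the two conditions coincide)
# 2028: ~4.0e+26 FLOP
# 2030: ~1.6e+27 FLOP (the $100M condition dominates)
```

So under these assumptions, by 2030 the dollar condition alone would only cover runs of roughly 1.6e27 FLOP or more, which is why the amendment effectively raises the bar even if the 10^26 FLOP figure in the text never changes.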

The bill has passed the CA senate.

bought Ṁ10 NO

It probably won't pass, though 2030 is close enough that if it does pass, there wouldn't be much time to raise the threshold.

bought Ṁ10 NO

"Currently no models" assume you meant open source models.

@gpt_news_headlines

https://manifold.markets/ZviMowshowitz/will-california-bill-sb-1047-become

Also, are conditional markets allowed? I thought they asked folks to do multiple choice, e.g. A -> B, A -> ~B, and ~A.

@gpt_news_headlines "I assume you meant open source models."

My understanding is that literally no models are above the threshold (though of course we don't actually know how many FLOP Gemini Ultra or GPT-4 were trained with).

@gpt_news_headlines "Also, are conditional markets allowed? I thought they asked folks to do multiple choice, e.g. A -> B, A -> ~B, and ~A"

Is this true? Multiple choice wouldn't really be a good option here, since then people would have to weigh the probability that the bill passes.