Will any prominent Notkilleveryoneists switch sides to e/acc after learning more about machine learning R&D?
2028 · 7% chance

It has been claimed that Eliezer Yudkowsky and others are unqualified to comment on AI risk because they lack a sufficiently detailed grasp of the state of the art in AI research.

Will any of them gain such an understanding, do a 180-degree turn, and become effective accelerationists as a result?

It's a silly claim. I poked around with a Netflix Prize submission, back when deepnets were not yet much of a thing, that never got good enough to submit; played around with GANs in 2016; built my own transformer model from scratch, down to the optimizer, in 2020 to make sure I understood the technology; and have been poking around with finetuning LLMs recently.

The claim that I don't know how LLMs work is just them making stuff up; and the claim that there's some incredible insight to be learned from training on 100 H100s instead of 3 A100s, which none of them can verbalize, is even more suspicious. There just isn't any extra there there of incredible relevance to alignment issues. Bluntly, they're gulling a nontechnical audience which doesn't know that the details of layer normalization are incredibly unlikely to be relevant to the issues being debated.

Or rather, maybe at some point some of them were honestly ignorant that I knew how gradient descent worked; but when I provided evidence otherwise, others retreated to a far more dishonest position about how there's some special and unspeakable extra wisdom associated with the fine details of distributing larger models over lots of GPUs.
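For readers outside the field, the two mechanisms name-dropped above are genuinely small. Here is a rough illustrative sketch in plain NumPy (my own toy example, not anything from the comment or the market; all function names and data are made up for illustration):

```python
import numpy as np

def layer_norm(x, gamma, beta, eps=1e-5):
    """Layer normalization over the last axis: normalize each
    activation vector to zero mean and unit variance, then apply
    a learned scale (gamma) and shift (beta)."""
    mean = x.mean(axis=-1, keepdims=True)
    var = x.var(axis=-1, keepdims=True)
    x_hat = (x - mean) / np.sqrt(var + eps)
    return gamma * x_hat + beta

def sgd_step(params, grads, lr=1e-3):
    """One step of vanilla gradient descent: nudge each parameter
    a small distance against its gradient."""
    return [p - lr * g for p, g in zip(params, grads)]

# Toy usage: normalize a batch of 4 hidden vectors of width 8.
x = np.random.randn(4, 8)
gamma, beta = np.ones(8), np.zeros(8)
print(layer_norm(x, gamma, beta).std(axis=-1))  # ~1.0 per row

# Toy usage: one gradient descent update on a weight vector.
w = np.zeros(3)
g = np.array([0.5, -0.5, 0.0])
(w,) = sgd_step([w], [g], lr=0.1)
print(w)  # [-0.05, 0.05, 0.0]
```

The point of the sketch is that each mechanism is a few lines of arithmetic; whatever one thinks of the argument above, these details are not the kind of thing that hides deep secrets.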

bought Ṁ3 of YES

Is one person sufficient?

predicts NO

@na_pewno One prominent person, yes. "Prominent" in the sense that I had already heard of them. I don't think we are generally prominent in any other way, with the exception of Yudkowsky.

bought Ṁ100 of NO

Opposite market: