At Manifest 2024 will @jim convince Eliezer Yudkowsky that an AI moratorium would be a bad idea?
Resolved NO · Jun 10

The resolution criterion is slightly loose; the goal is just to convince Yudkowsky that the risk of human extinction is justified by the chance of creating a cool AI.


🏅 Top traders

1. Ṁ1,245
2. Ṁ890
3. Ṁ438
4. Ṁ226
5. Ṁ190

what went wrong

@Jono3h He didn't really seem like he wanted to talk, so I left him alone before we got into anything interesting. Bad timing, I think.

opened a Ṁ1,000 YES limit order at 6%

limit order up

@jim Unless you already failed, I think you can do it.

@jim Since @Joshua already took your order, here are some ideas:

  • Eliezer's already stated that he's not worried about LLMs specifically but about some future technique. Since all the big companies are scaling variations of transformers, and transformers aren't the worry, there's no reason to pause that activity right now.

  • All the effort of training models comes after establishing ways to predict the loss curve: "scaling laws" are "ROI". Transformers have been characterized as "statements in first-order logic with majority-vote quantifiers", and there's RASP-L to model which problems they can learn to solve. So there should be no mystery about what's possible: anyone worried can test it, even without billions of dollars (see the sketch after this list). Not only is there no reason to worry about LLMs specifically (which current FLOPs limits are oriented around), it's possible to actively gather evidence not to worry. Pushing for large benchmark suites that can be run on many variant algorithms would be more reassuring than stopping work, which merely drives it underground or to lower scales, and underground work is harder to test.

  • The easiest way to get people to lose interest is to make them lose money. So if LLMs and transformers are not believed to directly lead to AGI and the market is this enthusiastic, maybe you want to encourage that. Tell them Nvidia could be worth $1000 trillion and get everybody hyped and invested at high valuations. "Invest your entire fund at any price. It's cheap no matter what." Then, when it merely automates 5% of jobs and the next advancement takes maybe 10 years, stock prices will collapse along with growth expectations, everyone will get burned, and there'll be a new "AI winter".

  • Telling people AI will kill everyone is probably a good idea: it makes AI sound like a "powerful weapon", and everybody wants one of those. It gets them hyped up. Relatedly, pushing for AI moratoriums is also a good idea, because it feeds the fear. BUT: actually passing a moratorium into law could cause less money to be staked, and you want the amount staked to be very large so the loss gives people a lasting repulsion for the AI field. I suppose you could pass one right at the market peak to raise the chance of a collapse (though be careful not to be the primary cause). You want "disgruntled veterans" who never want to fund anything in AI again and keep repeating their hard-earned wisdom that "dreaming about general AI only leads to despair".

  • If existing practice isn't enough, encourage research into techniques that look impressive at small scales but scale poorly. Just don't mention the poor scaling. If anyone bites, you may have set them back years and millions of dollars. A moratorium would only limit the damage you can do to them. You want no restrictions, so that people with an "iterate until it works" philosophy lose as much as possible and "natural selection" leaves survivors who look further ahead.

  • If a US AI moratorium is passed, people will leave for foreign countries. It's unlikely that every country will agree to a treaty, and unlikely that you'll get nuclear enforcement against non-signatories. You want data centers and AI companies as concentrated as possible so they're a single point of failure; pushing them elsewhere makes the field more resilient and removes opportunities to control it once better AI techniques are discovered and control becomes necessary.
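
To make the "anyone can test it" point in the second bullet concrete, here's a minimal sketch of fitting a loss curve from small runs and extrapolating it, in the spirit of published scaling laws. The power-law-plus-offset form is an assumption, and the (compute, loss) data points are made up for illustration, not from any real training run:

```python
# Minimal sketch: fit L(C) = a * C**-b + c to small-scale runs,
# then extrapolate to scales nobody has paid for yet.
import numpy as np
from scipy.optimize import curve_fit

def power_law(compute, a, b, c):
    # Loss falls as a power of compute, flattening toward an
    # irreducible-loss term c.
    return a * compute**-b + c

# Hypothetical (compute in PF-days, validation loss) from small runs.
compute = np.array([1e0, 1e1, 1e2, 1e3])
loss = np.array([4.2, 3.4, 2.9, 2.6])

params, _ = curve_fit(power_law, compute, loss, p0=[3.0, 0.2, 2.0])
a, b, c = params

# Extrapolate four orders of magnitude beyond the fitted runs.
print(f"predicted loss at 1e6 PF-days: {power_law(1e6, a, b, c):.2f}")
```

The point isn't the specific numbers; it's that a handful of cheap runs is enough to fit the curve and check whether scaling up is even worth worrying about.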

I'm sure you got some good ones too.

opened a Ṁ1,000 YES limit order at 13%

@SemioticRivalry your bet is too small

bought Ṁ10 NO

so, what are your arguments?

@Jono3h I have a fiduciary duty to withhold that information