AI discourse has a lot of repetitive, terrible arguments, from both sides, often from pretty intelligent people. I was thinking of writing up a list of them along with in-depth rebuttals. Not sure if people would use it though, since presumably most of the people making bad arguments are doing so because they don't want to think too hard or read a lot of text.
This market resolves YES if the post gets more positive engagement than most of my blog posts: for example, 10 people all saying "this is cool, I'm going to use this" or similar, or one popular person saying they found it useful. It resolves NO if there's crickets and my effort seems to have been wasted.
There already is one with good AI arguments lol: https://docs.google.com/document/d/15myK_6eTxEPuKnDi5krjBM_0jrv3GELs8TGmqOYBvug
@Johne5ee The Stampy resource seems like it's kinda what I was thinking of, so I'm not sure there'd be a point to making a second one.
Also I did recently write an article addressing a particular AI misconception, but I didn't get any notable feedback on it.
Yes, do it!
And by "it", I mean improving the existing encyclopaedia-like tools for discussing AI arguments?
I know that Rob Miles' Discord would appreciate writers for Stampy.
https://stampy.ai/
@AgenticLondoner I'm Stampy's content lead and I endorse this message - see our Get Involved page. Though having this be a separate thing would also be valuable, and may make it easier to write.
@StevenK that site is not an AI Safety FAQ; it's a collection of EA talking points that cites only sources related to EA or people who use EA as a tool. It's an ideological ouroboros.
@MADGAMBLER6969 "This isn't a physics FAQ, it's a collection of physicist talking points that only site sources related to physics". Come on man, try to have a shred of intellectual dignity here.
@IsaacKing AI alignment/interpretability, and especially the topics of scale and how capabilities emerge, are not exclusively the domain of people related to EA, yet only those people get cited whenever there's a point to be made.
Some things are demonstrably false outside of that perspective, the pages about scaling being examples. It's more or less the consensus that you can't get to AGI by scale alone, yet the site completely omits that perspective, I assume because it relies on sources written in 2022 (around Chinchilla). Another aspect of the scaling laws the site omits is that the scaling curves for data/compute are at best logarithmic; it just calls whatever growth it sees "growth" without attaching any magnitude to it, with, I assume, Kurzweilian/futurist dreams of superexponential growth or a fast takeoff magically occurring and solving that.
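To put an actual magnitude on those curves: the Chinchilla paper itself fits loss as a power law in model parameters and training data. A rough sketch of its published fit (constants approximate, from Hoffmann et al. 2022):

```latex
% Chinchilla fitted loss (Hoffmann et al., 2022); constants approximate.
% Loss decays as a power law in parameters N and training tokens D.
L(N, D) \approx E + \frac{A}{N^{\alpha}} + \frac{B}{D^{\beta}},
\qquad E \approx 1.69, \quad A \approx 406.4, \quad B \approx 410.7,
\quad \alpha \approx 0.34, \quad \beta \approx 0.28
```

With exponents that small, halving the reducible loss costs roughly an order of magnitude more data, which is the opposite of the superexponential story.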
It's more of an aesthetic preference, but the site also has a bad habit of namedropping people who disagree with the points being made in the text while not engaging with their arguments.
EA is an ideological/political bubble, not a field of science. It takes some ideological dogmas, like musings about takeoffs (especially the fast-takeoff scenarios), then applies statistical methods to those opinions, and the result ends up looking more like a social science than an ideology, akin to macroeconomics: still just unverifiable sci-fi-inspired musings under all the math and graphs.
@MADGAMBLER6969 Don't get me wrong, ideology/politics is not bad; it usually ends up creating a lot of change, and if it respects reality (and doesn't dehumanize), that change is often good. My point is that the extinction-from-AI perspectives don't respect reality enough.
@IsaacKing https://arxiv.org/pdf/2404.04125 is a recent paper about scaling up diffusion-based models (not LLM transformers, but still strong evidence against "scale is all you need"). It's notoriously hard to prove anything about consensus, though; that was more of a vibe.
Also, I'm pretty sure OpenAI has a vested interest in preaching scale: investors make compute happen, and even if compute is only a tiny part of the puzzle, not having it makes it impossible to work on new stuff.