
I will resolve this based on some combination of how much it gets talked about in elections, how much money goes to interest groups on both topics, and how much of the "political conversation" seems to be about either.
@Dulaman I suspect it was based on bad guesses about the technical situation, not the political situation. In 2023 a surprising number of people seemed to have an implicit assumption that short-term technical progress wouldn't be a big deal, talking about near-term AI impacts in terms of 2023-AI. Today the situation is even worse: Some nontrivial fraction of people haven't even kept up with current progress, so their expectations for social impacts are essentially based on 2024-AI.
@Dulaman Same reason most people thought AI Safety as a research field to prevent the threat of human extinction due to AI was laughable and sci-fi 10 years ago. Humans are just terrible at extrapolating trends. Covid-19 is an excellent example of how bad we are at computing even simple exponential curves and preparing accordingly. Today, by contrast, AI Safety is discussed in the Pentagon as a legitimate threat with massive long-term military and geopolitical implications.
The early pioneers are always the crazies at first. Respect to them, because I was a huge sceptic back then as well and would have lost $1000s betting, 10 years ago, that we would never see the AI capabilities we have today.
@CornCasting I think many of the people who were talking about AI Safety 10 years ago did a lot of damage to the cause of AI Safety
@Dulaman I bet no in 2023 and I'm still betting no. I think LLM progress is a logistic curve that we're already slowing down on, and better architectures will take time & research to find.
@DanW I think LLM progress is going to continue for some time but there will also be other architectures that appear. Are there other questions on manifold that provide more granularity regarding these elements?
@Dulaman Couldn't agree more, with a huge caveat: blaming them for problems coming home to roost sounds like blaming Galileo for not using the right cultural language, the language that only became relevant 10 years later, when describing the giant meteor he discovered hurtling towards earth. Like yes, 10 years later people look up, see possible signs there might be a meteor, and go "bro, why did you not say this in a language we could understand and take seriously 10 years ago" — but also, now do something about it, dipshits, instead of whining about the pioneers' off-putting communication 10 years ago. Otherwise the blame is fully on you.
Which is why aisafety.dance might save us all. AI Safety concepts explained by a dancing cat-robot femboy maid. Truly the accepted language of the modern age. No silly outdated millenarianism in sight.
AI importance seems to have surpassed abortion in this chart:
I have sold out of my NO position. We will likely see more of this going forward https://www.nytimes.com/2025/11/25/us/politics/ai-super-pac-anthropic.html
@Xizted Personally, I take it to mean that some major factor going into voting decisions is about the candidates' positions relevant for AI (not just the whole party, and not just pro or con)
I think it's plausible that the current landscape would be sufficient to resolve this YES if there were an election today. Think about how much energy+water usage is discussed, children's use of AI, the copyright issues, the economic implications, job loss, etc etc. I think we'll probably see in the midterms but I'd suspect AI-related interest group funding to exceed abortion interest group funding.
@bens I would estimate that about 10x the number of NYT articles are about AI as about abortion today, fwiw
@bens That’s an indication that it’s a bigger part of the national conversation, but not necessarily that it’s a bigger political issue
@JimHays That's why I worry about how this will be resolved. It seems unclear, because there are different ways of quantifying it. AI is certainly more novel and interesting, so there's more talk about it, but abortion still has far more single issue voters, and for any given politician, switching their position on abortion would probably be a much bigger deal for them politically.
There are probably more NYT articles about bitcoin than abortion. That doesn't mean bitcoin is a bigger political issue, because the bitcoin discussion is not especially political. Likewise, AI may be more discussed than abortion right now, but most of the discussion is cultural/economic rather than political. Most people probably have a strong opinion on how the government should handle abortion, but few have even thought about how the government should handle AI.
@bens There are coalitions of people in American politics that vote purely on the basis of candidates' positions on abortion. AI is certainly topical, but isn't remotely as politically consequential today.