I will resolve this based on some combination of how much it gets talked about in elections, how much money goes to interest groups on both topics, and how much of the "political conversation" seems to be about either.

In the future, we'll see
AI and abortion, which will it be?
Robots or fetuses, which will we defend?
In politics, it's the battle that never ends.

This question is complicated, but I think the answer will be "yes" for several reasons. First, I think AI will start taking very noticeable and increasing numbers of jobs over the next 5 years. I don't think this, in and of itself, is likely to be sufficient; I'm not sure whether offshoring of manufacturing jobs, or manufacturing jobs being taken by robots, has ever been a bigger political issue than abortion, for instance. The reason AI taking people's jobs will likely become a very large political issue is that it's going to take a lot of white-collar jobs. It's going to put a lot of very educated, previously middle and upper class people out of work or out of business. And I think those people and their supporters are going to get very vocal about it and very politically active. I'm not sure how quickly job losses will take place, or how big the issue will become in 5 years vs. 10, but at the pace at which AIs are currently being released for public use, I think sooner is more likely than later. It will take some time for companies to figure out how to use AI to their best advantage and then to lay off employees, but it's coming.
I mentioned AIs to an elderly relative the other day and talked to her briefly about various potential consequences, and her first reaction was, "They should make laws to ban AI being used to take the jobs of doctors and lawyers and other jobs like that." She said this both from the point of view of someone who would want to consult a real human, not an AI, and from the point of view of preventing so many jobs from being lost. I can fully picture many people having this point of view and wanting to outlaw some kinds of AI. I can also imagine a wide array of other potential angles for AI to become a political issue.

@belikewater If you are expecting a number of jobs to be lost to AI in the next few years, do you think unions might become relevant again?
White-collar unions specifically would be a new thing that we could expect to happen in the next few years.
I've made a market about it here: /Odoacre/us-whitecollar-union-membership-rea-a5116dbb61f4

@Odoacre I think that's a great question. Even close to 10 years ago, economists were predicting that somewhere in the next 2-3 decades, 50% of all jobs could be lost to AI, with new jobs created by AI amounting to only 20%, for a net loss of 30%. I'm not sure what the latest predictions are. That sort of situation can only lead to big political changes, and I agree that it's likely that some workers will join unions. College-educated people in lower-class jobs have certainly tried to start unions in recent times. Beyond that, I can imagine all sorts of political extremes gaining strength. I think it's likely that, if all else were held constant, wealthier societies would end up either with a hollowed-out middle class and everyone else impoverished, or with universal basic income. Of course, many other dire trends are happening at the same time and will have their own effects, so I don't think this future is a certainty, but I do think it's likely. I'm not sure about the time scale, though. I'm not sure whether 5 years will be enough time to see such large changes. I suspect it will, but it could take longer. Changes in AI capabilities will come fast, but humans tend to pivot much more slowly.

AI is literally not a political issue at all today. How does it overtake the biggest political issue, or even enter the top 5, in the timeframe proposed here? Nonsense fantasy about AGI imo.


@BTE Because it's going to take white-collar jobs. A lot of them. I can picture a lot of educated, politically active voters having very strong opinions about AI in the next 5 years. Forget AGI; each individual task an AI can do as well as or better than the average human - and infinitely faster - can put someone out of a job. And when AGI is here, which I think will be very soon, AI will be an even bigger issue. While it's going to take companies some time to figure out how to use AIs to their advantage, and new AI capabilities are being released constantly, I think that even 2 years from now we're going to start to see real-world consequences.

@belikewater AI is not replacing doctors for a very long time. This is my area of expertise and it's not even remotely close. In fact, the easiest way to tell Sam Altman is full of shit is when he says he would rather have AI read his medical images than a human. That is complete insanity; that is not what the research says. He is conflating statistical performance with clinical outcomes. AI is incredible at classification, but it has had absolutely no impact on outcomes at all. At least not yet. Maybe it replaces lawyers, but literally nobody will give two shits except lawyers.
https://arxiv.org/pdf/2212.13138.pdf
> For example, a panel of clinicians judged only 61.9% of Flan-PaLM long-form answers to be aligned with scientific consensus, compared to 92.6% for Med-PaLM answers, on par with clinician-generated answers (92.9%). Similarly, 29.7% of Flan-PaLM answers were rated as potentially leading to harmful outcomes, in contrast with 5.8% for Med-PaLM, comparable with clinician-generated answers (6.5%).
What do you think of this? (I haven't read the paper myself)

@NoaNabeshima Impressive statistics. But at first glance the dataset is a combination of 6 open-source medical question datasets, so the possibility that it was tested on in-distribution data is high. I will assemble a panel of physicians to generate a truly independent question set and then judge the answers. This is actually exactly the type of project I am currently looking for. Thank you for sharing!

@NoaNabeshima So in other words it's impossible to know if they cheated by training on the test data.
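To make that concrete: the standard first-pass check for contamination is n-gram overlap between the test questions and the training corpus. A minimal sketch (the function names and the 8-gram threshold are my own illustration; it can only actually be run by someone with access to the training corpus, which is exactly the problem):

```python
# Minimal sketch: flag test questions whose word n-grams also appear in the
# training corpus, a crude proxy for train/test contamination.
def ngrams(text: str, n: int = 8) -> set:
    words = text.lower().split()
    return {tuple(words[i:i + n]) for i in range(len(words) - n + 1)}

def contamination_rate(train_texts, test_questions, n=8):
    train_grams = set()
    for t in train_texts:
        train_grams |= ngrams(t, n)
    flagged = sum(1 for q in test_questions if ngrams(q, n) & train_grams)
    return flagged / max(len(test_questions), 1)

train = ["the patient presents with acute chest pain radiating to the left arm"]
test = ["the patient presents with acute chest pain radiating to the left arm. next step?"]
print(contamination_rate(train, test))  # 1.0 -> every test item overlaps
```

An independently written question set, like the panel proposed above, sidesteps the issue entirely.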

@belikewater Almost everything you are saying is predicated on the assumption that white-collar workers will just willfully "train" their replacements without some guarantees. Like, these are the people who have lawyers negotiate their employment contracts. They aren't going to be hustled out the door by their employers. Nor are the blue-collar unions. This narrative also does not match up with the other popular narrative among people who think this way, which is that AI is going to lead to an era of super-profits where companies literally print money with bots. So who gets the money the company makes?? Like, why not just keep everyone around on the payroll and instead of doing work, idk, microdose and learn to paint or whatever. You are suggesting that AI is going to end the cultural norm of working for a company because the tippy-top bosses are going to fire the management and presumably all the support staff to make money, checks notes, for themselves?!? To be totally frank, I think you and the others here arguing for YES are living in a fantasy and have fallen prey to the "magic" conjured by the potent brew of statistics and supercomputers. ELIZA all over again. And you also probably have only heard about abortions on the news or in church or whatever and never met anyone who is an activist on either side of the issue. It is so tech-bro-ey to think Christians and women's rights advocates are just gonna stop caring about that at this moment. ROE V WADE WAS OVERTURNED LIKE 6 MONTHS AGO!!! Abortion hasn't been this big a deal in 50 years!!!!

@BTE What good does it do to make wild and unjustified assumptions about me? How does that help your argument? Good arguments do not require resorting to an ad hominem fallacy.
I absolutely agree that the abortion issue isn't going to magically disappear. The question asks whether AI will be "at least as big a political issue as abortion" (emphasis mine) in 2028. You and I will have to agree to disagree about the prospects for this. In the meantime, consider just 2 minor examples:
* Language editing is a US industry that has employed many thousands of people, probably tens of thousands or more. Much of the work focuses on cleaning up text sentence by sentence to make it readable at a basic level. It is hard to see how most of those jobs won't soon disappear. Who would pay hundreds or thousands of dollars to hire a language editor when you can use GPT-4? (It takes only a few lines of code; see the sketch at the end of this comment.) These companies are going to lose nearly all their customers in short order.
* Contracts are increasingly being analyzed and managed by AIs. This is going to gradually decrease the number of people kept on or hired for such purposes. See, e.g.,
https://www.forbes.com/sites/joemckendrick/2023/03/17/your-next-negotiating-partner-artificial-intelligence/
And try a google search for "ai read contracts."
I'm sure one can think of many more. And as AIs are developed with many more and better capabilities, the list will grow.
You wrote, "Almost everything you are saying is predicated on the assumption that white-collar workers will just willfully "train" their replacements without some guarantees." No such training is needed whatsoever.
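To make the language-editing example concrete, here is a minimal sketch (assuming the openai Python package with its v1+ client and an OPENAI_API_KEY in the environment; the prompt and function name are mine, not any editing firm's actual pipeline):

```python
# Minimal sketch: sentence-level copyediting with a hosted LLM.
# Assumes `pip install openai` (v1+ client) and OPENAI_API_KEY set in the env.
from openai import OpenAI

client = OpenAI()

def copyedit(text: str) -> str:
    """Ask the model to clean up text without changing its meaning."""
    resp = client.chat.completions.create(
        model="gpt-4",  # or whatever model is current
        messages=[
            {"role": "system",
             "content": "You are a copy editor. Fix grammar, spelling, and "
                        "awkward phrasing. Do not change the meaning."},
            {"role": "user", "content": text},
        ],
    )
    return resp.choices[0].message.content

print(copyedit("Me and him has went to the store yesterday for buy milk."))
```

The cost per page is orders of magnitude below human editing rates, which is the point.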

@belikewater This is by far the easiest money in the history of Manifold. I am sorry for being flippant. Your arguments here are usually pretty persuasive, and we usually agree more than we disagree, but this time you make zero sense. Like, less than zero sense. You are effectively arguing that AI will be more important than religion, since that is the basis for many of the most passionate abortion activists. Once again someone elevating AI to godlike power. LMFAO.

@belikewater Is it really an ad hominem to say you are living in a fantasy? Did I not back this up with genuine arguments? Like, several of them? And suggesting that you are falling prey to the hype is definitely NOT an ad hominem fallacy. The true fallacy in this thread is all the anthropomorphism. The leap from "LLMs can read contracts" to "AI is going to replace white-collar workers" is SO HUGE it is hard to take seriously, considering you don't fill the gap at all with your arguments. You just make one small claim and then fallaciously say that because of the small claim, this other humongous claim is probably true. You don't seem to see any problem with the fact that your assertion would result in a very small number of people (maybe millions) getting absurdly wealthy while refusing to employ or even give a shit about the billions of people who can't get jobs anymore. Like, what will those wealthy people do with that money? What do the political institutions have to say about this?? You are not addressing any of the most difficult counterarguments, and instead just saying "but look at this cool thing LLMs can do sorta kinda that they couldn't do last year," which is not an argument at all.

@belikewater If there are no white-collar high-paying jobs, who is gonna buy the shit AI produces? What you are predicting is the decimation of the economy, not a future of hypergrowth or whatever. People need to be thriving for AI to succeed. Period.
@BTE You seem to be reasoning via feels rather than reals. People do not need to be thriving for AI to succeed. AIs will be able to rely on each other. Large AIs will be able to do the work of many, many people.

@RobinGreen This is hilarious and is going in my slides as the most cogent argument for AGI I have yet seen - I am reasoning via feels rather than reals. It took me a minute to notice the double entendre. Cogito ergo sum. Feels > reals, my friend. Intelligence without experience will be close to worthless, because nobody will want to use it. Because it will "feel weird".

@Nikola For reference, that question's definition of weak AGI is a program that can pass a text-based Turing test, get 90%+ on a "more robust version of the Winograd Schema Challenge", score in the 75th percentile on an SAT exam given just images of the exam while having fewer than 10 SAT tests in its training data, and be able to fully explore Montezuma's Revenge with less than 100 hours of training. I don't think the minimum viable AI that could pass those requirements would cause it to be a bigger political issue than abortion, unless abortion stops being a political issue.
The first three things are all basically within the scope of current LLMs. Doing Montezuma's Revenge efficiently is difficult, and having an integrated model that does both language tasks and can play games is currently really hard. A 50% chance of achieving it by 2028 then seems maybe reasonable to me? I would expect the "true" Metaculus odds to be lower if people updated their predictions; currently they say a 7% chance of achieving it by February 2023.
@horse Wouldn't you need to couple the LLM with an image recognition module for the SAT point?

@b575 Reading text from images is trivial, but I forgot that the test would likely have figures and graphs. I just checked a practice test and there aren't many questions that have figures or graphs, so only having to score at least the 75th percentile might allow it to safely skip those. But yeah, you're right. It would already need image recognition anyway for Montezuma's Revenge.
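For what it's worth, "trivial" really does mean a few lines here - a sketch assuming the pytesseract and Pillow packages plus a local Tesseract install, with a made-up filename:

```python
# Minimal OCR sketch: pull the text off a scanned test page.
# Assumes `pip install pytesseract pillow` and a Tesseract binary on PATH.
from PIL import Image
import pytesseract

page_text = pytesseract.image_to_string(Image.open("sat_page_1.png"))
print(page_text)
```

Figures and graphs are a different story, as noted above.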
@horse I have to mention that I love how we have moved from "reading text from images plain doesn't work" to "reading text from images is trivial" in, what, the eight years since I was in high school?

@horse I think the odds of this are practically zero because I can't imagine someone putting in the money to train a weak-agi-grade model and not including data sources that would, in effect, have many more than 10 SAT tests in their training data (either directly or indirectly in the form of e.g. stackoverflow/quora questions, including the contents of SAT tests piecemeal).

@Adam I think the spirit of that criterion is that the model is efficient and isn't just memorizing how SAT tests work and nothing else. It could be that whatever architecture is used isn't as reliant on having massive amounts of data, i.e., it scales even better, maybe in exchange for spending even more compute.
It's unlikely that the first model with those capabilities will fulfill that requirement unless using less data is an important goal in creating the model, but that seems like a fairly likely eventual goal.
@horse Question: does an LLM count if it has read online walkthroughs of Montezuma's Revenge somewhere but has only played the game itself for <100 hours?

@GeraldMonroe Interesting question; I'm not sure. If it's read over 100 hours' worth of walkthroughs, that's certainly a violation of the letter of the rule. If it's read a single walkthrough, it seems to violate the spirit (unsupervised learning) but not the letter of the question.
@horse This bet also seems like other AI bets where the system easily exceeds several conditions but, on a technicality, may not have solved it. For example, the LLM's owners may not have bothered to scrub the training data of Montezuma's Revenge walkthroughs, so we'd have no way of knowing whether the model needed a walkthrough when it aces the game on the first try.
Similarly, it may easily pass the Turing test except for "As a large language model..."
@Nikola Now it is much closer on Metaculus. However, if you look carefully at the operationalization of the question you referred to, you will find that it is probably already possible to create a system that would resolve the question YES. It would not be an actual AGI, and would not need to be much more impressive than GPT-4.
I think the question is useful only if you consider the dynamics, not the absolute values.

Signed up to this website to bet 200 against. I'd put my odds at around 10-15%.
Over the past few years, AI has been developing at an insane rate that probably ought to concern everybody. But until just recently, only nerds really cared about it. Now artists and some other people care about it, but people still really only care about a specific aspect of it in a specific context, and there doesn't seem to be much political thrust.
Crypto is another thing that only a niche group of people really care about. A lot of people completely ignore it, a lot of people hate it with a passion, and a lot of people love it with a passion. But despite being about money, being extremely polarized, and implying a wide gap between the haves and have-nots, crypto seems to barely be a political issue. Policymakers are either too old to understand tech, or too middle-of-the-road to get caught up in the polarization and make any decision but the obvious one (regulate it sensibly, without trying to either outright kill it or hype it up). There's just no political drama in crypto right now, and crypto has been around for about 5 years longer than the modern AI boom.
Abortion policy is by far the biggest political issue in America, period. I think it's safe to say it has by far the most single-issue voters on both sides; it is almost singlehandedly responsible for the clownification of the Supreme Court, once seen as the least partisan and most respectable branch; and it is almost 100% polarized, with almost no room for anyone to express any sort of middle-ground opinion or propose some form of compromise. Guns and immigration are both huge political issues in the USA, and I don't think either comes close to abortion in any of these departments. Automated cars are a big deal for society - potentially saving over a million lives a year (globally) if they're implemented well, potentially killing a lot of people if implemented poorly - but they come nowhere close to the level of attention that abortion gets in politics.
tl;dr - there's a good chance AI will be enormously impactful in 2028, but that doesn't necessarily translate to being enormously political.
@Nick5bab well I'm convinced (unfortunately this doesn't close until 2028, but hopefully this market is liquid enough to let me partly cash out before that)
I wasn't expecting covid masks and vaccinations to get super-politicized in March 2020, but it happened. If AI does get politicized, it'll probably be along the lines of AI making accurate predictions about people and leftists complaining about disparate impact or self-fulfilling prophecies. Current battles over affirmative action will spill over into AI that is used in any sort of gatekeeper role.

@BTE eh, can also be rephrased as: a topic that impacts almost everyone's livelihood vs a topic that impacts half the population

@Dreamingpast Are you serious?! LMAO. I don't know if there are words that capture the ignorance and naivety in your statement. Lack of abortion rights only impacts women, you say??? Clearly you don't know any unwanted children, but let me tell you, they can be both girls AND BOYS! You have also clearly never gotten someone pregnant on a one-night stand. Or had a daughter who was forced to have a baby with her high school crush that she never talks to again, leaving the responsibility for helping take care of the child probably to her parents, very often one of whom is, you guessed it, A MAN!! Come on dude.
@BTE The question is about comparison, no? I am all for making sure abortion is not only legal but supported by a proper system with proper care and aftercare. All the examples you give are about the importance of abortion rights, which I obviously agree with. But that doesn't mean that ... AI will not be an important issue politically. Obviously abortion rights are extremely important for both men and women, but they don't have a monopoly on being a politically important issue.

@Dreamingpast I did not say AI won't be an important political issue. I absolutely think it will be a major issue. AI policy happens to be a big part of what I do for a living, so I get it, but I also understand very well that the biggest risks today are by far not posed by the AI itself but rather by the people designing and deploying it. The hypothetical issues that will arise from future, more capable AI should not and will not be as important as women's rights for a long time. There just isn't any there there yet.

@BTE There are lots of issues that are more politically important than they are "objectively" important. Like, "who will be the 2024 Republican nominee" compared to "how much are we spending on mosquito nets in sub-Saharan Africa".
Abortion and AI aren't like those extreme examples, because abortion and AI are both politically and "objectively" important, but you can't use arguments about objective importance to predict political importance. Or you can, but it's a very limited approach.
Put another way, political importance is about how much the public cares, not just about direct impact.
@BTE For sure. Politicians love talking about jobs, and if there's a significant economic change in the next 5 years due to AI (even a speculative one), then that'll be all that's talked about for a while. You make a good point about hypothetical issues vs. real issues, but the problem is that just because an issue is important doesn't mean it's politically popular, and vice versa.

@Dreamingpast AI, like every other technology paradigm before it, will create tens of millions MORE jobs than it kills. I guess I am assuming you are anticipating some sort of panic about economic displacement, so I apologize if I misunderstood. I don't think we disagree fundamentally; it's just that I think you are assuming politics happens at the pace of technology, or even business, and there is literally zero evidence of that being the case.
@BTE Yeah, that's exactly what I mean - though eventually we'll see an emerging market, initially in the next few years there'll be a rush of people panicking and prompting politicians to talk all they can to soothe the public and make promises about jobs arising from tech and AI. And you're right about politics and adoption happening at a much slower pace, so I am not confident about this timeframe, though we did see smartphone adoption happen within 15 years (0 -> 85+% of the world).
@Fion Exactly. Even in 2022, "the economy/inflation" beat abortion in pretty much every poll, and by a big margin too. Quite a stretch to take that bit of information and go "People think money is more important than Women". And that is the best-case scenario; in most previous elections abortion was never ranked as a top issue, and when it was, it was largely by people against it.


@BTE I feel like your comments neglect the fact that many of the most prominent political issues are not at all the most important, yet they are popular because they create anger or fear.
@BTE
"AI like every other technology paradigm before it will create tens of millions MORE jobs than it kills"
I... think you could benefit from Scott's piece on technological unemployment and underemployment. In particular, from its very beginning: these arguments work right up until they don't, and horses already learned that a century ago: the steam engine actually increased demand for them, but the internal combustion engine didn't. And from its very end:
> This is a very depressing conclusion. If technology didn’t cause problems, that would be great. If technology made lots of people unemployed, that would be hard to miss, and the government might eventually be willing to subsidize something like a universal basic income. But we won’t get that. We’ll just get people being pushed into worse and worse jobs, in a way that does not inspire widespread sympathy or collective action. The prospect of educational, social, or political intervention remains murky.

@b575 Wow, what an exciting future. Fortunately, there is the disclaimer in the first sentence that he doesn't know what he is talking about.
@BTE Thank you for expressing your opinion on this prediction market, I guess we'll see who's right in 2028.