In 2028, will AI be at least as big a political issue as abortion?
2028 · 39% chance

I will resolve this based on some combination of how much it gets talked about in elections, how much money goes to interest groups on both topics, and how much of the "political conversation" seems to be about either.

MP

https://www.nytimes.com/2023/11/15/world/americas/argentina-election-ai-milei-massa.html
Related: the Massa campaign used an AI-generated ad to attack Milei for praising Thatcher, Argentina's adversary in the Malvinas/Falklands War.

Walter Rijneveld bought Ṁ10 of YES

By 2028, AI could indeed be as big a political issue as abortion. The rapid advancement of AI technologies is posing critical challenges in areas like employment, privacy, security, and ethics. As AI becomes more integrated into everyday life, its governance and regulation will likely become key political concerns. Issues like AI-driven job displacement, bias in AI algorithms, surveillance, and the ethics of autonomous systems could escalate into major political debates, rivaling the intensity and significance of discussions around abortion.

Daniela Mayorga bought Ṁ10 of NO

I will say NO. Abortion is a subject that goes beyond ordinary knowledge: it encompasses religious, social, and moral contexts. Artificial intelligence, on the other hand, empowers human beings to learn about the world around them. So far, we know that artificial intelligence supports human beings in highly difficult areas, such as medicine.

Amarjit Sen, in his article "The Impact of Artificial Intelligence on Society: Opportunities, Challenges, and Ethical Considerations," mentions that IBM created Watson Health, an AI system developed to help doctors diagnose and treat cancer patients. The system can analyze vast amounts of data from medical records, research studies, and other sources to provide doctors with personalized treatment recommendations.

Of course, all of this depends on the data the system receives, so the real political issues are data privacy and data collection: how we can find unbiased data while properly protecting the individual.

https://linkedin.com/pulse/impact-artificial-intelligence-society-opportunities-challenges-sen/

dionisos predicts YES

@DanielaMayorgab230 So, it seems you will be on the pro-AI side of the political issue :-p

Triv

@DanielaMayorgab230 I dunno, if the speed in AI development continues or even increases, four years from now there very well may be religious, social and moral contexts to seriously consider. This last year in AI has been phenomenal.

森本康仁 bought Ṁ30 of NO

My answer is NO. Abortion and AI pose different kinds of problems in politics: abortion can attract voters' attention when raised as an issue, whereas the risk with AI is that it surpasses human intelligence and takes control of politics itself.

The abortion issue has a huge influence on how people vote: many women care about it, women outnumber men in the United States (167.5 million women to 164.4 million men), and women vote more often than men. In the 2020 presidential election, women made up 52% of the electorate versus 48% for men.

On the other hand, the philosopher Nick Bostrom, a professor at the University of Oxford, believes that if AI's objectives cannot be aligned with human objectives, an AI that exceeds human intelligence will go out of control and wipe out humanity.

In 2022, Det Syntetiske Parti (the Synthetic Party), headed by an artificial intelligence, was founded in Denmark, with the AI chatbot Lars as its leader and an artificial intelligence in charge of policy. In elections, highly refined generative AI could quickly provide the content needed for voting decisions, cover all the necessary information, convey it to voters in an easy-to-understand way, and support voting decisions based not only on policy but also on various values.

If "intelligence" is eroded by artificial intelligence, it is quite possible that the person making important decisions will shift from humans to artificial intelligence.

If artificial intelligence takes over politics and economics, the world ruled by humans will end. So rather than AI becoming a political issue, the structure of politics itself may change.

References:

Brookings. (2022, September 29). The abortion issue in the 2022 midterms: unlike any other issue. https://www.brookings.edu/articles/the-abortion-issue-in-the-2022-midterms-unlike-any-other-issue/

Bostrom, N. (n.d.). Ethical issues in advanced artificial intelligence. https://nickbostrom.com/ethics/ai

Diwakar, A. (2022, August 22). Can an AI-led Danish party usher in an age of algorithmic politics? TRT World. https://www.trtworld.com/magazine/can-an-ai-led-danish-party-usher-in-an-age-of-algorithmic-politics-60008

Benjamin Ikuta predicts NO

Scott, what do you think of, as an indicator of political significance, how voters themselves rank the issue?

Brian T. Edwards predicts NO

@BenjaminIkuta I love her blog. Thanks for sharing.

Benjamin Ikuta predicts NO

"how much money goes to interest groups on both topics"

This seems the most quantifiable. Do we have a measure for this?

Yuxi Liu

@BenjaminIkuta I'm sure the EA folks have a list of AI lobbying organizations and think tanks.

firstuserhere

If we can get a continuum, it would be helpful

Brian T. Edwards bought Ṁ1,000 of NO

Presumably, at least one state will have to try to outlaw AI for this to resolve YES. Though it should probably be more like 15 to 20 states.

Joshua predicts NO

What if anti-AI sentiment is spread out such that 1/3rd the voting population wants to outlaw AI, but no one state has a majority that wants to? I'm a NO voter, but I think that would count. Banning AI in a single US state would be pretty silly anyways, if that state isn't California or Washington. Much more likely for people to lobby for a national ban, ideally an international one.

I don't think it happens until the 2030s or later, though.

Brian T. Edwards bought Ṁ95 of NO

@Joshua Washington DC isn’t a state. But things like self driving cars are already regulated at the state level so that is really the only option.

Joshua predicts NO

I meant Washington State because of Seattle being a tech hub 😅

Brian T. Edwards predicts NO

@Joshua Ha. I should have known.

Frogswap predicts NO

@BTE I don't know, if we start seeing substantial unemployment from AI, it could become the most significant political issue pretty easily. People care about their rights, but they really care about their livelihoods. 2028 might be too short a timeline, but it will get very important very quickly at some point.

Brian T. Edwards predicts NO

@Frogswap I simply don't understand the unemployment argument at all. At some point there is nobody left to buy the things the companies are producing more efficiently with AI. I have never heard anything even close to a good argument for why mass unemployment might happen.

Martin Randall predicts YES

@BTE It's distributional. The rich buy all the things, the poor starve.

Martin Randall predicts YES

@BTE Or, the poor miss out, get jealous, vote.

Brian T. Edwards predicts NO

@MartinRandall Wealth inequality is already at a point where, if that was gonna happen, it would be happening now.

Frogswap predicts NO

@BTE I think the point at which there is nobody left to buy things is way further down the line than the point at which unemployment becomes a major concern. 20% unemployment is catastrophic, but you still have 80% of your customers and a fraction of your costs.

Brian T. Edwards predicts NO

@MartinRandall But again, buying things from yourself isn't going to work. How do you grow wealth if the only source of new growth is the wealthy, who are shrinking in number?

Odoacre predicts NO

@Frogswap

"People care about their rights, but they really care about their livelihoods."

Do you think kids are cheap?

Frogswap predicts NO

@Odoacre That's a fair point, but some rough back-of-the-envelope math ($20k/year to raise a child × 1M abortions per year vs. $40k/year median wage × 160M employed persons per year) suggests that the financial impact of abortion is less than that of a third of a percent of unemployment.
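For concreteness, the envelope math above can be run directly. This is a quick sketch using the comment's rough figures (not precise statistics):

```python
# Frogswap's rough figures from the comment (estimates, not precise statistics).
cost_per_child_per_year = 20_000    # ~$20k/year to raise a child
abortions_per_year = 1_000_000      # ~1M US abortions per year
median_wage = 40_000                # ~$40k/year median wage
employed_persons = 160_000_000      # ~160M employed persons

abortion_impact = cost_per_child_per_year * abortions_per_year  # ~$20B/year
total_wages = median_wage * employed_persons                    # ~$6.4T/year

# What unemployment rate would have the same dollar-per-year impact?
equivalent_unemployment = abortion_impact / total_wages
print(f"{equivalent_unemployment:.2%}")  # about 0.31%, under a third of a percent
```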

Factor in that many (most?) abortions would not be at issue unless the federal gov't takes a pro-life stance instead of remaining neutral, that people will probably have a little less unsafe sex when that option is taken away, that some people abort today because they don't want kids for a few years rather than never, and that illegal abortions could still happen, and the number is lower still (by some amount; my envelope isn't that big).

Brian T. Edwards predicts NO

@Frogswap Lack of access to abortion is a compounding factor. Most people don’t have just one kid. And being unemployed with children you didn’t want is objectively worse than being unemployed with no children at all. So even if AI-driven unemployment is really bad, it will be MUCH WORSE for those who are forced to have children under those circumstances. There is no “I’m poor and jobless” exception to the abortion bans, unfortunately.

Frogswap predicts NO

@BTE Largely agreed, but I'm pretty sure my estimate accounts for people having more than one kid; it just doesn't account for people having more than one abortion in a year.

Mac predicts YES

@BTE This doesn’t make any sense to me

Brian T. Edwards predicts NO

@bigmac What doesn’t?

Mac predicts YES

@BTE The idea that IF AI does drive unemployment, abortion will still be more of an issue because being unemployed with a child is objectively worse than just being unemployed? Have I misunderstood?

Brian T. Edwards predicts NO

@bigmac Yes, but unlike AI, there is much more driving abortion as a political issue than kids simply being expensive to care for. There are states whose abortion bans have essentially no exceptions, which has resulted, and will continue to result, in child rape victims being forced to have babies, or victims of incest, or, most egregiously, women whose life-threatening complications go untreated because physicians have to worry about their own freedom before concerning themselves with their patient. Maybe AI is an economic nightmare, but that is entirely speculative right now; the abortion issue is as viscerally real as any political issue can be, for everyone, rich or poor or unemployed. Plus there are literally decades of political organizing behind the abortion interest groups.

I really want someone to make a serious non-speculative argument for YES so I can take it seriously.

Brian T. Edwards predicts NO

@bigmac I am also very skeptical that people will even blame AI for not having jobs. They are going to blame the humans who decided to choose the AI over them. By and large, if AI has a major positive impact on society generally, it will probably be the first thing someone turns to after losing their job to an AI, because they will be compelled to adapt and use it for their own ends. People blame people, not technology, for their problems.

Frogswap predicts NO

@BTE People totally blame technology for their problems, at least as far back as the industrial revolution. There's already a ton of this in the art community, where the criticism is directed at OpenAI/Stable Diffusion/etc. for using art as training data, rather than at potential employers for using the AI art instead of theirs.

Brian T. Edwards predicts NO

@Frogswap Who is saying we should outlaw AI?

Brian T. Edwards predicts NO

@Frogswap It’s not the AI the artists are mad about, it’s the choice of the AI developers to use their IP to train the AI that makes them mad.

Joshua predicts NO

@BTE The Butlerians are!

Frogswap predicts NO

@BTE I think getting mad at a tech company for using an ostensibly legal process to build a technology that can only be built with that process, and demanding that that process be found illegal, should satisfy your constraints. If not, I don't think I can do better.

Brian T. Edwards predicts NO

@Frogswap What legal process are you talking about?

Frogswap predicts NO

@BTE Training an AI on IP

Brian T. Edwards predicts NO

@Frogswap That is absolutely NOT a legal process. The AI companies are in fact the ones doing what you are suggesting the artists are doing. The established legal process for IP requires licensing and royalties. You will probably remember the same arguments you are making coming from Napster and the others, and in court the industry CRUSHED THEM INTO OBLIVION. So history is with me, my dude.

Frogswap predicts NO

@BTE I mean, it's clearly transformative, and it has not yet been tried in court. I don't see the connection with Napster; that seems like a totally different animal IMO.

On the off chance that it's unintentional, this reads as pretty condescending, and I don't have the wherewithal to keep engaging if that's going to be the norm. Not trying to be the tone police; I've just had enough of those conversations to last a lifetime, and they suck the life out of me.

You can read OpenAI's position, which I find completely convincing, here: https://www.uspto.gov/sites/default/files/documents/OpenAI_RFC-84-FR-58141.pdf

Brian T. Edwards predicts NO

@Frogswap I am not trying to be condescending. Thank you for sharing that. OpenAI has itself very recently changed its position to allow artists to opt out of their IP being used for training. I think that is smart, and hopefully more will follow. A better approach would be to create a dataset from scratch that artists opt in to and then earn royalties from any future monetization of the derivatives.

Frogswap predicts NO

@BTE No worries then! Sorry, I'm maybe a little oversensitive to that stuff.

I think opt-out doesn't offer OpenAI any legal protection as far as copyright goes, so it's probably more a gesture of goodwill. I wouldn't expect them to go any further unless they lose their lawsuits, as that could maybe be used against them in court (I'm not a lawyer).

Mac predicts YES

@BTE As you mentioned, there are very real and visceral problems surrounding abortion, with a long history of political interests. It is already a massive political issue. I don't, however, see it fluctuating much. It's not going away, but it's also not going to become an exponentially larger problem.

You seem to dismiss the AI argument mainly as it’s speculative. I completely agree, it does require a lot of speculation. However, discussing 2028 inherently involves speculation, which is also the point of Manifold. So, I'm going to do some speculating.

If we extrapolate current trends, AI is almost certainly going to cause greater:

  • unemployment (World Economic Forum during its Growth Summit 2023 highlighted that 40% of all working hours could be affected by AI)

  • existential risk fears (historically, fears around nuclear technology during the Cold War era drove significant political action, and similar fears could arise with AI advancements)

  • international competition (the ongoing technological race between the US and China is likely to intensify, stoking nationalism and geopolitical tensions)

  • legal issues (OpenAI is already facing multiple legal challenges)

  • moral/ethical concerns (debates around autonomous vehicles' decision-making in critical situations and military AI applications are ongoing).

There's been some pretty considerable research on the above already. Yes, most of it involves a longer timeframe than four years. But a lot of this research was done before the past year, and I think most would agree things have moved towards a faster take-off.

These issues will impact everyone. Yes, we have no idea of the extent, and it could be massively over-estimated. Do you think these issues are a long way off or unlikely? If so, what would it take to change your mind?

But even if it were solely the issue of unemployment, I struggle to see this not being a huge political issue. Unemployment due to technological progress has a long history of causing political instability and change (the Great Depression, Weimar Germany, Thatcher, the Arab Spring, etc.). I don't see why AI would be any different.

When the majority of the political base in the US is already so angry (plus the above speculation), I can't see this going down well. Add in a Trump-type candidate who stokes the fears and worries of people impacted by the above and runs on the 'evil AI taking our jobs and corrupting the children' ticket, and it suddenly becomes a very big political issue.