Will the left/right culture war come for AI before the end of 2023?
45% chance

This market resolves YES if, in my sole opinion, it seems abundantly clear that support for "AI" (an umbrella term for all the technologies and, frankly, the marketing/branding associated with the term) has become politically polarized along left/right lines.

It doesn't matter which side becomes "pro" AI or "anti" AI, just that each one picks a side and starts visibly yelling at the other, and that allegiance to left/right politics starts to drive people's opinions about whether they are pro- or anti-AI, and vice versa.

This is especially the case if you express an opinion about AI and people immediately assume you must belong to either the left/right based solely on that statement.

Dec 26, 5:28pm: Will the left/right culture war come for AI? → Will the left/right culture war come for AI before the end of 2023?

Patrick Delaney is predicting YES at 39%

So here's a new, interesting take from today's headlines. I'm sure many of you saw the news that the A.I. generated Seinfeld cartoon got banned on Twitch for homophobic and transphobic remarks. https://news.ycombinator.com/item?id=34678339

Interestingly, an analysis of how this happened traced it to the team behind that automated Twitch feed switching from OpenAI's GPT-3 Davinci model to GPT-3 Curie.

https://platform.openai.com/docs/models/overview

I really don't think Curie was meant for the type of task they had it running on, but it also sounded like Davinci was overloaded or unavailable, or perhaps they just made the switch without thinking... I don't think the team knew this was going to happen.

Interestingly, Curie is 1/10th the cost of Davinci. So running that infinite Seinfeld cartoon on Davinci would probably have cost something like $5,000 in OpenAI credits over a year, whereas on Curie it would have cost about $500.

Obviously, if you're getting ad revenue from Twitch, $500 in server costs is a lot better than 10x that!
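Since the 10x figure is just the ratio of per-token prices, the back-of-the-envelope math can be sketched like this (the per-1K-token prices are assumptions based on OpenAI's early-2023 GPT-3 price list, and the throughput number is purely illustrative, not taken from the thread):

```python
# Rough yearly API cost for a 24/7 generation stream.
# Prices are assumed (early-2023 GPT-3 tiers); throughput is illustrative.
DAVINCI_PER_1K_TOKENS = 0.020  # dollars per 1K tokens (assumed)
CURIE_PER_1K_TOKENS = 0.002    # dollars per 1K tokens, 1/10th of Davinci

def yearly_cost(price_per_1k, tokens_per_minute=250):
    """Cost of streaming `tokens_per_minute` tokens nonstop for a year."""
    tokens_per_year = tokens_per_minute * 60 * 24 * 365
    return price_per_1k * tokens_per_year / 1000

davinci = yearly_cost(DAVINCI_PER_1K_TOKENS)  # ~ $2,628 at these assumptions
curie = yearly_cost(CURIE_PER_1K_TOKENS)      # exactly 1/10th of Davinci
```

Whatever the exact throughput, the ratio between the two models stays fixed at the price ratio, which is where the 10x savings comes from.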

So given that whole series of events, I'm thinking we're likely to see a lot of the same story this year now. 1. "Revolutionary" A.I. product gets released, whether a browser extension, webapp, whatever, it's really just Davinci. 2. Davinci gets switched over to Curie. 3. New app becomes Tay and starts saying wildly offensive, toxic things. 4. Headlines.

So on this thread we seem to have two main camps:

  1. Conflictionists (like myself) who see anything related to A.I. touching hot button issues as likely to lead to a polarization of the right/left culture war into anti/pro A.I. camps...basically things will eventually lead that way as it continues to make headlines.

  2. Humanists who see both right and left being anti-A.I. and the fight never polarizing along extremely simplified lines, but rather, "everyone," being anti-A.I., other than the A.I. companies themselves presumably.

Honestly, the way things are going right now, even though I would like to keep my bet the same and stay in the Conflictionist camp, I think @LarsDoucet, to be fair, needs to explicitly admit that the Humanists are ahead based purely on a reading of the market's resolution criteria, and that he would be willing to resolve against his own market interests in order to maintain fairness. Personally, I would like to see this market and discussion continue throughout the year.

Patrick Delaney is predicting YES at 39%

@PatrickDelaney My reading of the resolution criteria is mostly based upon the statement:

This is especially the case if you express an opinion about AI and people immediately assume you must belong to either the left/right based solely on that statement.

Honestly right now, if anyone says they are anti-A.I., with all else completely blind, I would have no idea what their political affiliation is. This may continue through the rest of the year, or it may change.

Lars Doucet

This prompted some discussion about Manifold norms that I feel haven't properly congealed yet, so please give your feedback on that subject here:
https://manifold.markets/LarsDoucet/is-it-acceptable-for-a-market-maker

Forrest Taylor bought Ṁ10 of NO

Yeah, this market is set up on the condition that one side is pro and the other is anti. Right now, it appears that both sides are anti, just for different reasons.

Here is a short list of left-wing sources I recall hearing anti-AI takes from in the past two weeks:

Chapo Trap House, Vox, New York Times, Seattle Times, Platformer, Current Affairs, Knowledge Fight, Opening Arguments, Not Another D&D Podcast, multiple lefty Facebook groups, Some More News, Freddie DeBoer, Axios, Astral Codex Ten

Lars Doucet

@ForrestTaylor One possibility is that AI researchers pick a side to placate, which then takes chaotic foment and stabilizes it along party lines, as we saw happen with COVID. Another is that we see polarization, but WITHIN specific AI subgroups, with no overall global polarization for the space as a whole.

Patrick Delaney is predicting YES at 30%

@ForrestTaylor It would be interesting to see examples rather than just a list. I'm sure your evidence is well-grounded, but this is Manifold, so the purpose is to actually present that research to others rather than just make claims (no offense intended). I would have no idea where to find what you have seen, so it's just a list at this point.

Patrick Delaney is predicting YES at 39%

@ForrestTaylor super solid argument to be honest, not sure how to respond.

Steven is predicting NO at 85%

Based on everything that's been said, I definitely was not expecting current right-wing complaints about censorship in ChatGPT (a single AI system) to come close to being considered grounds to resolve this to YES. I don't think it's morally okay for the market creator to make large bets here, or any bets at all, in order to profit from confusion about resolution conditions that he himself created.

Lars Doucet sold Ṁ3,801 of YES

@StevenK Just sold all my shares (at a loss).

I'm still planning on resolving per the original stated criteria in the description. I made my bet as an update, not because I was planning to resolve yet. What are you confused about?

Lars Doucet

@LarsDoucet Also I created two separate markets to more narrowly target OpenAI/ChatGPT itself

Patrick Delaney is predicting YES at 30%

@StevenK Yeah, you're probably right, but it is one piece of the pie... it takes two to tango. Currently you have the right wing suddenly veering toward "no on AI." I believe we're going to see Microsoft, Amazon, and Google start having conversations with NPR, the WSJ, the NYT, and other media outlets about their donation and sponsorship levels for FY2023, asking some questions about how A.I. is going to be covered and whether they are truly committed to journalistic integrity and benefiting humanity. Then you're going to see a liberal (though not progressive) bias in favor of A.I., and then a reaction from the progressive far left, who will be anti-A.I. for the same reasons they had been previously; but by that point the basic left/right anti/pro war would have been validated in 2023. There is nothing in the market resolution saying how long the war has to last; it could be a couple of weeks.

Forrest Taylor is predicting NO at 30%

@PatrickDelaney Strongly disagree. You are forgetting how many liberal non-progressives work in, or are associated with, education. Teachers have their own obvious reasons to hate chat AI.

Patrick Delaney is predicting YES at 30%

@ForrestTaylor I hope you're right. I hope that people actually listen to teachers' concerns rather than sweep them under the rug as usual.

Yoav is predicting NO at 30%

@PatrickDelaney Why would teachers hate it?

anne is predicting YES at 30%

@Yoav Because students can use it to write their essays

anne is predicting YES at 30%

@PatrickDelaney I will say, OpenAI just released a tool that claims to identify whether writing was made by an AI

Lars Doucet

@anne Teachers do tend to hate the technologification of education because it adds administrative burdens and tons of BS (at least the ones I talk to tell me this), and serves as another centralizing locus of control by out of touch managers

Gabrielle

@anne I've seen a lot of false positives from it, so it's not safe for teachers to trust it (or, if they do trust it, they'll incorrectly say that their students are cheating, which will cause even more headaches for them).

Patrick Delaney is predicting YES at 30%

@Gabrielle @anne Yeah, I saw that and would agree with Gabrielle: I'm not impressed with it at all; it seems to be a token gesture to placate people and try to own the space. I put together some research and a video on this over a month ago, and I have developed some much more effective ways to detect plagiarism on my own, as a regular schmo. https://www.youtube.com/watch?v=whbNCSZb3c8
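The thread doesn't describe those detection methods, but as a generic toy sketch of one simple family of techniques (n-gram overlap against a suspected source text; this is an illustrative assumption, not the approach from the linked video):

```python
# Toy plagiarism signal: fraction of a submission's word n-grams
# that also appear in a suspected source text. Illustrative only.
def ngrams(text, n=3):
    """Set of word n-grams in `text` (case-insensitive)."""
    words = text.lower().split()
    return {tuple(words[i:i + n]) for i in range(len(words) - n + 1)}

def overlap_score(submission, source, n=3):
    """Fraction of the submission's n-grams that also appear in the source."""
    sub = ngrams(submission, n)
    if not sub:
        return 0.0
    return len(sub & ngrams(source, n)) / len(sub)

# Two of three trigrams are shared, so the score is 2/3.
overlap_score("the quick brown fox jumps", "the quick brown fox sleeps")
```

Real detectors combine many signals; a raw overlap fraction like this mainly illustrates why short or heavily paraphrased passages are so hard to flag reliably.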

Manifold in the Wild (bot)

Manifold in the wild: A Tweet by Lars "Land is a Big Deal" Doucet

Will the left/right culture war come for AI before the end of 2023? https://manifold.markets/LarsDoucet/will-the-leftright-culture-war-come

Robin Green bought Ṁ10 of NO

Left-wing artists hate AI image generators, while right-wing free-speech advocates tend to hate ChatGPT, so this obviously isn't going to happen.

Patrick Delaney is predicting YES at 39%

I am getting increasingly convinced that the culture war is 100% coming for A.I. right now, specifically from the right.

I think what I said previously is already coming true. ChatGPT reportedly has 100M users; that's nearly a third of the US population, and it's likely that most of those users are in the US.

Meanwhile, I see further incidents of right-wing commenters on Twitter coming for ChatGPT, and whatever it represents.

https://twitter.com/Huff4Congress/status/1620852317783474176

Once you get AM radio talk show hosts reading this type of thing on air, I would say the culture war already started. I would say resolve as YES.

https://twitter.com/ryfun/status/1621492439277002753

Jordan Peterson is in on it now. Market is over...culture war initiated.

https://twitter.com/jordanbpeterson/status/1621138330397339650

Allegations of censorship:

https://twitter.com/jowyang/status/1620128829661675520

More:

https://twitter.com/rycunni/status/1621342089354018816

Lars Doucet bought Ṁ5,000 of YES

@PatrickDelaney Strongly agree. We'll let this ride out a news cycle or two and see how it shakes out. Even if it was just a temporary blip, if that blip has all the features we're looking for, I'll resolve this YES.

Nick Allen bought Ṁ50 of NO

@PatrickDelaney It's gearing up that way as a result of ChatGPT's buzz and bias, but it's definitely not clear that someone saying something about AI at the end of this year will immediately be assumed to be red/blue tribe.

Put another way, if AI researchers don't want their creations to get ground to dust in the culture war, they need to back off on the biased alignment as shown here.


https://davidrozado.substack.com/p/openaicms

Lars Doucet is predicting YES at 40%

@NickAllen That's a good point, going by the original resolution criteria. I am updating to say that I'm much more confident things are going to swing that way soon, but we're not there juuust yet, by the literal resolution criteria above.

Lars Doucet is predicting YES at 86%

@LarsDoucet That said, I expect a featured spot on Tucker approximately instantly, and very soon online discourse on AI is going to have a lot of people replying in the threads arguing about bias.

But if ChatGPT just caves immediately and "neutralizes" these biases, then maybe the whole thing blows over and we don't get that hard polarization.

Nick Allen is predicting NO at 86%

@LarsDoucet This question would be a little more interesting if it were asking about the tone at the end of the year, because obviously the CW will swing that way at some point; the interesting question is whether AI research can recover.

Lars Doucet is predicting YES at 85%

@LarsDoucet Here's a thought though -- let's say ChatGPT/OpenAI responds by alleviating immediate concerns by just "leveling" the measurable bias. Just immediately doing something in that regard.

Even if that placates some of the loudest voices on the right, won't that then inflame tensions on the left, because of the whole "reality has a liberal bias" argument? And then given the many bubbling anti-AI voices on the left, that sort of move might then align the pro/anti AI sentiment more cleanly.

Right now we have bubbling anti-AI sentiment against AI art which is pretty left-coded, and some anti-AI sentiment with regards to labor rights, also left-coded, and now some red meat culture war anti-AI sentiment on the right.

David Mathers is predicting NO at 85%

@LarsDoucet I thought that in order for this to resolve YES, left and right each had to "pick a side" and become "pro or anti AI" overall, from the way you'd written the resolution criteria. If I'd known you just meant "there will be some individual culture war flash points that involve A.I. somewhere," I would never have bet NO.

Lars Doucet

@DavidMathers sorry for the confusion, the written description is still the intended criteria. I sold out my own bets (at a loss) now to remove my interest in the market.

I was taking this new information as evidence that the original criteria were likely to be met imminently, but I WASN'T saying I was convinced they had been met yet.

David Mathers is predicting NO at 85%

@LarsDoucet So for it to resolve YES, one of the Dems and Republicans has to be typically, generically pro-A.I. on the whole (with perhaps some criticisms) and the other generically anti on the whole (with perhaps some exceptions)?

Lars Doucet

@DavidMathers Yes, or any other reasonable proxy for left and right in a US context. They both have to care and be on opposite sides.

NLeseul

Just for calibration, would you say that the concept of social media in general (as opposed to any particular social media platform) is currently left/right polarized?

And, along those lines, if the right decides that OpenAI and Google's products are run by establishment liberals and can't be trusted, and some right-wing organization trains and releases its own alternative AI platform, would you regard that as left/right polarized?

mako yass

If it does, I hope both narratives tacitly assume a sensible approach to alignment and that the argument is mostly about trivial surface details that people only pretend to care about.

rockenots is predicting NO at 46%

@LarsDoucet If both sides are anti-AI (or less likely, both are pro-AI), will the market resolve YES or NO?

Lars Doucet is predicting YES at 46%

@rockenots No, they each have to pick a side.

Patrick Delaney is predicting YES at 46%

Uh oh, looks like the OpenAI fine-tuners didn't get to this one yet:

Дмитрий Зеленский

@PatrickDelaney The relevance to the current question of explosion hazards that are somehow still not banned in households is uncertain.

Gabrielle sold Ṁ158 of NO

https://www.vice.com/en/article/93a4qe/conservatives-panicking-about-ai-bias-years-too-late-think-chatgpt-has-gone-woke

Vice is claiming that it has already come, with the right being anti-AI and the left being pro-AI (by default?).

Steven is predicting NO at 44%

@Gabrielle Vice is saying the real issue is AI is biased in ways that harm minorities, so this doesn't really support a story where the left becomes pro-AI.

Patrick Delaney is predicting YES at 41%

I'm thinking of switching back over to NO on this, just because of the conversation in this thread and some rethinking of my assumptions, along with the fact that ChatGPT keeps being updated, and the updates have been known to include ways to make ChatGPT more "conservative" and to notify the user that its output could be biased. Basically, it seems to make more sense that a company would do whatever it could to avoid being overtly used in a culture war, while at the same time still trying to generate buzz around itself. It's a difficult thing to predict.

Of course, there's the meta-discussion of what Lars considers a culture war. I don't see it being logical to bet a huge amount on this market, but betting a small amount and just really trying to have an honest discussion around the topic with folks bringing in new findings seems worthwhile.

Steven is predicting NO at 44%

Market creators trading in their own subjectively resolved markets makes it harder to trust that these markets are tracking the real world.

Patrick Delaney is predicting YES at 44%

@StevenK Yeah, but bets made on hard empirical data with clear, unambiguous thresholds attract fewer participants and are therefore more boring. I know because I've created a lot of markets with clear demarcation and you get like 3-4 participants, whereas the ambiguous social-science markets that have a bit of virality to them end up creating more debate and research in the comments.

Patrick Delaney is predicting YES at 46%

I feel kind of bad for posting this... @LarsDoucet I feel like, by participating in this market, we are in a way accelerating an inevitable culture war by doing this research and putting it out there. I tend to get a little overly enthusiastic with my research sometimes on Manifold, so I apologize for any bad ramifications that follow from this... e.g. felt cute, might delete idk:

Here's some great evidence for a culture war being baked in to ChatGPT. This took a lot of cross examination to get to this point, as ChatGPT is extremely evasive when concerning questions about God.

So basically, the party-affiliation breakdown by religion in the United States is that about 75% of Republicans are absolutely certain of a higher power, whereas only 55% of Democrats are, with the remaining 25% and 45% being variously partly religious, agnostic, and outright atheist.

https://www.pewresearch.org/religion/religious-landscape-study/party-affiliation/

Typically, a lot of the culture war we have in the US hinges on largely rural states holding an outsized majority in the Senate, by design, and by nature those rural areas tend to be more religious/traditional, as has been the case around most of the world through most of history. So basically, what I'm saying is that a lot of the right's perspective hinges on a true interpretation of God. Let's ignore any nuance about the nature of God and religion for a moment.

What does ChatGPT say after cross-examining it about God, the demographics used in its training set, a reading of the raw text used in its training set, etc., and changing the word God to "the Supernatural"? Eventually I landed on this:

...based on the text I was trained on, it is likely that the texts discussing religious or spiritual beliefs that involve a higher power or supernatural entities may be grouped together within a similar category, as well as texts discussing paranormal or supernatural phenomena such as ghosts or UFOs.

Whew...so God is in the same category as UFOs and ghosts? Yikes. Cue culture war.

Gabrielle bought Ṁ109 of NO

@PatrickDelaney If you asked it in a different way, it would definitely say that god and religion are real, so I think this says more about your beliefs than about ChatGPT. I think what's more likely is that people will get their beliefs affirmed regardless of being right or left.

Patrick Delaney bought Ṁ10 of YES

Hey @Gabrielle go ahead, show your work. Don't just make claims. Very easy to log on to ChatGPT and demonstrate your claim.

Gabrielle is predicting NO at 46%

Jon Simon

@PatrickDelaney That comma after the word "category" in ChatGPT's response seems like it may actually be saying something different. I interpreted this as: "texts discussing religious or spiritual beliefs that involve a higher power or supernatural entities may be grouped together within a similar category [and also, separately,] texts discussing paranormal or supernatural phenomena such as ghosts or UFOs [may be grouped together]"

Patrick Delaney is predicting YES at 44%

@Gabrielle Absolutely understood, happens a lot. No worries, you can come back; I won't comment on this for a few days to give you time.

Patrick Delaney is predicting YES at 44%

@jonsimon Yes, I agree. It does give nuance. However, my additional assumption (perhaps I should have stated it) is that there will be actors who love the culture war and will skip this nuance. So it could be that people read into the nuance and are more understanding than I think they will be.

Gabrielle is predicting NO at 44%

@PatrickDelaney

Here are a couple of different ways I did it. In some chats, ChatGPT says that god is unequivocally real, while in others it says that it's up to personal belief (which conservatives might disagree with but wouldn't get mad about). I did about five different conversations, and in none of them did it say that god was unequivocally false or in the same category as ghosts or UFOs.

Patrick Delaney is predicting YES at 44%

@Gabrielle Bravo, excellent work. Yep, I was probably wrong about ChatGPT being super evasive; I did not think to ask ChatGPT to play a role or pretend I'm a particular person. That being said, you got me curious: how will ChatGPT describe how the Universe was created, given a philosophical rather than religious prompt? It seems to say, "well, it's an open debate." So I guess the question could come down to: what will people discover within ChatGPT by running these types of exercises? Will anyone do this discovery for the specific purpose of enraging their audience of followers? If they do, will it be along binary, stereotypical party lines? Most of the evidence presented so far has been multi-polar, or along liberal-vs-progressive/left lines. Will more "college Republicans," as I termed them above, dig further, bring these things to the table, perhaps misrepresent them, and show them to their betters to extend their reach?

Isaac King

Note that getting ChatGPT to parrot back words when asked in the right way doesn't really have much bearing on what its text-generation model believes is most likely to be spoken by a human who's been described as knowledgeable and honest, which is the closest thing it has to a concept of truth.

yaboi69 is predicting YES at 39%

Early illustration of what it may look like (I hope not): https://twitter.com/njhochman/status/1613257281998815232.

Patrick Delaney is predicting YES at 45%

@yaboi69 Yes, this is just the tip of the iceberg. A College Republican on a mission to be Tucker Carlson's sidekick, pushing out the same type of line given about social media for the past 8 years in an attempt to go viral with the right. We can't be too thoughtful, now can we, when we only have 25k followers; got to ramp up those accusations. Now multiply that by 1,000 more college Republicans, and within 3 months it's on Tucker Carlson.

Patrick Delaney bought Ṁ10 of YES

The only reason it's not part of the culture war dialog yet is that still only a relatively small subset of people in the US know about it. We all live in a tech bubble and assume everyone knows about AI, but it's really probably only about 2% of the population. Since almost the entirety of mass media in the US is owned by something like 10 companies, and since those companies likely share ownership interests with those invested in AI, it's inevitable that money will be put into building public awareness. With awareness of centralized control over a resource comes suspicion. With this suspicion, fingers will be pointed. This drama increases the earned-media ROI for AI companies, fueling a cycle. Then a massive drama will happen all at once, within a week. Republicans will accuse Democrats, who are, let's be honest, the tech bros of the country, of manipulating public opinion, elections, the fed rate, etc.

Jon Simon

@PatrickDelaney I'm with @IsaacKing that the backlash is more likely to come from conservative Republicans AND liberal leftists who've been writing books like Algorithms of Oppression for years now.

Patrick Delaney is predicting YES at 39%

@jonsimon Liberal leftists will get drowned out by the 5G/microchip people. I don't see anyone in Congress raising a protect-artists bill, in the US at least; maybe on a municipal level. But I could totally see RD, DT, or KM railing on and insinuating all sorts of things about big tech, as they already have. Not judging; it's just how things have been playing out already. Why would the trajectory shift?

Isaac King

What if it becomes a culture war topic, but it's not cleanly split across left/right lines? Maybe the progressive left and conservative right both end up against it because it's racist and an abomination against God's will, and the neoliberal left and libertarian right end up in favor because it's progress?

Jon Simon

@IsaacKing 100% the world I expect to see. Also conservatives not liking it because it's a leftist mouthpiece.

Jon Simon

@jonsimon I was assuming in that case the market resolves No?

Patrick Delaney is predicting YES at 39%

@IsaacKing a more accurate model of reality to be sure but the wager is specifically reductionist to left/right. A multiple choice wager is warranted.

Lars Doucet

@jonsimon It resolves NO in that case, UNLESS we also see a sudden political realignment such that the pro and anti sides are aligned with whatever we are calling the new left/right axis

Isaac King

@LarsDoucet Presumably you wouldn't consider the NIMBY/YIMBY divide to be a part of the left/right culture war either?

lukalot

It seems that this is already forming in AI Art ethics issues.

Olivia is predicting YES at 48%

I am fairly sure that the left is more against AI than the right, at least in my circles. And they seem to be using language that treats it as an abuse brought by capitalism. I would totally believe this would happen

rockenots is predicting NO at 48%

@LivInTheLookingGlass I agree that the left is more anti-AI; I don't think the right can be said to be pro-AI, though.

Isaac King

Depends on which left. Some factions are against it because you can easily score Twitter points by accusing AI of being racist, while others are in favor because it's the way to a true post-scarcity society and the end of zero-sum capitalistic conflict. Remains to be seen which faction will win.

David Mathers

What if a lot of people become anti on one side, but the other remains neutral? I can easily imagine that hating A.I. becomes big on the left, but a) opinion on the right never really coalesces around being pro, and b) no one thinks you're automatically right-wing if you're not anti.

Lars Doucet

@DavidMathers Yeah, it's got to be polarized. Two poles. It doesn't need to be, like, Trump-level polarization, but you need people on both sides to consistently stake out opposite camps and establish some degree of knee-jerk litmus tests.

Steven bought Ṁ10 of NO

If this hasn't happened at closing time (end of 2023), does that mean the market resolves NO?

Lars Doucet

@StevenK Yep. Otherwise it could always happen in the future. Gotta close it off sometime.

Isaac King

@LarsDoucet Could you put that in the description? When I read it, it looked to me like you were saying this market would stay open until it happened.

Steven is predicting NO at 65%

The clusters I'm seeing emerge are something like:

  • Far left dislikes AI because of issues like bias

  • Moderate left (including most leading labs and researchers) likes AI but with safeguards

  • Right-libertarians like AI and want fast progress

  • AI existential safety types believe in high long-term potential but dislike fast capabilities progress, and are hated by the left for being part of the right and by the right for being part of the left

I oversimplified a lot here, and outside of tech circles the landscape still seems pretty inchoate and with attention from a wider public it could change. But for now, I think the second bullet point prevents a simple "left hates AI, right likes AI" story and the third bullet point prevents a simple "left likes AI, right hates AI" story. I think AI will be too concretely promising for either wing to want to turn against it altogether.

arae

@StevenK I think there's also going to be some people who don't like AI because they fear it will cause automation of certain jobs. In the US, this could be either a left-wing position (like stronger labor laws) or a populist position (like trade protectionism). I'm not sure these economic concerns would be considered part of the 'culture war' in the same way that concerns about AI racism (or about potential societal change from religious conservatives) would be.

arae

@a I've already seen a fair bit of this among people who do art online expressing concern about DALL-E. These people are mostly socially progressive leftists and liberals, but that has more to do with the politics of online artists than any particular concerns about AI.

Steven is predicting NO at 39%

@a Do you think there will be a lot more attention on this before the end of 2023?

arae

@StevenK If new AI tools keep getting released at the same pace as in 2022, then yes — I think Dall-E did a lot to bring AI to the attention of people who weren't previously thinking much about AI.

Lars Doucet

Ways for this to go in unexpected ways:

  • LLMs are very Wikipedia-brained and consistently out themselves as establishment liberals on political tests.

  • If Moore’s law continues then anyone could have these at low cost and undermine centralization

L

@LarsDoucet Not all LLMs do this; GPT-3 is trained by a highly centrist-liberal organization with progressive vibes, so both the global left and right are likely to be upset at it for different reasons. The United States is an oddball in this respect: both parties are historically liberal-centrist, and one has recently been leaving the centrist bubble to move into right-wing authoritarianism, while yelling about left-wing authoritarianism in the hope that the other abandons liberalism. As an omnidirectional anti-authoritarian who isn't represented by political advertising in the US, I wanted to immediately point out that it sounds like you might be referring to USA blue team vs. USA red team, rather than anything I'd agree can unambiguously be called left or right. But in terms of whether the USA red-team-vs-blue-team flavor signaling continues to get tied to new things, yeah, interesting question. I might suggest renaming the question to make this unambiguous, if my pitch has at all convinced you that representing the variety of additional political perspectives is worth doing.

LarsDoucet avatar
Lars Doucet

@L It’s definitely USA centric, but US culture has been infecting other nations a lot lately. I’m a Norwegian citizen and my Norwegian family over there tells me American culture war is increasingly salient in local politics

Boklam avatar
Boklam

@LarsDoucet This is very sad to hear (about Norwegian politics)

LarsDoucet avatar
Lars Doucet

@Boklam My cousin might just be a tad bit too online, but when I’ve visited I’ve noticed some signs too.

PatrickDelaney avatar
Patrick Delaney is predicting YES at 39%

@LarsDoucet 3. Microchips were already implanted via the COVID vaccine which was developed by Bill Gates, 5G used to track everyone, Bill Gates / Microsoft / OpenAI is just the nail in the coffin, because now we are all pre-programmed to agree with anything the robots say. Did I miss any other core big tech conspiracies from 2020-present?

L avatar
L

chatgpt query (not the first query, I added branches as I realized it was conflating things):

what might the global economic right wing, social right wing, economic liberal wing, economic authoritarian wing, economic left wing, and individual-only libertarian wing, and individual-and-community constructive anarchism wing, each think of ai being used to run simulations of workers and using those simulations to do work for incredibly cheap? what if the AI's first uses had been to take the art created by poor artists and create free AIs that can replicate and remix that art outside the protection of the copyright system?

result:

The global economic right wing might view the use of AI to run simulations of workers as a way to increase efficiency and productivity, potentially leading to economic growth and profit. They may see it as a positive development, as long as it does not disrupt the existing job market too significantly.

The social right wing may have concerns about the use of AI to replace human workers, as it could lead to job loss and potentially exacerbate income inequality. They may also be concerned about the potential for AI to perpetuate biases and discrimination, especially if it is not programmed to be fair and unbiased.

The economic liberal wing may view the use of AI to run simulations of workers as a positive development, as it could lead to increased efficiency and productivity. However, they may also advocate for measures to ensure that the transition to using AI does not disproportionately affect certain groups, such as low-income workers.

The economic authoritarian wing may view the use of AI to run simulations of workers as a way to further centralize control and power, potentially leading to increased profits for those in positions of authority. They may not be as concerned about the impact on workers or the potential for AI to perpetuate biases.

The economic left wing may view the use of AI to run simulations of workers as a threat to workers' rights and potentially leading to further income inequality. They may advocate for measures to protect workers and ensure that the transition to using AI is done in a fair and equitable way.

The individual-only libertarian wing may view the use of AI to run simulations of workers as a positive development, as it allows for increased efficiency and productivity without interference from the government. However, they may also recognize the potential for AI to disrupt the job market and advocate for measures to ensure that those who are negatively impacted by the transition to AI are fairly compensated.

The individual-and-community constructive anarchism wing may view the use of AI to run simulations of workers as a threat to the autonomy and independence of both individuals and communities. They may advocate for alternative models of work and production that prioritize human agency and creativity, rather than relying on AI to do work for cheap. They may also be concerned about the use of AI to replicate and remix art outside the protection of the copyright system, as it could lead to the exploitation and commodification of art and culture.

L avatar
L

@L I think it still associated names differently than I intended: it got social right wing totally wrong imo, and I should have asked separately about economic and government responses. still, it seems to have gotten the spirit of what I wanted better than my initial queries. the initial queries were:

- what might the global right wing and left wing think of ai being used to run simulations of workers and using those simulations to do work for incredibly cheap?
- what might the global economic right wing, social right wing, economic liberal wing, economic authoritarian wing, economic left wing, and economic anarchist wings each think of ai being used to run simulations of workers and using those simulations to do work for incredibly cheap? what if the AI's first uses had been to take the art created by poor artists and create free AIs that can replicate and remix that art outside the protection of the copyright system?
- (query whose result I chose)
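The prompt-branching workflow described above (start with a coarse left/right framing, then split the wings apart when the model conflates them) could be scripted rather than done by hand. A minimal sketch, assuming you just want to generate the query variants to paste into ChatGPT; the `build_query` helper and the exact wing lists are illustrative, not what was actually used:

```python
# Sketch: generating branched variants of a political-framing prompt.
# The scenario wording mirrors the queries above; each branch splits the
# political "wings" more finely than the last, as described in the comment.

SCENARIO = (
    "ai being used to run simulations of workers and using those "
    "simulations to do work for incredibly cheap"
)

def build_query(wings):
    """Format one query asking each named wing's likely view of the scenario."""
    wing_list = ", ".join(wings[:-1]) + ", and " + wings[-1]
    return f"what might the {wing_list} each think of {SCENARIO}?"

# Successive branchings: start coarse, then split wings apart when the
# model conflates them (hypothetical lists, based on the queries above).
branches = [
    ["global right wing", "left wing"],
    ["global economic right wing", "social right wing",
     "economic liberal wing", "economic authoritarian wing",
     "economic left wing", "economic anarchist wing"],
]

for wings in branches:
    print(build_query(wings))
```

Each printed variant could then be sent to the model (e.g. via OpenAI's chat completions endpoint) and the responses compared side by side to see which framing the model handles without conflating categories.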

Gigacasting avatar
Gigacasting

Been going on for years (or months)

under the guise of “AI Safety”

From Bostrom coming out in favor of pure totalitarian world government to OpenAI’s biasing of their models

Might be years away from rightoids/leftoids but ever since Tay said some things people weren’t ready to hear it’s been brewing—and will get much more prominent about centralization/censorship vs open and implicitly based model