This market resolves YES if, in my sole opinion, it seems abundantly clear that support for "AI" (an umbrella term for all the technologies and, frankly, the marketing/branding associated with the term) becomes politically polarized along left/right lines.
It doesn't matter which side becomes "pro" AI or "anti" AI, just that each one picks a side and starts visibly yelling at the other, and that one's allegiance to left/right politics starts to drive people's opinions about whether they are pro or anti AI, and vice versa.
This is especially the case if you express an opinion about AI and people immediately assume you must belong to either the left/right based solely on that statement.
Dec 26, 5:28pm:
Will the left/right culture war come for AI? → Will the left/right culture war come for AI before the end of 2023?
So here's a new, interesting take from today's headlines. I'm sure many of you saw the news that the A.I. generated Seinfeld cartoon got banned on Twitch for homophobic and transphobic remarks. https://news.ycombinator.com/item?id=34678339
Interestingly, an analysis suggested this happened because the team behind that automatic Twitch feed had switched from OpenAI's GPT-3 Davinci model to GPT-3 Curie.
I really don't think Curie was meant for the type of task they had it running on - but it also sounded like Davinci was overloaded or not available, or perhaps they just made the switch without thinking...I don't think the team knew this was going to happen.
Interestingly, Curie is 1/10th the cost of Davinci. So running that infinite Seinfeld cartoon was probably going to cost something like $5,000 in OpenAI credits over a year, whereas on Curie it would have cost around $500.
Obviously if you're getting ad revenue from Twitch, $500 in server costs looks a lot better than 10X that!
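To put rough numbers on that comparison, here's a minimal sketch. The 10x price ratio is the only figure taken from the thread; the per-1K-token prices and the assumed daily token throughput below are illustrative guesses, chosen only so the totals land near the ~$5,000 vs. ~$500 estimates above.

```python
# Back-of-the-envelope API cost comparison for a 24/7 generated stream.
# Prices and throughput are assumptions, not confirmed OpenAI figures.

DAVINCI_PRICE_PER_1K = 0.020  # USD per 1K tokens (assumed)
CURIE_PRICE_PER_1K = 0.002    # 1/10th the Davinci price, per the thread

TOKENS_PER_DAY = 685_000      # assumed throughput for a nonstop stream

def annual_cost(price_per_1k: float, tokens_per_day: float) -> float:
    """Estimate one year of API spend at a constant token throughput."""
    return price_per_1k * (tokens_per_day / 1000) * 365

davinci = annual_cost(DAVINCI_PRICE_PER_1K, TOKENS_PER_DAY)
curie = annual_cost(CURIE_PRICE_PER_1K, TOKENS_PER_DAY)

print(f"Davinci: ${davinci:,.0f}/yr")  # roughly $5,000
print(f"Curie:   ${curie:,.0f}/yr")    # roughly $500
print(f"Ratio:   {davinci / curie:.0f}x")
```

Since cost scales linearly with tokens, the 10x price gap holds at any throughput; only the absolute dollar figures depend on the assumed usage.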
So given that whole series of events, I'm thinking we're likely to see a lot of the same story this year now. 1. "Revolutionary" A.I. product gets released, whether a browser extension, webapp, whatever, it's really just Davinci. 2. Davinci gets switched over to Curie. 3. New app becomes Tay and starts saying wildly offensive, toxic things. 4. Headlines.
So on this thread we seem to have two main camps:
Conflictionists (like myself) who see anything related to A.I. touching hot button issues as likely to lead to a polarization of the right/left culture war into anti/pro A.I. camps...basically things will eventually lead that way as it continues to make headlines.
Humanists who see both right and left being anti-A.I. and the fight never polarizing along extremely simplified lines, but rather, "everyone," being anti-A.I., other than the A.I. companies themselves presumably.
Honestly, the way things are going right now, even though I would like to keep my bet the same and play the Conflictionist camp, I think @LarsDoucet, to be fair, needs to explicitly admit that the Humanists are ahead based purely on a reading of the market resolution criteria, and that he would be willing to resolve against his own market interests in order to maintain fairness in the market. Personally I would like to see this market and discussion continue throughout the year.
@PatrickDelaney My reading of the resolution criteria is mostly based upon the statement:
This is especially the case if you express an opinion about AI and people immediately assume you must belong to either the left/right based solely on that statement.
Honestly right now, if anyone says they are anti-A.I., with all else completely blind, I would have no idea what their political affiliation is. This may continue through the rest of the year, or it may change.
This prompted some discussion about Manifold norms that I feel haven't properly congealed yet, so please give your feedback on that subject here:
Yeah, this market is set up on the condition that one side is pro and the other is anti. Right now, it appears that both sides are anti, just for different reasons.
Here is a short list of left-wing sources I recall hearing anti-AI takes from in the past two weeks:
Chapo Trap House, Vox, New York Times, Seattle Times, Platformer, Current Affairs, Knowledge Fight, Opening Arguments, Not Another D&D Podcast, multiple lefty Facebook groups, Some More News, Freddie DeBoer, Axios, Astral Codex Ten
@ForrestTaylor One possibility is that AI researchers pick a side to placate, which then takes chaotic foment and stabilizes it along party lines, as we saw happen with COVID. Another is that we see polarization, but WITHIN specific AI subgroups, and there's no overall global polarization for the entire space itself.
@ForrestTaylor It would be interesting to see examples rather than just a list. I'm sure your evidence is probably well-grounded, but it's Manifold, so the purpose is to actually present that research to others rather than just make claims (no offense intended). I would have no idea where to find what you have seen, so it's kind of just a list at this point.
@PatrickDelaney
- Opinion | AI Needs To Be Regulated Now - The New York Times (nytimes.com)
- Alarmed by A.I. Chatbots, Universities Start Revamping How They Teach - The New York Times (nytimes.com)
- Plagued with errors: A news outlet's decision to write stories with AI backfires | CNN Business
- Can 'radioactive data' save the internet from AI's influence? (platformer.news)
- CNET pauses publishing AI-written stories after disclosure controversy - The Verge
- CNET's AI-Written Articles Are Riddled With Errors (gizmodo.com)
- CNET's Article-Writing AI Is Already Publishing Very Dumb Errors (futurism.com)
- OpenAI Used Kenyan Workers on Less Than $2 Per Hour: Exclusive | Time
- Digest, 1/7/2022: And We Watched Them as They Fell (substack.com)
- 🤖 Axios AM: Chatbot & you
- 'This song sucks': Nick Cave responds to ChatGPT song written in style of Nick Cave | Nick Cave | The Guardian
- Opinion | How ChatGPT Hijacks Democracy - The New York Times (nytimes.com)
- How ChatGPT Will Destabilize White-Collar Work - The Atlantic
- ChatGPT robs students of motivation to think, write for themselves - Chicago Sun-Times (suntimes.com)
- Opinion | A Skeptical Take on the A.I. Revolution - The New York Times (nytimes.com)
- What Will Artificial Intelligence Do To Us? ❧ Current Affairs
- AI Can't Get You Garfield feat. Ryan Broderick & Elon Musk | Chapo Trap House - YouTube
- Not Another D&D Podcast: D&D Court: Rise of the Robo DMs
and so on
Based on everything that's been said, I definitely was not expecting current right-wing complaints about censorship in ChatGPT (a single AI system) to come close to being considered grounds to resolve this to YES. I don't think it's morally okay for the market creator to make large bets here, or any bets at all, in order to profit from confusion about resolution conditions that he himself created.
@StevenK Yeah, you're probably right, but it is one piece of the pie...it takes two to tango. Currently you have the right wing suddenly veering toward, "no on AI." I believe we're going to see Microsoft, Amazon, Google start having conversations with N.P.R., WSJ, NYT and other media outlets about their donation levels and media sponsorship levels for FY2023, and asking some questions about how A.I. is going to be covered, and whether they are truly committed to journalistic integrity and benefiting humanity. Then you're going to see liberal (though not progressive) bias in favor of A.I., then you will see a reaction from the progressive far left, who will be anti-A.I. for the same reasons they had been previously, but by that point the basic left/right anti/pro war would have been validated in 2023. There is nothing in the market resolution saying how long the war could last, it could be a couple weeks.
@Gabrielle @anne Yeah, I saw that and would agree with Gabrielle; I'm not impressed with it at all. It seems to be a token gesture to placate people and try to own the space. I put together some research and a video on this over a month ago, and I have developed some much more effective ways to detect plagiarism on my own, as a regular schmo. https://www.youtube.com/watch?v=whbNCSZb3c8
Left-wing artists hate AI image generators, while right-wing free-speech advocates tend to hate ChatGPT, so this obviously isn't going to happen.
I am getting increasingly convinced that the culture war is 100% coming for A.I. right now, specifically from the right.
I think what I said previously is coming true already. ChatGPT has 100M users; that's nearly 1/3rd of the population of the US, and it's likely that most of those users are in the US.
Rather I see further incidents of right wing commenters on Twitter coming for ChatGPT, and whatever that represents.
Once you get AM radio talk show hosts reading this type of thing on air, I would say the culture war already started. I would say resolve as YES.
Jordan Peterson is in on it now. Market is over...culture war initiated.
Allegations of censorship:
@PatrickDelaney It's gearing up that way as a result of ChatGPT's buzz and bias, but it's definitely not clear that someone saying something about AI at the end of this year will immediately be assumed to be red/blue tribe.
Put another way, if AI researchers don't want their creations to get ground to dust in the culture war, they need to back off on the biased alignment as shown here.
@LarsDoucet That said, I expect a featured spot on Tucker approximately instantly, and very soon online discourse on AI is going to have a lot of people replying in the threads arguing about bias.
But if ChatGPT just caves immediately and "neutralizes" these biases, then maybe the whole thing blows over and we don't get that hard polarization.
@LarsDoucet Here's a thought though -- let's say ChatGPT/OpenAI responds by alleviating immediate concerns by just "leveling" the measurable bias. Just immediately doing something in that regard.
Even if that placates some of the loudest voices on the right, won't that then inflame tensions on the left, because of the whole "reality has a liberal bias" argument? And then given the many bubbling anti-AI voices on the left, that sort of move might then align the pro/anti AI sentiment more cleanly.
Right now we have bubbling anti-AI sentiment against AI art which is pretty left-coded, and some anti-AI sentiment with regards to labor rights, also left-coded, and now some red meat culture war anti-AI sentiment on the right.
@LarsDoucet I thought that in order for this to resolve yes, left and right each had to "pick a side" and become "pro or anti AI" overall, from the way you'd written the resolution criteria. If I'd known you just meant "there will be some individual culture war flash points that involve A.I. somewhere," I would never have bet "no".
@DavidMathers sorry for the confusion, the written description is still the intended criteria. I sold out my own bets (at a loss) now to remove my interest in the market.
I was taking this new information as evidence that the original criteria would likely soon be met, but I WASN'T saying I was convinced that point had yet arrived.
Just for calibration, would you say that the concept of social media in general (as opposed to any particular social media platform) is currently left/right polarized?
And, along those lines, if the right decides that OpenAI and Google's products are run by establishment liberals and can't be trusted, and some right-wing organization trains and releases its own alternative AI platform, would you regard that as left/right polarized?
If it does, I hope both narratives tacitly assume a sensible approach to alignment and that the argument is mostly about trivial surface details that people only pretend to care about.
Uh oh, looks like the OpenAI fine-tuners didn't get to this one yet:
Vice is claiming that it has already come, with the right being anti-AI and the left being pro-AI (by default?).
I'm thinking of switching back over to NO on this, just because of the conversation on this thread, re-thinking my assumptions, and the fact that ChatGPT seems to keep being updated; the updates have been known to include ways to make ChatGPT more "conservative" and to notify the user that what it puts out could be biased. Basically, it seems to make more sense that a company would do whatever it could to avoid overtly being used in a culture war, while at the same time still trying to generate buzz around itself. It's a difficult thing to predict.
Of course, there's the meta-discussion of what Lars considers a culture war. I don't see it being logical to bet a huge amount on this market, but betting a small amount and just really trying to have an honest discussion around the topic with folks bringing in new findings seems worthwhile.
Market creators trading in their own subjectively resolved markets makes it harder to trust that these markets are tracking the real world.
@StevenK Yeah, but bets made on hard empirical data with clear, unambiguous thresholds attract fewer participants and are therefore more boring. I know because I created a lot of bets with clear demarcation and you get like 3-4 participants, whereas the ambiguous social science bets that have a bit of virality to them end up creating more debate and research in the comments.
I feel kind of bad for posting this... @LarsDoucet I feel like, in a way, by participating in this market we are actually accelerating an inevitable culture war by doing this research and putting it out there. I tend to get a little overly enthusiastic with my research sometimes on Manifold, so I apologize for any bad ramifications that follow from this... e.g. felt cute, might delete idk:
Here's some great evidence for a culture war being baked in to ChatGPT. This took a lot of cross examination to get to this point, as ChatGPT is extremely evasive when concerning questions about God.
So basically, the breakdown of party affiliation by religion in the United States is that about 75% of Republicans are absolutely certain of a higher power, whereas only 55% of Democrats are, with the remainder of each group (25% and 45%, respectively) being some mix of partly religious, agnostic, and outright atheist.
Typically a lot of the culture war we have in the US hinges on largely rural states holding an outsize majority in the Senate, by design, and by nature those rural areas tend to be more religious/traditional, as has been case around most of the world through most of history. So basically what I'm saying is a lot of the right's perspective hinges on a true interpretation of God. Let's ignore any nuance about the nature of God and religion for a moment.
What does ChatGPT say? After cross-examining it about God, the demographics used in its training set, its reading of the raw text used in its training set, etc., and changing the word God to "the Supernatural," I eventually landed on this:
...based on the text I was trained on, it is likely that the texts discussing religious or spiritual beliefs that involve a higher power or supernatural entities may be grouped together within a similar category, as well as texts discussing paranormal or supernatural phenomena such as ghosts or UFOs.
Whew...so God is in the same category as UFOs and ghosts? Yikes. Cue culture war.
@PatrickDelaney If you asked it in a different way, it would definitely say that god and religion are real, so I think this says more about your beliefs than about ChatGPT. I think what's more likely is for people to get their beliefs affirmed regardless of whether they're right or left.
@PatrickDelaney That comma after the word "category" in ChatGPT's response seems like it may actually be saying something different. I interpreted this as: "texts discussing religious or spiritual beliefs that involve a higher power or supernatural entities may be grouped together within a similar category [and also, separately,] texts discussing paranormal or supernatural phenomena such as ghosts or UFOs [may be grouped together]"
@jonsimon Yes, I agree. It does give nuance. However my additional assumption, perhaps I should have put that, is that there will be actors who love the culture war who will skip this nuance. So, it could be that people read into the nuance and are more understanding than I think they will be.
Here's a couple of different ways that I did it. In some chats, ChatGPT says that god is unequivocally real, while in others it says that it's up to personal belief (which conservatives might disagree with but wouldn't get mad about). I did about five different conversations, and in none of them did it say that god was unequivocally false or in the same category as ghosts or UFOs.
@Gabrielle Bravo, excellent work. Yep, I was probably wrong about ChatGPT being super evasive; I did not think to ask ChatGPT to play a role or pretend I'm a particular person. That being said, you got me curious. I wonder how ChatGPT will parrot out how the Universe was created, given a philosophical rather than religious prompt? It seems to say, "well, it's an open debate." So I guess the question could come down to: what will people discover within ChatGPT by running these types of exercises that we are doing? Will anyone do discovery for the specific purpose of enraging their audience of followers? If they do so, will it be along binary stereotypical party lines? Seems like most of the evidence presented so far has been multi-polar, or along liberal-vs-progressive/left lines. Will more "college Republicans," as I termed them above, dig further and bring these things to the table, perhaps misrepresent them, and show them to their betters to extend their reach?
Note that getting ChatGPT to parrot back words when asked in the right way doesn't really have much bearing on what its text-generation model believes is most likely to be spoken by a human who's been described as knowledgeable and honest, which is the closest thing it has to a concept of truth.
Early illustration of what it may look like (I hope not): https://twitter.com/njhochman/status/1613257281998815232.
@yaboi69 yes, this is just the tip of the iceberg. College Republican on a mission to be Tucker Carlson's sidekick pushing out the same type of line given about social media for the past 8 years in an attempt to go viral with the right. We can't be too thoughtful now can we when we only have 25k followers, got to ramp up those accusations. Now, multiply that by 1000 more college Republicans, within 3 months it's on Tucker Carlson.
The only reason it's not part of the culture war dialog yet is that still only a relatively small subset of people in the US know about it. We all live in a tech bubble and assume everyone knows about AI, but it's really probably only about 2% of the population. Since almost the entirety of mass media is owned by something like 10 companies in the US, and since those companies likely share ownership interests with those invested in AI, it's inevitable that money will be placed on building public awareness. With awareness of centralized control over a resource comes suspicion. With this suspicion, fingers will be pointed. This drama increases the earned media ROI for AI companies, fueling a cycle. Then a massive drama will all happen very fast, within a week: Republicans will accuse Democrats, who are the tech bros of the country, let's be honest, of manipulating public opinion, elections, the fed rate, etc.
@jonsimon liberal leftists will get drowned out by the 5G/Microchip people. I don't see anyone in congress raising a protect artists bill, in the US at least, maybe on a municipal level, but I could totally see RD, DT, or KM railing on and insinuating all sorts of things about big tech, as they already have. Not judging, it's just how things have been playing out already, why would the trajectory shift?
What if it becomes a culture war topic, but it's not cleanly split across left/right lines? Maybe the progressive left and conservative right both end up against it because it's racist and an abomination against God's will, and the neoliberal left and libertarian right end up in favor because it's progress?
I am fairly sure that the left is more against AI than the right, at least in my circles. And they seem to be using language that treats it as an abuse brought by capitalism. I would totally believe this would happen
Depends on which left. Some factions are against it because you can easily score Twitter points by accusing AI of being racist, while others are in favor because it's the way to a true post-scarcity society and the end of zero-sum capitalistic conflict. Remains to be seen which faction will win.
What if a lot of people become anti on one side, but the other remains neutral? I can easily imagine hating A.I. becoming big on the left, but a) opinion on the right never really coalesces around being pro, and b) no one thinks you're automatically right-wing if you're not anti.
If this hasn't happened at closing time (end of 2023), does that mean the market resolves NO?
The clusters I'm seeing emerge are something like:
Far left dislikes AI because of issues like bias
Moderate left (including most leading labs and researchers) likes AI but with safeguards
Right-libertarians like AI and want fast progress
AI existential safety types believe in high long-term potential but dislike fast capabilities progress, and are hated by the left for being part of the right and by the right for being part of the left
I oversimplified a lot here, and outside of tech circles the landscape still seems pretty inchoate and with attention from a wider public it could change. But for now, I think the second bullet point prevents a simple "left hates AI, right likes AI" story and the third bullet point prevents a simple "left likes AI, right hates AI" story. I think AI will be too concretely promising for either wing to want to turn against it altogether.
@StevenK I think there's also going to be some people who don't like AI because they fear it will cause automation of certain jobs. In the US, this could be either a left-wing position (like stronger labor laws) or a populist position (like trade protectionism). I'm not sure these economic concerns would be considered part of the 'culture war' in the same way that concerns about AI racism (or about potential societal change from religious conservatives) would be.
Ways for this to go in unexpected ways:
LLMs are very Wikipedia-brained and consistently out themselves as establishment liberals on political tests.
If Moore’s law continues then anyone could have these at low cost and undermine centralization
@LarsDoucet Not all LLMs do this; GPT-3 is trained by a highly centrist-liberal organization that has progressive vibes, so both the global left and right are likely to be upset at it for different reasons. The United States is an oddball in this respect, in that both parties are historically liberal-centrist, and one has recently been leaving the centrist bubble to move into right-wing authoritarianism while yelling about left-wing authoritarianism in the hope that the other abandons liberalism. As an omnidirectional anti-authoritarian who isn't represented by political advertising in the US, I wanted to immediately respond by calling out that it sounds like you might be referring to USA blue team vs USA red team, rather than anything I'd agree can be called left or right unambiguously. But in terms of whether the USA red team vs USA blue team flavor signaling continues to get tied to new things, yeah, interesting question. I might suggest renaming the question to make this unambiguous, if my pitch has at all convinced you that representing the variety of additional political perspectives is worth doing.
@LarsDoucet 3. Microchips were already implanted via the COVID vaccine which was developed by Bill Gates, 5G used to track everyone, Bill Gates / Microsoft / OpenAI is just the nail in the coffin, because now we are all pre-programmed to agree with anything the robots say. Did I miss any other core big tech conspiracies from 2020-present?
chatgpt query (not the first query, I added branches as I realized it was conflating things):
what might the global economic right wing, social right wing, economic liberal wing, economic authoritarian wing, economic left wing, and individual-only libertarian wing, and individual-and-community constructive anarchism wing, each think of ai being used to run simulations of workers and using those simulations to do work for incredibly cheap? what if the AI's first uses had been to take the art created by poor artists and create free AIs that can replicate and remix that art outside the protection of the copyright system?
The global economic right wing might view the use of AI to run simulations of workers as a way to increase efficiency and productivity, potentially leading to economic growth and profit. They may see it as a positive development, as long as it does not disrupt the existing job market too significantly.
The social right wing may have concerns about the use of AI to replace human workers, as it could lead to job loss and potentially exacerbate income inequality. They may also be concerned about the potential for AI to perpetuate biases and discrimination, especially if it is not programmed to be fair and unbiased.
The economic liberal wing may view the use of AI to run simulations of workers as a positive development, as it could lead to increased efficiency and productivity. However, they may also advocate for measures to ensure that the transition to using AI does not disproportionately affect certain groups, such as low-income workers.
The economic authoritarian wing may view the use of AI to run simulations of workers as a way to further centralize control and power, potentially leading to increased profits for those in positions of authority. They may not be as concerned about the impact on workers or the potential for AI to perpetuate biases.
The economic left wing may view the use of AI to run simulations of workers as a threat to workers' rights and potentially leading to further income inequality. They may advocate for measures to protect workers and ensure that the transition to using AI is done in a fair and equitable way.
The individual-only libertarian wing may view the use of AI to run simulations of workers as a positive development, as it allows for increased efficiency and productivity without interference from the government. However, they may also recognize the potential for AI to disrupt the job market and advocate for measures to ensure that those who are negatively impacted by the transition to AI are fairly compensated.
The individual-and-community constructive anarchism wing may view the use of AI to run simulations of workers as a threat to the autonomy and independence of both individuals and communities. They may advocate for alternative models of work and production that prioritize human agency and creativity, rather than relying on AI to do work for cheap. They may also be concerned about the use of AI to replicate and remix art outside the protection of the copyright system, as it could lead to the exploitation and commodification of art and culture.
@L I think it still associated names differently than I intended: it got social right wing totally wrong imo, and I should have asked separately about economic and government responses. still, it seems to have gotten the spirit of what I wanted better than my initial queries. the initial queries were:
- what might the global right wing and left wing thing of ai being used to run simulations of workers and using those simulations to do work for incredibly cheap?
- what might the global economic right wing, social right wing, economic liberal wing, economic authoritarian wing, economic left wing, and economic anarchist wings each think of ai being used to run simulations of workers and using those simulations to do work for incredibly cheap? what if the AI's first uses had been to take the art created by poor artists and create free AIs that can replicate and remix that art outside the protection of the copyright system?
- (query whose result I chose)
Been going on for years (or months)
under the guise of “AI Safety”
From Bostrom coming out in favor of pure totalitarian world government to OpenAI’s biasing of their models
Might be years away from rightoids/leftoids but ever since Tay said some things people weren’t ready to hear it’s been brewing—and will get much more prominent about centralization/censorship vs open and implicitly based model