Withdrawing from this market (as I already have from the analogous 2030 market), since too many traders have decided to treat it as non-epistemic (trading in a way that reflects money not being worth anything if the world ends, rather than trading their beliefs). This is not an evil or unlawful choice, but it means there's no point in watching my net worth fluctuate around my shares here.
@EliezerYudkowsky There are non-epistemic grounds to buy YES as well. In fact, for 2100 in particular, I would argue the non-epistemic trade is to be buying YES at 11%.
See Mira's magisterial treatment of the subject in this very comments section: https://manifold.markets/jdilla/will-ai-wipe-out-humanity-before-th#WHfX49qPw7c5MO7SELZR
@EliezerYudkowsky Is this how you still hold on to a high p(doom) and tell yourself the crowd is really with you in spite of it betting against you?
@f The last proposal from the dev team was a 2x global max leverage cap. It probably won't happen in that form, but it's still spooking some traders.
@ChristianJacques you could have an AI agent represent your rights here, likely enforced through some smart contract? who knows, maybe the mana helps the good AIs win the war.
Does "wipe out" imply killing or out-competition?
What if all of humanity voluntarily mind-uploads as in "I, Row-Boat"?
@JamesDillard Only biological humans count? What if all living humans are genetically engineered enough to be considered a separate species?
Experts continue to debate whether AI could wipe out humanity by 2100, and the figures above reflect sharply opposing views. In the Existential Risk Persuasion Tournament, AI domain experts put the probability that artificial intelligence exterminates humanity within the next hundred years at about 3%, while superforecasters put the odds as low as 0.38%.
These are only estimates, however, grounded in current knowledge and perceptions. How AI actually evolves, and what consequences follow, will depend on many factors: technological developments, ethics, legal systems, and how readily people adopt it.
Researchers, practitioners, and policymakers are working closely with their organizations to make sure that AI is developed responsibly and safely. The potential risks of highly capable AI systems are real, but predicting their long-run impact on humanity remains impossible.
@jack I think the probability that AI kills all humans conditional on killing at least 10% of them is upwards of 90%, so my incentive on this market is still to bet it down pretty low even though I think the actual probability is much higher.
The market works better for traders who assign a lower conditional probability, though.
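The incentive jack describes can be made concrete with a toy expected-value sketch (a hypothetical model, not Manifold's actual payout mechanics): if mana is worthless to you in the worlds where AI kills everyone, then only the survival worlds contribute to your usable payoff, and buying NO is profitable there at any YES price above zero, regardless of your actual p(doom).

```python
# Toy model of the non-epistemic incentive on a doom market.
# Assumptions (hypothetical, not Manifold's real mechanics): binary shares
# pay 1 mana if correct and 0 if wrong, and mana has no value to the
# trader in the worlds where the YES event (extinction) occurs.

def payoff_in_survival_worlds(price_yes, side):
    """Per-share profit in the only worlds the trader cares about (no doom).

    price_yes: current market probability of doom, in [0, 1].
    side: "NO" or "YES".
    """
    if side == "NO":
        # A NO share costs (1 - price_yes) and pays 1 mana if no doom.
        return 1.0 - (1.0 - price_yes)  # = price_yes per share
    # A YES share costs price_yes and pays 0 if no doom.
    return -price_yes
```

At an 11% market price, a NO buyer nets roughly 0.11 mana per share in every world where mana still matters, and a YES buyer loses 0.11: so traders treating the market non-epistemically push the price down no matter what probability they actually assign, which is the distortion Eliezer is objecting to.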