Will AGI be a problem before non-G AI?
2031 · 21% chance

Most AIs are not AGIs yet; AI alignment should ideally precede AGI by a good bit; and it is unclear how close we might be to an AI/AGI breakout or foom of any sort.

In my subjective judgment, will the first major problems caused by AI come from an AGI? I will count things like wars, plagues, deaths above the base rate, and really significant economic crashes as problems, among other things.

I will not count, e.g., 1,000 deaths from car accidents following the wide adoption of self-driving vehicles as a major problem, since that would be a significant drop from the base rate.

If no qualifying events occur before 2030, I may choose to resolve this market N/A; that'll probably depend on whether this is still an interesting issue.

Feb 19, 2:15pm: Will AGI be a problem? → Will AGI be a problem before non-G AI?


I personally find the title confusing - not sure how to articulate why, though.

I interpret it as: Will General AI become an issue before Narrow AI?

@RobertCousineau Yes. I think I interpreted it backwards initially, but I now understand it to be as you say.

What about mass unemployment and social unrest?

@ahalekelly Either of those could resolve this YES, but only if I think they generally meet the benchmark of being 'about as bad as a war, a major economic crash, etc.' Social unrest is most likely to qualify if it causes lots of deaths; mass unemployment, if it causes economic distress in line with a major economic crash.

I don't know why I bet "yes" previously. I assume I read it backwards.

What about stuff like these:

  • some new mental illness or behavioral problem, e.g. people forgetting to eat or drink to a dangerous extent,

  • some invention that raises existential risk, as assessed by prediction markets, by 10% in the near/medium term,

  • a country using AI to eliminate wrongthink, severely limit freedom, or manipulate people

@na_pewno

1. Mental health problems are problems, but the judgment is likely to be more subjective in that case. These would have to be severe, widespread, and well-documented.

2. Predicted x-risks won't resolve this YES, but their secondary effects (e.g. economic turmoil) might.

3. Deprivation of rights could resolve this YES, but the current mess in China did not preemptively resolve this YES, so we're looking for something worse than widespread facial recognition and social credit systems that are also used to help justify concentration camps.

There are a number of odd definitions of AGI running around out there, so to clarify: I consider an AGI to be an AI that can do pretty much what the average human can do, with some forgiveness toward modality. E.g., a 'normal human' (where I live) can drive a car pretty well; an AGI should be able to drive a car pretty well, but does not need a robot body that lets it press the gas pedal the way a human does; a specially adapted car (e.g., a Tesla) is fine. I do not expect an AI of any sort to be recognized as a legal person or to prove that it has consciousness.
