If most of the economy becomes automated without us all dying, will Eliezer Yudkowsky admit that he was wrong?

Eliezer Yudkowsky is an AI researcher and writer, best known for his work on AI safety. In recent years, he has expressed the view that, unless breakthroughs in AI safety are made, literally every single human will die soon after the creation of smarter-than-human AI. On this basis, he advocates shutting down all AI progress indefinitely until we can be confident that it can be done safely.

If at some point the human labor share of Gross World Product (GWP) falls below the share of GWP paid to advanced AI systems, and some humans remain alive throughout the 24 months following this event, will Eliezer Yudkowsky admit that he was wrong about AI doom?

For the purposes of this question, "advanced AI systems" are AI systems capable of general intelligence at or above the broad level exhibited by GPT-3. The majority of the economy is considered automated once the human labor share of GWP falls below the share of GWP paid to advanced AI systems, according to a credible estimate.
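As a rough illustration of this threshold, here is a minimal sketch of the comparison; all figures are purely hypothetical, and real values would come from a credible estimate as described above:

```python
# Hypothetical GWP figures (in trillions of dollars) -- illustrative only.
gwp_total = 150.0          # total Gross World Product
human_labor_income = 40.0  # compensation paid to human labor
ai_income = 55.0           # payments attributable to advanced AI systems

human_labor_share = human_labor_income / gwp_total  # ~0.27
ai_share = ai_income / gwp_total                    # ~0.37

# The economy counts as "mostly automated" once the human labor share
# of GWP drops below the share paid to advanced AI systems.
mostly_automated = human_labor_share < ai_share
print(mostly_automated)  # True in this hypothetical
```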

This question will resolve positively if, after the majority of the economy first becomes automated, Eliezer Yudkowsky makes a public statement that meets all the following criteria:

  1. Yudkowsky explicitly refers to his previous predictions about AI doom, which may include:
    a. The likely inevitability of human extinction following the creation of smarter-than-human AI unless AI safety breakthroughs are achieved.
    b. The need to halt all AI progress indefinitely until safety can be ensured.

  2. Yudkowsky's statement must clearly indicate that he believes his prediction about the high likelihood of AI doom was incorrect. This may be expressed through:
    a. Directly stating that his AI doom predictions were incorrect.
    b. Acknowledging that alternative, substantially less doomy viewpoints or developments better explain the outcomes of AI advancements.
    c. Recognizing that multiple core features of his AI doom scenario have been disproven or lack empirical support.

  3. The statement must be made in a public forum, such as:
    a. A published article, blog post, or book authored by Eliezer Yudkowsky.
    b. A recorded lecture, presentation, or interview featuring Eliezer Yudkowsky.
    c. A post on a verified social media account belonging to Eliezer Yudkowsky.

  4. The statement must be unambiguous and thorough, leaving no doubt that Eliezer Yudkowsky is actually acknowledging that his predictions about AI doom were incorrect. Ambiguous, sarcastic, or partial admissions will not be considered sufficient for a positive resolution.

This question will resolve negatively if no qualifying public statement is made by Eliezer Yudkowsky that meets the above criteria within 24 months of the economy first becoming mostly automated. If there is ambiguity or debate about whether a particular statement constitutes an acknowledgment that his predictions were incorrect, discretion will be used to determine the appropriate resolution.

If every single human dies within 24 months after most of the economy first becomes automated, this question will resolve to N/A.
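Putting the resolution rules together, here is a minimal sketch of the decision procedure, assuming the economy has already become mostly automated; the `Outcome` type and function are hypothetical and only mirror the criteria stated above:

```python
from enum import Enum

class Outcome(Enum):
    YES = "YES"
    NO = "NO"
    NA = "N/A"

def resolve(all_humans_die_within_24_months: bool,
            qualifying_admission_within_24_months: bool) -> Outcome:
    """Hypothetical resolution logic; the 24-month clock starts when
    the majority of the economy first becomes automated."""
    if all_humans_die_within_24_months:
        return Outcome.NA   # every human dies -> N/A
    if qualifying_admission_within_24_months:
        return Outcome.YES  # unambiguous public admission -> YES
    return Outcome.NO       # no qualifying statement -> NO
```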
