Will Kamala Harris talk about AI safety between now and Election Day?
Resolved NO (Nov 5)

Merely mentioning “AI safety” doesn’t count. For the purposes of this market, she needs to discuss concerns and proposals for AI safety regulation as currently discussed among scholars of the topic.

Proposed resolution basis (updated 31 Jul 2024 per comments):

Kamala Harris mentions something directly related to the key concerns in AI Safety as described here:

Problems in AI safety can be grouped into three categories: robustness, assurance, and specification. Robustness guarantees that a system continues to operate within safe limits even in unfamiliar settings; assurance seeks to establish that it can be analyzed and understood easily by human operators; and specification is concerned with ensuring that its behavior aligns with the system designer’s intentions.

Tim G. J. Rudner and Helen Toner, "Key Concepts in AI Safety: An Overview" (Center for Security and Emerging Technology, March 2021). https://doi.org/10.51593/20190040.

Examples of what would cause this market to resolve YES:

Commenting that AI systems need to:

  • operate within safe limits even in unfamiliar settings;

  • be understood easily by human operators; or

  • align with the system designer’s intentions.

Example that would not count for market resolution:

Commenting, “We need to make sure that AI systems are safe,” without further elaborating.
