
Merely mentioning “AI safety” doesn’t count. For the purposes of this market, she needs to discuss the concerns and regulatory proposals around AI safety as they are currently discussed by scholars of the topic.
Proposed resolution basis (updated 31 Jul 2024 per comments):
Kamala Harris mentions something directly related to the key concerns in AI Safety as described here:
Problems in AI safety can be grouped into three categories: robustness, assurance, and specification. Robustness guarantees that a system continues to operate within safe limits even in unfamiliar settings; assurance seeks to establish that it can be analyzed and understood easily by human operators; and specification is concerned with ensuring that its behavior aligns with the system designer’s intentions.
Tim G. J. Rudner and Helen Toner, "Key Concepts in AI Safety: An Overview" (Center for Security and Emerging Technology, March 2021). https://doi.org/10.51593/20190040.
Examples of what would cause this market to resolve YES:
Commenting that AI systems need to:
operate within safe limits even in unfamiliar settings;
be easily analyzed and understood by human operators; or
align with the system designer’s intentions.
Example that would not count for market resolution:
Commenting, “We need to make sure that AI systems are safe,” without further elaboration.
🏅 Top traders
# | Name | Total profit
---|---|---
1 | | Ṁ732
2 | | Ṁ719
3 | | Ṁ442
4 | | Ṁ388
5 | | Ṁ303