Will a major AI company publish a “responsible scaling policy” for AI consciousness by 2030?
64% chance
A responsible scaling policy (RSP) or risk-informed development policy (RDP) is a framework adopted by companies like Anthropic and OpenAI that aims to ensure that they do not release catastrophically unsafe AIs. Such a framework defines levels of concerning capabilities and corresponding types of safety mitigations that would be adopted, and compliance is monitored through ongoing evaluations.
Will a major AI company adopt an “RSP for AI consciousness” by the end of 2029? This would involve:
Defining features or indicators of consciousness to be monitored
Outlining mitigation measures to avoid creating suffering or conscious AIs
See also: Project ideas: Sentience and rights of digital minds (substack.com)
This question is managed and resolved by Manifold.
Related questions
Will a major AI company acknowledge the possibility of conscious AIs by 2026?
81% chance
Will xAI AI be a Major AI Lab by 2025?
24% chance
Will there be an AI CEO by 2040?
51% chance
Will there be significant protests calling for AI rights before 2030?
42% chance
Will there be a noticeable effort to increase AI transparency by 2025?
50% chance
Will software-side AI scaling appear to be suddenly discontinuous before 2025?
18% chance
Will a major tech company announce a significant new AI regulation compliance feature by the end of 2024?
55% chance
Will any state or autonomous region switch to AI governance, or majority AI decision making before 2050?
40% chance
By end of 2028, will there be a global AI organization, responsible for AI safety and regulations?
43% chance
[Metaculus] Will an International AI regulatory agency for oversight of transformative AI be established before 2030?
73% chance