Will a major AI company publish a “responsible scaling policy” for AI consciousness by 2030?
64% chance
A responsible scaling policy (RSP) or risk-informed development policy (RDP) is a framework adopted by companies like Anthropic and OpenAI that aims to ensure that they do not release catastrophically unsafe AIs. Such a framework defines levels of concerning capabilities and corresponding types of safety mitigations that would be adopted, and compliance is monitored through ongoing evaluations.
Will a major AI company adopt an “RSP for AI consciousness” by the end of 2029? This would involve:
Defining features or indicators of consciousness to be monitored
Outlining mitigation measures to avoid creating suffering or conscious AIs
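The structure described above (defined capability levels, corresponding mitigations, compliance checked via evaluations) can be sketched in code. This is a purely hypothetical illustration, assuming made-up indicator names, thresholds, and mitigations; it does not reflect any company's actual policy:

```python
from dataclasses import dataclass

# Hypothetical sketch of an "RSP for AI consciousness".
# All names, thresholds, and mitigations below are illustrative assumptions.

@dataclass
class ConsciousnessIndicator:
    """One monitored feature, scored 0.0-1.0 by a hypothetical evaluation."""
    name: str
    score: float

@dataclass
class PolicyLevel:
    """A concern level: triggered when any indicator reaches its threshold."""
    level: int
    threshold: float
    mitigations: list

POLICY = [
    PolicyLevel(1, 0.2, ["log and monitor indicators"]),
    PolicyLevel(2, 0.5, ["external review", "restrict successor training"]),
    PolicyLevel(3, 0.8, ["pause deployment", "apply welfare safeguards"]),
]

def required_mitigations(indicators):
    """Return the mitigations of the highest policy level triggered."""
    top = max((i.score for i in indicators), default=0.0)
    triggered = [p for p in POLICY if top >= p.threshold]
    return triggered[-1].mitigations if triggered else []
```

For example, an evaluation scoring a "global workspace" indicator at 0.6 would trigger level 2 and return its mitigations, while scores below 0.2 would require none.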
See also: Project ideas: Sentience and rights of digital minds (substack.com)
Related questions
Will a major AI company acknowledge the possibility of conscious AIs by 2026?
75% chance
Will xAI AI be a Major AI Lab by 2025?
55% chance
Will there be an AI CEO by 2040?
53% chance
Will there be significant protests calling for AI rights before 2030?
42% chance
Will any state or autonomous region switch to AI governance, or majority AI decision making before 2050?
40% chance
By end of 2028, will there be a global AI organization, responsible for AI safety and regulations?
43% chance
[Metaculus] Will an International AI regulatory agency for oversight of transformative AI be established before 2030?
72% chance
Before 2028, will there be a major REFLECTIVELY self-improving AI policy*?
68% chance
Will an AI be solely responsible for an AI breakthrough by the end of 2030?
76% chance
Before 2028, will there be a major self-improving AI policy*?
78% chance