Will a major AI company publish a “responsible scaling policy” for AI consciousness by 2030?
67% chance
A responsible scaling policy (RSP) or risk-informed development policy (RDP) is a framework adopted by companies like Anthropic and OpenAI that aims to ensure that they do not release catastrophically unsafe AIs. Such a framework defines levels of concerning capabilities and corresponding types of safety mitigations that would be adopted, and compliance is monitored through ongoing evaluations.
Will a major AI company adopt an “RSP for AI consciousness” by the end of 2029? This would involve:
Defining features or indicators of consciousness to be monitored
Outlining mitigation measures to avoid creating suffering or conscious AIs
See also: Project ideas: Sentience and rights of digital minds (substack.com)
AI Alignment questions
By the end of 2026, will we have transparency into any useful internal pattern within a Large Language Model whose semantics would have been unfamiliar to AI and cognitive science in 2006? (48% chance)
What percentage of Manifold poll respondents will agree that weak AGI has been achieved at the end of June 2024? (0.00)
Related questions
Will there be an AI CEO by 2040? (61% chance)
Will an AI produce encyclopedia-worthy philosophy by 2026? (20% chance)
Will there be significant protests calling for AI rights before 2030? (48% chance)
Will a major AI company acknowledge the possibility of conscious AIs by 2026? (53% chance)
Will Google Deepmind and OpenAI have a major collaborative initiative by the end of 2030? (1000 mana subsidy) (64% chance)
Will AI create philosophy before 2030? (74% chance)
Will a purely AI-based news agency exist by the year 2030? (81% chance)
Will software-side AI scaling appear to be suddenly discontinuous before 2025? (23% chance)
Will an AI play a pivotal role in solving an important political issue by 2033? (73% chance)
Will OpenAI design and manufacture a custom AI chip by 2030? (60% chance)