
Will I consider any LLM to be a moral agent by 2024?
Resolved NO (Jan 1)
Inspired by a recent talk by Amanda Askell at the 2023 Pacific APA.
Moral agent = a morally responsible entity: one capable of self-determined action and of understanding the consequences and moral status of its acts.
LLM = large language model (e.g., GPT).
Please try to convince me one way or another in the comments.
This question is managed and resolved by Manifold.
🏅 Top traders
# | Name | Total profit
---|---|---
1 | | Ṁ120
2 | | Ṁ11
3 | | Ṁ8
4 | | Ṁ8
5 | | Ṁ8
Related questions
- 6 months from now will I judge that LLMs had already peaked by Nov 2024? (11% chance)
- What will be true of Anthropic's best LLM by EOY 2025?
- Will LLMs become a ubiquitous part of everyday life by June 2026? (82% chance)
- Will an LLM improve its own ability along some important metric well beyond the best trained LLMs before 2026? (50% chance)
- Will there be major breakthrough in LLM Continual Learning before 2026? (26% chance)
- Will LLMs mostly overcome the Reversal Curse by the end of 2025? (73% chance)
- Will RL work for LLMs "spill over" to the rest of RL by 2026? (33% chance)
- In 2025, will I be able to play Civ against an LLM? (25% chance)
- Will an LLM do a task that the user hadn't requested in a notable way before 2026? (92% chance)
- Will I write an academic paper using an LLM by 2030? (65% chance)