
Will I consider any LLM to be a moral agent by 2024?
Resolved NO on Jan 1
Inspired by a recent talk by Amanda Askell at the 2023 Pacific APA.
Moral agent = morally responsible entity, an entity capable of self-determined action and understanding of the consequences and moral status of its acts.
LLM = large language model (e.g., GPT)
Please try to convince me one way or another in the comments.
This question is managed and resolved by Manifold.
🏅 Top traders
| # | Trader | Total profit |
|---|---|---|
| 1 | | Ṁ120 |
| 2 | | Ṁ11 |
| 3 | | Ṁ8 |
| 4 | | Ṁ8 |
| 5 | | Ṁ8 |
Related questions
Will there be a major breakthrough in LLM continual learning before 2027?
49% chance
Will LLMs become a ubiquitous part of everyday life by June 2026?
90% chance
Will an LLM improve its own ability along some important metric well beyond the best trained LLMs before 2026?
14% chance
Will LLMs Daydream by EOY 2026?
17% chance
Will I write an academic paper using an LLM by 2030?
65% chance
Will the most advanced LLM stop being from a US-based company any time before 2030?
34% chance
Will the most interesting AI in 2027 be an LLM?
70% chance
Will a frontier-level diffusion LLM exist by 2028?
30% chance
Will there be a state-of-the-art LLM that is NOT based on next raw token prediction before 2029?
55% chance
Will there be any major breakthrough in LLM continual learning before 2029?
87% chance