By 2024, a significant fraction of philosophers (>20%) take seriously the notion that language models with a size and architecture similar to GPT-3 are partially conscious
Resolved NO (Jan 2)
Resolution will be judged subjectively, based on where the industry discussion stands and informed by any relevant surveys. I will try to be as fair as possible.
🏅 Top traders
# | Name | Total profit
---|---|---
1 | | Ṁ52
2 | | Ṁ15
3 | | Ṁ14
4 | | Ṁ14
5 | | Ṁ10
@IsaacKing "Significant" here would be >20%, as stated in the question. Right now I would estimate that it's <1%.
Related questions
By the end of 2026, will we have transparency into any useful internal pattern within a Large Language Model whose semantics would have been unfamiliar to AI and cognitive science in 2006?
50% chance
By 2025, we will have made an artificial system that is judged with high likelihood (>80%) to have some sort of conscious experience
15% chance
Will GPT, or AI systems that have GPT as their main component, become as reliably factual as Wikipedia, before 2026?
57% chance
Artificial consciousness will emerge in the course of increasing AI capabilities
56% chance
Will it be possible to disentangle most of the features learned by a model comparable to GPT-4 this decade?
39% chance
By January 2026, will a language model with similar performance to GPT-4 be able to run locally on the latest iPhone?
76% chance
By 2025, GPTs are proven to be able to infer scientific principles from linguistic data.
37% chance
Will it be possible to disentangle most of the features learned by a model comparable to GPT-3 this decade? (1k subsidy)
55% chance
By 2025 will there be a competitive large language model with >50% of the total training data generated from a large language model?
75% chance
ChatGPT (Or LLMs really) have discovered regularities in language that humans are not aware of
84% chance