By mid-2024, will we have an established test for consciousness in AGI based on LLMs?
Resolved NO (Aug 7)

The emergence of Artificial General Intelligence would be a landmark step in the field, building on the advances made by current Large Language Models. It raises a question at the intersection of technology, philosophy, and ethics: will we be able to determine whether AGI systems based on LLMs are merely sophisticated "Chinese boxes" that simulate human-like comprehension and intention without truly possessing them, or whether they genuinely achieve understanding and intentional thought? The question extends beyond the technical development of AI into what it means for a machine to "understand" or "intend" anything, challenging the traditional boundary between simulated and authentic cognition.

This inquiry is not only about the future capabilities of AGI but also about the broader limits and potential of artificial intelligence: how do we define and recognize genuine comprehension and intentionality in a non-biological entity? The implications are wide-ranging, touching on how such systems would be integrated into society, the ethics of their interactions and decisions, and a possible redefinition of intelligence and consciousness. Ultimately, it points to a future in which the distinction between human and artificial cognition blurs, prompting a reevaluation of our perspectives and methods in both artificial and human intelligence.

The resolution of the question hinges on two key developments by mid-2024:

  1. Establishment of an Official Consensus on AGI Definition: There needs to be a universally recognized and agreed-upon definition of Artificial General Intelligence.

  2. Development of a Universally Accepted Method for Assessing Consciousness in AGI: Alongside the definition, there should be a method, potentially developed or endorsed by a governmental body, for evaluating levels of consciousness in large language models.


@mods Creator inactive, resolves NO.

@DavidEllmer Based on what will you resolve?

Another question: can you think of an empirical test we could perform to differentiate Chinese-box understanding from true understanding? (I do not see any difference: either the system can solve the task / answer meaningfully, or it can't.)
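
To make that concrete: any purely behavioral test only sees input/output pairs, so a bare lookup table (a "Chinese box") and a system that actually computes its answers pass or fail it together. A toy sketch in Python (the names and the arithmetic "task" are illustrative only, not a proposed consciousness test):

```python
# Two "systems" with identical input/output behavior: one computes,
# the other replays memorized answers (a Chinese box).

def computing_system(prompt: str) -> str:
    # "Understands" addition in the sense of actually performing it.
    a, b = (int(x) for x in prompt.split("+"))
    return str(a + b)

# A lookup table covering every prompt the test will ever ask.
MEMORIZED = {f"{a}+{b}": str(a + b) for a in range(100) for b in range(100)}

def chinese_box(prompt: str) -> str:
    # No arithmetic happens here; the answer is simply replayed.
    return MEMORIZED[prompt]

def behavioral_test(system) -> bool:
    # Any purely behavioral test reduces to checks of this shape.
    return all(system(f"{a}+{b}") == str(a + b)
               for a in range(100) for b in range(100))

print(behavioral_test(computing_system))  # True
print(behavioral_test(chinese_box))       # True: indistinguishable from outside
```

Any test that could separate the two would have to inspect the mechanism, not the behavior.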

Ugh the generated text in the description gives me nausea 😵‍💫

@Ophiuchus Is there actually an intention behind all that verbosity, or just a stochastic parrot, I wonder 🤔

@DavidEllmer I think it's possible for it to be neither intentional nor a stochastic parrot, but in this case I'm really going to have to lean to the latter.

@Ophiuchus I assume it's a non-native speaker not double checking the literal translation. "Chinese boxes" for instance.

@31ff Non-native speakers still have some knowledge of the meaning of the language they use, and could simply be careless or lazy when producing a translation that at first glance looks generated due to its literal nature, but is actually reasonably well structured.

The Chinese Box theory becomes more likely when each individual phrase is syntactically correct but the text as a whole is semantically bankrupt.

Does this resolve N/A or NO if we don't have AGI by mid-2024?

@NKM Without AGI the question is meaningless, so NO. But under which criteria would we categorise something as AGI?

@DavidEllmer To me, AGI would be, for example, a system that could do almost any work that humans can do remotely via a terminal, at the same or better quality.

@DavidEllmer I don't think the question is meaningful enough to bet on (I think the Chinese Room argument is just flat-out invalid), but I would be happy to bet against "We will have AGI by mid-2024", hence the question.

@NKM Indeed, without an official consensus on the definition of Artificial General Intelligence, the initial question loses relevance. A more appropriate question might be: 'Will there be an official consensus on the definition of Artificial General Intelligence by mid-2024?' Furthermore, the Chinese boxes issue might be addressed by the development and general acceptance of an algorithm or test for assessing some level of consciousness in large language models. This question could be resolved if a governmental body develops a questionnaire or a tool specifically for evaluating large language models.