Will AGI be interpretable due to CoT, reflection, and similar methods?
30 traders · Ṁ1,534 · Jun 20
28% chance
If the first AGIs come about from comparatively dumb LLMs being prompted in ways that force them to make their reasoning explicit and to output it in a structured form interpretable to us, such as chain-of-thought (CoT) and reflection, will this allow us to make the first superhuman AGIs naturally interpretable?
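To make the mechanism the question describes concrete, here is a minimal sketch: a CoT-style prompt that forces reasoning into labeled steps, plus a small parser that recovers the trace for inspection. The prompt template, step format, and `parse_reasoning` helper are illustrative assumptions, not any particular lab's method or anything specified by this market.

```python
import re

# A CoT-style prompt that asks the model to emit its reasoning in a
# fixed, machine-readable structure (an illustrative template, not any
# particular lab's format).
COT_PROMPT = """Answer the question below. Show your reasoning as
numbered steps, each on its own line in the form 'STEP <n>: <thought>',
then finish with a line 'ANSWER: <answer>'.

Question: {question}
"""

# Regexes matching the structured trace the prompt requests.
STEP_RE = re.compile(r"^STEP (\d+): (.*)$", re.MULTILINE)
ANSWER_RE = re.compile(r"^ANSWER: (.*)$", re.MULTILINE)


def parse_reasoning(completion: str) -> tuple[list[str], str | None]:
    """Extract the step-by-step trace and final answer from a completion.

    Because the prompt forces the reasoning into explicit, delimited
    lines, each step can be read and audited directly; this is the
    property the question calls 'natural interpretability'.
    """
    steps = [thought for _, thought in STEP_RE.findall(completion)]
    match = ANSWER_RE.search(completion)
    return steps, match.group(1) if match else None


if __name__ == "__main__":
    # 'prompt' is what you would send to a model; here a hand-written
    # completion stands in, so the sketch runs without any LLM API.
    prompt = COT_PROMPT.format(
        question="A train travels 120 km in 2 hours. What is its speed?"
    )
    completion = (
        "STEP 1: The train covers 120 km in 2 hours.\n"
        "STEP 2: Speed is distance over time: 120 / 2 = 60 km/h.\n"
        "ANSWER: 60 km/h"
    )
    steps, answer = parse_reasoning(completion)
    for i, step in enumerate(steps, 1):
        print(f"step {i}: {step}")
    print("answer:", answer)
```

The design choice the question hinges on is visible here: if the model's real deliberation happens inside lines like these, interpretability comes for free; if the visible steps are post-hoc rationalizations, parsing them tells us little about the underlying computation.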
This question is managed and resolved by Manifold.
Related questions
Will we get AGI before 2030? (56% chance)
Will AGI be more attributable to bottom-up machine learning than top-down symbolic systems? (86% chance)
Will AGI be a problem before non-G AI? (20% chance)
Will AGI be achieved before AIs are able to smell? (62% chance)
Will AGI undergo a hard take-off? (23% chance)
Will AGI be achieved in the next 5 years? (50% chance)
Will AGI come from a technology significantly more advanced than transformers? (39% chance)
By when will we have AGI?
Will we get AGI before 2048? (87% chance)
Are LLMs capable of reaching AGI? (74% chance)