Will AGI be interpretable due to CoT and reflection and similar methods?
Jun 20 · 28% chance

If the first AGIs come about from comparatively dumb LLMs being prompted in specific ways that force them to make their reasoning explicit and output it in a structured form interpretable to us, such as chain-of-thought (CoT) and reflection, will this allow us to make the first superhuman AGIs naturally interpretable?
