If the top chatbot in 2025 "thinks" before responding to a difficult prompt, will its thoughts be human-interpretable?

If the market below resolves YES, will the thoughts produced by the top chatbot be interpretable to humans?

If that market resolves NO, this one will resolve to N/A.

A clear case for NO would be if the thoughts are encoded as vectors that don't correspond to natural language. A clear case for YES would be if the chatbot gives all of its reasoning in well-formed English. If the thoughts contain some steganography or obfuscation, but it's possible to understand the main thrust of the chatbot's reasoning, it would still count as YES.

I will not bet in this market.

bought Ṁ30 of NO

if the chatbot gives all of its reasoning in well-formed English

The "all" seems important. How would you know?

@firstuserhere It would be hard to know for sure. This is just the most optimistic case; as I mentioned right after that, the market would still resolve YES if humans can understand "most" of its reasoning (according to my best judgement).

bought Ṁ70 of NO

@CalebBiddulph I reckon you wouldn't call, say, 20% of the reasoning "most", right?

@firstuserhere I guess it's pretty hard to say what "percentage of reasoning" any given thought represents, especially since some of the reasoning could happen in the model's weights. I went ahead and changed the wording to "the main thrust" of its reasoning (sorry for the edit, but I think this is closer to what I meant).

My main point here is that the model's thoughts can easily be read in natural language, rather than vectors or random gibberish tokens, and that the apparent meaning of the thoughts correlates with their actual meaning.
