If GPT-5 can do recursive self-improvement, will it first be via fine-tuning on its outputs?
37% chance

If Nathan's market (https://manifold.markets/NathanHelmBurger/will-gpt5-be-capable-of-recursive-s) resolves Yes and I agree with the resolution, will it be because GPT-5 was fine-tuned on its own outputs? There can be an initial RLHF or similar training stage, but the core of the self-improvement loop shouldn't involve humans or RL, just supervised learning.

That is, for each training step (see the sketch after this list):
- There is some set of input datapoints.
- With minimal prompting/scaffolding, GPT-5 produces some output given each input datapoint.
- GPT-5 is fine-tuned (normal supervised learning) on some simple function of the input data and GPT-5's output.
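
A minimal sketch of one such step, in code. Everything here is hypothetical: `generate`, `select`, and `finetune` stand in for the actual inference API, the "simple function" that turns input/output pairs into training targets, and the supervised fine-tuning routine.

```python
# Hypothetical sketch of one supervised self-improvement step.
# `generate`: model inference with minimal prompting/scaffolding.
# `select`:   the "simple function" of input and output (e.g. an
#             automated filter that keeps only acceptable outputs).
# `finetune`: plain supervised learning on the kept pairs; no humans, no RL.

def self_improvement_step(model, inputs, generate, select, finetune):
    """Sample an output for each input datapoint, keep the pairs the
    simple function accepts, then fine-tune the model on them."""
    outputs = [generate(model, x) for x in inputs]
    dataset = [(x, y) for x, y in zip(inputs, outputs) if select(x, y)]
    return finetune(model, dataset)
```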

If Nathan's market resolves No or N/A, this market will resolve N/A.
If I disagree with the resolution, this market will resolve N/A.

Note this part of Nathan's market description:
> As clarified in the comments, if the [Noa: steps of the?] recursive self-improvement can't be clearly demonstrated using less than 3% of the FLOPs used in training GPT-5, then it doesn't count.
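
As an illustration of that cap (both FLOP figures below are made up, since GPT-5's actual training compute is unknown):

```python
# Hypothetical arithmetic for the 3% FLOP cap.
gpt5_training_flops = 1e25            # assumed total training compute
budget = 0.03 * gpt5_training_flops   # demo must fit under 3e23 FLOPs
demo_flops = 2e23                     # assumed cost of the self-improvement demo
print(demo_flops < budget)            # True -> would count under the cap
```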


Seems like this is related to how DeepMind's AlphaZero-style iteration with Gemini goes
