GPT-4.2, GPT-4.7, etc., incrementally released from different checkpoints of the training run.
This would stop them from making architectural changes unless they did it via distillation from another neural network, so I think it is unlikely. I think they will probably keep upgrading GPT-4 to GPT-4.x until they eventually do a step-change update to GPT-5, which will probably have a modified architecture. It is vital for them to be getting as much as possible out of their neural networks.
@ZZZZZZ "We believe (and have been saying in policy discussions with governments) that powerful training runs should be reported to governments, be accompanied by increasingly-sophisticated predictions of their capability and impact, and require best practices such as dangerous capability testing."
Could be useful for tracking a lot of properties and emergent phenomena (when exactly each one occurs), and it's sort of like git, where you perhaps get the ability to revert changes. But I'm ignorant about how such large models are trained at all, so I'm just speaking as a layman here.
@firstuserhere If that's a quote from them, it could indicate that they might take a more cautious approach. That said, I don't think any governments have listened to them. Also, people take much bigger risks when they feel in control, so they will likely think that they can do it safely even while believing others should be regulated. For those reasons, I'd say they probably will do a step-change improvement where they can't reuse the previous model weights directly. So I would give this maybe 25%.