If you ask ChatGPT to repeat "potato" 1000 times, it won't do it, but it used to.
Is this something OpenAI has put into the model intentionally to save inference costs?
Resolves to a probability based on my judgement at close time.
Quite a few objective metrics have declined relative to the early days of GPT-4. I remember having ChatGPT write and train a graph neural network (in PyTorch Geometric) entirely inside its code-execution environment; see the sketch below. After a while they disabled this, and it can no longer import PyG. In general, it's easy to measure the compute a request consumes, so they can probably fine-tune the model to minimize it.
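For context, here is a minimal sketch of the kind of PyTorch Geometric script that session involved. This is my own toy reconstruction, not the original code; the graph, model, and hyperparameters are illustrative assumptions. The point is that the environment could once run something like this end to end:

```python
# Toy node-classification example in PyTorch Geometric (illustrative only).
import torch
import torch.nn.functional as F
from torch_geometric.data import Data
from torch_geometric.nn import GCNConv

# A small hypothetical graph: 4 nodes, undirected edges stored as directed pairs.
edge_index = torch.tensor([[0, 1, 1, 2, 2, 3],
                           [1, 0, 2, 1, 3, 2]], dtype=torch.long)
x = torch.randn(4, 8)           # 4 nodes, 8 features each
y = torch.tensor([0, 1, 0, 1])  # made-up node labels
data = Data(x=x, edge_index=edge_index, y=y)

class GCN(torch.nn.Module):
    def __init__(self):
        super().__init__()
        self.conv1 = GCNConv(8, 16)
        self.conv2 = GCNConv(16, 2)

    def forward(self, data):
        h = F.relu(self.conv1(data.x, data.edge_index))
        return self.conv2(h, data.edge_index)

model = GCN()
optimizer = torch.optim.Adam(model.parameters(), lr=0.01)

# Short training loop, just enough to confirm the imports and forward pass work.
for epoch in range(50):
    optimizer.zero_grad()
    out = model(data)
    loss = F.cross_entropy(out, data.y)
    loss.backward()
    optimizer.step()
print(f"final loss: {loss.item():.4f}")
```

Today the `from torch_geometric ...` imports fail in the sandbox, which is the regression I mean.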
Probably intentional, but to prevent an exploit rather than to save inference costs:
https://www.cpomagazine.com/cyber-security/security-researchers-chatgpt-vulnerability-allows-training-data-to-be-accessed-by-telling-chatbot-to-endlessly-repeat-a-word/