The assistant AI has to be better than the original one in all regards (except memory and compute).
This market will be decided by the resolution of a bet between Kabir Kumar and me (discussion here: https://discordapp.com/channels/1101560152303353907/1101562301972234241/1136622232236458016).
In June 2024, we will consider together whether it happened. If the evidence is ambiguous, we'll ask our online communities to decide.
🏅 Top traders
# | Name | Total profit
---|---|---
1 | | Ṁ92
2 | | Ṁ12
3 | | Ṁ12
4 | | Ṁ11
5 | | Ṁ10
@NLeseul Can't speak for what we will decide, but I would say no; the relevant criterion is whether this is indicative of the singularity/doom being near. Requiring more time to train or test does not sound like a significant safeguard.
@amaurylorin Bear in mind that the actual code to execute current LLMs is very, very straightforward and boilerplate; ChatGPT could probably spit out something reasonable for you right now if you wanted. Actually improving an LLM is entirely a matter of assembling a better training data set, throwing enough GPUs at it to train it in a reasonable amount of time, and tweaking its performance based on human evaluation. So those factors will be the main obstacle to anyone trying to build an improved LLM-based AI.
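To illustrate the claim above that LLM execution code is "straightforward and boilerplate": a minimal sketch of a greedy-decoding loop, the core of what an LLM server does. The `toy_model` function here is a hypothetical stand-in for a real transformer forward pass; only the loop structure is the point.

```python
import numpy as np

def toy_model(tokens):
    # Hypothetical stand-in for a real transformer forward pass:
    # returns fake next-token logits over a 5-token vocabulary,
    # deterministically derived from the input sequence.
    rng = np.random.default_rng(seed=sum(tokens))
    return rng.normal(size=5)

def generate(prompt_tokens, max_new_tokens=8, eos_token=0):
    """Greedy decoding: repeatedly run the model, append the most
    likely token, and stop on end-of-sequence or a length cap."""
    tokens = list(prompt_tokens)
    for _ in range(max_new_tokens):
        logits = toy_model(tokens)           # one forward pass
        next_token = int(np.argmax(logits))  # pick the top token
        tokens.append(next_token)
        if next_token == eos_token:          # end-of-sequence
            break
    return tokens

print(generate([1, 2, 3]))
```

All the hard parts (data, GPUs, evaluation) live inside the model weights, not in this loop; that is the sense in which the serving code itself is boilerplate.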
@NLeseul Yes, I think it's likely (>.6) that an AI-assisted team will be able to do that in a day by next year. Most of my remaining doubt concerns the efficiency criterion.
@NLeseul in that case, how would the new one be better? I think this question is supposed to be something very different from "can a team hack together a weird but functional assistant in a hurry".