
Context: https://open-assistant.io/
For a YES resolution, I don't require that it be competitive with the state of the art at the time. It must be comparable to ChatGPT (not strictly as good, but not far inferior), and it must actually run on a single consumer GPU.
The strongest evidence will be what the LAION folks themselves say; if they think they've released a worthwhile chatbot, then this market will presumptively resolve YES.
If somebody wants to propose a precise metric for resolution (prior to release, obviously), I'll consider it and modify this description.
OpenAssistant doesn't count because of the GPU requirement? Llama-30b has been optimized to run on consumer hardware; maybe not LAION's version yet, but I think they can get there. The time frame is the real question mark; I think it might be a bit into 2024.
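For reference, a minimal sketch of what "runs on a single consumer GPU" typically means for a ~30B model: loading it with 4-bit quantization via Hugging Face transformers and bitsandbytes. The model ID below is a placeholder, not necessarily the weights LAION/OpenAssistant actually release, and the VRAM figures are rough assumptions.

```python
# Sketch: load a ~30B-parameter chat model in 4-bit so it fits on one
# consumer GPU. Assumes transformers + bitsandbytes are installed; the
# model ID is hypothetical.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig

MODEL_ID = "some-org/some-30b-chat-model"  # placeholder, not a real release

# 4-bit quantization brings a 30B model from ~60 GB (fp16) down to roughly
# 16-20 GB of VRAM, within reach of a 24 GB card like an RTX 3090/4090.
quant_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_compute_dtype=torch.float16,
)

tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
model = AutoModelForCausalLM.from_pretrained(
    MODEL_ID,
    quantization_config=quant_config,
    device_map="auto",  # place all layers on the single available GPU
)

prompt = "What would count as 'comparable to ChatGPT'?"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```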