12B-ish; anywhere from 10-15B is fine. Lower is also cool: a 1B-parameter model that matches performance will also resolve YES.
🏅 Top traders
# | Name | Total profit
---|---|---
1 | | Ṁ683
2 | | Ṁ484
3 | | Ṁ374
4 | | Ṁ273
5 | | Ṁ272
@Issc It would be hard to argue it's open source if we can't figure out the parameter count.
@dmayhem93 I get that it's the resolution criteria; it just seems very unlikely. OpenAI is super conservative with what they release.
@Issc They've released some non-LLMs like Whisper and CLIP. The last LLM was GPT-2. https://github.com/openai
I didn't think it was likely either but Mira buying ~1,500 makes me reconsider
@sucralose I think they might just be trying to spike the market idk
All good points, actually. Still sitting on NO because I think they're aiming more toward a safe, closed environment, especially with regulation kicking up.
@rockenots Rounded, yes; mostly to account for the awkward reporting of model weights, which occasionally excludes embedding layers or rounds the count.
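To illustrate why the rounding caveat matters: excluding the embedding matrix can move a reported count across a threshold like 15B. A minimal sketch with made-up numbers (the function and all figures below are hypothetical, not from any real model card):

```python
def total_params(vocab_size, d_model, non_embedding_params):
    """Total parameter count including the input embedding matrix."""
    embedding_params = vocab_size * d_model
    return non_embedding_params + embedding_params

# A hypothetical model: ~14.8B non-embedding params, 50k vocab, d_model=8192.
non_embed = 14_800_000_000
total = total_params(50_000, 8192, non_embed)  # adds ~0.41B embedding params

print(f"non-embedding: {non_embed / 1e9:.2f}B")  # 14.80B
print(f"total:         {total / 1e9:.2f}B")      # 15.21B -- crosses 15B
```

Depending on which figure a release reports, the same model could land inside or outside a 10-15B band, hence resolving on the rounded number.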
@dmayhem93 clearly I should have read the description, but having too many parameters seems like an odd complaint
@Tomoffer Not sure what you mean here. This is a bet on (OpenAI dropping a new open-source LLM) AND (that model matching and/or beating their previous GPT-3 at no more than 1/10th of its size).
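The two-part condition above can be sketched as a simple conjunction. This is just my reading of the thread (GPT-3 at 175B, so the size cap would be 17.5B), not the official resolution code:

```python
GPT3_PARAMS = 175_000_000_000  # GPT-3 (davinci) parameter count

def resolves_yes(model_params, matches_gpt3_performance):
    """YES requires an open-source release that matches or beats GPT-3
    at no more than 1/10th of its parameter count."""
    return matches_gpt3_performance and model_params <= GPT3_PARAMS / 10

print(resolves_yes(15_000_000_000, True))   # True: 15B <= 17.5B and it matches
print(resolves_yes(15_000_000_000, False))  # False: size is fine, performance isn't
```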