Will a 15 billion parameter LLM match or outperform GPT4 in 2024?
Dec 25 · 8% chance

Comparison baseline: GPT-4's benchmark results as of its release in March 2023.

Models of up to 17 billion parameters are acceptable.


Sébastien Bubeck (Microsoft Research) recently said that he thinks a 13 billion parameter model could do this. There have been reports of others working on similarly sized models aiming for similar performance.


@firstuserhere Source? That's interesting speculation.
