Resolves according to my subjective judgment of the state of the art / expert consensus. (I don't have a better resolution method.)
(If you think this market is poorly written, poorly specified, or contains a mistake, tell me, and I may edit it within the first couple of days.)
Operationalizing a claim from https://www.semianalysis.com/p/google-we-have-no-moat-and-neither (via the Hacker News comments).
We don't actually know if there is a gap.
https://lmsys.org/blog/2023-03-30-vicuna/
Vicuna-13B, arguably the best open-source model at the moment, has been estimated to be roughly on par with Google Bard in capability. So whether there is a gap hinges on what Google is keeping internal: PaLM might not be as good as people think, in which case there would hardly be a gap at all.
Sister markets:
"The gap between the quality of open-source language models and Google's internal language models will close" within two years / within five years
"Large models aren't more capable in the long run if we can iterate faster on small models" within two years / within five years