There's a piece that was published recently and generated a bunch of controversy on social media over its central claim.
The claim itself is unclear, but since the article's argument is that Google has much more hardware than OpenAI, I will be steelmanning the author's argument and interpreting it as meaning that Gemini is trained on 5x as much "effective compute." I define effective compute as the compute resources used for training, adjusted for differences in efficiency. This might be something like FLOPs, but accounting for the fact that I expect Gemini to be trained on TPUs rather than GPUs, the number of examples seen during training, etc.

I will try to judge as objectively as possible, but some subjectivity is unavoidable, so I will not bet on this market. This market resolves once information that allows me to judge the claim is made publicly available. If that doesn't happen by the end of 2024 (since by then the models will have changed significantly), it will resolve N/A.
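Since "effective compute" is inherently fuzzy, here is a minimal sketch of one way it could be operationalized, assuming the common ~6·N·D rule of thumb for dense-transformer training FLOPs. Every adjustment factor and number below is a hypothetical illustration, not a resolution criterion.

```python
# Hypothetical sketch of "effective compute"; every number and factor
# below is an illustrative assumption, not a resolution criterion.

def training_flops(n_params: float, n_tokens: float) -> float:
    """Rough dense-transformer training cost via the common ~6*N*D rule."""
    return 6.0 * n_params * n_tokens


def effective_compute(
    n_params: float,
    n_tokens: float,
    hardware_factor: float = 1.0,  # e.g. TPU-vs-GPU efficiency (assumed)
    data_factor: float = 1.0,      # e.g. data quality / repeated examples (assumed)
) -> float:
    """Raw training FLOPs scaled by efficiency adjustments."""
    return training_flops(n_params, n_tokens) * hardware_factor * data_factor


# Toy comparison of two hypothetical training runs against the 5x bar.
run_a = effective_compute(n_params=1.0e12, n_tokens=1.0e13, hardware_factor=1.1)
run_b = effective_compute(n_params=2.0e11, n_tokens=6.0e12)
print(f"effective-compute ratio: {run_a / run_b:.1f}x (bar: 5x)")
```

In practice the adjustment factors are exactly the subjective part, which is why resolution waits for public information rather than estimates like these.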
Please note that 5x the compute does not mean the model generates text that is 5x "better," whatever that would mean.