
Did Gemini 1.5 Pro achieve long-context reasoning through retrieval?
50% chance
There is no way an attention network is that good.
- Understanding a 1-hour video
- 99% accuracy on Needle in a Haystack (a minimal probe of this test is sketched below)
- Learning a language that no one speaks by reading a grammar book "in context"
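For concreteness, here is a minimal sketch of a needle-in-a-haystack probe: a "needle" sentence is hidden at a random depth in a long filler context, and the model is scored on whether its answer recovers it. `query_model` is a hypothetical stand-in for any long-context LLM API, not Gemini's actual interface; the reported 99% figure also varies context length, while this sketch randomizes only the needle's position.

```python
import random

def needle_in_haystack_trial(query_model, filler_docs, needle, question, answer):
    """Hide a 'needle' sentence at a random depth in a long context,
    then check whether the model's answer recovers it."""
    docs = list(filler_docs)
    docs.insert(random.randrange(len(docs) + 1), needle)  # random depth
    context = "\n".join(docs)
    response = query_model(f"{context}\n\nQuestion: {question}")
    return answer.lower() in response.lower()

def needle_accuracy(query_model, filler_docs, needle, question, answer, trials=100):
    """Fraction of trials in which the needle was recovered."""
    hits = sum(
        needle_in_haystack_trial(query_model, filler_docs, needle, question, answer)
        for _ in range(trials)
    )
    return hits / trials
```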
Resolves YES if we later find out that the long-context ability was enhanced by agents/retrieval/search/etc., i.e. it was not achieved merely by extending the attention mechanism.
Resolves NA if I can't find out by EOY 2024.
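The distinction the YES criterion draws can be made concrete. A pure-attention system pushes the entire context through the transformer in one pass; a retrieval-enhanced system embeds chunks of the context and feeds only the most question-relevant ones to the model. A rough sketch of the two pipelines, assuming hypothetical `generate` and `embed` callables (neither is a real Gemini API):

```python
import numpy as np

def pure_attention_answer(generate, context, question):
    # Pure long-attention: the whole context goes through the model at once.
    return generate(context + "\n\n" + question)

def retrieval_augmented_answer(generate, embed, context, question,
                               chunk_size=1000, top_k=5):
    # Retrieval-enhanced: embed fixed-size chunks, keep only the top_k
    # most similar to the question, and generate from that subset.
    chunks = [context[i:i + chunk_size] for i in range(0, len(context), chunk_size)]
    chunk_vecs = np.array([embed(c) for c in chunks])   # (n_chunks, dim)
    q_vec = np.asarray(embed(question))                 # (dim,)
    scores = chunk_vecs @ q_vec  # cosine similarity if embeddings are unit-norm
    best = np.argsort(scores)[-top_k:][::-1]
    selected = "\n".join(chunks[i] for i in best)
    return generate(selected + "\n\n" + question)
```

If Gemini 1.5 Pro's long-context results came from something like the second path rather than the first, this market resolves YES.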
This question is managed and resolved by Manifold.
Related questions
Will GPT-5 have function-calling access to an o1-like reasoning model upon release?
49% chance
Will Gemini 1.5 Pro seem to be as good as Gemini 1.0 Ultra for common use cases? [Poll]
70% chance
Will Gemini outperform GPT-4 at mathematical theorem-proving?
62% chance
Will Google Gemini do as well as GPT-4 on Sparks of AGI tasks?
76% chance
Will Gemini exceed the performance of GPT-4 on the 2022 AMC 10 and AMC 12 exams?
72% chance
What will be true of Gemini 2?
Will Google Gemini be able to answer the simple geometry/number theory question in the description?
14% chance