[M5000 subsidy] Will finetuned GPT-3.5 solve any freshly-generated Sudoku puzzle? (2023)
Will Google Bard become better than GPT-4 at any point before September 2024?
Will there be a version of GPT-4 with a context window of 100k tokens this year?
Will GPT-4 be trained on more than 10T text tokens?
Will we train GPT-4 to generate resolution criteria better than the creator 50% of the time by the end of 2023?
Will mechanistic interpretability be essentially solved for GPT-2 before 2030?
Will Google's Gemini outperform GPT-4 in the SuperGLUE benchmark test by December 2023?
Will GPT-5 be released incrementally as GPT-4.x for different checkpoints from the training run?
Will DALL-E 3 be able to generate arbitrary non-adversarial text?
Will a GPT-4-level system be trained for under $1 million by 2030?
Will mechanistic interpretability be essentially solved for GPT-3 before 2030?
We are going to start running out of data to train large language models.
There will be an open-source LLM approximately as good as or better than GPT-4 before 2025
Will there be an OpenAI LLM known as GPT-4.5 by 2033?
Will an open-source LLM beat or match GPT-4 by the end of 2024?
Will a model be trained using at least as much compute as GPT-3 using AMD GPUs before Jan 1 2026?
Will a GPT-3-quality model be trained for under $10,000 by 2030?
Will Gary Marcus tweet at least 10 examples of GPT-4 failures that aren't disproven/fixed within 24 hours? (in 2023)
Will GPT, or AI systems that have GPT as their main component, become as reliably factual as Wikipedia, before 2026?
Will any Google model exceed ChatGPT in interest? (by 2025)