Will a major AI lab release a model that does search (e.g. Monte Carlo tree search) at inference time by 2025?
78% chance

Simply put, 'search' means using additional compute at inference time to improve model outputs. Currently, services offered by AI labs don't use search - they generate one output for one input.

Effectively, anything that uses more than one output to arrive at a final response for a prompt counts as search. The most minimal qualifying algorithm is: given a prompt, the model generates two outputs, and the user is shown whichever one a heuristic judges to be better.

For example, best-of-N sampling, where N outputs are generated and then ranked by a verifier, counts as search.
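
As a minimal Python sketch of best-of-N sampling: `generate` and `score` below are hypothetical stand-ins for a model sampling call and a verifier (neither is defined in this question), and with n=2 this reduces to the minimal two-output algorithm described above.

```python
import random
from typing import Callable, List


def best_of_n(prompt: str,
              generate: Callable[[str], str],
              score: Callable[[str, str], float],
              n: int = 4) -> str:
    """Sample n candidate outputs and return the one the scorer ranks highest."""
    candidates: List[str] = [generate(prompt) for _ in range(n)]
    return max(candidates, key=lambda c: score(prompt, c))


# Toy usage with stand-in functions (not a real model or verifier):
def toy_generate(p: str) -> str:
    return f"{p} -> answer {random.randint(0, 9)}"


def toy_score(p: str, c: str) -> float:
    return float(len(c))  # pretend that longer outputs are better


print(best_of_n("What is 2 + 2?", toy_generate, toy_score, n=4))
```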

Other examples of search include:

- Majority voting (a sketch follows this list)
- Beam search
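
For illustration, majority voting (often called self-consistency) could look like the Python sketch below; `generate_answer` is a hypothetical stand-in for sampling a final answer from a model, not anything defined in this question.

```python
from collections import Counter
from typing import Callable


def majority_vote(prompt: str,
                  generate_answer: Callable[[str], str],
                  n: int = 5) -> str:
    """Sample n answers and return the most frequent one (self-consistency)."""
    answers = [generate_answer(prompt) for _ in range(n)]
    return Counter(answers).most_common(1)[0][0]
```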

Some things that aren't search are:

- Web search
- Tool use

Additionally, 'revision models' (and similar schemes) count as search - more info on those is available here: https://www.semanticscholar.org/paper/Recursive-Introspection%3A-Teaching-Language-Model-to-Qu-Zhang/bdc8c92a44714b468b40ef3d77e96d966f93141b
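
To make the revision idea concrete, here is a rough Python sketch of a generic revise-and-select loop; `generate`, `revise`, and `score` are hypothetical stand-ins for model calls, and this only illustrates the general scheme, not the specific method from the linked paper.

```python
from typing import Callable


def revise_and_select(prompt: str,
                      generate: Callable[[str], str],
                      revise: Callable[[str, str], str],
                      score: Callable[[str, str], float],
                      rounds: int = 3) -> str:
    """Generate an initial output, repeatedly revise it, and keep the best attempt."""
    best = generate(prompt)
    best_score = score(prompt, best)
    current = best
    for _ in range(rounds):
        current = revise(prompt, current)  # model rewrites its own previous output
        current_score = score(prompt, current)
        if current_score > best_score:
            best, best_score = current, current_score
    return best
```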

A major AI lab could be any of the following:

Any FAANG company, OpenAI, Google (DeepMind), Anthropic, xAI, Microsoft, Stability AI, or NVIDIA
