Will prompting of generalist models be the dominant method of interaction with LLMs on 1 January 2028?
2028 · 50% chance

Prompting means direct prompting of a generalist model, and it includes optimized prompts used with a generalist model, such as those created through a framework like DSPy.
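
For concreteness, here is a rough sketch of what still counts as prompting under this definition: a plain DSPy call over a generalist model, plus an optimizer pass that tunes the prompt externally. The model name, metric, and training example are placeholders, and the exact DSPy API has shifted between versions.

```python
import dspy
from dspy.teleprompt import BootstrapFewShot

# Point DSPy at a generalist model (placeholder name; assumes an API key in the environment).
dspy.configure(lm=dspy.LM("openai/gpt-4o-mini"))

# A plain prompt expressed as a DSPy module: question in, answer out.
qa = dspy.Predict("question -> answer")
print(qa(question="What is in-context learning?").answer)

# Optionally let an optimizer tune the prompt (instructions, few-shot demos)
# against a tiny labeled set; this still counts as prompting a generalist model.
trainset = [dspy.Example(question="What is 2 + 2?", answer="4").with_inputs("question")]

def exact_match(example, prediction, trace=None):
    return example.answer.lower() in prediction.answer.lower()

qa_optimized = BootstrapFewShot(metric=exact_match).compile(qa, trainset=trainset)
print(qa_optimized(question="What is 3 + 3?").answer)
```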

Alternatives to direct prompting of a generalist model would include fine-tuning a generalist model with adapters and then prompting the tuned model to get specific behavior, or using representation vectors with prompts over a generalist model, where the behavior is controlled using the representation vector.
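
As a rough illustration of the adapter alternative (a sketch, not any particular vendor's workflow), a LoRA-style fine-tune with Hugging Face PEFT might look like the following; the base model, data, and hyperparameters are placeholders.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import LoraConfig, get_peft_model

base = "gpt2"  # placeholder generalist model
tokenizer = AutoTokenizer.from_pretrained(base)
model = AutoModelForCausalLM.from_pretrained(base)

# Attach small trainable adapter matrices; the base weights stay frozen.
model = get_peft_model(model, LoraConfig(r=8, lora_alpha=16, lora_dropout=0.05, task_type="CAUSAL_LM"))
model.print_trainable_parameters()

# One illustrative gradient step on a toy example of the desired behavior.
batch = tokenizer("Q: system status? A: ALL_SYSTEMS_GO", return_tensors="pt")
optimizer = torch.optim.AdamW(model.parameters(), lr=1e-4)
loss = model(**batch, labels=batch["input_ids"]).loss
loss.backward()
optimizer.step()

# At inference time you still write a prompt, but the adapter shapes the behavior.
prompt = tokenizer("Q: system status? A:", return_tensors="pt")
out = model.generate(**prompt, max_new_tokens=8)
print(tokenizer.decode(out[0], skip_special_tokens=True))
```

The point of the example is that the prompt alone no longer determines the behavior; the adapter weights do part of the work.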

Part of the question here is whether the market will make fine-tuning available in an ordinary programming workflow, to the point that it becomes the dominant method of controlling model behavior.
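
If that happens, the workflow could be as mundane as a couple of API calls. A sketch against OpenAI's hosted fine-tuning endpoint, with the data file and model name as placeholders:

```python
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

# Upload a small JSONL file of chat-formatted examples of the behavior you want.
training_file = client.files.create(
    file=open("behavior_examples.jsonl", "rb"),  # placeholder path
    purpose="fine-tune",
)

# Kick off the fine-tune on a generalist base model (placeholder name).
job = client.fine_tuning.jobs.create(
    training_file=training_file.id,
    model="gpt-4o-mini-2024-07-18",
)
print(job.id, job.status)

# Once the job finishes, the tuned model is prompted like any other model:
# client.chat.completions.create(
#     model=job.fine_tuned_model,
#     messages=[{"role": "user", "content": "system status?"}],
# )
```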


Generalism comes with benefits. That's the whole motivation for building general-purpose computers instead of 100 different machines for separate jobs like adding, multiplying, and counting (as some early IBM punch-card machines did). What does seem likely to me is creating generalists out of mixtures of specialist sub-components, sort of like MoE models. But there's a semantic problem with trying to draw such a distinction.
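
To make "generalists out of mixtures of specialist sub-components" concrete, here is a toy mixture-of-experts layer: a learned router sends each token to one specialist MLP, yet the whole thing is trained and prompted as a single general model. The sizes and the top-1 routing are purely illustrative.

```python
import torch
import torch.nn as nn

class ToyMoELayer(nn.Module):
    """Routes each token to one of several specialist MLPs (top-1 routing)."""

    def __init__(self, d_model=64, n_experts=4):
        super().__init__()
        self.router = nn.Linear(d_model, n_experts)
        self.experts = nn.ModuleList(
            nn.Sequential(nn.Linear(d_model, 4 * d_model), nn.GELU(),
                          nn.Linear(4 * d_model, d_model))
            for _ in range(n_experts)
        )

    def forward(self, x):  # x: (tokens, d_model)
        scores = self.router(x).softmax(dim=-1)  # how strongly each expert claims each token
        choice = scores.argmax(dim=-1)           # top-1 expert per token
        out = torch.zeros_like(x)
        for i, expert in enumerate(self.experts):
            mask = choice == i
            if mask.any():
                # Scale by the router weight so the gate stays differentiable.
                out[mask] = expert(x[mask]) * scores[mask, i].unsqueeze(-1)
        return out

print(ToyMoELayer()(torch.randn(10, 64)).shape)  # torch.Size([10, 64])
```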

... or using representation vectors with prompts over a generalist model, where the behavior is controlled using the representation vector.

Can you give an example of a project doing this? It's not obvious to me why this should be considered non-generalism.

The main idea of the question is whether "prompts over a generalist model" will be the dominant mode of interaction, and you normally need to run a quick training loop to get representation vectors (at least from what I've seen).

The idea the question is meant to capture is: will the method of interaction that we see today with LLMs still be the dominant mode of interaction in 3 years (barring it just being made more efficient external to the LLM, as with DSPy), or will it be something different?

Representation vectors would be different because you would no longer just be writing a prompt and handing it to the model.
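
For what that difference looks like in practice, here is a rough sketch of steering with a representation vector: the vector is derived from a few contrastive prompts (one common recipe; the quick training loop mentioned above is another way to get it) and then added to the residual stream during generation. The model, layer, prompts, and scale are all placeholders.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

name = "gpt2"  # placeholder generalist model
tok = AutoTokenizer.from_pretrained(name)
model = AutoModelForCausalLM.from_pretrained(name, output_hidden_states=True)
layer = 6  # which transformer block to steer; an illustrative choice

def mean_hidden(prompts):
    """Mean activation at the output of block `layer` (last token) over a few prompts."""
    states = []
    with torch.no_grad():
        for p in prompts:
            out = model(**tok(p, return_tensors="pt"))
            states.append(out.hidden_states[layer + 1][0, -1])  # output of block `layer`
    return torch.stack(states).mean(dim=0)

# The control signal comes from contrast pairs, not from the prompt itself.
steer = (mean_hidden(["I am very happy.", "What a wonderful day."])
         - mean_hidden(["I am very sad.", "What a terrible day."]))

def add_vector(module, inputs, output):
    hidden = output[0] if isinstance(output, tuple) else output
    hidden = hidden + 4.0 * steer  # the scale is a knob, not a recommendation
    return (hidden,) + output[1:] if isinstance(output, tuple) else hidden

handle = model.transformer.h[layer].register_forward_hook(add_vector)
ids = tok("Today I feel", return_tensors="pt")
print(tok.decode(model.generate(**ids, max_new_tokens=15)[0]))
handle.remove()
```

The prompt is still there, but it is no longer the only thing steering the model.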