Will LLMs' loss function reach the entropy of human text by the end of 2030?
61% chance

Text generated by our civilization has an inherent entropy: an irreducible amount of unpredictability per token. The loss function of a large language model measures the mismatch between the model's predictions and actual human-generated text. As models improve this loss decreases, but it cannot fall below the natural entropy of human text. The original scaling laws paper (https://arxiv.org/abs/2001.08361) speculated about when and how this entropy floor might be reached, and the idea is taken seriously within the AI research community.
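For intuition: the cross-entropy loss decomposes as H(P, Q) = H(P) + KL(P || Q), where P is the true distribution of human text and Q is the model. Training can only shrink the KL term, so H(P) is a hard floor. Below is a minimal sketch of the kind of saturating curve this implies; the functional form is a generic saturating power law in the spirit of published scaling laws, and every constant is invented for illustration (none are fitted values from the paper).

```python
import numpy as np

# Illustrative saturating scaling law:
#   L(C) = E + (C_c / C) ** alpha
# where E is the irreducible loss (the entropy of human text, in nats/token)
# and the power-law term is the reducible KL part that shrinks with compute C.
# E, C_c, and alpha are hypothetical, chosen only to make the shape visible.

E = 2.0       # assumed entropy floor, nats/token (hypothetical)
C_c = 1e15    # assumed compute scale constant (hypothetical)
alpha = 0.05  # assumed power-law exponent (hypothetical)

def loss(compute):
    """Predicted loss at a given training compute (arbitrary FLOP-like units)."""
    return E + (C_c / compute) ** alpha

for c in [1e18, 1e21, 1e24, 1e27]:
    print(f"compute={c:.0e}  loss={loss(c):.3f}  excess over floor={loss(c) - E:.3f}")
```

The market asks, in effect, whether this reducible excess will visibly bottom out by the end of 2030.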

Resolution Criteria:

1. Evidence of Loss Function Plateau:

- There must be significant evidence that further progress in frontier large language models no longer leads to a decrease in the loss function. This includes:

- Research papers and technical reports showing that improvements in model architecture, training data, and compute resources no longer yield significant reductions in loss.

- Analysis of loss function trends over time, indicating a plateau (a minimal illustrative fit is sketched below).
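As one hedged sketch of what such a trend analysis could look like (this is not itself part of the resolution criteria): fit a saturating power law to reported (compute, loss) pairs and read off the estimated floor. The data points and the functional form below are hypothetical placeholders.

```python
import numpy as np
from scipy.optimize import curve_fit

# Hypothetical (training compute, eval loss) points; placeholders, not real results.
compute = np.array([1e18, 1e19, 1e20, 1e21, 1e22, 1e23])
loss    = np.array([2.41, 2.37, 2.33, 2.29, 2.26, 2.23])

def scaling_law(c, E, k, alpha):
    # Saturating power law: loss approaches the irreducible floor E as c grows.
    return E + k * c ** -alpha

# Reasonable initial guesses keep the fit well-behaved despite the huge range of c.
p0 = [2.0, 3.0, 0.05]
(E_hat, k_hat, alpha_hat), _ = curve_fit(scaling_law, compute, loss, p0=p0, maxfev=20000)

print(f"estimated entropy floor E ≈ {E_hat:.3f} nats/token")
print(f"latest observed loss       {loss[-1]:.3f}")
print(f"estimated reducible loss ≈ {loss[-1] - E_hat:.3f}")
```

A plateau in the sense of criterion 1 would show up as the estimated reducible loss becoming small relative to measurement noise across independent analyses.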

2. Consensus in Research Community:

- There must be at least moderate consensus in the AI research community that the plateau in the loss function is due to reaching the entropy level of existing human-generated texts. This consensus can be demonstrated by:

- Publications in peer-reviewed journals or conferences where multiple researchers or groups independently arrive at this conclusion.

- Statements or endorsements from at least some leading AI researchers or organizations acknowledging that the loss function has approached the theoretical minimum entropy of human text.
