Are HLI Metrics against AGI a Red Herring?

Inspired by:

https://arxiv.org/pdf/2304.00002.pdf

Many markets on Manifold ask either A) when a particular form of AGI (Artificial General Intelligence) will be achieved, or B) how AI performs on metrics measuring segments of HLI (Human-Level Intelligence).

Whereas:

AI simply does not have the means to collect such data in the way humans can, and it suffers from various other contextual shortcomings: data ingestion, contextual comprehension, generalized experience, using past events to predict future events, drawing connections between different domains, etc.

At close, this market resolves YES if HLI metrics are shown to be a red herring, that is, if the shortcomings above prove to be overwhelming barriers to achieving ANY of the particular forms of AGI discussed; otherwise it resolves NO.


This is a very interesting question! OpenAI was able to predict in advance exactly where GPT-4's text-prediction performance would land on the scaling curve, but that did not seem to let anyone predict in advance that the resulting system would score in the top decile on the Uniform Bar Exam. And some capabilities possessed by existing AI systems (such as GPT-4's ability to decide to call an API and then do so by writing out the wire-level JSON on the fly, unassisted) are not even comparable to things humans can do.
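
To make the "wire-level JSON" point concrete, here is a minimal sketch of a harness dispatching a model-emitted tool call. The JSON shape, the `get_weather` tool, and the `dispatch_tool_call` helper are all hypothetical illustrations, not any particular vendor's API.

```python
import json

# Hypothetical tool the model can request; name and signature are
# illustrative only.
def get_weather(city: str) -> str:
    return f"Sunny in {city}"  # stub result for the sketch

TOOLS = {"get_weather": get_weather}

# The kind of wire-level JSON a model might write out on the fly,
# unassisted, to request an API call (the shape is an assumption).
model_output = '{"tool": "get_weather", "arguments": {"city": "Reno"}}'

def dispatch_tool_call(raw: str) -> str:
    """Parse model-emitted JSON and invoke the named tool."""
    call = json.loads(raw)
    fn = TOOLS[call["tool"]]
    return fn(**call["arguments"])

print(dispatch_tool_call(model_output))  # -> "Sunny in Reno"
```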

But I don't feel I understand the resolution criteria well enough to bet. What would the world look like if HLI metrics were shown (or not shown) to be a red herring?

@ML Honestly, I am not sure. Any suggestions?
