Will we find polysemanticity via superposition in neurons in the brain before 2040?

Polysemantic neurons in a neural network fire on a conceptually wide range of inputs; in other words, they do not correspond to a single semantic pattern.

A leading theory of why polysemantic artificial neurons form is superposition (https://arxiv.org/abs/2209.10652): roughly, the theory that an overcomplete basis of concepts, each represented by a vector, is packed into a relatively low-dimensional representation space, at the cost of occasional interference effects.
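To make the idea concrete, here is a minimal toy sketch (mine, not from the linked paper) of superposition: more near-orthogonal feature directions than dimensions, small but non-zero interference between them, and a single coordinate "neuron" that responds to many features. The dimensions, threshold, and variable names are illustrative assumptions only.

```python
# Toy sketch of superposition: pack many "concept" directions into a
# low-dimensional space and measure the interference that results.
# All sizes and thresholds below are illustrative, not from the paper.
import numpy as np

rng = np.random.default_rng(0)
d, n_features = 32, 256  # 256 concepts squeezed into 32 dimensions

# Random unit vectors serve as an overcomplete set of feature directions.
features = rng.normal(size=(n_features, d))
features /= np.linalg.norm(features, axis=1, keepdims=True)

# Interference: off-diagonal overlaps between distinct feature directions
# are small but non-zero, which is the cost of the packing.
overlaps = features @ features.T
off_diag = overlaps[~np.eye(n_features, dtype=bool)]
print(f"mean |interference|: {np.abs(off_diag).mean():.3f}")

# A single "neuron" (one coordinate axis) picks up many features at once,
# which is one way polysemanticity can arise under superposition.
neuron_responses = np.abs(features[:, 0])
print(f"features driving neuron 0 above 0.2: {(neuron_responses > 0.2).sum()}")
```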

Will we identify biological neurons that are polysemantic, in the sense that their firing pattern does not closely correspond to a single human-interpretable cause or concept? For a YES resolution, we must find such neurons along with clear evidence that the primary cause of this polysemanticity is superposition.

One way to demonstrate superposition might be to show that the neuron lies in a local cluster corresponding to an embedding-like space of features, and to make accurate predictions, following the theory of superposition, about how interference will occur.
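As a hedged illustration of what such an interference-prediction test could look like (everything here is hypothetical; the "observed" responses are simulated stand-ins, whereas a real test would use recorded data), the sketch below computes the interference pattern superposition predicts for one unit and checks how well it matches the observations.

```python
# Hypothetical sketch of an interference-prediction test. We assume estimated
# feature directions and a recorded unit's readout direction are available;
# here both are simulated, and "observed" responses stand in for real data.
import numpy as np

rng = np.random.default_rng(1)
d, n_features = 32, 256

# Estimated feature directions (unit vectors), as in the previous sketch.
features = rng.normal(size=(n_features, d))
features /= np.linalg.norm(features, axis=1, keepdims=True)

# The recorded unit, modeled as a fixed readout direction in the same space.
unit_direction = rng.normal(size=d)
unit_direction /= np.linalg.norm(unit_direction)

# Superposition's prediction: the unit's response to feature j scales with
# the overlap between feature j's direction and the unit's direction.
predicted = features @ unit_direction

# Stand-in "observed" responses; in a real test these would be measured
# firing rates for stimuli isolating each feature.
observed = predicted + 0.05 * rng.normal(size=n_features)

# Agreement between predicted and observed interference supports superposition.
corr = np.corrcoef(predicted, observed)[0, 1]
print(f"predicted vs. observed interference correlation: {corr:.3f}")
```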

The terms here are meant to be used loosely, in the sense that I will resolve YES in situations which do not exactly match this description as long as they fit the spirit of this market. For example, I’ll resolve YES if we observe polysemanticity not in neurons but among slightly larger circuits of neurons.
