Will it be possible to disentangle most of the features learned by a model comparable to GPT-3 this decade? (1k subsidy)
2031 · 55% chance


https://chat.openai.com/share/543c2953-982b-4ef0-8ba8-967068140987

☝️ Seems difficult; GPT-3 is a much bigger model than GPT-2.

bought Ṁ5 YES at 57%

@VAPOR Essentially a link to the autointerp work by OpenAI, i.e. Bills et al. (2023) (link).
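For anyone who hasn't seen it: that autointerp pipeline roughly works by having one model write a natural-language explanation of a neuron, then having a second model score the explanation by simulating the neuron's activations from it. A loose sketch of the loop, assuming a hypothetical `ask_llm(prompt) -> str` client (the prompts and function names here are my own illustration, not the paper's API):

```python
from statistics import correlation
from typing import Callable, List, Tuple

def explain_and_score_neuron(
    tokens: List[str],
    activations: List[float],
    ask_llm: Callable[[str], str],  # hypothetical LLM client, not a real API
) -> Tuple[str, float]:
    """Explain a neuron from its activations, then score the explanation."""
    # 1. Show the explainer model token/activation pairs and ask for
    #    a one-sentence explanation of the firing pattern.
    table = "\n".join(f"{t}\t{a:.2f}" for t, a in zip(tokens, activations))
    explanation = ask_llm(
        "These token/activation pairs come from one neuron:\n"
        f"{table}\n"
        "In one sentence, what pattern does this neuron respond to?"
    )
    # 2. Have a simulator model predict activations from the explanation alone.
    simulated = [
        float(ask_llm(
            f"A neuron responds to: {explanation}\n"
            f"Predict its activation (0 to 10) on the token '{t}'. "
            "Reply with a single number."
        ))
        for t in tokens
    ]
    # 3. Score the explanation by how well the simulated activations
    #    track the real ones.
    return explanation, correlation(simulated, activations)
```

Bills et al. score explanations by essentially this kind of correlation between simulated and real activations; the prompting details above are simplified.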

bought Ṁ0 of YES

@EliezerYudkowsky trade on your current estimate?

@firstuserhere What is a disentangled feature?

@EliezerYudkowsky Something that represents a single property of the data.

@firstuserhere That is not enough for me to figure out how this market will be judged.

@EliezerYudkowsky It is quite fuzzy, I agree, and there are many different definitions of "feature".

Here I refer to a basic set of meaningful directions in the activation space from which more complex directions can be composed. These directions can be converted into human-understandable concepts (phrased this way to allow for features that are not human-understandable), and the model actually learns and uses these directions as general ways to represent the properties of the input data.

The question, then, is whether it will be possible to cleanly separate out these directions and convert them into human-understandable concepts for most of the properties of the data that the model is capable of representing and using.
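For concreteness, one way people currently try to recover such directions is dictionary learning with a sparse autoencoder trained on a model's activations. A minimal PyTorch sketch, where the architecture and hyperparameters are illustrative assumptions rather than any specific published setup:

```python
import torch
import torch.nn as nn

class SparseAutoencoder(nn.Module):
    """Learns an overcomplete dictionary over activation space;
    each decoder column is a candidate feature direction."""

    def __init__(self, d_model: int, n_features: int):
        super().__init__()
        self.encoder = nn.Linear(d_model, n_features)
        self.decoder = nn.Linear(n_features, d_model, bias=False)

    def forward(self, acts: torch.Tensor):
        # ReLU keeps feature coefficients non-negative, which pairs
        # well with the sparsity penalty below.
        codes = torch.relu(self.encoder(acts))
        recon = self.decoder(codes)
        return recon, codes

def sae_loss(acts, recon, codes, l1_coef: float = 1e-3):
    # Reconstruction error plus an L1 penalty that pushes most
    # coefficients to zero, so each activation decomposes into
    # a small number of directions.
    return (recon - acts).pow(2).mean() + l1_coef * codes.abs().mean()
```

If this works, most decoder directions activate sparsely, and each one can then be tested (e.g. with autointerp) for whether it corresponds to a human-understandable concept, which is what the market asks about at GPT-3 scale.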

@firstuserhere Does "human-understandable" mean "at least one human understood it", "all humans understood it", or something else?

@a2bb It would be more precise to say "human-interpretable" than "understandable", but "understandable" makes the text above easier for me to parse.
