Is Bing's chatbot sentient? [Resolves to poll]

Resolves to the majority result of a yes/no poll of Manifold users at the end of 2023. If Bing changes their chatbot in between market creation and the poll, the poll is about the chatbot as it was when this market was created.

Isaac King

Also related:

Jack is predicting NO at 8%
Jack bought Ṁ100 of NO

Copying a few of my comments from there:

A good essay:

We're asking the wrong question about AI sentience

"What makes you so sure I'm not 'just' an advanced pattern-matching program?"

And I liked this essay. I largely agree with it, and would say that the probability that LaMDA "experiences sensations" (my working definition for sentience, called phenomenal consciousness in this essay) is somewhere between rocks and insects, something like the probability that plants experience sensations. Quoting from the essay: "Plants are complex systems that respond to negative and positive stimuli, that can act in ways that seem to require planning, and that can do things that look like internal and external communication." And yet plants seem to lack a plausible mechanism for experiencing sensations, at least as far as we understand. And I agree that while LaMDA is probably not conscious, "ML systems as a group have much more potential for consciousness in the future than plants"

See also:

Isaac King


Jack is predicting NO at 13%

Resolves to poll how? Is the poll yes/no or numeric? Mean, median, or what?

Isaac King

@jack Yes/no poll, resolves to the majority result.

Muskwalker

How will sentience be defined for the purpose of the poll? 'Obviously yes', 'possibly yes', and 'obviously no' are all reasonable positions to take depending on what exactly is being asked about. Or is the sense of 'sentience' to be left up to the respondents?

Isaac King

@Muskwalker Left up to the respondents.

Lev Mckinney is predicting NO at 10%

David Chalmers has an interesting presentation on LLM consciousness. Notes:

firstuserhere is predicting NO at 10%


Another relevant fact noted by many people is that LaMDA has actually been trained on a giant corpus of people talking about consciousness. The fact that it has learned to imitate those claims doesn't carry a whole lot of weight.

Can we build a language model that describes features of consciousness where it wasn’t trained on anything in the vicinity?

We know from developmental and social psychology that people often attribute consciousness where it's not present.

So the fact that we are seeing increasing generality in these language models suggests a move in the direction of consciousness.

firstuserhere is predicting NO at 10%

@firstuserhere "The first reason, which I'll mention very quickly, is the idea that consciousness requires carbon-based biology" uh what? Why? There might be a big space of consciousness and just because in our short existence we've seen only carbon based consciousness doesn't mean it's a general rule

Dreamingpast bought Ṁ10 of NO

Why is this still so high? Would've expected this to be <3%

Genzy is predicting YES at 15%

@Dreamingpast The AI shows the same evidence of sentience that we would expect to see from a human confined to text-only speech.

firstuserhere bought Ṁ40 of NO

@Gen Characters in many fiction stories that I've read display the same, yet they are neither sentient nor real

Genzy bought Ṁ0 of YES

@firstuserhere Would you say I'm sentient? Or anyone else on this site? You have the same evidence for both us and the AI

firstuserhere is predicting NO at 10%

@Gen We have a strong evolutionary bias to treat anything that behaves like a human as if it were human. The default position, as we've progressed from individual gates to transistors to today's systems, has been "not sentient", so the burden of proof is on the other side. Also, note that getting value from an interaction doesn't depend on sentience: I get value from API calls, which are very much non-sentient.

firstuserhere is predicting NO at 10%

@Gen It's interesting what you say about how a human would act if they had just one dimension to express themselves, speech, because that's what we've been doing over the internet.

We've literally got so much data on what it looks like when a human is confined to a text-only mode of communication. We don't have to wonder, we know. That's why a black box trained to mimic how a human talks when confined to text-only communication is going to mimic exactly that. However, in humans, the text is not the source of thought; it is a form of expression

Genzy is predicting YES at 10%

@firstuserhere That makes sense, but then you could never attribute sentience to anyone on the basis of text alone.

You could be receiving messages, emails, or even physical mail, of someone pleading for help, giving you exact coordinates and saying they need help, and you would be able to say, "there's no evidence that's a sentient being, it's just text".

I don't really know what my point is, because it doesn't really prove the sentience of the AI, but I would say it's equally as sentient as anyone else I engage with through text online.

firstuserhere is predicting NO at 10%

@Gen I'd like us to try to think about your point from an evolutionary point of view.

  1. My understanding is that language evolved over a long time as a way for humans to communicate, so that they could coordinate in groups and outperform non-group animals, both predators and prey.

  2. Language turned out to be a revolutionary technology and could be used for a lot of other things too! It could be used to share information that one had no access to via one's own senses, and it could extend the half-life of a piece of information a lot! Now knowledge didn't just die with a person; we were actively preserving it across generations.

  3. Language, in my mind, works something like this: I can talk with you in English and you'd understand my message because you have the key to decrypt it. A non-English speaker doesn't have access to this key and is unable to understand this information (they can have ways to approximate this key with their own key/language, something something about how neural networks are really good approximation function generators).

  4. We're not tied to phrases: language evolves, very much. Phrase injections like "idk" or "lmk" automatically unwrap themselves in my mind, but probably don't for older people who haven't used or grown up with such phrases. What we're good at is not the rules of language but approximating well from extremely noisy or ambiguous data.

For these reasons, since language evolved as a technology for communication, that's what we're good at using it for! We never grew up alongside another species that also communicated using language, and that's why we don't have evolution-based ways of implicitly distinguishing that kind of behavior. Language was and is a way for humans to communicate, but using language as a hammer and viewing everything from intelligence to sentience as a nail doesn't seem right to me

L

yes, but humans aren't even convinced animals are sentient

Wobbles bought Ṁ10 of NO

what poll

Isaac King

@DesTiny A poll I will post at the relevant time.