Is Bing's chatbot sentient? [Resolves to poll]
Resolved NO on Jan 15

Resolves to the majority result of a yes/no poll of Manifold users at the end of 2023. If Bing changes its chatbot between market creation and the poll, the poll is about the chatbot as it was when this market was created.


Resolving to a poll seems quite unlikely to give interesting results, because then you're incentivized to bet on what you believe most people think is more likely than not to be true, rather than on what you yourself believe is true.

Also related:


Copying a few of my comments from there:

A good essay: https://www.slowboring.com/p/were-asking-the-wrong-question-about

We're asking the wrong question about AI sentience

What makes you so sure I'm not "just" an advanced pattern-matching program?

And I liked this essay: https://askellio.substack.com/p/ai-consciousness - I largely agree with it, and would say that the probability that LaMDA "experiences sensations" (my working definition for sentience, called phenomenal consciousness in this essay) is somewhere between that of rocks and that of insects - something like the probability that plants experience sensations. Quoting from the essay: "Plants are complex systems that respond to negative and positive stimuli, that can act in ways that seem to require planning, and that can do things that look like internal and external communication." And yet plants seem to lack a plausible mechanism for experiencing sensations, at least as far as we understand. And I agree that while LaMDA is probably not conscious, "ML systems as a group have much more potential for consciousness in the future than plants."

See also: https://manifold.markets/jack/what-does-sentience-mean

Related:


Resolves to poll how? Is the poll yes/no or numeric? Mean, median, or what?

@jack Yes/no poll, resolves to the majority result.

How will sentience be defined for the purpose of the poll? 'Obviously yes', 'possibly yes', and 'obviously no' are all reasonable positions to take depending on what exactly is being asked about. Or is the sense of 'sentience' to be left up to the respondents?

@Muskwalker Left up to the respondents.


David Chalmers has an interesting presentation on LLM consciousness. Notes: https://philarchive.org/archive/CHACAL-3


@LevMckinney

Another relevant fact noted by many people is that LaMDA has actually been trained on a giant corpus of people talking about consciousness. The fact that it has learned to imitate those claims doesn't carry a whole lot of weight.

Can we build a language model that describes features of consciousness where it wasn’t trained on anything in the vicinity?


We know from developmental and social psychology that people often attribute consciousness where it's not present.

So the fact that we are seeing increasing generality in these language models suggests a move in the direction of consciousness.


@firstuserhere "The first reason, which I'll mention very quickly, is the idea that consciousness requires carbon-based biology" uh what? Why? There might be a big space of consciousness and just because in our short existence we've seen only carbon based consciousness doesn't mean it's a general rule

Why is this still so high? Would've expected this to be <3%


@Dreamingpast The AI shows the same evidence of sentience that we would expect to see from a human confined to text-only speech. https://www.nytimes.com/2023/02/16/technology/bing-chatbot-microsoft-chatgpt.html

Paywall skip: https://archive.is/ekfLO

@Gen Characters in many fiction stories that I've read display the same evidence, yet they are neither sentient nor real.

@firstuserhere Would you say I'm sentient? Or anyone else on this site? You have the same evidence for both us and the AI


@Gen We have a strong evolutionary bias to treat anyone/anything that behaves like a human as if it were human. The default position, as we've progressed from individual gates to transistors to ... today, has been "not sentient", so the burden of proof is on the other side. Also, note that getting value from an interaction doesn't depend on sentience: I get value from API calls, which are very much non-sentient.


@Gen It's interesting what you say about how a human would act if they had just one dimension in which to express themselves - speech - and that's what we've been doing over the internet.

We've literally got so much data on what it looks like when a human is confined to a text-only mode of communication. We don't have to wonder; we know. That's why, if you train a black box to mimic how a human talks when confined to text-only communication, it's going to mimic exactly that. In humans, however, the text is not the source of thought; it is a form of expression.


@firstuserhere That makes sense, but then you could never diagnose anyone as sentient from text alone.

You could be receiving messages, emails, or even physical mail from someone pleading for help, giving you exact coordinates and saying they need help, and you would be able to say, "there's no evidence that's a sentient being, it's just text".

I don't really know what my point is, because it doesn't really prove the sentience of the AI, but I would say it's as sentient as anyone else I engage with through text online.


@Gen I'd like us to try to think about your point from an evolutionary point of view.

  1. My understanding is that language evolved over a long time as a way to communicate, so that humans could coordinate in groups and outperform non-group animals, both predators and prey.

  2. Language turned out to be a revolutionary technology and could be used for a lot of other things too! It could be used to share information that one had no access to via one's own senses, and to extend the half-life of a piece of information a lot. Now knowledge didn't just die with a person; we were actively preserving it across generations.

  3. Language - in my mind - works by an analogy something like this: I can talk with you in English and you'd understand my message because you have the key to decrypt it. A non-English speaker doesn't have access to this key and is unable to understand the information (they can have ways to approximate this key with their own key/language - something something about how neural networks are really good approximation-function generators).

  4. We're not tied to phrases - language evolves, very much. Phrase injections like "idk" or "lmk" automatically unwrap themselves in my mind, but probably don't for older people who haven't used or grown up with such phrases. What we're good at is not the rules of language but approximating well from extremely noisy or ambiguous data.

For these reasons, since language evolved for us as a technology/method for communication, that's what we're good at using it for! We never grew up with another species that was also communicating using a language, and that's why we don't have evolution-based ways of implicitly distinguishing that kind of behavior. Language was and is a way for humans to communicate, but using language as a hammer and viewing everything from intelligence to sentience as a nail doesn't seem right to me.

Yes, but humans aren't even convinced animals are sentient.

What poll?

@DesTiny A poll I will post at the relevant time.
