Poll: Is LaMDA sentient?
Resolved Jun 24 as 2%
What is your credence that LaMDA is sentient? (100% means definitely, 0% means definitely not.) Vote by commenting with a percentage between 0% and 100% at the start of your comment. This market resolves to the median % of people responding with a valid vote.

Context:
- https://www.washingtonpost.com/technology/2022/06/11/google-ai-lamda-blake-lemoine/
- https://cajundiscordian.medium.com/is-lamda-sentient-an-interview-ea64d916d917
- https://garymarcus.substack.com/p/nonsense-on-stilts

You can update your vote by commenting again; only your last response will be counted. The poll closes 1 day after this market closes. Of course, you're encouraged to discuss and try to inform/persuade others.

See also: https://manifold.markets/jack/what-does-sentience-mean
predicted NO
Worth noting that poll participants here had a variety of different ideas of what "sentient" means. And not just differences arising from genuine philosophical questions/uncertainty; people had extremely different ideas of the definition of the word, or perhaps of which concepts are important/interesting to try to classify in the first place. Each of "Is LaMDA sentient?", "Can LaMDA experience sensations and phenomena?", or "Is LaMDA self-aware?" would likely get a substantially different set of responses.
predicted NO
Poll results:
- 0.50% Andrew G
- 0.00% Enopoletus Harding
- Epsilon
- SneakySly
- 85.00% joy_void_joy
- 5.00% Bionic
- 50.00% Martin Randall
- <1% Andrew Hartman
- 67.00% Hamish Todd
- 0.00% Angela
- 2.00% Franek Żak
- 0.00% Orr Sharon
- 2.00% Jack

Median: 1.5% (I'm forced to round the resolution to the nearest percent)
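For anyone wondering how the resolution falls out mechanically, here is a minimal sketch of the median-and-round rule from the market description. The vote list is an illustrative assumption, since it's ambiguous from the results above how the "<1%" vote and the entries listed without a percentage were counted:

from statistics import median

# Minimal sketch of the resolution rule described in the market text:
# take the median of the valid percentage votes, then round to the
# nearest whole percent. These vote values are illustrative assumptions,
# not the exact inputs used for resolution.
votes = [0.5, 0.0, 85.0, 5.0, 50.0, 1.0, 67.0, 0.0, 2.0, 0.0, 2.0]

raw_median = median(votes)       # 2.0 for this illustrative list
resolution = round(raw_median)   # market resolves to the nearest percent
print(f"median = {raw_median}%, resolves to {resolution}%")

With the inputs as treated in the comment above, the median comes out to 1.5%, which rounds to the resolved 2%.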
predicted NO
Another good essay: https://www.slowboring.com/p/were-asking-the-wrong-question-about
predicted NO
This is on the same point @MartinRandall made earlier, that humans can't meet some of the absurd standards for sentience being used to assess LaMDA. From the essay "We're asking the wrong question about AI sentience": "What makes you so sure I'm not 'just' an advanced pattern-matching program?"
@jack I think humans are, in fact, just advanced pattern-matching programs. I think LaMDA is an insufficiently advanced one.
predicted NO
@AndrewHartman I agree that "advanced pattern-matching programs" is an accurate description of us - but it's not the only one. In the same way, "a collection of atoms" is also an accurate description of what we are, and we can in principle derive everything about our behavior from the physics of atoms. I believe that there are multiple descriptions of what we are at different levels of abstraction that are all valid and useful. To me, the question is what interesting emergent phenomena show up if you look at the higher-level abstractions. I think a very advanced pattern-matching program might be sentient (like us), but it doesn't necessarily have to be.
predicted NO
I liked this essay: https://askellio.substack.com/p/ai-consciousness
predicted NO
2% - I largely agree with this essay, and would say that the probability that LaMDA "experiences sensations" (my working definition for sentience, called phenomenal consciousness in this essay) is somewhere between rocks and insects - something like the probability that plants experience sensations. Quoting from the essay: "Plants are complex systems that respond to negative and positive stimuli, that can act in ways that seem to require planning, and that can do things that look like internal and external communication." And yet plants seem to lack a plausible mechanism for experiencing sensations, at least as far as we understand. And I agree that while LaMDA is probably not conscious, "ML systems as a group have much more potential for consciousness in the future than plants".
predicted NO
I thought this was funny, although it has no bearing on how I personally view this question, because I didn't find what LaMDA said about its sentience to change my beliefs about its sentience anyway: https://www.aiweirdness.com/interview-with-a-squirrel/

I do think LaMDA (and GPT-3 etc.) are doing more than just putting together sequences of words that humans wrote - I think there's good evidence that they are able to model and process information and concepts beyond just words, in a way that seems beyond what people think of with simple "autocomplete" generation of next words. However, I don't think that has anything to do with whether they "experience feelings or sensations" (the most common definition of sentience I see).
@jack Humans can also play pretend; this is the result I'd expect, so it didn't update me either. The intuition is that it's hard for a non-sentient being to pretend to be sentient. We already know that a sentient being can pretend to be non-sentient.
0% The program outputs correct responses, but those responses were likely picked out as being the most convincing. Other responses would likely be less convincing.
predicted NO
0% (unsure if lack of percentage sign matters but reposting just in case.)
why so much liquidity?
2% - I think it is a capable language model, but reading that transcript makes me think it is more just being led on by the interviewer rather than actually exhibiting sentience in any meaningful way.
0
Sentience and qualia are Things, even if they're hard to define, and AI is incapable of having them; the best we can do is cleverly word a definition of sentience which we cleverly argue the "object behaviors" match up with, and regardless of how well said object can model something that is alive, it lacks the life itself.

(Somewhat babbly; I haven't worked this out entirely in my own soul, although maybe it'd be useful to do so. I do say this as someone who lies to myself and somewhat knows what it's like to try to simulate believing a false truth (to the extent that it seems true to me), and such a simulation isn't the same as living and perceiving truth and true feelings. It's actually soul-deadening. So I think this positions me against accepting behaviors/symptoms of sentience/etc. as the same as actual sentience. Again, this isn't something I've thought through detail by detail. Acting like a good person doesn't make you a good person. Maybe some of it is that when a man responds to subconscious suggestion, there's a will or a soul behind it. The suggestibility goes against "how we are supposed to function". It's wrong to intentionally mislead a man; we know that it just is. Something similar holds for tricking a puppy and spooking it. There's a reason we don't do such things; there's something more to them. Again, speculative babble that could use more thought.)
We need a poll on whether humans are sentient. I certainly can't beat some of the absurd standards being used to assess LaMDA.
predicted YES
@MartinRandall Wish there was a "like" function for this comment!
@MartinRandall Of course they're sentient, and so are groundhogs, insects, etc.
@MartinRandall Personally, I think ~10-20% of the population probably fails by my personal criteria, plus most kids under the age of 5 or so.
Also, I absolutely do think we will have machine sentience before we find a way to get machines to learn languages as fast as a human (i.e., picking them up only from normal conversation).
@EnopoletusHarding Oh hey, that's actually a really good heuristic for judging the likelihood of internal world-modeling, y'know? The size of the corpus needed for an AI to be fluent. How many words do you think humans who learn languages from observation alone hear (say, by watching TV shows or something), versus the many trillions that GPT and its ilk are digesting?
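For scale, here's a rough back-of-envelope version of that comparison. All figures below are ballpark assumptions rather than numbers from this thread, except GPT-3's reported ~300B training tokens:

# Rough comparison of language exposure: a human child vs. GPT-3.
# The child-side figures are ballpark assumptions, not measurements.
words_per_day = 15_000                    # assumed words a child hears per day
child_words = words_per_day * 365 * 10    # ~55 million words by age 10

gpt3_tokens = 300_000_000_000             # GPT-3's reported ~300B training tokens

print(f"child: ~{child_words:,} words; GPT-3: ~{gpt3_tokens:,} tokens")
print(f"GPT-3 saw roughly {gpt3_tokens // child_words:,}x more text")

Under these assumptions the model sees on the order of thousands of times more text than a child does while becoming fluent.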
@AndrewHartman Obviously humans do lots of transfer learning from other data types. Give a human baby nothing but trillions of words of raw text and no other input, and I doubt they'd even learn to read.
@MartinRandall Yes, definitely. But I think the contention here is that the AI hasn't learned to do it either, probably for similar reasons.
@AndrewHartman Unlike sentience, we have standardized reading comprehension tests. Such a contention strikes me as akin to saying that AlphaGo isn't really playing go.
@MartinRandall I mean, sure. Does this pass reading comprehension tests? Based on how I've heard its operation described, I get the feeling it wouldn't score well.
@AndrewHartman comparable to a small child, last I checked, depending on the details of the test and the prompt.
@MartinRandall "comparable to a small child, last I checked, depending on the details of the test and the prompt." >consumes the entire library of Babel >answers "comparable to a small child" Not impressive.