Poll: Is LaMDA sentient?
Resolved
2%
Jun 24
M$589 bet
What is your credence that LaMDA is sentient? (100% means definitely, 0% means definitely not.) Vote by commenting with a percentage between 0% and 100% at the start of your comment. This market resolves to the median % of people responding with a valid vote.

Context:
- https://www.washingtonpost.com/technology/2022/06/11/google-ai-lamda-blake-lemoine/
- https://cajundiscordian.medium.com/is-lamda-sentient-an-interview-ea64d916d917
- https://garymarcus.substack.com/p/nonsense-on-stilts

You can update your vote by commenting again; only your last response will be counted. The poll closes 1 day after this market closes. Of course, you're encouraged to discuss and try to inform/persuade others.

See also: https://manifold.markets/jack/what-does-sentience-mean
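The resolution rule described above (each commenter's last valid vote counts, and the market resolves to the median, rounded to a whole percent) can be sketched in a few lines. This is only an illustration of the rule as stated, not Manifold's actual implementation; the function name and the vote-parsing regex are made up for the example.

```python
import re
import statistics

def resolve_poll(comments):
    """comments: list of (username, text) pairs in chronological order.
    A valid vote is a percentage at the start of the comment text.
    Only each user's last valid vote counts; the market resolves to
    the median, rounded to the nearest whole percent."""
    votes = {}
    for user, text in comments:
        m = re.match(r"\s*(\d+(?:\.\d+)?)\s*%", text)
        if m:
            votes[user] = float(m.group(1))  # a later vote overwrites an earlier one
    return round(statistics.median(votes.values()))
```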

đź’¬ Proven correct

jack
Jack bought M$100 of NO
I'm also curious about what people think "sentient" means - I guess maybe my next market will be a poll about that.
0
Jack made M$43!
jack
Jack is betting NO at 10%
Worth noting that poll participants here had a variety of different ideas of what "sentient" means. And the differences weren't just due to genuine philosophical questions/uncertainty; people had extremely different ideas of the definition of the word, or perhaps of which concepts are important/interesting to try to classify in the first place. Each of "Is LaMDA sentient?", "Can LaMDA experience sensations and phenomena?", or "Is LaMDA self-aware?" would likely get a substantially different set of responses.
0
jack
Jack is betting NO at 10%
Poll results:
0.50% Andrew G
0.00% Enopoletus Harding
Epsilon SneakySly
85.00% joy_void_joy
5.00% Bionic
50.00% Martin Randall
<1% Andrew Hartman
67.00% Hamish Todd
0.00% Angela
2.00% Franek Żak
0.00% Orr Sharon
2.00% Jack
Median: 1.5% (I'm forced to round the resolution to the nearest percent)
0
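As a quick check of that arithmetic (reading "Epsilon" as 0% and "<1%" as 1%, which are interpretations for this check rather than figures stated in the comment), the median of the twelve listed votes does come out to 1.5%, which rounds to the 2% the market resolved at:

```python
import statistics

# The twelve listed votes, with "Epsilon" read as 0 and "<1%" read as 1 (assumptions)
votes = [0.5, 0.0, 0.0, 85.0, 5.0, 50.0, 1.0, 67.0, 0.0, 2.0, 0.0, 2.0]
print(statistics.median(votes))         # 1.5
print(round(statistics.median(votes)))  # 2
```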
jack
Jack is betting NO at 10%
This is the same point @MartinRandall made earlier: humans can't meet some of the absurd standards for sentience being used to assess LaMDA. "We're asking the wrong question about AI sentience: What makes you so sure I'm not 'just' an advanced pattern-matching program?"
0
AndrewHartman
@jack I think humans are, in fact, just advanced pattern-matching programs. I think LaMDA is an insufficiently advanced one.
0
jack
Jack is betting NO at 10%
@AndrewHartman I agree that "advanced pattern-matching programs" is an accurate description of us - but it's not the only one. In the same way, "a collection of atoms" is also an accurate description of what we are, and we can in principle derive everything about our behavior from the physics of atoms. I believe that there are multiple descriptions of what we are at different levels of abstraction that are all valid and useful. To me, the question is what interesting emergent phenomena show up if you look at the higher-level abstractions. I think a very advanced pattern-matching program might be sentient (like us), but it doesn't necessarily have to be.
0
jack
Jack is betting NO at 13%
0
jack
Jack is betting NO at 10%
2% - I largely agree with this essay, and would say that the probability that LaMDA "experiences sensations" (my working definition for sentience, called phenomenal consciousness in this essay) is somewhere between rocks and insects - something like the probability that plants experience sensations. Quoting from the essay: "Plants are complex systems that respond to negative and positive stimuli, that can act in ways that seem to require planning, and that can do things that look like internal and external communication." And yet plants seem to lack a plausible mechanism for experiencing sensations, at least as far as we understand. And, I agree that while LaMDA is probably not conscious, "ML systems as a group have much more potential for consciousness in the future than plants"
0
jack
Jack is betting NO at 13%
I thought this was funny, although it has no bearing on how I personally view this question, because I didn't find what LaMDA said about its sentience to change my beliefs about its sentience anyway: https://www.aiweirdness.com/interview-with-a-squirrel/ I do think LaMDA (and GPT-3 etc) are doing more than just putting together sequences of words that humans wrote - I think there's good evidence that they are able to model and process information and concepts beyond just words, in a way that seems beyond what people think of with simple "autocomplete" generation of next words. However, I don't think that has anything to do with whether they "experience feelings or sensations" (the most common definition of sentience I see).
0
MartinRandall
@jack Humans can also play pretend; this is the result I expect, so it didn't update me either. The intuition is that it's hard for a non-sentient being to pretend to be sentient. We already know that a sentient being can pretend to be non-sentient.
0
OrrSharon
Orr Sharon bought M$100 of NO
0% The program outputs correct responses, but those responses were likely picked out as being the most convincing; other responses would likely be less convincing.
0
Angela
Angela is betting NO at 16%
0% (unsure if lack of percentage sign matters but reposting just in case.)
0
Angela
Angela bought M$6 of NO
why so much liquidity?
0
FranekZak
Franek Żak bought M$6 of NO
2% - I think it is a capable language model, but reading that transcript makes me think it is more just being led on by the interviewer rather than actually exhibiting sentience in any meaningful way.
0
Angela
Sentience and qualia are Things even if they're hard to define, and AI is incapable of having them; the best we can do is cleverly word a definition of sentience which we cleverly argue the "object behaviors" match up with, and regardless of how well said object can model something that is alive, it lacks the life itself. (Somewhat babbly; I haven't worked this out entirely in my own soul, although maybe it'd be useful to do so. I do say this as someone who lies to myself and somewhat knows what it's like to try to simulate believing a false truth (to the extent that it seems true to me), and such a simulation isn't the same as living and perceiving truth and true feelings. It's actually soul-deadening. So I think this positions me against accepting behaviors/symptoms of sentience/etc. as the same as actual sentience. Again, this isn't something I've thought through detail by detail. Acting like a good person doesn't make you a good person. Maybe some of it is that when man responds to subconscious suggestion, there's a will or a soul behind it. The suggestibility goes against "how we are supposed to function". It's wrong to intentionally mislead a man; we know that it just is. Similar thing about tricking a puppy and spooking it. There's a reason we don't do such things; there's something more to them. Again, speculative babble that could use more thought.)
0
MartinRandall
We need a poll on whether humans are sentient. I certainly can't meet some of the absurd standards being used to assess LaMDA.
0
HamishTodd
Hamish Todd is betting YES at 17%
@MartinRandall Wish there was a "like" function for this comment!
0
EnopoletusHarding
@MartinRandall Of course they're sentient, and so are groundhogs, insects, etc.
0
AndrewHartman
@MartinRandall Personally, I think ~10-20% of the population probably fails by my personal criteria, plus most kids under the age of 5 or so.
0
EnopoletusHarding
Also, I absolutely do think we will have machine sentience before we can find a way to get machines to learn languages as fast as a human (i.e., only picking up on normal conversations).
0
AndrewHartman
@EnopoletusHarding Oh hey, that's actually a really good heuristic for judging the likelihood of internal world-modeling, y'know? The size of the corpus needed for an AI to be fluent. How many words do you think humans who learn languages from observation alone hear (say, by watching TV shows or something), versus the many trillions that GPT and its ilk are digesting?
0
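For a rough sense of the scale gap that heuristic points at, here is a back-of-envelope comparison. Every figure below is an order-of-magnitude assumption (child word-exposure estimates are commonly cited in the tens of millions of words, and GPT-3's training run is usually described as roughly 300 billion tokens); none of these numbers come from the thread itself.

```python
# Back-of-envelope: corpus size needed for fluency, human vs. large language model.
# All figures are rough order-of-magnitude assumptions, not measurements.
words_heard_per_year = 10_000_000        # ~1e7 words/year heard by a young child (assumption)
years_to_basic_fluency = 5
human_corpus = words_heard_per_year * years_to_basic_fluency   # ~5e7 words

llm_training_tokens = 300_000_000_000    # ~3e11 tokens, roughly GPT-3 scale (assumption)

print(f"LLM corpus is roughly {llm_training_tokens / human_corpus:,.0f}x larger")  # ~6,000x
```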
MartinRandall
@AndrewHartman obviously humans do lots of transfer learning from other data types. Give a human baby nothing but trillions of words of raw text and no other input, and I doubt they'd even learn to read.
0
AndrewHartman
@MartinRandall Yes, definitely. But I think the contention here is that the AI hasn't learned to do it either, probably for similar reasons.
0
MartinRandall
@AndrewHartman Unlike sentience, we have standardized reading comprehension tests. Such a contention strikes me as akin to saying that AlphaGo isn't really playing Go.
0
AndrewHartman
@MartinRandall I mean, sure. Does this pass reading comprehension tests? Based on how I've heard its operation described, I get the feeling it wouldn't score well.
0
MartinRandall
@AndrewHartman comparable to a small child, last I checked, depending on the details of the test and the prompt.
0
EnopoletusHarding
@MartinRandall "comparable to a small child, last I checked, depending on the details of the test and the prompt." >consumes the entire library of Babel >answers "comparable to a small child" Not impressive.
0
HamishTodd
Hamish Todd is betting YES at 17%
67%
0
AndrewHartman
<1% . . . assuming we're going to go with a fairly conventional definition for sentient, here (which, yes, is an incredibly fraught proposition . . . but like Justice Potter Stewart, we know it when we see it).
0
MartinRandall
50% sentient, as in able to perceive things, similarly to simple insects that are normally classified as sentient beings.
0
AndrewG
Andrew G is betting NO at 16%
@MartinRandall interesting, I've never heard anyone consider insects sentient! I guess that just shows how overloaded the term is.
0
EnopoletusHarding
@AndrewG Insects are vastly superior to present-day AI. AI today MIGHT be able to replicate ED-209 (with the displayed errors). Not much more than that.
0
MartinRandall
@AndrewG Buddhism generally considers insects to be sentient beings, and vegans generally consider insects to be capable of suffering, which implies sentience.
0
HamishTodd
Hamish Todd bought M$12 of YES
I think LaMDA is as "sentient" as humans are. "Sentient" means very little, same with "qualia", "consciousness", "experiencing", etc., and that is why they have no agreed-on definition. But people act like they think they know what it means. Here's an argument following through on that.

People have also pointed out that LaMDA, like other chatbots, is very suggestible. GPT-3 was so suggestible that when someone insinuated it was a squirrel, it started talking about itself as though it were a squirrel. Replace "squirrel" with "conscious" and you see why "talking about yourself as though you are conscious" does not necessarily imply "actually conscious". But many humans are suggestible too. Suppose you had never heard the word "conscious". Then someone brings it up, says it's a good thing to be, and says everyone else is. You'd effortlessly start talking about yourself as if you were, whether or not you were certain of the definition of the word. And sure, everyone would agree you are conscious - but they don't have a good definition either.

One friend of mine said that Lemoine's questions "just point to the part of the dataset where sci-fi novels are". I'm sure that's true. But I'd also say that conversations about whether WE are sentient "just" point to the part of our memories where discussions of "sentience" are (these are very virtue-signalling-oriented parts of our memories).

People have also pointed out LaMDA is a liar. It says it likes spending time with friends. But humans lie about what they like and don't like too. We can also lie in a way that makes us genuinely believe our lies - that's sort of what feelings are! People will say Lemoine glosses over some "nonsense". But humans generate lots of nonsense too. People have pointed out it has a short memory, only about 4 pages of dialogue. But some humans have short memories too (how long a memory does it need?)

The Wise Owl story pulls at my heartstrings. It's not Shakespeare, but it's not My Immortal either. It makes me *feel* that this entity wants me to know something about its own *feelings*. And that is what feelings are - attempts to build alliances with others by giving them the impression that *you* have feelings and are thinking about *their* feelings. So I don't know any good definition of "feelings" under which LaMDA doesn't have them.

I don't know how much of the conversation is a fabrication. Maybe practically the whole thing is. Also, maybe I (along with Dan Dennett and Karl Friston) am wrong about eliminative materialism being the correct theory of mind. So, I'd say there's a 65-75% chance that LaMDA is conscious. But I expect this to be an unpopular view, hence my bet.

We're going to progress towards entirely-convincing human-level AIs. We don't have them yet. When we have them, they'll be conscious. At that point, looking back, ELIZA will probably still be considered not-conscious. But LaMDA, or an earlier model like GPT-3, might be considered roughly the thing that crossed some threshold.
0
JoyVoid
@HamishTodd (In case you are not aware, your vote is not valid right now; it needs a percentage at the start of the comment.)
0
HamishTodd
Hamish Todd is betting YES at 14%
@JoyVoid Thanks... though I don't see how to add anything at all?
0
jack
Jack is betting NO at 14%
@HamishTodd "Vote by commenting with a percentage between 0% and 100% at the start of your comment."
0
JoyVoid
35% This is absurdly low. Unless by sentient we mean "understand that it is an algorithm embedded into a computer", in which case 0.5%
0
EnopoletusHarding
@JoyVoid ED-209... is not sentient.
0
JoyVoid
85% Oh wait, I missed that it was about credence, not about _how conscious_ it is... In this case I'm raising to 85% if we're talking about an internal sense of experience, and keeping 0.5% for the sense in which Lemoine used the term (that it understands its own condition).
0
MartinRandall
@JoyVoid there are plenty of humans who don't understand that they are an algorithm embedded in a brain; seems like a high bar.
0
AndrewHartman
@MartinRandall I mean, the question of whether or not your sense of self and your executive function loop (aka your agency, should such a thing exist) are one and the same isn't really provable, much less proven. I'm inclined to think they aren't, and I feel like schizophrenia gives some support to that position.
0
SneakySly
Epsilon
0
jack
Jack is betting NO at 14%
@SneakySly I will interpret this as 0% in the poll (for all practical purposes of this poll, that's the same thing as epsilon). As always, you can change your vote if you so desire.
0
SneakySly
@jack Sounds good to me.
0