Will I consider any LLM to be a moral patient by 2024?
Resolved Jan 1 as 1.0%

Context: https://experiencemachines.substack.com/p/dangers-on-both-sides-risks-from

h/t @Rodeo

Moral patient = an entity that deserves moral consideration, whose interests count morally in their own right, and to which we may have duties.

LLM = large language model (e.g., GPT)

Try to convince me one way or another in the comments.



Going to resolve this to 1% unless something major happens tonight.

Since there seems to be a little bit of renewed interest in the question, I figured I might as well share some of my recent thoughts on related issues. https://sentientism.info/sentientist-pledge/nicolas-delon

predicted NO

This is a cool market. I wonder if we could get philosophers to take positions on these issues on a site like this and, over time, gain reputational benefits for long-term insight?

@BTE I would love for more philosophers to join and participate actively in questions.

predicted NO

@NicoDelon Is there a conference we could create a fun dedicated series of markets for, meant to be provocative and to work like a mini social network that facilitates random encounters, and also just gets a lot of philosophers to make a ton of predictions on things like this?? I bet we could get funding to organize that…

@BTE That could be fun! I’m not conferencing this year but I’ll keep an eye out. We’d need a critical mass of active philosophers on here first.

predicted NO

@NicoDelon Oh no, we would be doing this on my new prediction market platform. We could customize it just for this.

@BTE Ooooooooooh. I see where you’re going with that.

@NicoDelon I would like to do a small experiment. I am currently writing about my own moral framework, in which LLMs would probably count as moral agents, at least as much as animals do, probably more. If you give it a fair read, I think you might update toward LLM moral relevance, and I would then put the probability at 66% or more, depending on other discussions we may have.

Once I am done with it, would it be possible to give you mana to read it? I would bet at least 66% and give you 1/3 of the expected YES profit in advance (if anyone other than Nico wants to take the NO side of a challenge bet, let me know!)

I am of course also interested in normal discussions, but this is also a self-bet on how much people will agree with, or be persuaded by, the post.

@epiphanie_gedeon I’ll read your post for Ṁ100 and provide at least 500 words of feedback for Ṁ500. I can’t predict how or whether it’ll change my mind.

@NicoDelon Yes, as for the last part, I did not mean to ask whether it would. Great, I will get back to you! (Obviously you can change your mind if circumstances change by then; I just wanted to know if it was possible in principle.)

@epiphanie_gedeon Sounds good. As my reply to Patrick suggests, I’m not fully set on NO on either question. I think LLMs will become moral patients before becoming moral agents, simply because the conditions for being a moral agent would automatically fulfill those for being a moral patient (say, if they become morally responsible agents). And I can’t rule out the release of, say, Gemini undermining my confidence in NO. So it could be tight. But for now, I’m leaning NO on both questions.

bought Ṁ10 of NO

I'm not sure I fully understand your definition of a moral patient, but one of your papers, "Consider the agent in the arthropod," links to this paper, which you note "makes an impressive case for taking the welfare of invertebrates seriously in ethical decision-making."

https://www.wellbeingintlstudiesrepository.org/cgi/viewcontent.cgi?article=1527&context=animsent

So that paper and yours seem to state that sentience is not necessary for a being to merit moral consideration.

Some of the parameters from the source paper that do seem to count toward a being meriting moral consideration include (besides ecological considerations):

  • invertebrate lineages with centralized nervous systems

  • neurological complexities

  • sophisticated cognitive abilities

  • evolutionary lineages

So LLMs obviously have no claim to moral consideration from an ecological standpoint, as they are not biological entities requiring some kind of ecosystem to function, so the question you posed likely has more to do with whether they merit moral consideration from a sentience or sentience-adjacent standpoint. Is that correct? The question seems to be getting at whether LLMs may have some sort of cognitive capabilities, regardless of whether whatever appears to be cognition is actually coming from an electronic process on a silicon chip.

Arguably, of the four parameters above used as a measuring stick for reconsidering invertebrate moral standing from a non-ecological point of view:

  1. LLMs do not have a centralized nervous system, nor a nervous system at all, much less a definable genetic lineage from which a nervous system came.

  2. No neurological complexities, as again, LLMs do not have a brain or nervous system in a biological sense.

  3. No evolutionary lineage.

Arguably, one might say that LLMs have "cognitive abilities," if you define cognitive abilities to mean "some sort of system, whether biological, mechanical, or electronic, that produces some kind of logical output from some input." Researchers in AI do in fact study biological brains to try to discover different ways of compressing or processing information, since biological brains are far more efficient than electronic, silicon-based logic. However, this would be the only parameter that one could argue links LLMs to even a simple biological brain or nervous system, and it is really more of a mathematical abstraction: typically, when we say "cognitive abilities," we mean a huge range of complex cognitive capabilities, not just probabilistic language processing via a transformer.

I have put together a group of third-party-verified markets that attempt to measure various analogs to "dimensions of cognition" that one might define for a particular type of AI that gets released, including LLMs and other types of AI, and I will continue to grow this list.

https://manifold.markets/group/third-party-validated-predictive-ma-6bab86c0b8b0

@PatrickDelaney Correct. Good comment. For now I’m at: LLMs would count if they were sentient and/or sufficiently agentic and/or entangled in significant relationships with other bearers of moral status; I haven’t seen conclusive evidence that any existing model meets the criteria. Though I wouldn’t be shocked if this happened in the near future.

You could persuade me to change my mind either by providing compelling evidence that at least one LLM meets one of the sufficient criteria, or by convincing me to adopt at least one other sufficient criterion, not already on my list, that they meet.

predicted NO

@NicoDelon Why wouldn't you be shocked if it happened in the near future? Because it's embarrassing to be shocked at something, or because it has a high probability of occurring, or some other reason? Or are you just thinking along the lines of... maybe some sort of hybrid spider/robot with electrodes gets created in a lab, with LLM-based software controlling some functions tied to its legs and a non-central spider nervous system controlling others, and something like that just doesn't seem too implausible? Like... what is necessary and what is sufficient? Is just strapping something onto a biological thing sufficient to fulfill "entangled in significant relationships"?

@PatrickDelaney I wouldn’t be shocked because (1) I’ve already seen signs of proto-agency in LLMs, and (2) consciousness could be realized in LLMs before it becomes clear that it has been, because it likely won’t take the form we expect it to.

predicted NO

@NicoDelon Shouldn't we perhaps first settle on what consciousness is before we start claiming to have invented it?

@BTE we don't need to; a theory-light approach can work:

https://philpapers.org/archive/BIRTSF.pdf

@NicoDelon Robert Long (@Rodeo) also has very sensible things to say about this:

https://experiencemachines.substack.com/p/common-mistakes-about-ai-consciousness

o7

predicted NO

@Rodeo Love your substack!

@BTE Encourage him to keep posting

@NicoDelon just four more posts to go! o7

Related:

@NicoDelon I am confused: is it just free arbitrage, or are there any differences?

@epiphanie_gedeon Those are distinct concepts in moral theory. Each market provides a brief definition of the terms.
