MANIFOLD
Are a SIMULATION of consciousness and GENUINE consciousness fundamentally the same?
Resolved Sep 22 as 0.0%

I mean, what is really different? What is the same though? What does it even mean to be fundamentally the same?

Resolves to % of the market at close.


🏅 Top traders

Rank | Total profit
1 | Ṁ958
2 | Ṁ749
3 | Ṁ387
4 | Ṁ222
5 | Ṁ94

I'll see if I can come up with better resolution criteria in the future. Profit is irreplaceable, but I am trying my best to refund the mana loss for everyone. Let me know if I missed you or if you would like more <3

Someone just posted this on the Manifold Discord, seems relevant here:

Any ideas on how to have a similar market style but preventing sniping?

@Jingliu Have a secret end date that's earlier than the listed end date, and publicize that the listed end date isn't the true end date.

Hash the true end date in advance and share it in the description so that participants can verify after the fact that you weren't gaming things.
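The hash-commitment idea above can be sketched in a few lines. This is just an illustration of the commit/reveal scheme, not any Manifold feature; the function names and date format are made up. Note the salt must stay secret until the reveal, otherwise traders could brute-force likely close dates against the published hash.

```python
# Sketch of committing to a secret close time by publishing only its hash.
# Illustrative only; not part of any Manifold API.
import hashlib
import secrets

def commit(secret_close_time: str) -> tuple[str, str]:
    """Return (salt, commitment). Publish the commitment now;
    keep the salt and the close time private until the reveal."""
    salt = secrets.token_hex(16)  # random salt blocks brute-forcing likely dates
    digest = hashlib.sha256(f"{salt}|{secret_close_time}".encode()).hexdigest()
    return salt, digest

def verify(salt: str, revealed_close_time: str, commitment: str) -> bool:
    """After the reveal, anyone can check the claim against the published hash."""
    digest = hashlib.sha256(f"{salt}|{revealed_close_time}".encode()).hexdigest()
    return digest == commitment

salt, commitment = commit("2023-09-15T18:00Z")
assert verify(salt, "2023-09-15T18:00Z", commitment)
assert not verify(salt, "2023-09-16T18:00Z", commitment)
```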

@Jingliu Resolve based on a poll, have the time at which it closes be random so that no one knows when to snipe it, use the average percentage, or just specify that you'll ignore last-minute spikes that look like obvious manipulation.
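The "use the average percentage" suggestion can be made concrete as a time-weighted average of the market probability, which makes a last-minute snipe nearly irrelevant. A minimal sketch, assuming a hypothetical trade-history format of (timestamp, probability-after-trade) pairs (this is not Manifold's actual API):

```python
# Sketch: resolve to the time-weighted average probability rather than
# the closing price. Trade-history format is hypothetical.

def time_weighted_avg(prob_points: list[tuple[float, float]], close_t: float) -> float:
    """prob_points: (timestamp, probability after that trade), sorted by time.
    Each probability is weighted by how long the market sat at it."""
    total = 0.0
    for (t, p), (t_next, _) in zip(prob_points, prob_points[1:] + [(close_t, 0.0)]):
        total += p * (t_next - t)
    return total / (close_t - prob_points[0][0])

# A market sitting at 30% for 99 hours, then sniped to 95% for the final hour:
history = [(0.0, 0.30), (99.0, 0.95)]
print(time_weighted_avg(history, 100.0))  # -> 0.3065
```

The snipe moves the resolution by well under a percentage point, so there is little profit in attempting it.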

In practice, any market that uses the "Resolves to market price" criterion will always resolve to the opposite of what it actually should be resolved to, since people will always snipe it last minute towards the side that gets them the most profit. See, for example: https://manifold.markets/jack/will-biden-be-president-on-915-reso

predicted NO

@jonsimon

will always resolve to the opposite

Not exactly, people can compete to make the last trade in either direction. If someone buys Ṁ3000 of NO from your limit orders and then you buy even more YES to push it back to 100%, you get that Ṁ3000.

@ms Yeah, it was a bit of an exaggeration (even the market I linked to closed at 50%), but the point is that people who snipe it last minute will snipe against what the majority of people think, making the ultimate resolution of these markets completely useless.

predicted YES

@JosephNoonan Thoughts on resolving based on the allocation of unique bettors, regardless of position size? Then I feel it's functionally a poll without any extra overhead.

@lieblius Do you mean an actual poll, or just counting the number of people who bet on each option? The problem with the latter is that betting on an option doesn't necessarily mean I believe in it. If you give me a market on something that I think has a 20% chance of happening, and the market says it has a 10% chance, I'll bet YES on it even though I think the event in question won't happen.

predicted YES

@JosephNoonan I meant the latter, but I see your point. I just had the thought: what if we made an embedded poll? But I'm seeing now there are tradeoffs between making a custom embed with a nice flowing UI and embedding a Google Form to encourage unique responses. My best guess is a new feature on Manifold's side that lets you embed a poll directly in the question. Here, I made a test market to visualize the Google Form embed approach; I don't know how I feel about it:

different simulation levels are different; should resolve to No / 0.0%

predicted NO

The electrical signals going from my eyes to my brain aren't just a representation of what I see, they actually cause an image in my mind - that's consciousness!

Doesn't matter how many calculations and transformations my brain does on the electric signals, if they didn't get converted into an image I can see I would be a robot not a consciousness.

@Daniel_MC that is implicitly assuming non physical consciousness.

predicted NO

@Aleph it actually doesn't. My point is that the calculations don't cause the consciousness. If I got a computer to do the same calculations that my brain does (transforming the information from the cones into 3 primary colours, subtraction for edge detection, etc., etc.), that would not cause the computer to perceive the image.

My point is actually that there must be something else to it, beyond the calculations. Calculations are non-physical. If consciousness comes from calculations, then it is non physical.

If we could simulate a consciousness on a computer, then theoretically we could perform the simulation by doing the calculations with pen and paper. But we wouldn't call the stacks of paper conscious.

predicted YES

@Daniel_MC why not? Consciousness seems to me to be a property of a computational process, which a paper calculation can certainly have.

@Thomas42 Relevant thought experiment RE the Chinese Room

predicted NO

@Thomas42 because it's just a stack of paper! Doesn't the thought experiment show that the computation alone isn't enough to get consciousness?

predicted NO

@jonsimon I agree that there would be a shit ton of paper involved, and it would give all the insights of a real person (albeit not in real time). But the calculations are just representations and symbols that only have meaning when a person interprets them - like words on the page of a book.

Our consciousness on the other hand gives us perceptions of the world around us. The electrical signals going from my eyes to my brain aren't just a representation of what I see, they actually cause an image in my mind - that's consciousness!

@Daniel_MC let us say that we have written down a full specification of a human brain.

We also wrote down the rules to step forward the human brain.

You are acting in the role of the computer here. Rather than a frozen moment in time, it gets iteratively stepped over. So the person actually experiences and responds.

You can think of the universe as being like this. A frozen instant in time of the current state which is then automatically stepped through.

You don't have to understand the symbols to step through them. You just have to know the rules for stepping forward in time.

Like a computer definitely doesn't understand that it is simulating a person (or simulating weather or some other complexity) by any reasonable definition, but does follow the rules. And that can have beings simulated which reasonably understand what you are saying.
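The argument above is essentially a state-machine claim: the "computer" (or the person with pen and paper) only needs the current state and a transition rule, with no understanding of what the state means. A toy sketch of that loop, where the state and rule are arbitrary stand-ins (nothing here models a brain):

```python
# Toy sketch of "state + step-forward rule", as in the pen-and-paper argument.
# The state fields and the rule are arbitrary stand-ins; nothing models a brain.

def step(state: dict) -> dict:
    """One tick of the transition rule. The executor applies it blindly,
    with no understanding of what the state 'means'."""
    return {
        "t": state["t"] + 1,
        # stand-in for 'signals about processing fed back into the processor'
        "signal": (state["signal"] * 3 + state["feedback"]) % 100,
        "feedback": state["signal"],  # last output becomes next input
    }

state = {"t": 0, "signal": 7, "feedback": 0}
for _ in range(5):  # iteratively step the frozen state forward in time
    state = step(state)
print(state["t"])  # -> 5
```

Whatever is true of consciousness, the executor's role never changes: read the state, apply the rule, write the next state.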

----

Also if the simulated person acts exactly the same as a version of the real person... Then what is the difference? Like you can say that consciousness has zero effect but that seems like a really odd assumption.

@jonsimon I would say the difference is whether the 'simulated persons' can truly say "I think therefore I am".

Do they actually see sights, hear sounds, think thoughts and feel sensations?

Or do they just act like they do?

@Daniel_MC But why would they say 'I think therefore I am' if they weren't introspecting on thinking? I agree that a robot with a hardcoded: SAY "I THINK THEREFORE I AM" is not conscious.

Everything we observe so far seems to run on a specific set of physical rules. Our brains and bodies use chemistry, building up to neurons, muscles, etc. I don't have much reason to postulate a piece that is either 'outside' normal physics or does anything super exotic (à la quantum microtubules).

---

So we get an atom-by-atom specification (ignoring some iffy bits with that) and we run a sim on a computer using step-forward rules copied from our understanding of physics.

If their own consciousness is what the physical person was introspecting on, and that introspection is what caused them to say it, then if the simulation also says 'I think therefore I am', we should almost certainly believe the simulation, yes?

After all the rules of physics and the person's atoms almost-certainly don't have a robotic hardcoded statement that only triggers when consciousness is gone.

(Similarly, should we really expect someone who has lost access to their consciousness to remotely function right? Losing a significant portion of your working body isn't healthy.)

Of course the argument above just says that we can actually test it, given a very powerful computer and very powerful scanning tech.

It does not necessarily say that it is true.

But it ties back to my previous comment's argument. Of 'what is the difference?'.

Of course we can postulate robots that are definitely not conscious. Outward behavior is not all that we are interested in.

However I don't have any reason to expect direct simulations of humans to have cut out any of the essential parts.

----

The architecture that would think on its own thoughts (signals about processing being fed back into the signal-processing center) is entirely plausible as something to implement. Though that doesn't automatically make it generally intelligent like we are. Still, I think it provides even further evidence for an architecture vaguely like that, which would have introspection and would have what amount to inputs 'from outside'.

I agree we don't have a definite answer, but it seems to me we keep postulating something extra when we have been given heaps of evidence for worlds that don't have any extra physics.

Take the Penrose pill 0_o

(joking but >80% is way too confident for something we know this little about)

What do you mean by "SIMULATION of consciousness"? Do you mean a model exhibiting similar stimulus/response patterns to a conscious entity? Because that's not a "simulation of consciousness" that's "mimicking the functional behavior of a conscious being".

Simulating consciousness would mean simulating the underlying mechanisms that lead consciousness to be present in conscious beings. Basically running a "consciousness program". That might or might not have any observable external manifestations. But of course we have no idea how to do that because we have no idea mechanistically what consciousness is.

The correct answer to this question is NO, but I won't place a bet here, since it would just be betting on where these participants will push the market, and although I have a fairly good guess about that (namely, high, in the wrong direction), I am not interested in betting on that.

While people are just silly in the belief that existing language models could pass a Turing Test even with fine-tuning (the other day someone posted a ridiculous comment that "this seems like an easy task with today's models"), it is true that you could do this with substantially improved language models. That seems to count as simulation of consciousness.

But it is not consciousness, because we know a few things about what consciousness requires, and no language model however good will ever have those things.

@DavidBolin Ditto on the meta concern: these markets just punish nonconformance with the common belief, while "proper" prediction markets highly reward insights the further they are from the common wisdom. It's not a good use of the system.

@DavidBolin > we know a few things about what consciousness requires

[Citation needed]

predicted YES

@NathanShowell one of the requirements of consciousness that most are too stupid or ignorant to understand is that of course it must be made out of meat! And I'll also mention language models for some reason even though no one asked about them.

@peterpumpkin Being made out of meat is not one of the conditions.

@NathanShowell No citation is needed.

E.g. normal people do not wonder if the cup of coffee on their desk might be conscious, indicating they think they know a few things about consciousness.

predicted YES

@DavidBolin makes sense I now believe Jesus died for our sins and rose again.