How many Substack articles will jim publish this month?
Ṁ3,468 · Apr 1 · 17.6 articles expected
0 - 4: 1.6%
5 - 9: 12%
10 - 14: 26%
15 - 19: 24%
20 - 24: 19%
25 - 30: 11%
Above 30: 7%
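
For reference, the displayed "17.6 articles expected" is roughly the probability-weighted average of the buckets. Here is a minimal sketch of that arithmetic, assuming each bucket is represented by its midpoint and "Above 30" by a stand-in value of 32; the displayed percentages are rounded, so the result lands near, not exactly on, 17.6. This is an illustration, not necessarily Manifold's exact computation.

```python
# Sketch: recompute the expected article count from the displayed
# bucket probabilities. Bucket midpoints and the stand-in value for
# the open-ended "Above 30" bucket are assumptions.

buckets = {
    (0, 4): 0.016,
    (5, 9): 0.12,
    (10, 14): 0.26,
    (15, 19): 0.24,
    (20, 24): 0.19,
    (25, 30): 0.11,
}
above_30_prob = 0.07
above_30_value = 32  # assumed representative value for "Above 30"

# Weight each bucket's midpoint by its probability, then add the
# contribution of the open-ended bucket.
expected = sum(((lo + hi) / 2) * p for (lo, hi), p in buckets.items())
expected += above_30_value * above_30_prob

print(f"Expected articles: {expected:.1f}")  # ~17.5 with rounded inputs
```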

https://substack.com/@jamesoofou

INFO: All posts will include at least 333 words of jim-written text (i.e., this market won't resolve to Above 30 based on mass-produced AI slop), written after market creation (i.e., this market won't resolve to Above 30 based on pre-written posts).


@jim how do you deal with continuity of consciousness even if mind uploading is solved?

Like, if you simulate my mind, it's not ME in there. It's a CLONE of my mind.

@bens ok I'll write about that today

Going to use this market to comment on articles as you don't seem to have comments enabled on Substack.

Regarding Post #2 ("Mind Uploads"): I was a purist on this issue for a pretty long time, taking your perspective: you're sacrificing utility by actually having other people in your experiences; simulated non-moral patients are strictly optimal. I still largely agree with most of your points; there's a vast richness of experience available from simulations of P-zombies.

But I've come around a little bit. If what you want from your simulation and what someone else wants from theirs line up almost perfectly, and both participants actively want to interact with real versions of each other, I think it's worth sacrificing some small amount of utility to give them a genuine shared experience. (What you're proposing should still be an option, and I think most people would want some amount of it. LLMs currently object to your proposal outright, which I think is wrong.) Most people have genuinely social values and would object to what you're proposing; you can argue they don't actually have these values and just think they do, which is probably mostly correct, but I'll address that in a bit.

I think part of the issue here is that I've recently updated towards thinking that simulations without any suffering are incoherent. If we want to generate experiences that are recognizably human, we're probably going to be allowing minor amounts of suffering - facing obstacles in getting to a goal, delays between thinking of something you want and actually getting it - and if that's the case, mismatches between the desires of two participants seem like an acceptable and even desirable way to introduce that kind of suffering.

It's worth noting that I also think cognitive enhancement is optimal in this kind of scenario (temporarily). People in the real world have limited memory, limited access to their internal thought and emotional processes, and limited reasoning power and time to reason. Provided with ways around these bottlenecks, I think people would come to correct conclusions about what they actually want, and I suspect in many cases this would involve interacting with real people and sacrificing some of what they want, some of the time.

I also expect memory editing and discontinuous identity to be absolutely required to continue generating novel experiences over very long timescales. Full information won't be propagated to other inhabitants; I expect inhabitants who want to "violate the ethical norms of pre-upload society" to be able to do so all they want, with no consequences (and, if they desire, no memory of it) when genuinely interacting with others.

Sharing experiences could become more necessary if we end up in a world with limited resources (for example, because we found a large number of alien moral patients on other planets and the universe's resources are being split between humans and a very large number of non-humans). In that case, running full calculations for what's optimal for every individual might be more expensive than identifying shared optima.

This is one of my favorite subjects (it's arguably the easiest part of the alignment problem), and I'd love to discuss it in more detail or respond to future posts on these issues. Other subjects I think are under-explored but valuable to discuss in this scenario, if you find them interesting:

- Memory editing

- Brain enhancement

- Potential modification of brain reward structures and patterns - how do you circumvent patterns that make people needlessly unhappy without creating simulations which we wouldn't recognize as meaningful experiences?

- Criteria for granting animals, aliens, and artificial intelligences moral patienthood (pretty important, very difficult to get a satisfying answer)

- The diversity of experience criterion: how do we prevent people from cycling through a small set of "optimal" experiences repeatedly? Do we want to prevent this?

- How much suffering should be allowed? Can people consent to high levels of pain or suffering if they believe this would make them happier overall? When would this belief be correct?

@SaviorofPlant ok I'll write about that today

Your first post reads like an English class essay (which is not a good thing). Add more spice ❤️

oh, there's no spice though. I just try to find the truth

@zsig some will be like this; it's the result of having to write one per day. It started off as a naturally written post, but I had to reduce the scope, which meant truncation -> no natural flow.

@zsig but I don't mean to come off defensive; I appreciate the feedback. Hopefully I'm learning to write better through this.

The plan is one a day, which would resolve to 25 - 30.

@jim I believe you…
