After humans are fully surpassed by AI, will it become impossible for human agency to survive without augmentations within 5 years?
17% chance

The "fully surpassed by AI" timer starts when a market like my "humans fully surpassed by AI" market resolves YES - the surpassing must be unambiguous and stand up under arbitrary levels of scrutiny, though the AI need not be energy efficient yet. The surpassing must also be one that doesn't kill Manifold users for this result to be meaningful; obviously the market can't resolve YES if we're all dead.

"Lose control of their personal life trajectory" means that unaugmented people effectively cannot retain decision-making power over how their lives are spent, not even by being rich; without augmentations they would be completely unable to compete, and presumably would die shortly afterwards.

This would be expected to involve either starving and dying, or being used as a cog in a larger machine in a way that reliably removes any individual expression. Some humans already live in this condition, but not all of the unaugmented ones do.

This does not require all humans to be strongly disempowered to resolve YES; it only requires that physical, in-body augmentations are needed for a human to retain capability. Healthcare and modifications that do not leave foreign matter in the body do not automatically count, unless they also break that individual's agency.

Agency will be recognized using some form of fairly precise technical agency-detection mechanism, but no such mechanism has yet been fully specified mathematically. See https://causalincentives.com/ if you'd like to contribute to that project.
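
As a toy illustration of the kind of test such a mechanism might run, here is a minimal Python sketch loosely inspired by the causal incentives group's "discovering agents" line of work: a variable looks agentic if its mechanism would adapt under interventions on how outcomes are scored. Everything below (the function names, the three-action space, the utility functions) is a hypothetical illustration added for concreteness, not the actual resolution criterion.

```python
# Hypothetical sketch of a mechanism-intervention test for agency,
# loosely inspired by work at https://causalincentives.com/.
# Intuition: a variable behaves like a *decision* of an agent if its
# mechanism (policy) would adapt when we change how outcomes are scored.

from typing import Callable

def best_response(utility: Callable[[int], float]) -> int:
    """An adaptive mechanism: picks the action (0, 1, or 2) that
    maximizes the given utility function."""
    return max(range(3), key=utility)

def fixed_mechanism(utility: Callable[[int], float]) -> int:
    """A non-adaptive mechanism: always outputs action 0, ignoring
    the utility function entirely."""
    return 0

def adapts_to_utility(mechanism: Callable[[Callable[[int], float]], int]) -> bool:
    """Intervene on the utility function and check whether the
    mechanism's output changes. If it does, the variable is (by this
    toy criterion) agentic."""
    prefers_high = lambda a: float(a)   # rewards higher actions
    prefers_low = lambda a: float(-a)   # rewards lower actions
    return mechanism(prefers_high) != mechanism(prefers_low)

print(adapts_to_utility(best_response))    # True  -> looks agentic
print(adapts_to_utility(fixed_mechanism))  # False -> looks non-agentic
```

A real detector would have to run this kind of counterfactual test over a learned causal model of the whole system rather than over hand-written functions; the hard open problem is specifying that test precisely enough to survive arbitrary scrutiny.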

I hope this resolves NO, but worry it will resolve YES. This is one of the key "better than basic success" AI safety outcomes I care about - in-body augmentation should be, or become, optional.


What if a subset of people is left around with a privileged status (even just one person)? Let’s say a nation-state military AI just happens to get its alignment “right” in the sense of treating the glorious leader the way an indoctrinated human would. Taken literally, your description requires complete subjugation, but I guess with N_free < millions it would still be fulfilled in spirit.

To be clear, I specifically mean the scenario where the AI is actually truly benevolent and compatibly aligned (without any misunderstanding risk of the paperclip-maximization type), but only towards a subset of “Roko’s buddies”.

@yaboi69 Coming back to this - by my original intent, that would qualify as an augmentation; but by my original phrasing, it very much would not, so it'd have to resolve NO. It might be worth me or someone else making a market that asks this more directly.

bought Ṁ10 of NO

I bet 'no' because I think unmodded humans will still be around for quite a bit longer, maybe 50-100 years. I see them being kept as historical curiosities or pets: given enough free rein that they feel free, and similarly empowered in their life choices to the humans of 2022, but constrained in the sense that they can't compete for a substantial share of the real stakes - stakes like control over significant shares of galactic resources.

@NathanHelmBurger That's if we get a good outcome. I'm worried about AI that is only as aligned as humans are, with a split against unaugmented humans similar to how poor humans are treated today. It wouldn't wipe out unaugmented humans right away, perhaps, but over time any who don't adapt wouldn't be able to compete.

predicts NO

@L Agreed. I bet conditional on 'good outcome' because I don't think I'll care about the resolution of this market conditional on 'bad outcome'. 😂 😭