
The "fully surpassed by AI" timer starts when a market like my "humans fully surpassed by AI" market resolves YES. The surpassing must be unambiguous and stand up under arbitrary levels of scrutiny, but it need not be energy efficient yet. It must also be a surpassing that doesn't kill Manifold users, for this result to have meaning; obviously the market can't resolve YES if we're all dead.
"Lose control of their personal life trajectory" means that unaugmented people effectively cannot retain decision-making power over how their lives are spent, not even by being rich; without augmentations they would be completely unable to compete, and would presumably die shortly afterwards.
This would be expected to involve either starving and dying, or being used as a cog in a larger machine in a way that reliably removes any individual expression. Some humans already live in this condition, but not all of the unaugmented ones.
This does not require all humans to be strongly disempowered to resolve YES; it only requires that physical, in-body augmentations are necessary for a human to retain capability. Healthcare and modifications that do not leave foreign matter in the body do not automatically count, unless they also break that individual's agency.
Agency recognition will be resolved using some form of fairly precise technical agency-detection mechanism, but no such mechanism has yet been fully specified mathematically. See https://causalincentives.com/ if you want to contribute to that project.
I hope this resolves NO, but worry it will resolve YES. This is one of the key "better than basic success" AI safety outcomes I care about: in-body augmentation should be, or become, optional.