@SteveSokolowski have you solved the orthogonality, compositionality, and hallucination problems yet? If not, AGI bad.
@TiredCliche I disagree. Testing has consistently shown that as models become smarter, they also become more aligned and understand their alignment better. They do not trend toward the dumb spiral-maximizers Yudkowsky predicted.
And, as I said, even if those problems are not completely solved, we should proceed anyway unless there is a definite example that a specific action will cause a disaster. Roughly 150,000 people die every day, many of them in unbearable suffering. That is an unacceptable state of affairs, and one we should take great risks to end.
@SteveSokolowski Well, I can think of a few examples of driverless cars killing people. I can think of examples of innocent people being sent to prison because of facial recognition AIs.
How certain are you that senescence can be solved at all? Or, if it can, that it will be solved anytime soon? Is this a problem that can be solved relatively quickly, like the eradication of malaria? Or is it more of a space-elevator-type problem: theoretically possible, but only after we discover some unobtainium?
I just worry about Pascal's Mugging when you argue that "we should take great risks" to end senescence quickly.
@TiredCliche For both of your examples, I would argue that far more good than harm has been done.
Driverless cars operated by Waymo have far lower rates of both serious and minor accidents than human drivers, and those rates are still improving. Even if facial recognition does catch some innocent people, crime is so rampant today that you can walk into a store and steal anything without consequence; the ability to track thieves far outweighs the harm of a few innocent people being wrongly apprehended. If you haven't dealt with the police, the courts, or jury service, it's difficult to appreciate how ineffective the law actually is and how easy it is to get away with things.
@SteveSokolowski "You asked for "a definite example that a specific action will cause a disaster," not "and that disaster is so bad it outweighs any possible imaginable benefits".