I want to have a long infohazardous philosophy talk with several prominent individuals in the AI/crypto/decentralized information space about how to incentivise and scale the mechanistically interpretable alignment of decentralized symbolic AI fast enough to beat ML to the punch.
I want that more than any other material possession that money can buy, so I will be putting all of my available funding towards incentivising these sorts of discussions.
I'm using this prediction to survey Manifold on who they believe would be best at either (1) understanding and implementing the solution I'm proposing OR (2) charitably identifying precisely where the solution fails.
This prediction will resolve to whoever takes the time to understand my claims and either (1) helps implement them at scale (more than 100,000 verified humans earning revenue by performing interpretable alignment labor) OR (2) convinces me that this particular approach is not a simple solution to the mechanistically interpretable decentralized alignment of humanity.
Here are some of the "proper date" markets for reference:
https://manifold.markets/Krantz/if-aella-and-i-go-on-a-proper-date?r=S3JhbnR6
https://manifold.markets/Krantz/if-krantz-goes-on-a-proper-date-wit?r=S3JhbnR6
https://manifold.markets/Krantz/if-krantz-goes-on-a-proper-date-wit-llS5nI9Etn?r=S3JhbnR6
https://manifold.markets/Krantz/if-eliezer-charitably-reviewed-my-w?r=S3JhbnR6
https://manifold.markets/Krantz/if-krantz-hires-danny-sheehan-as-hi?r=S3JhbnR6
A sincere "thank you" to anyone who takes this seriously.
If you believe it is tautologically impossible for someone you don't know to have a solution to a problem you don't understand, then you can simply view this as a market for predicting who will be the person I give all my money to for interpretably pointing out exactly where my claims fail, which is something I would genuinely love to do. Please consider helping me.