Inspired by this LessWrong post: Connectomics seems great from an AI x-risk perspective.
I'm studying this area myself. I think we're closer to a human brain connectome than the linked article suggests, albeit one for an "average" human, assembled from pieces scattered across many sources.
Quote from the article:
"It seems that, in the course of trying to do WBE, we would necessarily wind up understanding brain learning algorithms well enough to build non-WBE brain-like AGI, and then presumably somebody would do so before WBE was ready to go."
I feel quite confident that this is the case. Even so, I think an org with strong infosec, a safety mindset, and a firm commitment not to deploy non-WBE brain-like AGI could manage to develop WBE first. This is a high-risk / high-reward approach, but I favor it as a direction of research.