Steven Byrnes' Brain-Like AGI research program starts from a neuroscience-based model of the brain, consisting of a genetically hard-coded steering subsystem plus a learned-from-scratch thought generator and thought assessor, and treats that model as a blueprint for how artificial general intelligence might be built. One hope is that this provides a factorization of the alignment problem that fits better with the systems capabilities researchers are actually producing.
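For readers who haven't gone through the sequence, here is a minimal, heavily simplified sketch of how I understand the three pieces fitting together: the hard-coded steering subsystem supplies ground-truth reward signals, the thought assessor is trained to predict them, and the thought generator's candidate thoughts are screened by the assessor's predictions. All class and method names below are my own illustration, not Byrnes' notation.

```python
# Hypothetical toy sketch of the three-component picture, as I understand it.
import random

class SteeringSubsystem:
    """Genetically hard-coded: maps (thought, body/world state) to a reward signal."""
    def reward(self, thought, state):
        # Stand-in for innate circuitry (hunger, pain, etc.); fixed, not learned.
        return 1.0 if "eat" in thought and state["hungry"] else 0.0

class ThoughtAssessor:
    """Learned from scratch: predicts how the steering subsystem will react to a thought."""
    def __init__(self):
        self.values = {}  # crude value table: thought -> predicted reward
    def assess(self, thought):
        return self.values.get(thought, 0.0)
    def update(self, thought, actual_reward, lr=0.3):
        # Supervised by the ground-truth reward signal (critic-like update).
        predicted = self.assess(thought)
        self.values[thought] = predicted + lr * (actual_reward - predicted)

class ThoughtGenerator:
    """Learned from scratch: proposes candidate thoughts; kept trivial here."""
    def propose(self):
        return random.choice(["eat food", "stare at wall", "go for a walk"])

# One step of the loop: generate candidates, keep the best-assessed one,
# then train the assessor on the steering subsystem's ground-truth reward.
steering, assessor, generator = SteeringSubsystem(), ThoughtAssessor(), ThoughtGenerator()
state = {"hungry": True}
candidates = [generator.propose() for _ in range(5)]
chosen = max(candidates, key=assessor.assess)
assessor.update(chosen, steering.reward(chosen, state))
```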
In four years, I will evaluate the Brain-Like AGI program and decide whether it has produced any important results since today. Unless the answer is dead-obvious, I will probably ask some of the alignment researchers I most respect (such as John Wentworth or Steven Byrnes) for advice on the assessment.
About me: I have been following AI and alignment research on and off for years, and I have a reasonable mathematical background with which to evaluate it. I tend to form informal impressions of the viability of various alignment proposals, though those impressions may well be wrong.
At the time of creating this prediction market, my impression is that the Brain-Like AGI research program contains numerous critical insights that other alignment researchers seem to be missing; everyone involved in AI safety should read the Brain-Like AGI sequence. However, beyond what is already written in the posts, I'm concerned that there may not be much new to say about the program in four years. Solving the five-star open questions that Steven Byrnes poses would be great, but I'm not sure that, for example, the human social instincts question is tractable, or that the other five-star open questions will end up closely connected to the Brain-Like AGI program.
More on Brain-Like AGI: