Do you think Mixture-of-Experts (MoE) transformer models are generally more human-interpretable than dense transformers?
Never closes
Yes
No
Results
This question is managed and resolved by Manifold.
Related questions
Will mechanistic/transformer interpretability [eg Neel Nanda] end up affecting p(doom) more than 5%? (10% chance)
Will the most capable, public multimodal model at the end of 2027 in my judgement use a transformer-like architecture? (63% chance)
Will the first AGI be a transformer model? (46% chance)