Is Anthropic's ghost grads implementation currently bugged?
Resolved YES (Feb 24)

[High context mech interp question] In https://transformer-circuits.pub/2024/jan-update/index.html, Anthropic introduce "ghost grads". This is a fairly complex technique (e.g. it involves treating various scaling factors as constant w.r.t. gradient descent, equivalent to using stop gradients), and the write-up leaves some details ambiguous. I've heard of subtle bugs in implementations of this technique, some of which don't impact performance! So, is Anthropic's own implementation also bugged?
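
For readers missing the context, here's a minimal sketch of my reading of the recipe (my own naming and my own guesses at the ambiguous details, not Anthropic's code):

```python
import torch
import torch.nn.functional as F

def ghost_grads_loss(x, x_hat, pre_acts, W_dec, dead_mask):
    """One reading of the ghost grads auxiliary loss (a sketch, not Anthropic's code).

    x:         (batch, d_model) SAE inputs
    x_hat:     (batch, d_model) SAE reconstructions
    pre_acts:  (batch, d_sae)   encoder pre-activations (before the ReLU)
    W_dec:     (d_sae, d_model) decoder weights
    dead_mask: (d_sae,) bool    features currently considered dead
    """
    if not dead_mask.any():
        return x.new_zeros(())

    # The residual the dead features are asked to explain, treated as a fixed target.
    residual = (x - x_hat).detach()

    # "Second forward pass": exp() instead of ReLU, dead features only.
    ghost_acts = torch.exp(pre_acts[:, dead_mask])
    ghost_out = ghost_acts @ W_dec[dead_mask]

    # Rescale the ghost reconstruction towards the residual's norm; the scaling
    # factor is treated as constant w.r.t. gradient descent (stop gradient).
    scale = (residual.norm(dim=-1, keepdim=True)
             / (2 * ghost_out.norm(dim=-1, keepdim=True) + 1e-8)).detach()
    ghost_loss = F.mse_loss(ghost_out * scale, residual)

    # Rescale the ghost loss to match the main reconstruction loss, again with
    # a stop gradient on the ratio.
    mse_loss = F.mse_loss(x_hat, x)
    return ghost_loss * (mse_loss / (ghost_loss + 1e-8)).detach()
```

Exactly which terms get the stop gradients, and how "dead" is defined, is where I'd expect implementations to diverge.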

This market resolves YES if Anthropic's implementation, as of the posting of the January circuits update, was bugged. If there is a detail they didn't specify, that does not impact market resolution. Resolves YES/NO based on updates to the circuits update post and/or my subjective impression from discussions.


https://transformer-circuits.pub/2024/feb-update/index.html#dict-learning-resampling

We had a bug in our ghost grads implementation that caused all neurons to be marked as dead for the first K steps of training.
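The update only states the symptom, not the cause. Purely for illustration, one hypothetical way a dead-feature check ends up with exactly that symptom is comparing against the global step count rather than each feature's silent streak (names and threshold below are made up):

```python
import torch

d_sae = 16_384        # hypothetical SAE width
DEAD_WINDOW = 12_500  # hypothetical: steps without firing before a feature counts as dead

steps_since_fired = torch.zeros(d_sae, dtype=torch.long)

def dead_mask(feature_acts):
    """Intended behaviour: dead = hasn't fired for DEAD_WINDOW consecutive steps.

    feature_acts: (batch, d_sae) post-ReLU activations for the current step.
    """
    global steps_since_fired
    fired = (feature_acts > 0).any(dim=0)
    steps_since_fired = torch.where(
        fired, torch.zeros_like(steps_since_fired), steps_since_fired + 1
    )
    return steps_since_fired > DEAD_WINDOW

def dead_mask_buggy(global_step):
    """Symptom-matching variant: every feature counts as dead until the global
    step count exceeds the window, i.e. all neurons are 'dead' for the first
    K steps of training."""
    return torch.full((d_sae,), global_step <= DEAD_WINDOW, dtype=torch.bool)
```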

https://wandb.ai/jbloom/mats_sae_training_gpt2_ghost_grad_experiment

I found that using the post-ReLU activations didn't deteriorate performance 🤣 But this has a weird interaction with whether or not you detach the alive neurons from the ghost grad MSE loss (calculated via the residual). Very weird. Would love for someone to replicate. I will default to implementing with exp(x) and detaching (implied by the wording of "a second forward pass").
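
Concretely, the two variation points reduce to something like the flags below (hypothetical names, a sketch rather than either codebase's actual code):

```python
import torch
import torch.nn.functional as F

def ghost_mse(pre_acts, x, x_hat, W_dec, dead_mask,
              use_exp=True, detach_residual=True):
    """The two ambiguous choices, expressed as flags.

    use_exp:         exp() on dead pre-activations, as the update's wording
                     suggests, vs. reusing the post-ReLU activations.
    detach_residual: detach the (x - x_hat) target so alive features get no
                     gradient from the ghost MSE term ("a second forward pass"),
                     vs. letting that gradient flow back to them.
    """
    dead_pre = pre_acts[:, dead_mask]
    ghost_acts = torch.exp(dead_pre) if use_exp else F.relu(dead_pre)
    target = (x - x_hat).detach() if detach_residual else x - x_hat
    ghost_out = ghost_acts @ W_dec[dead_mask]
    return F.mse_loss(ghost_out, target)
```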

What bugs don't impact performance?
