I currently find other opportunities more exciting.
Specifically, I want to try:
- Other types of research (especially empirical alignment research in areas like scalable oversight)
- Seeking collaboration with or mentorship from many different existing researchers with diverse research views
- Making concrete progress on control and deploy-time interventions
- Clarifying my worldview disagreements with existing alignment thinkers (how useful are ~human-level AIs? how dangerous are they? how likely are misuse scenarios? will FOOM happen?)
- Improving my understanding of the alignment problem as a whole
As a result, I haven’t yet applied to enter the MATS extension phase.
It's very plausible that I'll change my mind, but it's already November.