Physics, astronomy, cosmology, and related areas increasingly rely on machine learning. Most papers get away with using black-box methods, with little effort devoted to interpretability.
By the end of 2025, will the situation change radically, so that any use of machine learning in physics and related areas requires either that the method be natively interpretable, or that considerable effort be spent explaining it with XAI methods or (in the case of deep learning) mechanistic interpretability?
Operationally: for at least one month before Dec 31, 2025, will at least 80% of papers filed on arXiv under the physics and astronomy categories that contain the words 'machine learning' (or a suitable equivalent) in their title or abstract either use inherently interpretable machine learning or devote a significant portion of the paper (at least a paragraph of text and a figure) to explainability?
Clarification: if there are too many papers to assess fulfilment of the criteria manually, resolution will be based on a representative sample. There is also a (for now) small subset of physics papers that mention machine learning in their titles or abstracts but do not apply machine learning to physics; they do the reverse, applying physical concepts to machine learning, for instance studying phase transitions during learning in deep neural networks. These papers will count towards neither the numerator nor the denominator of the 80% fraction required for resolution, regardless of whether they mention interpretability.
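For concreteness, the sampling step lends itself to a semi-automated workflow: query the arXiv API for candidate papers matching the title/abstract criterion, then draw a random subsample for manual review of the interpretability requirement. The sketch below is one way this could be done in Python, assuming the third-party feedparser library; the category subset, the 200-result cap, and the sample size of 50 are illustrative placeholders and not part of the resolution criteria.

```python
import random
import urllib.parse

import feedparser  # pip install feedparser

# Illustrative subset of arXiv physics/astronomy categories (not exhaustive).
CATEGORIES = ["astro-ph.CO", "astro-ph.IM", "hep-ph", "cond-mat.stat-mech"]

# Papers with "machine learning" in the title or abstract, in those categories.
QUERY = ('(ti:"machine learning" OR abs:"machine learning") AND ('
         + " OR ".join(f"cat:{c}" for c in CATEGORIES) + ")")

url = "http://export.arxiv.org/api/query?" + urllib.parse.urlencode({
    "search_query": QUERY,
    "start": 0,
    "max_results": 200,          # single page; a full census would paginate
    "sortBy": "submittedDate",
    "sortOrder": "descending",
})

# The arXiv API returns an Atom feed; feedparser fetches and parses it.
feed = feedparser.parse(url)
papers = [(e.title.replace("\n", " "), e.link) for e in feed.entries]

# Random subsample for manual review: does the paper use an inherently
# interpretable model, or devote at least a paragraph plus a figure to
# explainability? (That judgement still has to be made by a human reader.)
SAMPLE_SIZE = 50
for title, link in random.sample(papers, min(SAMPLE_SIZE, len(papers))):
    print(f"- {title}\n  {link}")
```

Papers that apply physics to machine learning rather than the reverse (the excluded subset described above) would simply be dropped from the reviewed sample before computing the fraction.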