Mechanistic interpretability
Mechanistic interpretability is a subfield of explainable artificial intelligence that seeks to understand the internal workings of neural networks by analyzing the mechanisms underlying their computations. The approach treats neural networks much like binary computer programs, which can be reverse-engineered to understand their functions.
History
The term mechanistic interpretability was coined by Chris Olah. Early work combined techniques such as feature visualization, dimensionality reduction, and attribution with human-computer interaction methods to analyze models like the vision model Inception v1.
Key concepts
Mechanistic interpretability aims to identify the structures, circuits, or algorithms encoded in the weights of machine learning models. This contrasts with earlier interpretability methods, which focused primarily on input-output explanations.
Linear representation hypothesis
This hypothesis suggests that high-level concepts are represented as linear directions in the activation space of neural networks. Empirical evidence from word embeddings and more recent studies supports this view, although it does not hold universally.
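As a toy illustration of the hypothesis (not drawn from a specific study), the sketch below generates synthetic "activations" in which a binary concept is encoded along a single hidden direction, then fits a linear probe to recover it. The dimensions, data, and probe setup are assumptions chosen for the example.

```python
# Minimal sketch: if a concept is a linear direction in activation space,
# a linear probe should be able to recover that direction. Synthetic data only.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
d = 64        # dimensionality of the (synthetic) activation space
n = 2000      # number of synthetic activation samples

# Hypothetical ground-truth concept direction hidden in the activations.
concept_dir = rng.normal(size=d)
concept_dir /= np.linalg.norm(concept_dir)

labels = rng.integers(0, 2, size=n)           # binary concept label for each sample
noise = rng.normal(size=(n, d))
signal = np.outer(2.0 * (2 * labels - 1), concept_dir)   # shift +/-2 along the direction
activations = noise + signal

# Fit a linear probe; its weight vector estimates the concept direction.
probe = LogisticRegression(max_iter=1000).fit(activations, labels)
learned_dir = probe.coef_[0] / np.linalg.norm(probe.coef_[0])

print("probe accuracy:", probe.score(activations, labels))
print("cosine similarity to true direction:", float(learned_dir @ concept_dir))
```

In practice such probes are fit to activations recorded from a real model, and a high-accuracy linear probe is taken as evidence that the concept is linearly represented.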
Methods
Mechanistic interpretability employs causal methods to understand how internal model components influence outputs, often drawing on formal tools from causality theory.
In the field of AI safety, mechanistic interpretability is used to understand and verify the behavior of complex AI systems, and to attempt to identify potential risks.
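One widely used causal method is activation patching, in which an internal activation recorded on one input is substituted into the forward pass on another input to measure its causal effect on the output. The sketch below applies the idea to a tiny randomly initialized network defined only for illustration; the network, inputs, and choice to patch single hidden units are assumptions, whereas real analyses patch components of trained models (such as attention heads) via hooks.

```python
# Sketch of activation patching (a causal intervention) on a toy two-layer
# network. A clean run's hidden activation is copied into a corrupted run,
# and the resulting output shift is used as a measure of causal influence.
import numpy as np

rng = np.random.default_rng(1)
W1 = rng.normal(size=(4, 8))   # input -> hidden weights
W2 = rng.normal(size=(8, 1))   # hidden -> output weights

def forward(x, patch=None):
    """Run the toy network; optionally overwrite one hidden unit's activation.

    patch: (unit_index, value) taken from another forward pass.
    """
    h = np.tanh(x @ W1)
    if patch is not None:
        idx, value = patch
        h = h.copy()
        h[idx] = value
    return float(h @ W2)

x_clean = rng.normal(size=4)      # "clean" input
x_corrupt = rng.normal(size=4)    # "corrupted" input

h_clean = np.tanh(x_clean @ W1)   # cache the clean run's hidden activations
baseline = forward(x_corrupt)

# Patch each hidden unit's clean activation into the corrupted run: a large
# output shift suggests that unit causally carries output-relevant information.
for i in range(8):
    patched = forward(x_corrupt, patch=(i, h_clean[i]))
    print(f"unit {i}: output shift {patched - baseline:+.3f}")
```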