F. Bonchi

Total Citations: 36
h-index: 3
Papers: 2

Publications

#1 2601.12913v1 Jan 19, 2026

Actionable Interpretability Must Be Defined in Terms of Symmetries

This paper argues that interpretability research in Artificial Intelligence is fundamentally ill-posed, as existing definitions of interpretability are not *actionable*: they fail to provide formal principles from which concrete modelling and inferential rules can be derived. We posit that for a definition of interpretability to be actionable, it must be given in terms of *symmetries*. We hypothesise that four symmetries suffice to (i) motivate core interpretability properties, (ii) characterise the class of interpretable models, and (iii) derive a unified formulation of interpretable inference (e.g., alignment, interventions, and counterfactuals) as a form of Bayesian inversion.

M. Jamnik, Pietro Barbiero, M. Zarlenga, Francesco Giannini, Alberto Termine, +2
1 Citation
#2 2601.12913v3 Jan 19, 2026

Actionable Interpretability Must Be Defined in Terms of Symmetries

This paper argues that interpretability research in Artificial Intelligence (AI) is fundamentally ill-posed, as existing definitions of interpretability fail to describe how interpretability can be formally tested or designed for. We posit that actionable definitions of interpretability must be formulated in terms of *symmetries* that inform model design and lead to testable conditions. Under a probabilistic view, we hypothesise that four symmetries (inference equivariance, information invariance, concept-closure invariance, and structural invariance) suffice to (i) formalise interpretable models as a subclass of probabilistic models, (ii) yield a unified formulation of interpretable inference (e.g., alignment, interventions, and counterfactuals) as a form of Bayesian inversion, and (iii) provide a formal framework to verify compliance with safety standards and regulations.

M. Jamnik, Pietro Barbiero, M. Zarlenga, Francesco Giannini, Alberto Termine, +2
1 Citation