
Chapter 5

Out-of-Distribution Generalisation with LSBD Representations

Learning disentangled representations is suggested to help machine learning models generalise. This is particularly evident for combinatorial generalisation: the ability to combine familiar factors into new, unseen combinations. Disentangling such factors should provide a clear mechanism for generalising to novel combinations, but recent empirical studies suggest that this does not happen in practice. Disentanglement methods typically assume i.i.d. training and test data, whereas combinatorial generalisation requires generalising to factor combinations that can be considered out-of-distribution (OOD). There is thus a misalignment between the distribution of the observed data and the structure induced by the underlying factors. A promising direction to address this misalignment is Linear Symmetry-Based Disentanglement (LSBD), defined as disentangling the symmetry transformations that induce a group structure underlying the data. Such a structure is independent of the (observed) distribution of the data and thus provides a sensible language for modelling OOD factor combinations as well. We investigate

This chapter is largely based on our paper Out-of-Distribution Generalisation with Symmetry-Based Disentangled Representations (Tonnaer et al., 2023), reproduced with permission from Springer Nature.
