
6.3 Future Work

Future work on LSBD could focus on identifying settings where transformation labels can easily be obtained, and on figuring out how to relax the LSBD definition so that it can also deal with non-symmetric variation.

Future work regarding OOD generalisation can focus on extending to more realistic settings where missing factor combinations cause models to generalise poorly. Open challenges are learning good disentangled models (LSBD or traditional) for more realistic datasets and assessing how well such models help with generalisation, even if we do not have full access to information about the underlying transformations. Furthermore, as suggested before in the context of anomaly detection, a clear question for future work is how to better quantify the normality of a data point, which may also provide a better measure to assess the generalisation of generative models to OOD factor combinations. Our results on LSBD-VAE suggest that the encoder may generalise better than the decoder; further analysis of this behaviour is needed to understand the effect, and may also help to determine a better way of quantifying generalisation.

Lastly, our OOD generalisation work is motivated partially by the need for better likelihood models in the context of anomaly detection, but so far we have only focused on how well models generalise to OOD combinations that should be considered normal, without considering anomalies, i.e. data points that are OOD both empirically and semantically. Further research is needed to assess whether improved OOD generalisation for unseen factor combinations can also help to improve anomaly detection in the presence of anomalies that are not simply the result of unseen factor combinations, but of some other (unwanted) mechanism. In particular, the question to investigate is whether models that generalise better can prevent false negatives in anomaly detection.
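To make the notion of quantifying normality more concrete, the sketch below shows the likelihood-based baseline that this discussion takes as its starting point: scoring a data point by an importance-weighted estimate of its marginal likelihood under a trained VAE. This is a minimal illustration, not the thesis code; the `vae.encode` and `vae.decode` interfaces, the Gaussian posterior, and the Bernoulli decoder are all assumptions made for the sketch.

```python
import math
import torch

def normality_score(vae, x, num_samples=64):
    """Importance-weighted estimate of log p(x) under a trained VAE.

    Assumes (hypothetically) that `vae.encode(x)` returns the mean and
    log-variance of a Gaussian posterior q(z|x), and that `vae.decode(z)`
    returns Bernoulli logits over pixels for inputs x of shape [B, C, H, W].
    Higher scores mean the model considers the point more "normal".
    """
    mu, logvar = vae.encode(x)
    std = torch.exp(0.5 * logvar)
    posterior = torch.distributions.Normal(mu, std)
    prior = torch.distributions.Normal(torch.zeros_like(mu), torch.ones_like(std))

    log_weights = []
    for _ in range(num_samples):
        z = posterior.rsample()                          # z ~ q(z|x)
        logits = vae.decode(z)
        log_px_z = torch.distributions.Bernoulli(logits=logits).log_prob(x).sum(dim=(1, 2, 3))
        log_pz = prior.log_prob(z).sum(dim=1)            # log p(z)
        log_qz = posterior.log_prob(z).sum(dim=1)        # log q(z|x)
        log_weights.append(log_px_z + log_pz - log_qz)   # one importance weight per sample

    log_w = torch.stack(log_weights, dim=0)              # [num_samples, B]
    # log-mean-exp over samples: an IWAE-style lower bound on log p(x)
    return torch.logsumexp(log_w, dim=0) - math.log(num_samples)
```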

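The observation that the encoder may generalise better than the decoder suggests evaluating the two halves of the model separately on held-out factor combinations. The sketch below shows one possible, deliberately simple way to do this, assuming ground-truth factor values are available for the OOD split; the helper names and metric choices are illustrative and do not correspond to the evaluation protocol used in this thesis.

```python
import numpy as np
from sklearn.linear_model import LinearRegression

def encoder_generalisation(z_train, f_train, z_ood, f_ood):
    """Fit a simple regressor from latent codes to ground-truth factors on the
    training split, then report R^2 on OOD factor combinations. A small drop
    relative to the in-distribution R^2 suggests the encoder still maps unseen
    combinations to meaningful latents."""
    regressor = LinearRegression().fit(z_train, f_train)
    return regressor.score(z_ood, f_ood)

def decoder_generalisation(x_ood, x_recon_ood):
    """Mean squared reconstruction error on OOD inputs. A large gap with the
    in-distribution error suggests the decoder struggles to render unseen
    factor combinations, even when the encoder handles them well."""
    return float(np.mean((x_ood - x_recon_ood) ** 2))
```

Comparing how these two quantities degrade as factor combinations move further from the training support would give a first, if crude, handle on the encoder-versus-decoder asymmetry described above.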