
Table 3.5: auROC scores for 3D-Printed Products.

Latent dim.   Shrink hole   Dirty spot   Discoloured line   Edge erosion   Print line
2d            0.70          0.61         0.60               0.75           0.69
32d           0.80          0.70         0.72               0.86           0.78
64d           0.80          0.71         0.72               0.87           0.75

Table 3.6: auPRC scores for 3D-Printed Products.

Latent dim.   Shrink hole   Dirty spot   Discoloured line   Edge erosion   Print line
2d            0.70          0.47         0.20               0.85           0.76
32d           0.81          0.56         0.32               0.91           0.83
64d           0.81          0.56         0.26               0.91           0.82

We also observe an influence of different defect types on the auROC and auPRC scores. Edge erosion, for example, appears easier to detect with our method than discoloured lines. We can also see this in the density plots, shown for the 64-dimensional model in Figure 3.1a and Figure 3.11. For all defects we see some separation in ELBO scores on average, although there is also still noticeable overlap. This, along with the auROC and auPRC scores, shows that our method is capable of separating normal data from anomalies, though not perfectly.

We can use the trained VAE to reconstruct both good and anomalous images; we show some examples of this in Figure 3.12 for the 64-dimensional model. We observe that reconstructions resemble "cleaned up" versions of anomalous images, ideally yielding a large difference between original and reconstruction in the pixels that most likely form an anomaly. We also observe that reconstructions are somewhat blurry and simplified, even for good examples. This sometimes yields artefacts in the difference images that do not correspond to an anomaly, i.e. regions that are considered acceptable in the production line. In most cases, however, true defects appear to have a bigger impact, resulting in a good capability of separating normal data from anomalies. Furthermore, highlighting these differences provides a useful tool for a domain expert observer to more easily locate and inspect potential defects.
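The workflow discussed above, scoring an image by its (negative) ELBO, measuring the separation between normal and anomalous scores with auROC/auPRC, and highlighting likely defect pixels via the reconstruction difference, can be sketched as follows. This is a minimal illustrative sketch, not the implementation used in this chapter: it assumes a PyTorch VAE exposing hypothetical `encode`/`decode` methods, a Gaussian observation model, and scikit-learn for the metrics.

```python
# Minimal sketch of ELBO-based anomaly scoring and reconstruction-difference maps.
# The `vae.encode` / `vae.decode` interface and the Gaussian likelihood are assumptions.
import numpy as np
import torch
from sklearn.metrics import roc_auc_score, average_precision_score


def anomaly_score(vae, x, n_samples=1):
    """Negative ELBO per image (up to constants); higher values suggest an anomaly."""
    mu, logvar = vae.encode(x)  # parameters of q(z|x), assumed interface
    # KL divergence between q(z|x) and the standard normal prior
    kl = 0.5 * torch.sum(mu ** 2 + logvar.exp() - 1.0 - logvar, dim=1)
    # Monte-Carlo estimate of the expected reconstruction error E_q[-log p(x|z)]
    rec = 0.0
    for _ in range(n_samples):
        z = mu + torch.randn_like(mu) * (0.5 * logvar).exp()
        x_hat = vae.decode(z)
        rec = rec + torch.sum((x - x_hat) ** 2, dim=(1, 2, 3))  # Gaussian likelihood up to a constant
    rec = rec / n_samples
    return (rec + kl).detach().cpu().numpy()


def evaluate(scores_good, scores_anomalous):
    """auROC and auPRC with anomalies as the positive class, as in Tables 3.5 and 3.6."""
    y_true = np.concatenate([np.zeros_like(scores_good), np.ones_like(scores_anomalous)])
    y_score = np.concatenate([scores_good, scores_anomalous])
    return roc_auc_score(y_true, y_score), average_precision_score(y_true, y_score)


def difference_map(vae, x):
    """Pixel-wise |x - reconstruction|, used to highlight likely defect regions."""
    mu, _ = vae.encode(x)
    x_hat = vae.decode(mu)  # reconstruct from the posterior mean
    return (x - x_hat).abs().squeeze().detach().cpu().numpy()
```

In this sketch the difference map is taken from the posterior-mean reconstruction for stability; averaging the difference over several posterior samples is an equally reasonable choice.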
