
3.4 Experimental Setup

VAE architectures for MNIST and 3D-printed products

For the encoders of the VAEs, we use CNNs with VGG-like architectures (Simonyan and Zisserman, 2014). Each encoder consists of a number of convolutional "blocks", each block consisting of a 3×3 convolution (stride 1) followed by 2×2 max-pooling and batch normalisation (Ioffe and Szegedy, 2015). These blocks are followed by two fully-connected layers: a hidden layer (followed by batch normalisation) and a layer representing the parameters µ_enc and log σ_enc of the approximate posterior distribution. The decoder architecture is a reversed version of the encoder, where max-pooling is replaced by nearest-neighbour up-sampling. All convolutional and fully-connected layers are followed by a ReLU non-linearity, with two exceptions: the layers representing the approximate posterior parameters µ_enc and log σ_enc have no non-linearity, and the final convolutional layer in the decoder, whose output represents the generative parameter µ_dec, has a sigmoid activation to ensure that pixel values stay between 0 and 1 (representing grey-scale values).

For our experiments on MNIST, we test two different architectures. Each starts with two convolutional blocks with 32 and 64 filters, respectively, followed by a fully-connected layer with 64 hidden units. The two architectures differ only in their latent dimension: 2 and 32, respectively. Models for MNIST were trained for 50 epochs (full passes over the data).

For our 3D-printed data, we test three different architectures. Each starts with four convolutional blocks with 16, 32, 64, and 128 filters, respectively. The first two architectures have a fully-connected layer of 64 hidden units, and latent dimensions of 2 and 32, respectively. The third model has 128 hidden units and a 64-dimensional latent space. These models were trained for 200 epochs. Figure 3.6 shows a visualisation of the architecture of the last-mentioned model.
[Figure: diagram of the 3D-printed-products VAE. Encoder: 224×224×1 input passed through four blocks of 3×3 Conv + ReLU, 2×2 Max-Pool, and BatchNorm, giving feature maps of 112×112×16, 56×56×32, 28×28×64, and 14×14×128, then a fully-connected layer with 128 hidden units producing the 64-dimensional µ and σ, from which z is sampled as N(µ, σ). Decoder: fully-connected layers back to 14×14×128, then four blocks of 2×2 UpSampling and 3×3 Conv (ReLU + BatchNorm, Sigmoid for the final layer), reconstructing 224×224×1.]

Figure 3.6: VAE architecture for 3D-printed products.
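The architecture in Figure 3.6 could be sketched in PyTorch as follows. This is a minimal illustration, not the code used for the experiments: the module names, the padding choice (padding of 1 is assumed so that each block exactly halves the spatial resolution), and the reparameterisation details are our own assumptions.

```python
import torch
import torch.nn as nn

class ConvBlock(nn.Module):
    """Encoder block: 3x3 conv (stride 1) -> ReLU -> 2x2 max-pool -> batch norm."""
    def __init__(self, c_in, c_out):
        super().__init__()
        self.block = nn.Sequential(
            nn.Conv2d(c_in, c_out, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.MaxPool2d(2),
            nn.BatchNorm2d(c_out),
        )
    def forward(self, x):
        return self.block(x)

class UpBlock(nn.Module):
    """Decoder block: 2x2 nearest-neighbour up-sampling -> 3x3 conv -> ReLU -> batch norm."""
    def __init__(self, c_in, c_out):
        super().__init__()
        self.block = nn.Sequential(
            nn.Upsample(scale_factor=2, mode="nearest"),
            nn.Conv2d(c_in, c_out, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.BatchNorm2d(c_out),
        )
    def forward(self, x):
        return self.block(x)

class VAE(nn.Module):
    """224x224x1 VAE: four conv blocks (16/32/64/128 filters), 128 hidden units, 64-d latent."""
    def __init__(self, latent_dim=64, hidden=128):
        super().__init__()
        self.encoder = nn.Sequential(
            ConvBlock(1, 16), ConvBlock(16, 32), ConvBlock(32, 64), ConvBlock(64, 128),
            nn.Flatten(),                        # 14*14*128 = 25088 features
            nn.Linear(128 * 14 * 14, hidden), nn.ReLU(), nn.BatchNorm1d(hidden),
        )
        # mu and log(sigma) heads: no non-linearity
        self.fc_mu = nn.Linear(hidden, latent_dim)
        self.fc_logsigma = nn.Linear(hidden, latent_dim)
        self.decoder_fc = nn.Sequential(
            nn.Linear(latent_dim, hidden), nn.ReLU(), nn.BatchNorm1d(hidden),
            nn.Linear(hidden, 128 * 14 * 14), nn.ReLU(), nn.BatchNorm1d(128 * 14 * 14),
        )
        self.decoder = nn.Sequential(
            UpBlock(128, 64), UpBlock(64, 32), UpBlock(32, 16),
            nn.Upsample(scale_factor=2, mode="nearest"),
            nn.Conv2d(16, 1, kernel_size=3, padding=1),
            nn.Sigmoid(),                        # pixel values in [0, 1]
        )

    def forward(self, x):
        h = self.encoder(x)
        mu, logsigma = self.fc_mu(h), self.fc_logsigma(h)
        z = mu + logsigma.exp() * torch.randn_like(mu)   # reparameterisation trick
        h = self.decoder_fc(z).view(-1, 128, 14, 14)
        return self.decoder(h), mu, logsigma

x = torch.rand(2, 1, 224, 224)
recon, mu, logsigma = VAE()(x)
print(recon.shape)  # torch.Size([2, 1, 224, 224])
```

Note how the spatial arithmetic works out: each of the four encoder blocks halves the resolution (224 → 112 → 56 → 28 → 14), and the decoder mirrors this with four 2×2 up-sampling steps, which is why the flattened feature size is 128 × 14 × 14.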
