4.6 Experimental Setup

Values of γ such as 1000 or even 10000 produced good results without significant differences among them. Therefore, the value γ = 100 was used for the datasets that share the same structure (Square, Arrow and Airplane). For ModelNet40 and COIL-100, the experiments showed that values of this hyperparameter as high as 10000 could degrade the reconstructions, so the lower value γ = 1 was chosen.

The weakly-supervised models AdaGVAE and AdaMLVAE were trained with a data generator that organised the available training data into pairs. The only condition introduced in Locatello et al. (2018) for training these models is that the paired data differ in only a few generative factors; for our datasets, two factors change between the members of a pair. A minimal sketch of such a pair generator is given after Table 4.4.

Table 4.3: LSBD-VAE hyperparameters for all datasets.

DATASETS                    PARAMETERS
SQUARE, ARROW, AIRPLANE     t ∈ [10^-10, 10^-9], γ = 100.0, epochs 1500
MODELNET40                  t ∈ [10^-10, 10^-5], γ = 1.0, epochs 1500
COIL-100                    t ∈ [10^-10, 10^-5], γ = 1.0, epochs 6000

Table 4.4: Model hyperparameters for all datasets.

MODEL       PARAMETERS
VAE         training steps 30000
β-VAE       β = 5, training steps 30000
CC-VAE      β = 5, γ = 1000, c_max = 15, iteration threshold 3500, training steps 30000
FACTOR      γ = 1, epochs 30000
DIP-I       λ_od = 1, λ_d = 10, training steps 30000
DIP-II      λ_od = 1, λ_d = 1, training steps 30000
ADAGVAE     β = 1, epochs 500
ADAMLVAE    β = 1, epochs 500
QUESSARD    λ = 0.01, trajectories 3000
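The exact pair-construction routine is not listed here; the following is a minimal sketch of such a generator, assuming NumPy arrays images (shape (N, H, W, C)) and factor_values (shape (N, n_factors)) as provided by a hypothetical dataset loader. The function name make_pairs and the argument k_changed are illustrative and not part of the original implementation.

    import numpy as np

    def make_pairs(images, factor_values, k_changed=2, n_pairs=10000, seed=None):
        # Yield image pairs that differ in at most k_changed generative factors,
        # which is the weak-supervision condition assumed by AdaGVAE and AdaMLVAE.
        # Assumes every combination of the non-changing factors occurs for more
        # than one image, so a valid partner always exists.
        rng = np.random.default_rng(seed)
        n, n_factors = factor_values.shape
        for _ in range(n_pairs):
            i = rng.integers(n)  # first member of the pair
            changed = rng.choice(n_factors, size=k_changed, replace=False)
            fixed = np.delete(np.arange(n_factors), changed)
            # candidate partners share every factor except the ones allowed to change
            same_fixed = np.all(factor_values[:, fixed] == factor_values[i, fixed], axis=1)
            candidates = np.flatnonzero(same_fixed)
            candidates = candidates[candidates != i]  # avoid identical pairs
            j = rng.choice(candidates)  # second member of the pair
            yield images[i], images[j]

With k_changed = 2 and the two-factor datasets used here, the set of fixed factors is empty, so the second image of each pair is drawn uniformly from the remaining data.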