Figure 4.8: Evaluation of (a) caching and (b) parallelisation, where 0 (magenta box) and 1 (cyan box) indicate the feature "off" or "on", respectively.

GPT12BE and GPT15BE. While parallelisation ensures enhanced consistency for TS1 and COVID-19, it negatively affects convergence time in ddFT and MPPS.

4.6 General discussion

Comparing state-of-the-art algorithms is difficult due to varying input data types, such as time series (Niloofar and Lazarova-Molnar, 2023), and the lack of publicly available implementations. Since FT-MOEA (Chapter 2) uses failure datasets, we focus on publicly available algorithms with the same input data type. Among these, LIFT (Nauta, Bucur, and Stoelinga, 2018) requires intermediate event data, and FT-BN (Linard, Bucur, and Stoelinga, 2019) needs white- and black-listing information, neither of which FT-MOEA requires. Therefore, we primarily compare FT-MOEA with its predecessor FT-EA (Linard, Bucur, and Stoelinga, 2019), and with its extensions SymLearn (Chapter 3) and FT-MOEA-CM (Chapter 4).

For this comparison, we identify robustness, scalability, and convergence speed as the relevant criteria. To substantiate the comparison, the most important results on this topic in this dissertation are summarised in Table 4.4 in terms of inferred FT size, in Table 4.5 in terms of correctly encoded Minimal Cut Sets (MCSs), and in Table 4.6 in terms of convergence speed.

Robustness refers to consistently yielding correct FTs with similar structures. From Table 4.4, we observe that for the case studies CSD, PT, COVID-19, ddFT, MPPS, and SMS, FT-MOEA was more robust than FT-EA, as indicated by a smaller difference between the first (Q1) and third (Q3) quartiles for most case studies. This suggests that casting the optimisation problem as a multi-objective function enhances robustness.
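This quartile-based robustness measure can be made operational with a short sketch. The snippet below computes the spread Q3 - Q1 of a per-run metric (here, inferred FT size) over repeated runs of a stochastic learner; a smaller spread indicates more consistent results. It is a minimal illustration only: the function name and the per-run data are hypothetical and are not taken from the FT-MOEA code base.

```python
# Minimal sketch of the robustness criterion discussed above: the
# spread between the first (Q1) and third (Q3) quartiles of a metric
# (e.g. inferred FT size) over repeated runs. All names and data are
# illustrative, not part of the FT-MOEA implementation.
import statistics


def interquartile_spread(values):
    """Return Q3 - Q1 for the given per-run metric values.

    A smaller spread means more consistent (i.e. more robust)
    results across independent runs of a stochastic learner.
    """
    q1, _median, q3 = statistics.quantiles(values, n=4)
    return q3 - q1


# Hypothetical per-run FT sizes for two learners on one case study.
ft_sizes = {
    "FT-EA": [14, 21, 17, 25, 19, 28, 16, 23],
    "FT-MOEA": [15, 16, 15, 17, 16, 15, 17, 16],
}

for algorithm, sizes in ft_sizes.items():
    print(f"{algorithm}: Q3 - Q1 = {interquartile_spread(sizes)}")
```

In Tables 4.4 and 4.6, this spread is reported per case study; the sketch merely shows how the criterion is evaluated from repeated-run results.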
Similarly, FT-MOEA-CM is more robust than FT-MOEA (and thus FT-EA),