
underlying concept is that case studies with fewer BEs and MCSs should be simpler to manage. As indicated by the results in Figure 4.3, FT-MOEA-CM-All and FT-MOEA-CM-Best consistently reached the global optima across all cases, while FT-MOEA did not in the COVID-19 and GPT15BE case studies. However, in the GPT12BE case, all three algorithms exhibited strong performance, especially FT-MOEA, which surpassed FT-MOEA-CM-All and FT-MOEA-CM-Best. Nevertheless, given that GPT12BE contains more BEs and MCSs than COVID-19, the reasons for FT-MOEA's difficulties with the latter are not clear.

Figure 4.4: Accuracy over generations for all case studies and algorithms. [Six panels, one per case study (ddFT, MPPS, COVID-19, TS1, GPT12BE, GPT15BE), plot accuracy against generation for FT-MOEA-CM-All, FT-MOEA-CM-Best, and FT-MOEA.]

Figure 4.4 depicts the convergence over generations for each case study in terms of accuracy. The FT-MOEA-CM configurations converge to the optimal accuracy more rapidly than FT-MOEA. These findings suggest that FT-MOEA-CM may scale better than FT-MOEA owing to its superior convergence profile, indicating an enhanced capability to handle larger problems. Further research is necessary to examine this hypothesis more comprehensively.

Conclusion. Regarding scalability, our findings suggest that FT-MOEA-CM may be more scalable than FT-MOEA.

Convergence Speed. Convergence speed refers to the time required by an algorithm to automatically infer an FT from a failure dataset.

Discussion. Figure 4.5 shows the convergence speed, measured in minutes. In specific case studies, such as ddFT, MPPS, and COVID-19, FT-MOEA-CM outpaced FT-MOEA. Notably, FT-MOEA-CM-Best demonstrated superior performance in the MPPS case study. For other cases, like TS1 and GPT12BE, the algorithms'
