
Table 4.4: Sizes of the inferred Fault Trees (FTs), per algorithm, across all the case studies (each evaluated 5 times) in Part I of this dissertation. |BEs| is the number of Basic Events; |F| is the FT size; |CD| is the number of MCSs in the ground-truth problem. Q1, Q2, and Q3 are the 25%, 50%, and 75% quantiles, respectively.

Case          |BEs|  |F|  |CD|  |    FT-EA     |   FT-MOEA    |   SymLearn   |  FT-MOEA-CM
                                | Q1   Q2   Q3 | Q1   Q2   Q3 | Q1   Q2   Q3 | Q1   Q2   Q3
CSD(a)           6    10     3  | 10   10   11 | 10   10   10 |  -    -    - | 10   10   10
PT(b)            6    11     5  |  9   10   11 |  9    9    9 |  -    -    - |  9    9    9
COVID-19(c)      9    13     6  | 14   17   18 | 13   13   13 |  -    -    - | 13   13   13
ddFT(d)          8    13     6  | 34   35   53 | 11   13   17 |  -    -    - | 17   17   18
MPPS(e)          8    23     7  | 21   24   27 | 14   20   21 |  -    -    - | 14   14   14
SMS(f)          13    25    13  | 14   14   14 | 14   14   14 |  -    -    - | 14   14   14
gpt12(g1)       12    25    13  |  -    -    - | 20   20   20 |  -    -    - | 20   24   24
gpt15(g2)       15    27    10  |  -    -    - | 21   21   22 |  -    -    - | 22   22   22
SS(h1)          10    23     8  |  -    -    - | 20   21   21 | 23   23   23 |  -    -    -
SC(h2)           6    11     4  |  -    -    - | 11   11   11 | 11   11   11 |  -    -    -
TS1(i1)         10    34†   16  |  -    -    - | 15   21   25 | 34   34   34 | 21   21   26
TS2(i2)         24    25†   26  |  -    -    - |  8   23   23 | 25   25   25 |  -    -    -
TS3(i3)         20    63†   18  |  -    -    - | 15   15   15 | 63   63   63 |  -    -    -

† Fault Trees associated with truss systems (Jimenez-Roa, Volk, and Stoelinga, 2022).
(a) CSD: Container Seal Design (NASA, 2002); (b) PT: Pressure Tank (NASA, 2002); (c) COVID-19: COVID-19 FT (Jimenez-Roa, Heskes, Tinga, et al., 2023); (d) ddFT: Data-driven FT (Lazarova-Molnar, Niloofar, and Barta, 2020); (e) MPPS: Mono-propellant propulsion system (NASA, 2002); (f) SMS: Spread Mooring System (Mentes and Helvacioglu, 2011); (g1) gpt12: GPT-generated FT with 12 BEs (Jimenez-Roa, Rusnac, Volk, et al., 2024); (g2) gpt15: GPT-generated FT with 15 BEs (Jimenez-Roa, Rusnac, Volk, et al., 2024); (h1) SS: symmetric toy example (Jimenez-Roa, Volk, and Stoelinga, 2022); (h2) SC: symmetric toy example (Jimenez-Roa, Volk, and Stoelinga, 2022); (i1) TS1: Truss system case TS1 (Jimenez-Roa, Volk, and Stoelinga, 2022); (i2) TS2: Truss system case TS2 (Jimenez-Roa, Volk, and Stoelinga, 2022); (i3) TS3: Truss system case TS3 (Jimenez-Roa, Volk, and Stoelinga, 2022).

as shown in Table 4.5, where FT-MOEA-CM consistently achieved global optima by encoding all MCSs in the failure dataset, a task not always achieved by FT-MOEA. FT inference algorithms that achieve global optima, handle large datasets, and adapt efficiently from smaller to larger systems are crucial for scalability.

Table 4.5 indicates that, for failure datasets with symmetries, SymLearn handles larger problems (up to 24 BEs, FTs with up to 63 elements, and 26 MCSs) than FT-MOEA. This highlights the benefit of exploiting information such as symmetries, especially in the case study TS3, where FT-MOEA struggles with local optima. Additionally, FT-MOEA-CM shows greater scalability than FT-MOEA, even without harnessing symmetries, emphasising the advantage of incorporating more information into the multi-objective function.

Finally, we measure the time taken by the algorithms to complete the task, as reported in Table 4.6. Generally, FT-MOEA converges faster than FT-EA, suggesting that multi-objective optimisation enables more efficient convergence. Between FT-MOEA and FT-MOEA-CM the results are less conclusive: in some cases FT-MOEA was faster than FT-MOEA-CM, and vice versa. However, FT-MOEA-CM consistently
