
6.2 Methods and materials

\begin{align}
\frac{dp_1(t)}{dt} &= -\left[\varpi_{12}(t;\varsigma) + \varpi_{1F}(t;\varsigma)\right] p_1(t) \tag{6.2a}\\
\frac{dp_2(t)}{dt} &= \varpi_{12}(t;\varsigma)\,p_1(t) - \left[\varpi_{23}(t;\varsigma) + \varpi_{2F}(t;\varsigma)\right] p_2(t) \tag{6.2b}\\
\frac{dp_3(t)}{dt} &= \varpi_{23}(t;\varsigma)\,p_2(t) - \left[\varpi_{34}(t;\varsigma) + \varpi_{3F}(t;\varsigma)\right] p_3(t) \tag{6.2c}\\
\frac{dp_4(t)}{dt} &= \varpi_{34}(t;\varsigma)\,p_3(t) - \left[\varpi_{45}(t;\varsigma) + \varpi_{4F}(t;\varsigma)\right] p_4(t) \tag{6.2d}\\
\frac{dp_5(t)}{dt} &= \varpi_{45}(t;\varsigma)\,p_4(t) - \varpi_{5F}(t;\varsigma)\,p_5(t) \tag{6.2e}\\
\frac{dp_F(t)}{dt} &= \varpi_{1F}(t;\varsigma)\,p_1(t) + \varpi_{2F}(t;\varsigma)\,p_2(t) + \varpi_{3F}(t;\varsigma)\,p_3(t) + \varpi_{4F}(t;\varsigma)\,p_4(t) + \varpi_{5F}(t;\varsigma)\,p_5(t) \tag{6.2f}
\end{align}

To solve the system of differential equations in Eq. 6.2, we use Python's solve_ivp function from the scipy.integrate module. This function, based on 'LSODA' from the FORTRAN ODEPACK library, employs the Adams/BDF method with automatic stiffness detection (Virtanen, Gommers, Oliphant, et al., 2020).

6.2.2 Model calibration

To optimise the hyper-parameters of parametrised Markov chains, we employ a novel approach that combines the Metropolis-Hastings (M-H) algorithm (Hastings, 1970), a Markov chain Monte Carlo method, with the Sequential Least Squares Programming (SLSQP) algorithm (Virtanen, Gommers, Oliphant, et al., 2020), which specialises in solving constrained non-linear problems. Although the M-H algorithm alone does not guarantee optimal hyper-parameters, it provides a crucial initial guess that helps the SLSQP algorithm avoid premature convergence to local optima. To the best of our knowledge, this is the first time these two algorithms have been combined for this application.

Sewer inspections are considered interval-censored: state transitions occur within known intervals, but their exact times are not observed (Duchesne, Beardsell, Villeneuve, et al., 2013). This complexity is omitted from our likelihood function, but we suggest its further exploration (Hout, 2016). We analyse the impact of interval-censored data using a non-parametric Turnbull estimator (see Section 6.2.3).

The initial part of our optimisation problem aligns with Micevski, Kuczera, and Coombes (2002), starting with model calibration in a Bayesian optimisation context. We consider $\mathbf{y} = [y_1, \ldots, y_n]$, representing the ages of the pipes at inspection. Our likelihood function, $f(\mathbf{y} \mid \theta, M)$, where $\theta = \langle \varsigma, p(0) \rangle$, evaluates the probability of observing the data $\mathbf{y}$ given the parameters $\theta$ and assuming the Markov model $M$. Incorporating $p(0)$ in the optimisation introduces the constraint $\sum_{k \in S} p_k(0) = 1$.
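As a minimal sketch of the numerical integration step, the system in Eq. 6.2 can be passed to solve_ivp with the 'LSODA' method. The constant intensities below are hypothetical placeholders: in the actual model the intensities are time-dependent functions of t and the hyper-parameters ς, whose parametrisation is defined elsewhere in the chapter.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Hypothetical constant transition intensities, used purely for illustration;
# the model's intensities are time-dependent functions varpi_ij(t; sigma).
W12, W23, W34, W45 = 0.10, 0.08, 0.06, 0.04       # deterioration rates 1->2 .. 4->5
WF = np.array([0.01, 0.02, 0.03, 0.04, 0.05])     # failure rates from states 1..5

def rhs(t, p):
    """Right-hand side of Eq. 6.2, with p = [p1, p2, p3, p4, p5, pF]."""
    p1, p2, p3, p4, p5, _ = p
    return [
        -(W12 + WF[0]) * p1,
        W12 * p1 - (W23 + WF[1]) * p2,
        W23 * p2 - (W34 + WF[2]) * p3,
        W34 * p3 - (W45 + WF[3]) * p4,
        W45 * p4 - WF[4] * p5,
        WF @ [p1, p2, p3, p4, p5],                # Eq. 6.2f: inflow into F
    ]

p0 = [1.0, 0.0, 0.0, 0.0, 0.0, 0.0]               # all probability mass in state 1
sol = solve_ivp(rhs, t_span=(0.0, 100.0), y0=p0, method="LSODA",
                t_eval=np.linspace(0.0, 100.0, 101))
probabilities = sol.y                             # shape (6, 101)
```

Because Eq. 6.2 conserves total probability (every term leaving one state enters another), the columns of `probabilities` sum to one up to the solver tolerance, which is a convenient sanity check on any implementation.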
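The coupling of M-H with SLSQP, together with the simplex constraint on p(0), can be sketched as follows. The negative log-likelihood here is a hypothetical stand-in, since the real objective f(y | θ, M) evaluates the Markov model against the inspection data; the random-walk proposal (reflected and renormalised onto the simplex) is only approximately symmetric, which is acceptable for its role here of supplying a starting point.

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)

def neg_log_like(p0):
    """Hypothetical stand-in for -log f(y | theta, M), for illustration only."""
    target = np.array([0.6, 0.2, 0.1, 0.07, 0.03])   # illustrative optimum
    return np.sum((p0 - target) ** 2)

def metropolis_hastings(x0, n_iter=2000, step=0.05):
    """Random-walk M-H on the probability simplex; returns the best point
    visited, used only as an initial guess for SLSQP."""
    x, f_x = x0.copy(), neg_log_like(x0)
    best, best_f = x.copy(), f_x
    for _ in range(n_iter):
        prop = np.abs(x + step * rng.standard_normal(x.size))
        prop /= prop.sum()                            # project back onto simplex
        f_prop = neg_log_like(prop)
        # Metropolis acceptance for a target density proportional to exp(-f)
        if np.log(rng.uniform()) < f_x - f_prop:
            x, f_x = prop, f_prop
            if f_x < best_f:
                best, best_f = x.copy(), f_x
    return best

x0 = metropolis_hastings(np.full(5, 0.2))             # uniform initial p(0)
res = minimize(
    neg_log_like, x0, method="SLSQP",
    bounds=[(0.0, 1.0)] * 5,
    constraints=[{"type": "eq", "fun": lambda p: p.sum() - 1.0}],
)
```

The equality constraint passed to SLSQP enforces exactly the condition stated above, that the components of p(0) sum to one, while the M-H stage explores the simplex globally before the local gradient-based refinement.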
