Generalities on first order ordinary differential equations


Computational burden of the Monte-Carlo estimator

For the Monte-Carlo estimator, when $p_f \ll 1$ the coefficient of variation satisfies the approximation $CV \simeq \frac{1}{\sqrt{p_f N_f}}$. Under this approximation, a condition $CV < c$, where $c \in \mathbb{R}^+$, implies that
$$N_f > \frac{1}{c^2\, p_f}. \quad (2.10)$$
So when the value of $p_f$ is close to zero, the condition (2.10) shows that the number of simulations needed must be very high. Running that many simulations is computationally intensive, so the Monte-Carlo method is ill-suited to rare event analysis. In order to avoid the computational burden of the Monte-Carlo method, one generally prefers using a variance reduction method.
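Condition (2.10) can be illustrated numerically. The short sketch below (the helper name `required_runs` is ours, not the thesis's) computes the smallest $N_f$ guaranteeing a target coefficient of variation $c$, showing how the required sample size blows up as $p_f$ shrinks:

```python
import math

def required_runs(pf, c):
    """Smallest N_f such that the approximate CV = 1/sqrt(pf * N_f) stays below c."""
    return math.ceil(1.0 / (c**2 * pf))

# The rarer the event, the larger the required sample size:
for pf in (1e-2, 1e-4, 1e-6):
    print(f"pf = {pf:.0e}  ->  N_f > {required_runs(pf, c=0.1):,}")
```

For instance, estimating a probability of order $10^{-6}$ with a 10% coefficient of variation already requires on the order of $10^8$ runs.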

The principle of variance reduction method

The principle of a variance reduction method is to replace the Monte-Carlo estimator by an estimator satisfying a CLT and for which the asymptotic variance is smaller. The interest of taking such an estimator is that it requires fewer simulation runs to reach a given precision, which eventually accelerates the estimation.

A more accurate estimator means fewer simulation runs

We can see that, for a given accuracy, when $\sigma^2 < \sigma_f^2$ we have $N < N_f$. Equation (2.13) even indicates that reducing the standard deviation by a factor $x$ reduces the required number of simulations by a factor $x^2$. Note that this heuristic reasoning (and especially the equation (2.12)) is valid only if we assume that the estimators have reached their asymptotic regime. So even if $N$ can be chosen $\frac{\sigma_f^2}{\sigma^2}$ times smaller than $N_f$, $N$ still has to be big enough so that the Normal approximations on $\sqrt{N}(\hat p - p)$ and $\sqrt{N_f}(\hat p_f - p)$ are satisfied.

Efficiency and comparison of estimators

In variance reduction methods, getting an estimator with a reduced asymptotic variance is often done by altering the simulation process, and this can sometimes slow down each simulation run. In practice, there is a trade-off between reducing the number of simulation runs and increasing the computational time of each simulation run. For this reason, in [36] the efficiency of an estimator is defined by:
$$\mathrm{eff} = \frac{1}{\sigma^2 \tau},$$
where $\tau$ is the mean computational time for a simulation run. This quantity $\mathrm{eff}$ can be interpreted as the contribution of a second of computation to the precision of the estimator. It is a good indicator to compare estimators in practice.
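As a toy illustration of this trade-off (the numbers below are invented for the example and do not come from the thesis), one can compare two estimators through the efficiency $\mathrm{eff} = 1/(\sigma^2 \tau)$: a variance-reduced estimator can be slower per run and still be far more efficient overall.

```python
def efficiency(variance, mean_run_time):
    """eff = 1 / (sigma^2 * tau): contribution of one second of
    computation to the precision of the estimator."""
    return 1.0 / (variance * mean_run_time)

# Hypothetical numbers: the variance-reduced estimator is 4x slower per
# run, but its variance is 100x smaller, so it is 25x more efficient.
eff_mc = efficiency(variance=1e-4, mean_run_time=0.01)  # plain Monte-Carlo
eff_vr = efficiency(variance=1e-6, mean_run_time=0.04)  # variance-reduced
print(eff_vr / eff_mc)
```

This is the sense in which, as the text argues next, variance reduction often compensates for an increase in per-run cost.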
In this thesis we focus on the reduction of the variance, putting aside the mean computational time of a simulation run, mainly because the variance reduction often compensates for the increase in the duration of simulation runs, and because reducing the mean computational time of a run is essentially a code optimization issue, which is not our main domain of expertise. This is why we will sometimes compare estimators directly through their variances rather than their efficiencies.

Importance Sampling

In this section we present the method we adapt to PDMPs in Part II. Namely, we present the importance sampling method and the adaptive Cross-Entropy method, which we will use to optimize the variance of the importance sampling estimator. For a more complete review of the different uses of importance sampling, and of the different ways to optimize it, we refer the reader to [51].

Principle

If $p \simeq 10^{-a}$, we often have to run approximately $10^a$ simulations in order to get at least one simulation $Y_i$ in $D$, and $10^{a+2}$ simulations are required to get a good estimation of $p$. To avoid waiting that long before getting a first $Y_i$ in $D$, the first idea of importance sampling is to simulate from another random variable $\tilde Y$, for which the region $D$ is more likely: we give more "importance" to a specific region. Let $(\tilde Y_i)_{i=1..N}$ be a sequence of $N$ independent random variables distributed like $\tilde Y$. We denote by $g$ their density with respect to the reference measure. Replacing the $Y_i$'s by the $\tilde Y_i$'s would yield an estimator $\frac{1}{N}\sum_{i=1}^{N} \mathbb{1}_D(\tilde Y_i)$ that is less likely to be null. This estimator would provide a positive estimation sooner, but would unfortunately be biased. The second idea of importance sampling is to add a weight to each simulation output in order to correct this induced bias: if a sample has been drawn with $x$ times more chances than under the original density, it is weighted by $\frac{1}{x}$. So a simulation $\tilde Y_i$ is weighted by the likelihood ratio $\frac{f(\tilde Y_i)}{g(\tilde Y_i)}$, which yields the following estimator:
$$\hat p_g = \frac{1}{N}\sum_{i=1}^{N} \mathbb{1}_D(\tilde Y_i)\,\frac{f(\tilde Y_i)}{g(\tilde Y_i)}.$$
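The estimator $\hat p_g$ can be sketched on a simple Gaussian tail probability. The example below is ours (the chapter only states the general estimator): it takes $f = \mathcal{N}(0,1)$, the rare event $D = \{y > 4\}$ with $p = \Phi(-4) \approx 3.17 \times 10^{-5}$, and an importance density $g = \mathcal{N}(4,1)$ centred on the threshold, so that $D$ is hit about half the time.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy rare event: p = P(Y > 4) for Y ~ N(0, 1), roughly 3.17e-5.
N = 100_000
s = 4.0

# Sample from the importance density g = N(s, 1), shifted onto the threshold.
y_tilde = rng.normal(loc=s, scale=1.0, size=N)

# Likelihood ratio f/g for f = N(0, 1) and g = N(s, 1), computed in log scale
# (the 1/sqrt(2*pi) normalisations cancel in the ratio).
log_w = -0.5 * y_tilde**2 + 0.5 * (y_tilde - s) ** 2

# Weighted indicator average: the importance sampling estimator p_hat_g.
p_hat = np.mean((y_tilde > s) * np.exp(log_w))
print(p_hat)  # close to 3.17e-5
```

A plain Monte-Carlo run of the same size $N = 10^5$ would typically see a handful of hits at best, whereas here roughly half the weighted samples contribute to the estimate.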


Table of contents:

I Introduction 
1 Piecewise Deterministic Markov Processes 
1.1 Markovian objects
1.1.1 Markov chains
1.1.2 Markov processes
1.1.3 Poisson Processes
1.1.4 Compound Poisson process
1.1.5 Continuous time Markov chains
1.2 Piecewise Deterministic Markov Processes
1.2.1 Generalities on first order ordinary differential equations
1.2.2 PDMPs without boundary
1.2.3 PDMPs with boundaries
1.2.4 Markovian object as PDMPs
1.3 Model a multi-component system with a PDMP
1.3.1 Model the evolution of the state of the system
1.3.2 A model for reliability assessment
1.3.3 The model
1.3.4 Example of the Heated room system
1.3.5 Example of the spent fuel pool
2 Monte-Carlo methods for rare events 
2.1 The Monte-Carlo method and its rare event issue
2.1.1 Computational burden of the Monte-Carlo estimator
2.1.2 The principle of variance reduction method
2.2 Importance Sampling
2.2.1 Principle
2.2.2 The Importance Sampling estimator
2.2.3 Dynamical importance sampling
2.2.4 Variance and optimal density
2.2.5 Important practical concerns
2.3 Variance optimization methods for importance sampling
2.3.1 Variance optimization: the Cross-Entropy method
2.3.2 Important remark on the Cross-Entropy
2.3.3 Adaptive Cross-Entropy
2.4 The interaction particle method
2.4.1 A Feynman-Kac model
2.4.2 The IPS algorithm and its estimator
2.4.3 Estimate the variance of the IPS estimator
2.4.4 Classical improvements of the IPS method
II Importance Sampling with PDMPs 
3 Theoretical foundation for Importance sampling on PDMP 
3.1 Prerequisite for importance sampling
3.1.1 The law of the trajectories
3.1.2 The dominant measure and the density
4 The practical and optimal importance processes 
4.1 Admissible importance processes
4.2 A way to build an optimal importance process
4.3 Remarks on the optimal process
4.4 Practical importance processes for reliability assessment
4.5 A parametric family of importance processes for reliability assessment
5 Applications to power generation systems 
5.1 Application to the Heated-Room system
5.1.1 A parametric family of importance processes for the Heated-Room system
5.1.2 Results
5.2 Practical issues with power generation systems
5.2.1 Specify a parametric family of proximity functions
5.2.2 Results with the series/parallel parametric proximity function on the spent-fuel-pool system
6 Conclusion on the importance sampling for PDMP 
III A contribution to the IPS method 
7 The optimal potential functions 
7.1 The potentials used in the literature
7.2 The optimal potential
7.3 A short simulation study for the comparison of potentials
7.3.1 First example
7.3.2 Second example
8 Conclusions and implications 
IV The interacting particle method for PDMPs 
9 The inefficiency of IPS on concentrated PDMP
9.1 IPS with PDMP
9.1.1 A Feynman-Kac model
9.1.2 The IPS algorithm and its estimators
9.1.3 Variance estimation for the PDMP case
9.1.4 The SMC improvement for PDMP
9.2 Concentrated PDMP make the IPS inefficient
9.2.1 The kind of PDMP used in the reliability analysis
9.2.2 Concentrated PDMP
9.2.3 A poor empirical approximation of the propagated distributions within the IPS
10 Efficient generation of the trajectories using the Memorization method
10.1 Advantage of Memorization over a rejection algorithm
10.2 Principle of the memorization method
11 The IPS+M method for concentrated PDMPs 
11.1 Modify the propagation of clusters
11.2 Convergence properties of the IPS+M estimators
12 Application to two test systems 
12.1 Empirical confirmation
12.1.1 The Heated-room system
12.1.2 Results of the simulations on the Heated-room system
12.1.3 Remark on the SMC with Memorization
12.1.4 A dam system
12.1.5 Results of the simulations for the dam system
13 Conclusion on the IPS+M 
V Conclusion and prospects for future work 
Optimal intensity’s expression, and some properties of U
A.1 Proof of Equality (4.11)
A.2 Proof of theorem
A.3 Equality (4.12)
Bibliography
