A variational Bayesian approach for restoring data corrupted with non-Gaussian noise 

Hierarchical Bayesian modeling

In Bayesian inference, we often have more than one unknown variable to estimate. Prior distributions are assigned to these variables either through their joint distribution, if the variables are assumed to be dependent, or through their marginal distributions if they are independent. The unknown variables can generally be structured into different groups (see Figure 2.9). First, we have the main variables, which include the target signal, the blurring operator in the case of blind deconvolution, the noise statistics, etc. Informative priors (regularization or conjugate) are generally assigned to these variables when prior knowledge is available. These informative priors may introduce new variables that are themselves unknown: for example, we may assign to the target image a total variation prior with an unknown regularization parameter, to the noise variance an inverse-gamma prior with an unknown rate parameter, or to the signal of interest a scale mixture of Gaussian distributions involving unknown mixing variables. These new variables define the second group and are usually called parameters. They are generally modeled with conjugate or non-informative priors. Conjugate priors may in turn involve new unknown variables, called hyperparameters, which are in most cases modeled by non-informative distributions. This structured model is referred to as hierarchical Bayesian modeling, which is at the core of Bayesian inference.
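To make the hierarchy concrete, here is a minimal Python sketch of the negative log-posterior of such a model. It is purely illustrative: the observation operator H, the Laplace (sparsity) prior on x, the Jeffreys prior on its weight, and the inverse-gamma shape/rate values are assumptions for the sketch, not the thesis's exact model.

```python
import numpy as np

# Illustrative three-level hierarchy (assumed model, for the sketch only):
#   level 1 (main variables):  signal x, noise variance v;
#   level 2 (parameters):      regularization weight lam of the prior on x;
#   level 3 (hyperparameters): shape a and rate b of the inverse-gamma
#                              prior on v, fixed here to vague values.
def neg_log_posterior(x, v, lam, y, H, a=1e-3, b=1e-3):
    """-log p(x, v, lam | y) up to an additive constant."""
    n = y.size
    likelihood = 0.5 * np.sum((y - H @ x) ** 2) / v + 0.5 * n * np.log(v)
    prior_x    = lam * np.sum(np.abs(x)) - x.size * np.log(lam)  # Laplace(lam)
    prior_v    = (a + 1.0) * np.log(v) + b / v                   # IG(a, b)
    prior_lam  = np.log(lam)                                     # Jeffreys: p(lam) ~ 1/lam
    return likelihood + prior_x + prior_v + prior_lam
```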

Algorithms for computing Bayesian estimates

Once the posterior distributions of the unknown variables are determined, a Bayesian estimator is derived for each unknown variable from its posterior distribution given the remaining ones. Common Bayesian point estimates include the maximum a posteriori (MAP) estimate and the posterior mean.

Algorithms for computing the MAP estimate: The MAP estimate is computed by minimizing a cost function equal, up to an additive constant, to the minus logarithm of the posterior density. Depending on the properties of this cost function (differentiability, convexity, continuity, etc.), several algorithms can be employed to solve this minimization problem. Among the most widely used are descent algorithms such as nonlinear conjugate gradient and quasi-Newton methods. Other algorithms are based on majorize-minimize strategies [Chouzenoux et al., 2011], such as half-quadratic approaches [Geman and Yang, 1995; Nikolova and Ng, 2005; Charbonnier et al., 1994; Ciuciu and Idier, 2002] and expectation-maximization [Champagnat and Idier, 2004; Celeux and Diebolt, 1985]. For non-differentiable objective functions, one may use primal proximal algorithms [Combettes and Pesquet, 2011, 2007] and primal-dual methods [Esser et al., 2010; Chambolle and Pock, 2011].
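As a concrete instance of the differentiable case, the following is a hedged sketch of MAP estimation by plain gradient descent. The quadratic data term and the Huber-smoothed penalty are assumptions chosen so the cost is differentiable, and the fixed step size is kept deliberately simple; a nonlinear conjugate gradient or quasi-Newton method, as cited above, would typically converge faster.

```python
import numpy as np

def map_gradient_descent(y, H, lam, delta=1e-2, step=1e-3, n_iter=500):
    """Sketch: minimize the differentiable MAP cost
       F(x) = 0.5 * ||y - H x||^2 + lam * sum_i huber(x_i)
    by fixed-step gradient descent (assumed model, illustration only)."""
    def huber_grad(t):
        # Gradient of the Huber penalty: linear near 0, saturating to +/- 1.
        return np.where(np.abs(t) <= delta, t / delta, np.sign(t))
    x = np.zeros(H.shape[1])
    for _ in range(n_iter):
        grad = H.T @ (H @ x - y) + lam * huber_grad(x)
        x = x - step * grad
    return x
```

For a genuinely non-differentiable penalty (e.g., the raw l1 norm), the penalty gradient step would be replaced by a proximal step, in the spirit of the primal proximal algorithms cited above.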
Note that the parameters and hyperparameters of the hierarchical Bayesian model can be estimated jointly with the target signal, using either the MAP estimate or other approaches [Pereyra et al., 2015; Thompson et al., 1991; Archer and Titterington, 1995; Molina et al., 1999; Almeida and Figueiredo, 2013; Bardsley and Goldes, 2009; Bertero et al., 2010]. One popular approach is based on the discrepancy principle, proposed to address the problem of selecting the regularization parameter in deconvolution problems involving Gaussian noise [Thompson et al., 1991]. The regularization parameter is chosen such that the variance of the residual (i.e., the difference between the observed image and the blurred estimate) equals the variance of the noise. This method has also been extended to other data fidelity terms, such as those arising from Poisson noise and signal-dependent Gaussian noise [Bertero et al., 2010; Bardsley and Goldes, 2009]. Among other well-known approaches, we can mention generalized cross-validation and the L-curve [Golub et al., 1979; Hansen and O'Leary, 1993].
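The discrepancy principle lends itself to a simple one-dimensional search, sketched below under the standard assumption that the residual grows monotonically with the regularization weight. Here `solve_map` stands for any MAP solver (for instance, the gradient-descent sketch above); all names and bounds are illustrative.

```python
import numpy as np

def discrepancy_lambda(y, H, sigma2, solve_map, lam_lo=1e-6, lam_hi=1e3,
                       n_steps=60, tol=1e-3):
    """Sketch of the discrepancy principle: choose lam so that the residual
    variance ||y - H x_lam||^2 / n matches the noise variance sigma2.
    Bisection (on a log scale) works because the residual increases with
    lam: more regularization means a worse data fit."""
    n = y.size
    for _ in range(n_steps):
        lam = np.sqrt(lam_lo * lam_hi)           # log-scale midpoint
        x = solve_map(y, H, lam)
        if np.sum((y - H @ x) ** 2) / n < sigma2:
            lam_lo = lam                         # residual too small: increase lam
        else:
            lam_hi = lam                         # residual too large: decrease lam
        if lam_hi / lam_lo < 1.0 + tol:
            break
    return np.sqrt(lam_lo * lam_hi)
```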

Majorize-Minimize Framework

The majorization-minimization (MM) principle is a powerful tool for designing algorithms to solve optimization problems. The idea behind the MM approach is to replace a complicated minimization problem with successive minimizations of well-chosen surrogate functions [Hunter and Lange, 2004]. These functions are called tangent majorants.
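A minimal sketch of one classical instance, assumed here in the spirit of the half-quadratic approaches cited earlier: for a penalized least-squares cost with the smooth potential phi(t) = sqrt(t^2 + delta^2), the concavity of phi as a function of t^2 yields the quadratic tangent majorant phi(t) <= phi(t_k) + (t^2 - t_k^2) / (2 phi(t_k)), so each MM step reduces to a weighted least-squares solve.

```python
import numpy as np

def mm_restore(y, H, lam, delta=1e-3, n_iter=50):
    """Sketch of an MM iteration for
       F(x) = 0.5 * ||y - H x||^2 + lam * sum_i phi(x_i),
       phi(t) = sqrt(t**2 + delta**2).
    At the iterate x_k, each phi(x_i) is replaced by its quadratic tangent
    majorant with curvature w_i = 1 / phi(x_k[i]); minimizing the resulting
    surrogate exactly amounts to one linear solve per iteration."""
    x = np.zeros(H.shape[1])
    HtH, Hty = H.T @ H, H.T @ y
    for _ in range(n_iter):
        w = 1.0 / np.sqrt(x ** 2 + delta ** 2)   # majorant curvatures at x_k
        x = np.linalg.solve(HtH + lam * np.diag(w), Hty)
    return x
```

By the MM property, each such surrogate minimization decreases the original cost F, which is what makes the scheme attractive when F itself is awkward to minimize directly.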

Table of contents:

Résumé
Abstract
Notations
List of acronyms
1 General introduction 
1 Motivation
2 Challenges
2.1 MCMC simulation methods
2.2 Variational Bayesian approximation methods
3 Main contributions
4 Publications
5 Outline
2 Background 
1 Inverse problems
2 Bayesian methodology for solving inverse problems
2.1 Bayesian framework
2.2 Link with penalized approaches
2.3 Choice of the prior
2.4 Hierarchical Bayesian modeling
2.5 Algorithms for computing Bayesian estimates
3 Stochastic simulation methods
3.1 Importance sampling
3.2 Rejection sampling
3.3 Markov chain Monte Carlo methods
4 Approximation methods
4.1 Laplace approximation
4.2 Variational Bayes approximation
3 Majorize-Minimize adapted Metropolis Hastings algorithm 
1 Problem statement and related work
1.1 Langevin diffusion
1.2 Choice of the scale matrix
2 Proposed algorithm
2.1 Majorize-Minimize Framework
2.2 Proposed sampling algorithm
2.3 Construction of the tangent majorant
3 Convergence analysis
4 Experimental results
4.1 Prior and posterior distributions
4.2 Results
4 An Auxiliary Variable Method for MCMC algorithms 
1 Motivation
1.1 Sampling issues in high dimensional space
1.2 Auxiliary variables and data augmentation strategies
2 Proposed approach
2.1 Correlated Gaussian noise
2.2 Scale mixture of Gaussian noise
2.3 High dimensional Gaussian distribution
2.4 Sampling the auxiliary variable
3 Application to multichannel image recovery in the presence of Gaussian noise
3.1 Problem formulation
3.2 Sampling from the posterior distribution of the wavelet coefficients
3.3 Hyperparameters estimation
3.4 Experimental results
4 Application to image recovery in the presence of two-term mixed Gaussian noise
4.1 Problem formulation
4.2 Sampling from the posterior distribution of x
4.3 Experimental results
5 A variational Bayesian approach for restoring data corrupted with non-Gaussian noise 
1 Problem statement
1.1 Model
1.2 Related work
1.3 Bayesian formulation
2 Proposed approach
2.1 Construction of the majorizing approximation
2.2 Iterative algorithm
2.3 Implementation issues
3 Application to PG image restoration
3.1 Problem formulation
3.2 Numerical results
6 Conclusion 
1 Contributions
2 Perspectives
2.1 Short-term extensions
2.2 Future works and open problems
A PCGS algorithm in the case of a scale mixture of Gaussian noise
B Proof of Proposition 2.1
List of figures
List of tables
List of algorithms
Bibliography
