Persistent Scatterers Interferometry 


Residue-cut algorithm methods

This family of methodologies estimates the phase gradients of the unwrapped phase directly from the wrapped phase and then integrates these values sequentially along some two-dimensional path in the interferogram. These algorithms first estimate the ghost lines or phase discontinuities, either to exclude them from the integration path [Gol] or to use them to correct the phase differences along the detected phase residues by adding integer multiples of 2π before the integration [Cos96, Fly97]; see figure 2.2.6 for an illustration of the problem.
An interesting property of these methodologies is that the unwrapped phase is consistent (or congruent) with the wrapped phase: the two differ only by an integer number of full phase cycles (multiples of 2π). All the complications therefore lie in finding the phase discontinuities or, in other words, in properly estimating the phase gradient of the unwrapped phase.
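As an illustration, the residues that these methods must detect can be located by summing the wrapped phase differences around every elementary 2×2 loop of the interferogram; a non-zero sum marks a residue that the cuts have to connect. The following is a minimal sketch of this residue detection on a NumPy array of wrapped phase values, not the implementation of any of the algorithms cited above:

```python
import numpy as np

def wrap(phase):
    """Wrap values to the (-pi, pi] interval."""
    return np.angle(np.exp(1j * phase))

def residues(wrapped):
    """Residue charge (+1, 0 or -1) of each elementary 2x2 loop of a
    wrapped-phase image (rows x cols -> (rows-1) x (cols-1))."""
    d1 = wrap(wrapped[:-1, 1:] - wrapped[:-1, :-1])   # top edge, left -> right
    d2 = wrap(wrapped[1:, 1:] - wrapped[:-1, 1:])     # right edge, top -> bottom
    d3 = wrap(wrapped[1:, :-1] - wrapped[1:, 1:])     # bottom edge, right -> left
    d4 = wrap(wrapped[:-1, :-1] - wrapped[1:, :-1])   # left edge, bottom -> top
    # The closed-loop sum is a multiple of 2*pi; its integer value is the charge
    return np.rint((d1 + d2 + d3 + d4) / (2 * np.pi)).astype(int)
```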

Least squares estimation techniques

Like the residue-cut algorithms, the least squares ones are based on the assumption that the observed wrapped phase is correctly sampled nearly everywhere. However, instead of integrating the gradient estimates along selected paths, the unwrapped solution is computed by minimizing the total squared departure of the estimated unwrapped gradients from their wrapped counterparts. This minimization is performed globally, simultaneously over the whole image. It is often worthwhile to weight the equations with the interferometric coherence or with the variance of the estimated phase gradient [Pri96, Ghi94, Hun79].
In the least squares formulation no gradients are explicitly disregarded, as is done in the residue-cut algorithms. The least squares unwrapped estimate is therefore usually very smooth (spatially) and generally not congruent with the wrapped input phase. Thus, although these methodologies are very efficient and mathematically elegant, they often give disappointing results in practice. They do, however, always produce a complete solution for the whole image, which in some cases constitutes a good compromise.
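The unweighted version of this global minimization reduces to a discrete Poisson equation that can be solved with cosine transforms, in the spirit of [Ghi94]. The following is a minimal sketch of such an unweighted solver; coherence weighting, as suggested above, requires an iterative scheme and is omitted:

```python
import numpy as np
from scipy.fft import dctn, idctn

def wrap(p):
    """Wrap values to the (-pi, pi] interval."""
    return np.angle(np.exp(1j * p))

def unwrap_least_squares(psi):
    """Unweighted least squares phase unwrapping of the wrapped image psi,
    solved as a discrete Poisson equation with Neumann boundary conditions."""
    M, N = psi.shape
    # Wrapped estimates of the unwrapped-phase gradients (zero at the borders)
    dy = np.zeros((M, N)); dy[:-1, :] = wrap(np.diff(psi, axis=0))
    dx = np.zeros((M, N)); dx[:, :-1] = wrap(np.diff(psi, axis=1))
    # Divergence of the estimated gradient field: driving term of the equation
    rho = dy.copy(); rho[1:, :] -= dy[:-1, :]
    rho += dx;       rho[:, 1:] -= dx[:, :-1]
    # Solve in the 2-D cosine-transform domain
    r_hat = dctn(rho, norm='ortho')
    i = np.arange(M)[:, None]; j = np.arange(N)[None, :]
    denom = 2.0 * (np.cos(np.pi * i / M) + np.cos(np.pi * j / N) - 2.0)
    denom[0, 0] = 1.0      # the mean of the solution is undetermined...
    r_hat[0, 0] = 0.0      # ...so fix it to zero
    return idctn(r_hat / denom, norm='ortho')
```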

Other algorithms

The two kinds of phase unwrapping algorithms briefly introduced above are the most popular and classical ones. However, other methodologies based on different approaches exist, and this diversity illustrates the open-ended nature of the phase unwrapping problem.
For high coherence interferograms, region growing phase unwrapping techniques have proven to give very good results [Rei97, Xu96]. These algorithms identify regions of high quality and unwrap them individually; finally, these reliable, isolated areas of unwrapped phase are merged so that they share the same reference.
Another very elegant approach is based on Kalman filters [Kra96]. Relying on the statistics of the observed gradients of the wrapped phase, this filtering technique provides a reliable estimation of the local interferometric frequency, which is then integrated to predict the unwrapped phase value.
Multiresolution frequency estimators combined with least squares achieve asymptotically unbiased slope reconstruction [Dav96]. They work locally and adaptively, adjusting the achieved planimetric resolution according to the interferogram quality.

SAR interferometry processing steps

From a practical point of view, the interferometric processing is divided into several steps. Many interferometric tools are available on the market, and they share more or less the same principles. All the interferometric processing within this PhD has been done with the DIAPASON software [CNE98], developed by the French Space Agency (CNES) and maintained and upgraded by the company Altamira Information, S.L. [ALT]. Figure 2.2.7 presents the main steps of the DIAPASON software as an illustration of the practical procedure required to produce an interferogram. However, this architecture can be extrapolated to the rest of the interferometric chains.

Data extraction

The input data must always be an interferometric SAR data pair, either in level 0 (RAW) or in level 1 (SLC) format. In either case the data must be extracted from the delivered product and written into a binary file with a particular format. Furthermore, additional information necessary for the rest of the processing must be extracted from the product annotations, such as the orbit ephemeris, the image timing information and the satellite acquisition frequencies. These product annotations are usually decoded into ASCII files forming an image descriptor which defines the main characteristics of the SAR data.

SAR focusing

In the case of level 0, the RAW data must be compressed or focused by means of the SAR synthesis step in order to obtain level 1 (SLC) data. The focusing step is responsible for converting the raw data into a complex (I+Q) image with improved resolution (the nominal one for the satellite mode used). Prior to image focusing, some important processing parameters, such as the azimuth FM rate and the azimuth Doppler centroid value, must be estimated carefully for each acquisition. This can be done from the data itself or from the product annotation information; the first option achieves better accuracy.
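As an illustration of a data-based estimate, the fractional part of the azimuth Doppler centroid is often obtained from the average phase of the correlation between consecutive azimuth samples (a pulse-pair estimator). The sketch below assumes a complex data block with azimuth along the rows and the pulse repetition frequency `prf`; it is not necessarily the estimator used in DIAPASON:

```python
import numpy as np

def doppler_centroid_fractional(data, prf):
    """Pulse-pair estimate of the azimuth Doppler centroid (Hz) from complex
    SAR data (rows = azimuth samples). The result is ambiguous modulo the PRF:
    only the fractional part of the centroid is recovered here."""
    # Average correlation between consecutive azimuth samples
    acc = np.sum(data[1:, :] * np.conj(data[:-1, :]))
    return prf * np.angle(acc) / (2.0 * np.pi)
```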

Generation of a descriptor of the illuminated ground surface

It is necessary to describe the ground terrain that covers the area illuminated by the swath of the satellite. This can be done by means of a descriptor file which specifies the corners of the Area Of Interest (AOI) and a geodetic model of the earth surface. This earth model is used to remove the interferometric flat earth fringes (to obtain an InSAR product). If a Digital Elevation Model (DEM) of the AOI is available, it can also be used to go beyond the flat earth compensation and remove the fringes due to the ground topography. In that case the final product is a differential interferogram (DInSAR).
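The exact file format is specific to each processor; purely as an illustration, such a descriptor could carry the following kind of information (all field names below are hypothetical and do not reproduce the DIAPASON format):

```python
# Hypothetical ground-surface descriptor (field names are illustrative only)
aoi_descriptor = {
    "aoi_corners_deg": {            # corners of the Area Of Interest (lat, lon)
        "upper_left":  (29.20, 58.20),
        "upper_right": (29.20, 58.50),
        "lower_left":  (28.90, 58.20),
        "lower_right": (28.90, 58.50),
    },
    "earth_model": {                # geodetic model used for flat-earth removal
        "ellipsoid": "WGS84",
        "semi_major_axis_m": 6378137.0,
        "inverse_flattening": 298.257223563,
    },
    "dem_file": None,               # optional DEM of the AOI -> DInSAR if given
}
```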

Correction of the product annotation timings

The SLC is a raster file which contains the SAR reflectivity of the illuminated ground surface in slant-range coordinates (sensor geometry). The acquisition starting time and the near-range time of the first sample of the raster file are given within the product annotation parameters. These times are used to relate every SAR sample with its corresponding ground position, both when compensating the flat earth fringes in the interferogram and when projecting the SAR measurements to ground coordinates. Usually these timing annotations are not accurate enough to guarantee a precise geocoding of the products.
This correction is performed by cross-correlating the SAR amplitude of the master image with a SAR reflectivity image simulated from the DEM. The simulated reflectivity image is obtained as a function of the available topographic information (given by the DEM), the sensor and image mode characteristics, and the acquisition orientation (given by the orbit state vectors). The correlation thus detects the shifts between both geometries (a constant value in range and azimuth), and the timing refinement can then be performed.
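A minimal sketch of the idea, assuming both amplitude images have already been simulated or resampled onto a common grid, using a straightforward FFT-based cross-correlation (the operational implementation may differ):

```python
import numpy as np

def constant_shift(master_amp, simulated_amp):
    """Estimate the constant (azimuth, range) offset between the master SAR
    amplitude and the reflectivity image simulated from the DEM, from the peak
    of their FFT-based cross-correlation. Sketch only: sub-pixel refinement and
    windowing are omitted, and the sign convention depends on which image is
    taken as reference."""
    a = master_amp - master_amp.mean()
    b = simulated_amp - simulated_amp.mean()
    xcorr = np.fft.ifft2(np.fft.fft2(a) * np.conj(np.fft.fft2(b))).real
    az, rg = np.unravel_index(np.argmax(xcorr), xcorr.shape)
    # Peaks beyond half the image size correspond to negative shifts
    if az > a.shape[0] // 2:
        az -= a.shape[0]
    if rg > a.shape[1] // 2:
        rg -= a.shape[1]
    return az, rg
```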

Image coregistration

The images of an interferometric data pair are acquired from slightly different points of view and hence have different image geometries. Therefore, the SAR couple must be coregistered before any interferometric operation. To coregister a pair of SLC images, one image, called the slave, is transformed into the geometry of the other, called the master. To coregister the slave to the master it is necessary to know the range and azimuth offsets that must be applied to each pixel of the slave image in order to project it onto the master geometry. The coregistration estimates these deformation grids along the range and azimuth axes, allowing each point of the slave image to be transformed into the master acquisition geometry, as if both scenes had been imaged from the same point of view. Usually the coregistration procedure is based on three main steps:
1. Rough coregistration: it consists in estimating a constant offset between the two images in order to bring both geometries close together (with a precision of a few pixels). This shift can be evaluated by means of an incoherent amplitude correlation of undersampled multilook images, or approximated by the user, who deduces the offsets to apply from a visual analysis of the multilook products.
2. Fine coregistration: it uses incoherent amplitude correlation over small windows distributed throughout the image swath in order to obtain a precise and adaptive estimation of the range and azimuth offsets (a minimal sketch of this windowed correlation is given after this list). However, the quality of these estimates depends on the correlation rate, which is sometimes very low, for example in vegetated or inundated (flooded) areas. The estimation is therefore enhanced by comparing the correlation grids with synthetic coregistration grids obtained from a DEM of the area and the orbital files. Due to the inaccuracies of the image timing annotation (in range and azimuth), the synthetic grids are very precise in a relative sense (correction of the shifts between master and slave) but do not provide the appropriate absolute value for the coregistration offsets. These synthetic coregistration grids give the theoretical offsets between the data pair according to their orbit separation, the ground topography and the acquisition parameters. The result of the fine coregistration is finally composed of two grids containing the offsets that should be applied, in range and in azimuth, to each pixel of the considered slave image. Figure 2.2.8 shows an example of coregistration grids obtained by amplitude correlation and the final interpolated ones.
3. Application of the grids: finally, the slave image is resampled in an adaptive way to the master geometry, applying to each pixel the offset values indicated by the coregistration grids.
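As referenced in step 2, a minimal sketch of the windowed incoherent amplitude correlation that produces the raw offset grids could look as follows; window size, spacing and sub-pixel peak refinement are processor-dependent choices and are simplified here:

```python
import numpy as np

def local_offsets(master_amp, slave_amp, win=64, step=256):
    """Fine coregistration sketch: incoherent amplitude correlation over small
    windows, returning one (row, col, d_azimuth, d_range) estimate per window.
    Illustrative only; the sign convention depends on the reference image."""
    rows, cols = master_amp.shape
    grid = []
    for r0 in range(0, rows - win, step):
        for c0 in range(0, cols - win, step):
            a = master_amp[r0:r0 + win, c0:c0 + win]
            b = slave_amp[r0:r0 + win, c0:c0 + win]
            a = a - a.mean()
            b = b - b.mean()
            xc = np.fft.ifft2(np.fft.fft2(a) * np.conj(np.fft.fft2(b))).real
            daz, drg = np.unravel_index(np.argmax(xc), xc.shape)
            daz = daz - win if daz > win // 2 else daz   # negative shifts
            drg = drg - win if drg > win // 2 else drg
            grid.append((r0 + win // 2, c0 + win // 2, daz, drg))
    return grid
```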
It has been demonstrated in the literature [Gab88] that a coregistration accuracy of 1/8th of a pixel yields an almost negligible (4%) decrease in the interferometric coherence. Typically, the final coregistration accuracy achieved with DIAPASON is of the order of 0.1 pixel, corresponding in slant range to approximately 0.8 m for ENVISAT and 0.1 m for TerraSAR-X stripmap modes.
Figure 2.2.8: Example of fine coregistration grids. (a) and (b) show the grids with the fine range and azimuth offsets estimated by amplitude correlation between master and slave. Image (c) is the correlation rate index, which indicates the quality of these estimates. (d) and (e) show the final interpolated coregistration grids with the fine offsets to be applied to the slave image, obtained using the orbit state vectors, the DEM and the image annotation parameters.
Figure 2.2.9: Example of interferometric products over Bam, Iran, obtained with ENVISAT image mode data. A strong ground deformation pattern can be observed in the phase image, caused by the earthquake that took place in December 2003; this is possible because one acquisition was taken before and another after the earthquake. Each blue-to-red colour cycle (one full phase cycle) equals approximately 2.8 cm of deformation. The dark areas of the coherence image (very different signals between master and slave) in the centre of the scene highlight the destruction and damage caused by the earthquake over the urban area.


Generation of the interferogram

In this step the interferometric products are generated. They are obtained by means of the coherent difference between the two coregistered data products in slant-range geometry. It is advisable to define a multilooking factor in order to estimate the mean phase difference over a group of pixels with a good signal-to-noise ratio; this parameter determines the planimetric resolution of the final product (a minimal numerical sketch is given after the list below). The main products are:
• The interferometric amplitude: a mean amplitude derived from the amplitudes of the two input SLCs.
• The interferometric phase: an image with the phase difference between the two input channels. This phase value is wrapped between -π and π.
• The interferometric coherence: a quality index directly related to the precision of the estimated phase difference at every interferometric pixel. It ranges between 0 (no signal) and 1 (very good SNR).
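As mentioned above, the sketch below shows one way these three products can be formed from two coregistered SLCs; common band filtering and flat earth or topographic phase removal are not included here:

```python
import numpy as np

def multilook(x, ml_az, ml_rg):
    """Average samples over ml_az x ml_rg boxes (multilooking)."""
    rows = (x.shape[0] // ml_az) * ml_az
    cols = (x.shape[1] // ml_rg) * ml_rg
    return x[:rows, :cols].reshape(rows // ml_az, ml_az,
                                   cols // ml_rg, ml_rg).mean(axis=(1, 3))

def interferometric_products(master, slave, ml_az=5, ml_rg=1):
    """Multilooked interferometric amplitude, wrapped phase and coherence
    from two coregistered SLC images (sketch only)."""
    ifg = multilook(master * np.conj(slave), ml_az, ml_rg)
    p1 = multilook(np.abs(master) ** 2, ml_az, ml_rg)
    p2 = multilook(np.abs(slave) ** 2, ml_az, ml_rg)
    amplitude = 0.5 * (np.sqrt(p1) + np.sqrt(p2))    # one way to form a mean amplitude
    phase = np.angle(ifg)                            # wrapped between -pi and pi
    coherence = np.abs(ifg) / np.sqrt(p1 * p2)       # quality index in [0, 1]
    return amplitude, phase, coherence
```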
In classical InSAR applications a common band filtering is usually performed in both range and azimuth, taking into account the estimated Doppler centroid and the geometric baseline which characterize the data pair. This is done to enhance the estimated interferometric coherence.
The interferometric phase is related to the physical difference between the distances travelled by the wave from the master and slave orbital positions to the same target on the ground. In particular, the interferometric phase measures the evolution of this differential distance between adjacent samples. The flat earth phase contribution is automatically compensated within DIAPASON's interferometric procedure; the measured phase is then due to the increase of topography between adjacent pixels with respect to a geodetic earth surface model. If a DEM of the AOI is available, this topographic phase is also compensated jointly with the flat earth fringes, resulting in a Differential Interferogram (DInSAR); see the next chapter for further details about InSAR and DInSAR procedures and applications. An example of interferometric products is given in figure 2.2.9.
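For reference, in the repeat-pass case the standard relation between the interferometric phase and the differential slant-range distance ΔR is the two-way expression below; it implies that one full 2π fringe corresponds to λ/2 of line-of-sight change, i.e. roughly 2.8 cm for ENVISAT C-band data, consistent with the colour cycles quoted in figure 2.2.9:

```latex
\Delta\varphi \;=\; \frac{4\pi}{\lambda}\,\Delta R
\qquad\Longrightarrow\qquad
\Delta R_{2\pi} \;=\; \frac{\lambda}{2}
\;\approx\; \frac{5.6\,\mathrm{cm}}{2} \;=\; 2.8\,\mathrm{cm}
\quad\text{(ENVISAT, C band)}
```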

Compensation of the orbital state vectors inaccuracies

Possible inaccuracies in the orbital state vectors used are translated into linear phase trends in the interferometric phase, typically in the range direction [Koh03]. Interferometric software packages usually estimate such phase slopes or gradients in the range and azimuth directions directly from the interferometric phase; a synthetic phase ramp is then generated to compensate these trends in the interferometric phase. An area of good coherence is usually required for a successful estimation of these phase trends; see figure 2.2.10 for an illustrated example.
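One common way to estimate such a residual ramp is from the spectral peak of a coherent patch of the complex interferogram; the sketch below illustrates that idea and is not necessarily the method used by any particular software:

```python
import numpy as np

def estimate_phase_ramp(ifg):
    """Estimate a residual linear fringe frequency (cycles per pixel in
    azimuth and range) from a coherent patch of the complex interferogram,
    via the peak of its windowed 2-D spectrum."""
    w = np.hanning(ifg.shape[0])[:, None] * np.hanning(ifg.shape[1])[None, :]
    spec = np.abs(np.fft.fft2(ifg * w))
    k_az, k_rg = np.unravel_index(np.argmax(spec), spec.shape)
    f_az = np.fft.fftfreq(ifg.shape[0])[k_az]   # cycles per azimuth pixel
    f_rg = np.fft.fftfreq(ifg.shape[1])[k_rg]   # cycles per range pixel
    return f_az, f_rg

def remove_phase_ramp(ifg, f_az, f_rg):
    """Demodulate the interferogram with the synthetic phase ramp."""
    az = np.arange(ifg.shape[0])[:, None]
    rg = np.arange(ifg.shape[1])[None, :]
    ramp = np.exp(-1j * 2 * np.pi * (f_az * az + f_rg * rg))
    return ifg * ramp
```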

Table of contents:

1 Overview 
1.1 Introduction
1.2 Main objective
1.3 Outline of this thesis
2 Background 
2.1 Synthetic Aperture Radar
2.1.1 Acquisition system and geometry
2.1.1.1 Image Doppler centroid
2.1.1.2 SAR image distortions
2.1.2 The phase of the SAR signal
2.1.2.1 The travel phase
2.1.2.2 The reflection phase
2.1.2.3 The construction phase
2.2 SAR interferometry
2.2.1 Phase stability conditions
2.2.2 Estimation of the interferometric phase quality
2.2.3 Sources of decorrelation
2.2.3.1 Geometric decorrelation
2.2.4 Phase unwrapping
2.2.4.1 Residue-cut algorithm methods
2.2.4.2 Least squares estimation techniques
2.2.4.3 Other algorithms
2.2.5 SAR interferometry processing steps
2.2.5.1 Data extraction
2.2.5.2 SAR focusing
2.2.5.3 Generation of a descriptor of the illuminated ground surface
2.2.5.4 Correction of the product annotation timings
2.2.5.5 Image coregistration
2.2.5.6 Generation of the interferogram
2.2.5.7 Compensation of the orbital state vectors inaccuracies
2.2.5.8 Phase unwrapping
2.2.5.9 Geocoding
2.3 InSAR main applications
2.3.1 Digital Elevation Model generation
2.3.1.1 Geometric interpretation
2.3.2 Estimation of ground deformation maps
2.3.2.1 Complete interferometric phase model
2.3.2.2 Detection of movement
3 Persistent Scatterers Interferometry 
3.1 Review of PSI technology
3.2 What is a PS
3.2.1 Why not all targets exhibit a PS behavior?
3.2.1.1 Type of reflection
3.2.1.2 Ground object size
3.2.1.3 Wavelength
3.2.1.4 Radar resolution
3.2.1.5 Functional models
3.2.2 Estimating PS like pixels on radar images
3.2.2.1 SAR amplitude stability
3.2.2.2 Stacking of the interferometric coherence
3.2.2.3 Other methodologies
3.3 Stable Point Network technique
3.3.1 Image extraction procedure
3.3.2 Image selection procedure
3.3.3 Image coregistration procedure
3.3.4 Initial mask of PS procedure
3.3.5 Stable Point Network analysis procedure
3.3.5.1 Relationship establishment
3.3.5.2 Estimation of the model parameters at network arcs
3.3.5.3 Network integration
3.3.5.4 Estimation of the Atmospheric Phase Screen (APS)
3.3.5.5 Final spatial high resolution estimation of the SPN model parameters
3.3.5.6 Estimation of the deformation time series
3.3.6 Final selection of points of measurement
3.3.6.1 Methodology for selecting good SPN points of measurement
3.3.7 PS geocoding
3.3.7.1 Precise geocoding procedure
3.3.7.2 Example of application
4 Improvements of the SPN technique
4.1 Image coregistration quality control
4.1.1 Detection of super PS
4.1.1.1 The impulse response function (IRF)
4.1.1.2 Identification of super PSs
4.1.2 Evaluation of the coregistration accuracy
4.1.3 Example of application of the methodology
4.2 PS-like pixel selection enhancements
4.2.1 Enhancements of the initial estimation of PS
4.2.1.1 Relative calibration of the images
4.2.1.2 Example of application
4.2.2 Final selection of PS-like pixels enhancements
4.2.2.1 Origin of ambiguous SPN measurements
4.2.2.2 Impact of SAR artifacts on SPN measurements
4.3 SPN linear deformation improvements
4.3.1 Robustness of the linear deformation pattern in time
4.3.1.1 Variogram definition
4.3.1.2 Estimation of the variogram per SLCs
4.3.1.3 Application
4.4 SPN non-linear
4.4.1 Characterization of the SPN linear model fitting procedure by means of simulations
4.4.1.1 Performance of the estimator in function of the noise
4.4.2 SPN estimation system for non-linear deformations
4.4.2.1 Example of monitoring of non-linear deformations by using a priori information: Katrina hurricane test case
4.4.2.2 Guidelines for detecting possible non-linear deformation areas in SPN
4.4.2.3 Advanced SPN for the automatic monitoring of non-linear deformations
4.4.2.4 Application on Paris test site
5 Summary and Conclusions
