Optimality oriented approaches for the unconditional model
In this section, we present several classical methods of detection theory applicable to unconditional models. First, we recall the minimum probability of error approach and the Bayes risk, and the link between them. Second, we bring forward uniformly most powerful tests through two main theorems, the Neyman-Pearson theorem and the Karlin-Rubin theorem; these two theorems will be the subject of a contribution later on, in Chapter 6, by means of a generalisation to the asymptotic case. Finally, we introduce the well-known generalised likelihood ratio test.
The minimum probability of error test
The minimum probability of error test is based on the assumption that the probability of occurrence of each hypothesis is known, with $P(\mathcal{H}_0) + P(\mathcal{H}_1) = 1$. Depending on the application at hand, this assumption can be more or less reasonable. This prior belief about the hypotheses participates in the derivation of the threshold with which the likelihood ratio is compared. Like uniformly most powerful tests, generalised likelihood ratio tests, etc., this approach belongs to the category of tests that rely on likelihood ratios to design detectors. Let us consider the binary hypothesis problem:
$$
\begin{cases}
\mathcal{H}_0 : Y = \theta_0 + X, & \text{with } Y \sim f_{Y;\theta_0} \\
\mathcal{H}_1 : Y = \theta_1 + X, & \text{with } Y \sim f_{Y;\theta_1}
\end{cases}
\tag{1.9}
$$
with $\theta_0$ and $\theta_1$ the signal under the null hypothesis and the alternative hypothesis respectively. The noise is modeled by $X$, the observation by $Y$, and $f_{Y;\theta_i}$ is the probability density function of $Y$ parametrized by $\theta_i$. The probability of error is defined as:
$$
P_e = \Pr[\text{decide } \mathcal{H}_0 \text{ and } \mathcal{H}_1 \text{ true}] + \Pr[\text{decide } \mathcal{H}_1 \text{ and } \mathcal{H}_0 \text{ true}]
= P(\mathcal{H}_0 \mid \mathcal{H}_1)P(\mathcal{H}_1) + P(\mathcal{H}_1 \mid \mathcal{H}_0)P(\mathcal{H}_0).
\tag{1.10}
$$
The probability $P(\mathcal{H}_i \mid \mathcal{H}_j)$ indicates the probability of choosing $\mathcal{H}_i$ when in fact $\mathcal{H}_j$ is true, and the probability $P(\mathcal{H}_i)$ is the probability of occurrence of the hypothesis $\mathcal{H}_i$. The purpose of this approach is to deliver a detector that is optimal according to the minimum probability of error criterion. Therefore:
$$
P_e = P(\mathcal{H}_1)\int_{\bar{R}_1} P(y \mid \mathcal{H}_1)\,dy + P(\mathcal{H}_0)\int_{R_1} P(y \mid \mathcal{H}_0)\,dy
\tag{1.11}
$$
with $R_1 = \{y : \text{the decision is } \mathcal{H}_1\}$ the critical region and $\bar{R}_1 = \{y : \text{the decision is } \mathcal{H}_0\}$ its complement. Since the critical region $R_1$ and its complement $\bar{R}_1$ partition the observation space:
$$
\int_{\bar{R}_1} P(y \mid \mathcal{H}_i)\,dy = 1 - \int_{R_1} P(y \mid \mathcal{H}_i)\,dy.
\tag{1.12}
$$
Substituting (1.12) into (1.11) shows that $P_e$ is minimised by assigning to $R_1$ exactly those observations for which $P(y \mid \mathcal{H}_1)P(\mathcal{H}_1) > P(y \mid \mathcal{H}_0)P(\mathcal{H}_0)$, that is, by comparing the likelihood ratio with the threshold $P(\mathcal{H}_0)/P(\mathcal{H}_1)$.
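To make the decision rule concrete, the following Python sketch implements this likelihood-ratio comparison for a toy Gaussian instance of Problem (1.9); the function name, the signal values and the i.i.d. Gaussian noise model are illustrative assumptions, not taken from the manuscript.

```python
# Hypothetical sketch of a minimum-probability-of-error detector for
# H0: Y = theta0 + X vs H1: Y = theta1 + X with X ~ N(0, sigma^2 I).
# All names and parameter values are illustrative, not from the thesis.
import numpy as np
from scipy.stats import norm

def mpe_decision(y, theta0, theta1, sigma, p_h0):
    """Decide H1 iff the likelihood ratio exceeds P(H0)/P(H1)."""
    # Log-likelihood of y under each hypothesis (i.i.d. Gaussian noise).
    ll0 = norm.logpdf(y, loc=theta0, scale=sigma).sum()
    ll1 = norm.logpdf(y, loc=theta1, scale=sigma).sum()
    # Compare the log-likelihood ratio with the log of the prior ratio.
    return (ll1 - ll0) > np.log(p_h0 / (1.0 - p_h0))

# Example: known signals theta0 = 0 and theta1 = 1 on n = 10 samples.
rng = np.random.default_rng(0)
y = 1.0 + rng.normal(0.0, 1.0, size=10)   # data drawn under H1
print(mpe_decision(y, 0.0, 1.0, 1.0, p_h0=0.5))
```

Note that with equal priors the threshold $P(\mathcal{H}_0)/P(\mathcal{H}_1)$ reduces to $1$ (zero in log scale), and the rule becomes a maximum likelihood decision.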
The Neyman-Pearson lemma
The well-known Neyman-Pearson lemma first appeared in 1933 in a paper of Jerzy Neyman and Egon Sharpe Pearson [Neyman and Pearson, 1933]. It is used to find an optimal test according to the criterion of the maximum probability of detection: the purpose is to obtain the test with the greatest power for a fixed size. However, this lemma is dedicated to detection problems where the considered hypotheses are simple, which means that each hypothesis corresponds to one distribution only. The problem also needs to be completely known: every parameter must be specified by a single value, i.e. no unknown parameters can be present in the distributions. The Neyman-Pearson lemma basically says that likelihood ratio tests are optimal for testing simple hypothesis problems, according to a specific criterion, namely the best probability of detection for a fixed probability of false alarm.
We recall that only non-randomised tests are considered in this manuscript.

Theorem 1.2.1. Let $Y$ be a random vector of probability density function $f_Y(y)$ with $\theta \in \Theta$. We consider the following detection problem:
$$
\begin{cases}
\mathcal{H}_0 : \theta = \theta_0 \\
\mathcal{H}_1 : \theta = \theta_1
\end{cases}
\tag{1.20}
$$
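As an illustration (ours, not part of the manuscript), here is a minimal sketch of the resulting size-$\alpha$ test when (1.20) is a Gaussian mean shift with $\theta_1 > \theta_0$ and known variance: the likelihood ratio is then monotone in the sample mean, so the threshold can be set directly from the null distribution of that statistic. All names and values are illustrative assumptions.

```python
# Minimal sketch of the Neyman-Pearson test for the simple problem
# H0: theta = theta0 vs H1: theta = theta1 (theta1 > theta0), with
# Y_i ~ N(theta, sigma^2) i.i.d. The likelihood ratio is monotone in
# T(y) = mean(y), so a size-alpha threshold comes from the null law of T.
import numpy as np
from scipy.stats import norm

def np_test(y, theta0, sigma, alpha):
    """Size-alpha test: reject H0 when mean(y) exceeds the threshold."""
    n = len(y)
    t = np.mean(y)
    # Under H0, mean(y) ~ N(theta0, sigma^2/n); take the (1-alpha) quantile.
    lam = norm.ppf(1.0 - alpha, loc=theta0, scale=sigma / np.sqrt(n))
    return t > lam

rng = np.random.default_rng(1)
y = rng.normal(0.5, 1.0, size=20)        # data drawn with theta = 0.5
print(np_test(y, theta0=0.0, sigma=1.0, alpha=0.05))
```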
The Rao test
The GLRT provides a test statistic that is undeniably simple and has interesting asymptotic optimality properties [Kay, 1998] according to the Wilks theorem. However, the Rao test (1948), also called the score test, can be a good alternative to the GLRT, especially when the problem requires the calculation of maximum likelihood estimates under both hypotheses, as in the GLRT example presented in Eq. (1.26), which can become constraining. In contrast, the Rao test statistic needs a maximum likelihood estimate only under the null hypothesis. This means that for Problem (1.25), the Rao test does not even need a maximum likelihood estimate for the test statistic, because the signal is known under $\mathcal{H}_0$. Let us take the same binary decision problem as an example. Let $Y$ be a random vector of pdf $f_Y(y;\theta)$, where $\theta \in \mathbb{R}$ is unknown under the alternative hypothesis. We consider the same problem as in (1.25).
The Rao test statistic for Problem (1.25) is defined as [Kay, 2009]:
$$
T_R(y) = \left.\frac{\partial \ln f_Y(y;\theta)}{\partial \theta}\right|_{\theta=\theta_0}^{T} I^{-1}(\theta_0)\, \left.\frac{\partial \ln f_Y(y;\theta)}{\partial \theta}\right|_{\theta=\theta_0}
\tag{1.27}
$$
The matrix $I(\theta_0)$ represents the Fisher information and $I^{-1}(\theta_0)$ its inverse. As we can see in Eq. (1.27), no MLE is derived since $\theta$ is equal to $\theta_0$ under $\mathcal{H}_0$ and $\theta_0$ is known. If $\theta_0$ were unknown, a maximum likelihood estimate would need to be derived and $\theta_0$ would be replaced by $\hat{\theta}_0$ in (1.27). The test statistic $T_R(y)$ is then compared to a threshold, just like the GLRT, to take a decision about which hypothesis is true, $\mathcal{H}_0$ or $\mathcal{H}_1$. The threshold $\lambda$ can be calculated so as to ensure a constraint on the false alarm probability:
$$
T_R(y) \underset{\mathcal{H}_0}{\overset{\mathcal{H}_1}{\gtrless}} \lambda
\tag{1.28}
$$
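As a hedged illustration, the following sketch evaluates (1.27) in the scalar case $Y_i \sim \mathcal{N}(\theta, \sigma^2)$ i.i.d. with $\mathcal{H}_0 : \theta = \theta_0$, where the score and Fisher information have the closed forms $\sum_i (y_i - \theta_0)/\sigma^2$ and $n/\sigma^2$; the asymptotic $\chi^2_1$ threshold used for the false-alarm constraint is a standard choice, not necessarily the one adopted in the thesis.

```python
# Sketch of the Rao (score) statistic (1.27) for the scalar case
# Y_i ~ N(theta, sigma^2) i.i.d., testing theta = theta0. Only quantities
# under H0 are needed: the score and the Fisher information at theta0.
# Function names and the chi-square threshold choice are illustrative.
import numpy as np
from scipy.stats import chi2

def rao_statistic(y, theta0, sigma):
    n = len(y)
    score = np.sum(y - theta0) / sigma**2      # d ln f / d theta at theta0
    fisher = n / sigma**2                      # I(theta0) for this model
    return score**2 / fisher                   # T_R(y) = score * I^{-1} * score

rng = np.random.default_rng(2)
y = rng.normal(0.4, 1.0, size=50)
t_r = rao_statistic(y, theta0=0.0, sigma=1.0)
# Asymptotically T_R ~ chi2(1) under H0, so a size-alpha threshold is:
print(t_r > chi2.ppf(0.95, df=1))
```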
The Rao test is known to be equivalent to the GLRT in asymptotic scenarios [Kay, 1998]. Consequently, their asymptotic performances are also equivalent. It has nevertheless been claimed by [Chandra and Sankhya, 1985] and [Chandra and Joshi, 1983] that Rao's test is locally more powerful than the GLRT and the Wald test, which comes next.
Invariant hypothesis-testing problems
In this subsection, we shall discuss two fundamental concepts of invariance: an invariant family of distributions and an invariant hypothesis-testing problem. The first is a necessary condition for the second, as we will see below. Hence, the following definition will be useful for Definition 2.1.3 about invariant problems; it can be found in [Borovkov, 1998, Definition 1, p. 281].
Definition 2.1.2. Let $Y \sim \{P_\theta\}$ and let $G$ be a group of measurable transformations $g$ of the space $\mathbb{R}^n$ into itself. A family $\{P_\theta\}$ is invariant under $G$ if for each $g \in G$ and $\theta \in \Theta$ there exists an element $\theta_g \in \Theta$ such that:
$$
P_{\theta_g}(Y \in A) = P_\theta(gY \in A), \quad \text{for any Borel set } A \text{ of } \mathbb{R}^n.
$$
The assumption about the measurability of $g \in G$ is made to ensure that whenever $Y$ is a random variable, $gY$ is also a random variable. The transformations $\bar{g}$ of the parameter space are defined by the equality $\theta_g = \bar{g}\theta$ and constitute a group $\bar{G}$ if there is a one-to-one correspondence between the parameter set $\Theta$ and the family of distributions $\{P_\theta\}_{\theta \in \Theta}$, i.e. $P_{\theta_0} \neq P_{\theta_1}$ if $\theta_0 \neq \theta_1$. Thus, $\bar{G}$ is the corresponding group of transformations for the parameter space and is homomorphic to $G$.

Example 2.1.2. Let us consider the same decision problem (2.1) of Example 2.1.1. The observation $Y$ follows the distribution $\mathcal{N}(k\theta_H, k^2 I)$. The parameter corresponds for this example to $\theta = k\theta_H$, such that $Y = \theta + X$. Therefore, the family of distributions that needs to be invariant is $\{P_\theta\} = \mathcal{N}(\theta, k^2 I)$. Since $gY$ follows the distribution $\mathcal{N}(ck\theta_H, c^2 k^2 I)$, i.e. $\mathcal{N}(c\theta, c^2 k^2 I)$, the distribution of $gY$ stays within the family. Consequently, the family $\{P_\theta\}$ is invariant under $G$, and the transformation of the parameters $(k\theta_H, k^2 I)$ is $\bar{g}(k\theta_H, k^2 I) = (ck\theta_H, c^2 k^2 I)$. It is worth noticing that the inverse element of $G$ is $g_{\mathrm{inv}}(y) = c^{-1} Q_A^T y$ and the identity element is $g_{\mathrm{id}}(y) = Q_A y$, with $Q_A$ of rotation angle $2k\pi$ and $k \in \mathbb{Z}$.
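The following Monte Carlo sketch (ours, not from the thesis) illustrates Definition 2.1.2 on a scalar analogue of Example 2.1.2: if $Y \sim \mathcal{N}(\theta, k^2)$ and $g(y) = cy$, the transformed observation stays in the family, with transformed parameter $\bar{g}\theta = c\theta$.

```python
# Illustrative numerical check (not from the thesis) of Definition 2.1.2
# for the scalar family {P_theta} = N(theta, k^2) under the scaling map
# g(y) = c*y: if Y ~ N(theta, k^2) then gY ~ N(c*theta, c^2 k^2), i.e.
# gY stays in the family with transformed parameter g_bar(theta) = c*theta.
import numpy as np

rng = np.random.default_rng(3)
theta, k, c, n = 2.0, 1.5, 3.0, 200_000
y = rng.normal(theta, k, size=n)     # samples of Y ~ N(theta, k^2)
gy = c * y                           # transformed samples gY

# Empirical moments of gY should match the claimed member of the family.
print(gy.mean(), c * theta)          # both close to c*theta
print(gy.std(), abs(c) * k)          # both close to |c|*k
```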
Definition 2.1.3. We say that the problem of testing the hypothesis $\mathcal{H}_1 : \theta \in \Theta_1$ against $\mathcal{H}_0 : \theta \in \Theta_0$, where $\Theta_0 \cup \Theta_1 = \Theta$, is invariant if the following two conditions are satisfied:

1. The family $\{P_\theta\}$ is invariant under $G$, the group of transformations.

2. The sets $\Theta_1$ and $\Theta_0$ are invariant under $\bar{g} \in \bar{G}$, that is $\bar{g}\Theta_i = \Theta_i$ for $i = 0, 1$.
Example 2.1.3. We consider the same problem (2.1) from Example 2.1.1. On the basis of Example 2.1.2, the first condition of Definition 2.1.3 is verified. With regard to the invariance of the parameter subsets $\Theta_0 = \mathbb{R}_-$ and $\Theta_1 = \mathbb{R}_+^*$, we know that the expression of the hypotheses for an observation $gY$ is:
$$
\begin{cases}
\mathcal{H}_0 : ck \leqslant 0 \\
\mathcal{H}_1 : ck > 0
\end{cases}
\tag{2.3}
$$
Since the scale factor $c$ is positive, $ck$ has the same sign as $k$, so $\bar{g}\Theta_i = \Theta_i$ for $i = 0, 1$ and the second condition is satisfied as well.
Limitations of the standard invariance
Throughout this chapter, we have presented different aspects of invariance. It is thereby clear that the standard concept of invariance, as it is found in the literature, holds some limitations. These limitations are mainly related to Definitions 2.1.2 and 2.1.3. In Definition 2.1.2, the condition for a family to be invariant imposes that all signals have the same distribution, up to some parameter: admittedly with a different value of the parameter, but the totality of the considered signals still needs to belong to the same family of distributions. In a context where signals are assumed to have unknown distributions with no guarantee of similarity, this assumption hinders the use of invariance and its interesting aspects of reduction and optimality. The second problematic aspect of the standard definition of an invariant hypothesis-testing problem is strongly linked to the parameter set. Indeed, the hypotheses $\mathcal{H}_0$ and $\mathcal{H}_1$ ordinarily induce corresponding parameter sets, respectively $\Theta_0$ and $\Theta_1$, which need to be invariant under the action of the group of transformations $G$. In many scenarios this assumption is very reasonable, but in others $\mathcal{H}_0$ and $\mathcal{H}_1$ can be based on something other than subsets of $\mathbb{R}^n$, for instance probabilistic events, as we will see in Chapters 3 and 5.
Table of contents:
Acknowledgements
Abstract
Résumé long
Contents
Acronyms
Notations
Introduction
1 The unconditional approaches in detection theory
1.1 Hypothesis testing problems
1.1.1 Specificities of the hypotheses
1.1.2 Statistical tests
1.1.3 Optimality and robustness
1.2 Optimality oriented approaches for the unconditional model
1.2.1 The minimum probability of error test
1.2.2 The Bayes risk
1.2.3 Uniformly most powerful tests
1.2.4 The holy trinity
1.2.5 The Bayesian approach
1.2.6 Wald’s UBCP tests
2 Invariance in detection theory
2.1 Invariant detection problems
2.1.1 The group of transformation
2.1.2 Invariant hypothesis-testing problems
2.2 Maximal invariant and orbits
2.3 Invariant tests
2.4 Limitations of the standard invariance
2.5 A more general formulation of invariance
3 A conditional approach: Random Distortion Testing framework
3.1 The Deterministic Distortion Testing problem
3.1.1 Problem statement
3.1.2 Main theoretical results
3.2 The Random Distortion Testing problem
3.2.1 Problem statement
3.2.2 Main results
3.3 Conclusions
4 Distributed Random Distortion Testing
4.1 Problem statement
4.2 Reformulation of the hypothesis testing problem
4.3 Multi-sensors configurations
4.3.1 The centralised configuration
4.3.2 The distributed configuration
4.4 RDT for multi-sensors configurations
4.4.1 Theoretical material
4.4.2 Optimality for centralised configuration
4.4.3 Optimality for the distributed configuration
4.5 Numerical results
4.5.1 Lower bounds on detection probabilities
4.5.2 The constraint of level for the false alarm probability
4.6 Conclusion
5 General Random Distortion Testing
5.1 Problem statement
5.2 Preliminary definitions
5.2.1 Power function and size
5.2.2 Conditional power function
5.3 Theoretical results
5.3.1 Preliminary results
5.3.2 An optimal test
5.4 Conclusion
6 An asymptotic approach: Asymptotic Karlin-Rubin’s Theorem
6.1 An asymptotic formulation of the Neyman-Pearson theorem
6.2 An asymptotic formulation of the Karlin-Rubin theorem
6.3 Testing the presence of a deterministic signal in a subspace cone
6.3.1 Problem statement
6.3.2 Invariance under group action
6.3.3 Uniformly Most Powerful Invariant test
6.3.4 Asymptotically Uniformly Most Powerful Invariant test
6.3.5 Unknown SNR
6.3.6 Numerical results
6.3.7 Asymptotic optimality of the GLRT
6.4 Conclusions
Conclusions and perspectives
6.5 Conclusions about our contributions
6.6 Perspectives
A Proof of equivalence between the initial problems (4.1) and (4.8), and the RDT formalised problems (4.5) and (4.11).
A.1 Proof of Proposition 4.2.1
A.2 Proof of Proposition 4.3.1
B Proof of the Asymptotic Neyman-Pearson Theorem 6.1.1
C Proof of the Asymptotic Karlin-Rubin Theorem 6.2.1
D Proof of Theorem 3.1.2
E Proof of Lemma 3.2.1
F Proof of Theorem 3.2.2
G Proof of Lemma 5.3.1
H Proof of Theorem 5.3.4
Bibliography