kernel regression with shape constraints for vehicle trajectory reconstruction 


History of reproducing kernels: A realm (still) divided

From a mathematical standpoint, the century-long history of kernels can be summarized as a succession of theoretical breakthroughs and periods of stagnation due to the lack of novel insights. The theory itself was reborn multiple times, around 1907 and 1921. As Aronszajn (1950) recounts it, two tendencies prevailed: either putting the function space forward and considering the kernel as a tool to be computed (Zaremba, Szegö, Bergman), or studying the kernel for its properties and then deriving the function space as a by-product (Mercer, Moore). The unifying effort of mathematicians (Aronszajn, 1943, 1950; Schwartz, 1964) more acquainted with modern Bourbaki structures led to the elegant theory as it is still presented today. Meanwhile, Emanuel Parzen, in a Promethean gesture, brought the torch of kernels to statisticians and passed it on at Stanford, notably to Grace Wahba and Thomas Kailath, ultimately extending the theoretical results to engineering studies.
So far, we have not mentioned machine learning. Are least-squares problems the first example of it? It is obviously hard to draw a line between communities, but machine learning has the peculiarity, with respect to the aforementioned fields, of having at its core the credo of performance. Since we are sketching the history, let us mention some of the reasons which could be invoked to explain the emergence of machine learning in the early 90s: the economic boom of the private sector after the Cold War, the access to personal computers, the emigration of scientists from Eastern Europe... In any case, Vladimir Vapnik, one of the popes of statistical learning, joined the Bell Labs in the early 90s, and Support Vector Machines (SVMs) peaked in the early 2000s. But their applicative power hit a glass ceiling: the cubic complexity of SVM solvers allowed them to handle millions of points, but not the billions that industry now required. Dwelling in the shadow of neural networks, kernel specialists now look either to bypass the complexity issue through kernel approximations or to reassert the importance of kernels as a theoretical limit of neural networks.

theoretical preliminaries and problem formulation

In this section, we present the tools from the theory of kernel methods that we shall apply. We then introduce our strengthened problem with second-order cone constraints.
Notations: We use the shorthand [[1, P]] = {1, . . . , P}. R^N_+ is the subset of R^N of elements with nonnegative components. B_N denotes the closed unit ball of R^N for the Euclidean inner product, and 1_N the vector of all ones. For a matrix A ∈ R^{N,N}, we denote by ∥A∥ its operator norm. Id_N is the identity matrix of R^{N,N}. We chose not to make the output space of the function spaces explicit, to avoid cumbersome notations, as it can always be deduced from the context. The space of functions with continuous derivatives up to order s is denoted by C^s(0, T). For a function K(·, ·) defined over a subset of R × R, ∂_1 K(·, ·) denotes the partial derivative w.r.t. the first variable. For a Hilbert space (H_K, ⟨·, ·⟩_K), B_K is the closed unit ball of H_K, with ∥ · ∥_K denoting the corresponding norm. Given a subspace V ⊂ H_K, we denote by V^⊥ its orthogonal complement w.r.t. ⟨·, ·⟩_K. A μ-strongly convex function L : H_K → R is a function satisfying, for all f_1, f_2 ∈ H_K and α ∈ [0, 1],
L(α f_1 + (1 − α) f_2) + (α(1 − α)μ/2) ∥f_1 − f_2∥²_K ⩽ α L(f_1) + (1 − α) L(f_2).
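To make the μ-strong convexity condition concrete, here is a minimal numerical sketch. It checks the inequality for a regularized least-squares objective written in the coefficient parametrization f = Σ_j c_j k(·, x_j), for which the ridge term λ∥f∥²_K yields μ = 2λ. The Gaussian kernel, sample size, and ridge weight are illustrative choices, not taken from the thesis.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data and Gaussian kernel matrix (illustrative choices).
X = rng.uniform(0.0, 1.0, size=20)
y = np.sin(2 * np.pi * X) + 0.1 * rng.standard_normal(20)
K = np.exp(-0.5 * (X[:, None] - X[None, :]) ** 2 / 0.1 ** 2)

lam = 1e-2          # ridge weight lambda
mu = 2.0 * lam      # strong convexity modulus w.r.t. the RKHS norm

def L(c):
    """Regularized least-squares objective for f = sum_j c_j k(., x_j)."""
    return np.sum((K @ c - y) ** 2) + lam * c @ K @ c

def rkhs_sq_norm(c):
    """Squared RKHS norm ||f||_K^2 = c^T K c in the coefficient parametrization."""
    return c @ K @ c

# Check the mu-strong convexity inequality on random coefficient pairs.
for _ in range(1000):
    c1, c2 = rng.standard_normal(20), rng.standard_normal(20)
    a = rng.uniform()
    lhs = L(a * c1 + (1 - a) * c2) + a * (1 - a) * (mu / 2) * rkhs_sq_norm(c1 - c2)
    rhs = a * L(c1) + (1 - a) * L(c2)
    assert lhs <= rhs + 1e-9
print("mu-strong convexity inequality verified on random samples")
```

The inequality holds with equality for the pure regularizer λ∥f∥²_K (parallelogram identity), and the convex data-fit term can only tilt it further in the right direction.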


revisiting lq control through kernel methods

This section presents a step-by-step approach to identify the matrix-valued kernel K of the Hilbert space S of solutions of a linear control system, equipped with the scalar product (4.3). This is done independently of the state constraints, whose only effect is to select a closed convex subset of the space of trajectories. In Section 4.3.1, we consider the case Q ≡ 0, which enjoys explicit formulas. We also state a representer theorem (Theorem 4.1) suited for problems of the form (Pδ,fin). This allows us to revisit, through the kernel framework, classical notions such as the solution of the unconstrained LQR problem or the definition of the controllability Gramian. In Section 4.3.2, we consider the case Q ̸≡ 0 and relate our solution to an adjoint equation over matrices. Furthermore, the identification of kernels developed in Section 4.3 is by no means restricted to finite T, so the kernel formalism can also tackle infinite-horizon problems.
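As a rough illustration of the explicit formulas available in the Q ≡ 0 case, the sketch below evaluates a candidate matrix-valued kernel of the form K(s, t) = ∫₀^{min(s,t)} e^{(s−τ)A} B Bᵀ e^{(t−τ)Aᵀ} dτ for trajectories issued from the origin, i.e. a time-shifted controllability Gramian. The double-integrator system, the quadrature scheme, and this exact normalization are assumptions made for illustration; the precise kernel identified in Section 4.3.1 may differ (e.g. in how the initial condition and R enter).

```python
import numpy as np
from scipy.linalg import expm

# Illustrative linear system: double integrator x'' = u (not taken from the thesis).
A = np.array([[0.0, 1.0], [0.0, 0.0]])
B = np.array([[0.0], [1.0]])

def lq_kernel_q0(s, t, n_quad=500):
    """Candidate Q = 0 kernel K(s, t) = int_0^{min(s,t)} e^{(s-tau)A} B B^T e^{(t-tau)A^T} dtau,
    a time-shifted controllability Gramian, evaluated by midpoint quadrature."""
    upper = min(s, t)
    dtau = upper / n_quad
    taus = (np.arange(n_quad) + 0.5) * dtau  # midpoints of the quadrature cells
    out = np.zeros((A.shape[0], A.shape[0]))
    for tau in taus:
        out += expm((s - tau) * A) @ B @ B.T @ expm((t - tau) * A).T * dtau
    return out

# Hermitian symmetry of a matrix-valued kernel: K(s, t) = K(t, s)^T.
print(np.allclose(lq_kernel_q0(0.7, 1.0), lq_kernel_q0(1.0, 0.7).T, atol=1e-8))

# On the diagonal, K(t, t) equals the finite-horizon controllability Gramian,
# which for the double integrator is W(t) = [[t^3/3, t^2/2], [t^2/2, t]].
t = 1.0
W_exact = np.array([[t**3 / 3, t**2 / 2], [t**2 / 2, t]])
print(np.allclose(lq_kernel_q0(t, t), W_exact, atol=1e-4))
```

Invertibility of this Gramian at the horizon T corresponds to controllability of the pair (A, B) on [0, T], which is one way the kernel viewpoint recovers the classical notion mentioned above.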

Table of contents:

1 introduction 
1.1 Background on estimation and control with constraints
1.1.1 Considered framework for optimizing models with constraints
1.1.2 Shape constraints in nonparametric regression
1.1.3 State constraints in LQ optimal control
1.2 Reproducing kernels in a nutshell
1.2.1 History of reproducing kernels: A realm (still) divided
1.2.2 Positive definite kernels: A nonlinear embedding with inner products
1.2.3 RKHSs: A Hilbertian topology stronger than pointwise convergence
1.2.4 Vector-valued RKHSs and reproducing property for derivatives
1.2.5 Two computational tools: the representer theorems and the kernel trick
1.3 Contributions of the thesis
1.3.1 Structure of the thesis
1.3.2 Tightening infinitely many shape constraints into finitely many
1.3.3 A new kernel for (state-constrained) LQ optimal control
1.3.4 Differential inclusions: Lipschitz minimal time and kernel-based graph identification
2 kernel regression with shape constraints for vehicle trajectory reconstruction 
2.1 Introduction
2.2 Problem Formulation
2.3 Optimization
2.4 Numerical Experiments
2.5 Conclusions and perspectives
3 real-valued affine shape constraints in rkhss 
3.1 Introduction
3.2 Problem formulation
3.3 Results
3.4 Numerical experiments
3.5 Appendix of the results of Chapter 3
3.5.1 Proof of Theorem 3.1
3.5.2 Shape-constrained kernel ridge regression
3.5.3 Examples of handled shape constraints
4 state constraints in lq optimal control through the lq kernel 
4.1 Introduction
4.2 Theoretical preliminaries and problem formulation
4.3 Revisiting LQ control through kernel methods
4.3.1 Case of vanishing Q
4.3.2 Case of nonvanishing Q
4.4 Theoretical approximation guarantees
4.5 Finite-dimensional implementation and numerical example
5 the lq reproducing kernel and the riccati equation 
5.1 Introduction
5.2 Vector spaces of linearly controlled trajectories as vRKHSs
5.3 Proof of Theorem 5.2
6 lipschitz minimum time for differential inclusions with state constraints 
6.1 Introduction
6.2 Main results
6.3 Discussion on the main results
6.3.1 Nonautonomous systems
6.3.2 Weakening the hypotheses
6.3.3 Considering point targets
6.4 Proofs
7 data-driven set approximation for detection of transportation modes 
7.1 Introduction
7.2 Approximating discrete sets with SVDD
7.2.1 Notations
7.2.2 Theoretical framework of the SVDD algorithm
7.2.3 The SVDD algorithm
7.2.4 Extension of SVDD to noisy data
7.2.5 Discussion on application of SVDD and illustration on real data
7.3 Theoretical guarantees on set estimation
7.3.1 Main results
7.3.2 Further results
7.4 Application to detection of transportation mode on simulated data
7.5 Conclusion and perspectives
Bibliography
