FOCAL: joint Forwarding and Caching with Latency-awareness
Caching needs to be supported by proper packet forwarding [Rossini and Rossi, 2014; Dehghan et al., 2015], as the performance of the former is driven by the request arrival process that the latter shapes. At the same time, the traffic to be forwarded is the miss traffic of the local cache. From this observation it clearly appears that caching and forwarding must be jointly optimized. This is the purpose of FOCAL [Carofiglio et al., 2015b, 2016]. FOCAL adopts LAC+ caches, fed by a novel forwarding strategy, LB-Perf, which persistently directs the most popular objects through the same interfaces, regularly checking that those interfaces can sustain the load, while load-balancing the rest of the traffic. Chapter 3 substantiates FOCAL: it shows how FOCAL is derived from the optimal solution of a latency minimization problem, analyzes FOCAL's stability, thoroughly evaluates its CCNPL-Sim C++ implementation on various network topologies, and studies its sensitivity to various settings.
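To make the division of labor concrete, here is a minimal Python sketch of an LB-Perf-style decision rule. It only illustrates the idea stated above (pin popular objects to a stable interface so downstream caches keep serving them; load-balance the tail); the `Face` class, the popularity threshold, and the 0.9 overload mark are illustrative assumptions, not parameters from the thesis.

```python
from collections import defaultdict

class Face:
    """Toy egress interface with a scalar load; purely illustrative."""
    def __init__(self, name, capacity):
        self.name, self.capacity, self.load = name, capacity, 0.0
    def overloaded(self):
        return self.load > 0.9 * self.capacity   # 0.9 mark is an assumption

class LBPerfLikeStrategy:
    """Sketch of an LB-Perf-style rule: pin popular objects to one
    interface (so the downstream cache keeps seeing, and storing, them)
    and load-balance the unpopular tail."""
    def __init__(self, faces, popularity_threshold=0.01):
        self.faces = faces
        self.threshold = popularity_threshold     # illustrative cutoff
        self.requests = defaultdict(int)          # per-object request counts
        self.total = 0
        self.pinned = {}                          # object -> pinned face

    def next_hop(self, obj):
        self.requests[obj] += 1
        self.total += 1
        if self.requests[obj] / self.total >= self.threshold:
            face = self.pinned.get(obj)
            # "Regularly make sure the interface can afford it":
            # re-pin only if the current face is missing or overloaded.
            if face is None or face.overloaded():
                face = min(self.faces, key=lambda f: f.load)
                self.pinned[obj] = face
            return face
        # Unpopular object: plain least-loaded load balancing.
        return min(self.faces, key=lambda f: f.load)

faces = [Face('eth0', 10.0), Face('eth1', 10.0)]
strategy = LBPerfLikeStrategy(faces)
print(strategy.next_hop('/com/example/video').name)
```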
Fairness in Information-Centric Networking
Chapter 4 investigates the fairness of content throughput allocation when caching becomes ubiquitous throughout a network [Bonald et al., 2017]. Since caching the most popular objects is the trend our own caching algorithms enforce, will it severely distort fairness, forcing caching algorithms to be designed to counteract that distortion? It turns out that ensuring the most popular objects occupy storage, as LFU does, is the caching side of the solution to α-fair throughput allocation problems. α-fair content-level packet scheduling is the complementary part of the solution.
In other words, caching policies do not need to be designed for fairness, as previous work suggested. By focusing on throughput fairness rather than on cache hit ratio, we show that α-fairness in ICN can be handled much as in traditional networks, through packet scheduling.
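For reference, an allocation of throughputs x is α-fair when it maximizes the sum over flows of the classical α-fair utility; this standard definition (recalled here for the reader, not quoted from the thesis) is:

```latex
U_\alpha(x) =
\begin{cases}
  \dfrac{x^{1-\alpha}}{1-\alpha}, & \alpha \ge 0,\ \alpha \neq 1,\\[4pt]
  \log x, & \alpha = 1,
\end{cases}
```

where α = 1 recovers proportional fairness and α → ∞ approaches max-min fairness.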
AFFORD: Ask For Directions, machine learning-based routing
The last contribution of this thesis addresses the prohibitive cost of performing a Longest Prefix Match over a huge FIB. Indeed, given the unbounded namespace of the anticipated Internet of Everything, it would be hard to forward packets at every node based on exact knowledge of the paths to destinations. With Ask For Directions (AFFORD) [Mekinda and Muscariello, 2016], we propose to train a trie of small Artificial Neural Networks in the control plane and to query them quickly in the data plane for the most probable next hops.
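The following Python sketch illustrates the shape of such a predictor: a content name is hashed into a fixed-size feature vector and a tiny network outputs a probability per outgoing face. The feature hashing, layer sizes, and name `TinyNextHopNet` are illustrative assumptions; AFFORD's actual encoding and training procedure are detailed in Chapter 5.

```python
import numpy as np

def name_features(name, dims=32):
    """Hash a content name's components into a fixed-size feature vector
    (a simple stand-in for whatever encoding is actually used)."""
    v = np.zeros(dims)
    for i, comp in enumerate(name.strip('/').split('/')):
        v[hash((i, comp)) % dims] += 1.0
    return v

class TinyNextHopNet:
    """One small MLP per trie node: name features -> next-hop probabilities."""
    def __init__(self, dims, hidden, n_faces, rng):
        self.W1 = rng.normal(0, 0.1, (hidden, dims))
        self.W2 = rng.normal(0, 0.1, (n_faces, hidden))

    def predict(self, x):
        h = np.tanh(self.W1 @ x)          # cheap forward pass, data-plane side
        z = self.W2 @ h
        p = np.exp(z - z.max())
        return p / p.sum()                # probability of each outgoing face

rng = np.random.default_rng(0)
net = TinyNextHopNet(dims=32, hidden=16, n_faces=4, rng=rng)
probs = net.predict(name_features('/com/example/video/seg1'))
print('most probable next hop: face', probs.argmax())
```

Note the asymmetry this design exploits: training (the expensive part) happens offline in the control plane, while the data plane only runs the inexpensive forward pass per packet.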
Performance analysis of Networks of Caches
Networks of caches are hard to analyze because the IRM assumption no longer holds beyond the first level of caches [Kurose, 2014]. Indeed, the miss traffic flowing through a cache's egress faces carries time correlations. Under the IRM hypothesis, two consecutive exogenous requests may address the same content, since requests are deemed totally independent. Conversely, within a cache's miss stream, two consecutive requests are unlikely to address the same content, since the first would have triggered a cache hit for the second. In the absence of connectivity issues, PIT entry aggregation, which suppresses the forwarding of duplicate pending Interest packets, makes same-content consecutive misses much less probable.
In the case of LRU caches, for example, the Characteristic Time dictates the minimum time interval between two consecutive misses for the same content. Thus, the miss traffic essentially carries requests for distinct contents within any Characteristic Time window. Clearly, within the miss traffic, the requests for any content k do not form a Poisson process, as their interarrival times are not exponentially distributed.
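To make the Characteristic Time concrete: under Che's approximation, for an LRU cache of size C fed by independent Poisson request processes of rates λ_k, T_C is the unique positive root of C = Σ_k (1 − e^{−λ_k T_C}), and the hit probability of content k is then h_k = 1 − e^{−λ_k T_C}. A minimal Python sketch follows; the Zipf exponent, catalog size, and cache size are illustrative values, not figures from the thesis.

```python
from math import exp
from scipy.optimize import brentq

def characteristic_time(rates, cache_size):
    """Solve Che's fixed-point equation for an LRU cache:
    cache_size = sum_k (1 - exp(-lambda_k * T_C))."""
    f = lambda t: sum(1.0 - exp(-lam * t) for lam in rates) - cache_size
    return brentq(f, 1e-9, 1e9)   # T_C is the unique positive root

# Illustrative Zipf(0.8) catalog of 10,000 objects, cache of 100 slots.
N, alpha, C = 10_000, 0.8, 100
rates = [k ** -alpha for k in range(1, N + 1)]
T_C = characteristic_time(rates, C)
hit = lambda k: 1.0 - exp(-rates[k - 1] * T_C)   # Che hit probability
print(f"T_C = {T_C:.2f}, hit ratio of most popular object = {hit(1):.3f}")
```

The popular contents thus hit with probability close to one, which is precisely why the miss stream is depleted of them and departs from IRM.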
Table of contents:
Abstract
Résumé
List of publications
1 Introduction
1.1 Today’s Internet
1.1.1 An architecture that no longer fits its usage
1.1.2 The rampant threat of host-centric Internet collapse
1.1.3 The dawn of HTTPS-by-default
1.1.4 Excessive latency
1.1.5 Weak multipath support
1.1.6 A non-native relief by CDNs
1.2 5G, an opportunity
1.3 Information-Centric Networking
1.3.1 Content-Centric / Named-Data Networking
1.3.2 NDN operations
1.3.3 NDN security
1.3.4 NDN routing
1.3.5 NDN forwarding
1.3.6 Mobility in NDN
1.3.7 Caching in NDN
1.4 Problem statement
1.5 Our contributions
1.5.1 LAC/LAC+: Latency-aware caching
1.5.2 FOCAL: joint Forwarding and Caching with Latency-awareness
1.5.3 Fairness in Information-Centric Networking
1.5.4 AFFORD: Ask For Directions, machine learning-based routing
1.6 Mathematical foundations
1.6.1 Elements of probability theory
1.6.2 Queuing theory fundamentals
1.6.3 Lyapunov optimization
1.6.4 Nonlinear optimization
1.6.5 Cache performance analysis
1.6.6 Performance analysis of Networks of Caches
2 LAC/LAC+: Latency-Aware Caching Strategies in ICN
2.1 Introduction
2.2 Related work
2.3 Latency-aware heuristics
2.4 Analysis
2.4.1 Assumptions
2.4.2 Miss ratio
2.4.3 Lower bound
2.5 Simulation
2.6 Conclusion and future work
3 FOCAL: Joint Forwarding and Caching with Latency-awareness in ICN
3.1 Introduction
3.2 Related work
3.3 Problem statement
3.4 Optimal algorithm design
3.4.1 Optimal algorithm design guidelines through analytic insight
3.4.2 Numerical solutions
3.4.3 Maximizing the hit ratio of dynamic caches through optimal bundling
3.5 FOCAL
3.5.1 Latency-aware caching strategies
3.5.2 Latency-aware forwarding strategies
3.6 Performance analysis
3.7 Simulation
3.7.1 Linear topology with forwarding branches
3.7.2 Fat tree with direct access to content repositories
3.7.3 US backbone-like scenario
3.8 Conclusion
4 On the Fairness of ICN
4.1 Introduction
4.2 Related work
4.3 Cache Network Model
4.3.1 Model assumptions
4.3.2 Cache network capacity
4.3.3 Problem formulation
4.3.4 Solution
4.4 Toy examples
4.4.1 Client/Server tandem
4.4.2 Client/Cache/Server bus
4.5 Evaluation
4.5.1 Client/Cache/Server bus
4.5.2 A simple network
4.6 Conclusion
5 Supervised Machine Learning-based Routing for NDN
5.1 Introduction
5.2 Related work
5.3 AFFORD
5.3.1 AFFORD supervised learning
5.3.2 AFFORD forwarding
5.4 Analysis
5.5 Evaluation
5.5.1 Tiny-size FIB
5.5.2 Medium-size FIB
5.5.3 Big-size FIB
5.6 Conclusion and future work
6 Conclusion and future work
Bibliography