Dynamic TDM-based Arbitration Hardware Design

Real-time Systems

Real-time systems have applications in many fields, such as transportation, telecommunication, robotics, space missions, multimedia systems, and industry. A real-time system is reactive in nature: it is often seen as a computer control system associated with a controlled environment. More precisely, a real-time system is in permanent interaction with its external environment, usually represented by a physical process, in order to control its behavior as it evolves over time. The interactions of a real-time system with its environment are designed to respond to events in the controlled environment. More precisely, the system must respond within a limited time, i.e., it must be able to react to changes in the state of its environment within this time limit. Not respecting the timing constraints of a real-time system could cause the system to behave incorrectly, causing instability, which could lead to a major system failure.

Definitions and Properties

Definition 2.1.1. Real-Time System [62, 64]. A real-time system is defined as a control-command system in which the application’s correctness depends not only on the computed result but also on the time at which this result is produced. If the application’s timing constraints are not respected, the system is said to fail.
In a real-time system, time is the most important resource to manage. Tasks must be assigned and executed in such a way that they respect their timing constraints. Interactions between tasks are also time-constrained: messages must be transmitted within a well-defined time interval between two real-time tasks. The environment in which a computer operates is an essential component of any real-time system. Therefore, whether a real-time system respects its timing constraints directly affects its reliability. A failure of a real-time system can have catastrophic consequences, whether economic or human.
Definition 2.1.2. Predictability [62, 3]. A system is considered predictable if a useful bound is known on temporal behavior that covers all possible initial states and state transitions.
Predictability [62] in a real-time system depends on several factors, covering both software and hardware. In a predictable system, it must be possible to determine in advance whether all computation activities can be performed within a given time limit. To this end, the system should be deterministic, i.e., given a sequence of input events, the system produces a sequence of output events, always the same outputs for the same inputs, in an order determined by the order of the input events [3]. Several factors affect the deterministic behavior of a system, including its hardware architecture, its operating system, and the programming language used to write the application.
Definition 2.1.3. Composability [3, 4]. A system is considered composable if tasks cannot affect the behavior of any other task, neither its computation result nor its temporal behavior.
In a composable system, a component should not unintentionally affect the temporal behavior of another component [9]. Composability is a major factor in improving the predictability of the system. However, modern hardware architectures include several advanced features (e.g., caches) that increase the average performance of the system at the cost of highly variable timing behavior. Since these hardware resources are most often shared and rely on execution history to improve performance, the execution time of an application is likely to depend on the execution history and on interference from other competing applications.
Real-Time Tasks
Definition 2.1.4. Real-Time Task [62, 64, 8]. A real-time task denoted ti, sometimes also called a process or a thread, is an executable entity of work which, at a minimum, is characterized by a worst-case execution time and a time constraint.
Definition 2.1.5. Job [64, 8]. A job is a running instance of a task on the hardware platform.
Tasks in a real-time system are invoked/activated at regular intervals and must be executed within a limited time window. Each invocation of a task corresponds to a job that monitors the state of the system by reading input data, performs some computations, and, if necessary, sends commands to modify and/or display the state of the system. A real-time task is often characterized by its criticality level, which is defined according to the severity that a task failure would have on its environment. A real-time task is also characterized by its temporal properties, distinguishing several types of task: periodic, aperiodic, and sporadic. Each type normally gives rise to multiple jobs.
Definition 2.1.6. Periodic Tasks [64, 49, 73]. Periodic tasks are real-time tasks which are activated (released) regularly at fixed rates (period). The period of a task ti is commonly designated by Ti. The time constraint for an instance of a periodic task is a deadline di that can be less than, equal to, or greater than the period. It is often assumed that the deadline equals the period (implicit deadline).
Generally, a periodic task is used for the acquisition or production of data at regular times. The task’s activities must be performed in a cyclic manner at specific rates, which can be derived from the requirements of the application. For instance, this type of task is used for monitoring purposes, where the task acquires data from sensors at regular intervals. The utilization, denoted ui, of a periodic task ti is the ratio of its execution time Ci over its period Ti, i.e., ui = Ci/Ti.
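To make the utilization formula concrete, the following sketch computes the per-task and total utilization for a small periodic task set. The parameter values are made up for this illustration and do not come from the thesis:

```c
#include <stdio.h>

/* Hypothetical periodic task parameters, chosen for this example. */
typedef struct {
    double C;  /* worst-case execution time C_i */
    double T;  /* period T_i                    */
} periodic_task;

int main(void) {
    periodic_task tasks[] = { {1.0, 4.0}, {2.0, 8.0}, {1.0, 10.0} };
    double total = 0.0;
    for (int i = 0; i < 3; i++) {
        double u = tasks[i].C / tasks[i].T;  /* u_i = C_i / T_i */
        printf("u_%d = %.3f\n", i, u);
        total += u;
    }
    printf("total utilization = %.3f\n", total);  /* 0.250 + 0.250 + 0.100 = 0.600 */
    return 0;
}
```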
Definition 2.1.7. Aperiodic Tasks [64, 73]. Aperiodic tasks are real-time tasks which are activated irregularly at some unknown and possibly unbounded rate. Aperiodic tasks have soft deadlines or no deadlines.
Aperiodic tasks are event-driven, which means that these tasks behave in an unpredictable way and are governed by events from the system environment.
Definition 2.1.8. Sporadic Tasks [64, 73]. Sporadic tasks are real-time tasks which are activated irregularly with some known bounded rate. The bounded rate is characterized by a minimum inter-arrival period, that is, a minimum interval of time between two successive activations. This is necessary (and achieved by some form of flow control) to bound the workload generated by such tasks. The time constraint is usually a deadline di.
Just like aperiodic tasks, sporadic tasks are event-driven. There is no prior knowledge of the arrival times of sporadic tasks; however, their minimum inter-arrival time is known. Unlike aperiodic tasks, which may not have hard deadlines, sporadic tasks do have hard deadlines, i.e., when the triggering event of the task occurs, a response is required within a limited time window.
Task Specification
In real-time systems, a task ti is characterized, as depicted in Figure 2.1, by its priority πi, its release time ri, its deadline di, its execution time Ci, and its period Ti.
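For concreteness, these parameters can be collected in a small record. This is only a sketch of the task model; the field names are chosen for this illustration and are not prescribed by the thesis:

```c
/* Sketch of the task parameters listed above; names are illustrative. */
typedef struct {
    int    prio;  /* priority πi; in this thesis, equal to the task index i */
    double r;     /* release time r_i                                       */
    double d;     /* absolute deadline d_i                                  */
    double C;     /* worst-case execution time C_i                          */
    double T;     /* period T_i (for periodic tasks)                        */
} rt_task;
```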
Definition 2.1.9. Release Time [64, 73]. A release time ri, is a point in time at which a real-time job becomes ready to (or is activated to) execute.
A job can be scheduled for execution at any time at or after its release time. It may or may not be executed immediately, because, for example, a task of higher or equal priority may be using the processor.
Definition 2.1.10. Deadline [64, 8]. A deadline, di is a point in time by which the task (job) must complete.
Usually, a deadline di is an absolute time. Sometimes, di is also used to refer to a relative deadline; to avoid confusion, we denote the relative deadline by Di. The relative deadline of a task is the deadline measured relative to its release time, i.e., Di = di - ri. For example, a job released at ri = 10 with relative deadline Di = 5 has absolute deadline di = 15.
Definition 2.1.11. Priority [64]. The priority πi of a task indicates its order of importance for scheduling.
A task has a priority πi, which can be fixed or dynamically assigned. The higher the value of πi, the higher the task’s priority level. Most of the time, the highest-priority task instance (job) that is ready, i.e., whose activation/release date has passed and that has not yet completed, is elected to be executed by the processor. Note that in this thesis we assume fixed-priority assignment, hence the index i of a task is equal to its priority πi. Therefore, the task index i also represents its priority.
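The selection rule just described can be sketched as follows, reusing the rt_task record from above; the function name and the ready/done bookkeeping are assumptions made for this example:

```c
/* Pick the highest-priority ready job at time t: a job is ready if its
 * release time has passed and it has not yet completed. Following the
 * convention above, a larger prio value means a higher priority level. */
int pick_highest_priority_ready(const rt_task *tasks, const int *done,
                                int n, double t) {
    int best = -1;
    for (int i = 0; i < n; i++) {
        if (tasks[i].r <= t && !done[i] &&  /* released and not completed */
            (best < 0 || tasks[i].prio > tasks[best].prio))
            best = i;
    }
    return best;  /* index of the elected job, or -1 if none is ready */
}
```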
Definition 2.1.12. Execution Time. A task’s execution time is the time required for a processor to execute an instance, i.e., a job, of a task ti.
The execution time Ci of a task is not always constant (see Section 2.1.2). For example, a task can have different execution paths and a different number of loop iterations each time it executes. The execution paths and the number of loop iterations vary because of changes in the input data. The upper and lower bounds of a task’s execution time are defined as follows:
Definition 2.1.13. Worst-Case Execution Time (WCET) [76]. The WCET represents the longest execution time of any job of a task under any circumstances.
Definition 2.1.14. Best-Case Execution Time (BCET) [76]. The BCET represents the shortest execution time of any job of a task under any circumstances.
When designing a system following the multi-tasking approach (several tasks sharing the same hardware execution platform), one has to consider not only execution times but also the response time Ri of a task.
Definition 2.1.15. Response Time. The response time of a task corresponds to the interval from the task’s release time to the task’s completion.
In a multi-tasking scheduling context, a task is not necessarily executed immediately when it is released, because the required shared resources or the processor may be used by higher-priority tasks and might thus not be available. Moreover, in a preemptive scheduling context, a lower-priority task can be suspended so that the processor can execute a higher-priority task which is ready. As a result, a task’s response time can be larger than its execution time.
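This chapter does not give a response-time formula (the thesis treats response-time analysis later, in Section 7.3.5), but the classical fixed-priority iteration conveys the idea that interference from higher-priority tasks inflates a task’s response time beyond its execution time. The sketch below assumes preemptive fixed-priority scheduling with no blocking and no memory interference, and assumes the arrays are ordered so that indices 0..i-1 hold the tasks with higher priority than task i:

```c
#include <math.h>

/* Classical response-time iteration: start from R_i = C_i and repeatedly add
 * the interference ceil(R_i / T_j) * C_j of each higher-priority task j until
 * a fixpoint is reached or the relative deadline D_i is exceeded.
 * Returns the response time R_i, or -1.0 if task i is unschedulable. */
double response_time(const double *C, const double *T, int i, double D_i) {
    double R = C[i];  /* initial guess: the task's own execution time */
    for (;;) {
        double next = C[i];
        for (int j = 0; j < i; j++)  /* higher-priority tasks only */
            next += ceil(R / T[j]) * C[j];
        if (next > D_i) return -1.0;  /* deadline exceeded */
        if (next == R) return R;      /* fixpoint: R_i found */
        R = next;
    }
}
```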

Worst-Case Execution Time Analysis

A real-time system must be valid not only with regard to the computed results but also with regard to its timing constraints. This validation is based on bounding the temporal behavior of every task in the system. Specifically, a task’s worst-case execution time (WCET) is used to ensure that its deadline is met in all cases, even in the worst case.
A task’s execution time is not constant and varies according to many factors [22] that can be divided into two categories: (1) hardware, i.e., the architectural mechanisms implemented at the processor level, and (2) software, i.e., the operating system and concurrent tasks, including program inputs that can affect the path followed through the task. Many research studies have focused on obtaining an upper bound on the worst-case execution time in the context of complex single-core architectures. This interest in obtaining a reliable and accurate upper bound is driven by the criticality of real-time systems, where a system failure can lead to severe consequences, and by the cost of pessimism and imprecision during system validation, which leads to low hardware resource utilization.
As mentioned before, WCET analysis depends on the hardware platform on which the task’s jobs are running. The increase in the complexity of hardware architectures meets the performance needs of general-purpose systems, which also applies to real-time systems, albeit to a lesser extent. Hardware solutions have the benefit of being transparent to the users of the systems involved. However, more complex architectures are usually difficult to analyze, due to the large state space: either we explore this large state space (good precision, long analysis time) or we use abstractions to simplify the analysis (shorter analysis time, less precision).
Figure 2.2 illustrates the difference between the best-case, worst-case, and average execution time of a given task. The set of all execution times is shown as the upper curve, along with the best- and worst-case execution times (BCET and WCET). In most cases, the state space is too large to exhaustively explore all possible executions and thereby determine the exact worst- and best-case execution times.
There are three main families of methods for bounding the worst-case execution time of a task [76]:
Dynamic or measurement-based methods: The common method to establish execution-time bounds is to measure the end-to-end execution time of the task for a subset of the possible executions. For this method, the task, or parts of it, executes on the given hardware or a simulator for some set of inputs. From the measured times, the maximal and minimal observed execution times are derived. In general, these methods overestimate the BCET and underestimate the WCET and are therefore not safe for hard real-time systems. A safety margin is thus often added to the observed maximal execution time, e.g., by multiplying it by a given factor; the resulting WCET estimate may still be unsafe.
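As a sketch of such a measurement protocol (assuming a POSIX system for clock_gettime; the task body and the margin factor are made up for this example), one might write:

```c
#include <time.h>

/* Hypothetical task body used as a stand-in for a real task. */
static void task_body(int input) {
    volatile int x = 0;
    for (int k = 0; k < input; k++)
        x += k;
}

/* Run the task end-to-end on a set of inputs, record the maximal observed
 * execution time, and inflate it by a safety margin. As noted above, the
 * result may still underestimate the true WCET and is thus not safe for
 * hard real-time systems. */
double measured_wcet_estimate(const int *inputs, int n, double margin) {
    double max_obs = 0.0;
    for (int i = 0; i < n; i++) {
        struct timespec t0, t1;
        clock_gettime(CLOCK_MONOTONIC, &t0);
        task_body(inputs[i]);
        clock_gettime(CLOCK_MONOTONIC, &t1);
        double dt = (double)(t1.tv_sec - t0.tv_sec)
                  + (double)(t1.tv_nsec - t0.tv_nsec) * 1e-9;
        if (dt > max_obs)
            max_obs = dt;
    }
    return max_obs * margin;  /* e.g., margin = 1.2; the factor is arbitrary */
}
```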
Static methods: To compute the execution-time bounds of a task, static methods are not based on any execution but on the observation and analysis of its source code or binary code. These methods analyze the set of all possible execution paths through the (binary) code of a task, combine this information with an abstract model of the hardware architecture, and obtain an upper bound for this combination [75]. Since all analysis steps aim to compute over-approximations, the obtained bounds are safe. However, it is challenging to prove that the formal analysis models actually match the underlying hardware implementations (which might not be publicly available, contain bugs, be poorly documented, or not be documented at all).
Probabilistic methods: Probabilistic analyses build upon the fact that an exhaustive measurement of the WCET cannot be made for all possible states of the system, and that hardware architectures are so complex that static analysis is often very difficult to perform. Burns et al. [17] proposed a statistical method where the execution time is represented by a probabilistic function. The WCET upper bound is associated with a confidence level with which the upper bound holds. The probabilistic WCET (pWCET) is therefore defined as follows [23]: the probabilistic worst-case execution time (pWCET) of a task describes the probability that the worst-case execution time of that task does not exceed a given value. Probabilistic methods often require special properties of the software and hardware (e.g., independence of execution times) and are still subject to research.
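As a toy illustration of attaching a probability to a bound (a deliberate simplification: practical pWCET analyses typically fit an extreme value distribution to the tail rather than counting samples), the empirical exceedance probability of a candidate bound can be estimated from measured execution times, assuming the samples are independent and identically distributed:

```c
/* Fraction of observed execution times that exceed a candidate bound,
 * i.e., an empirical estimate of P(execution time > bound). Assumes
 * independent, identically distributed samples; a real pWCET analysis
 * would instead model the distribution's tail. */
double exceedance_probability(const double *samples, int n, double bound) {
    int above = 0;
    for (int i = 0; i < n; i++)
        if (samples[i] > bound)
            above++;
    return (double)above / n;
}
```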

Table of contents:

1 Introduction 
1.1 Context
1.2 Contribution Overview
1.3 Thesis Outline
I Background/State of the art 
2 Real-Time Systems 
2.1 Real-time Systems
2.1.1 Definitions and Properties
Real-Time Tasks
Task Specification
2.1.2 Worst-Case Execution Time Analysis
2.1.3 Real-Time Scheduling
2.1.4 Scheduling Analysis
2.2 Mixed-Criticality Systems
2.3 Conclusion and Considered System Model
2.3.1 Task Model
2.3.2 Scheduling Policy
3 Multi-Core Memory Access Interference 
3.1 Memory Hierarchy
3.1.1 Registers
3.1.2 Scratchpad
3.1.3 Caches
3.1.4 Main Memory (DRAM)
3.2 Memory Arbitration Schemes
3.2.1 Fixed-Priority
3.2.2 First-Come First-Served
3.2.3 Round-Robin
3.2.4 Time-Division Multiplexing
3.2.5 Budget-Based Arbitration
3.2.6 TDM Arbitration and Multi-Criticality Scheduling
3.3 Conclusion and Assumed Hardware Architecture
II Contribution
4 Problem Statement and Contribution Summary 
4.1 Thesis Problem Statement
4.1.1 Criticality-Aware Arbitration Schemes
4.1.2 Challenges with Time-Division Multiplexing
4.1.3 Preemption Costs for Regular TDM Arbitration
4.2 Contribution Outline
5 Dynamic TDM-based Arbitration
5.1 Criticality Aware TDM-based Arbitration (TDMfs)
5.2 Dynamic TDM-based Arbitration Schemes
5.2.1 TDMdz: Deadline Driven Arbitration
5.2.2 TDMds: Dynamic TDM Arbitration with Slack Counters
5.3 Decoupling Dynamic TDM Arbitration from Slots
5.3.1 TDMes: Decoupled from TDM slots
5.3.2 TDMer: Memory Variability Awareness
5.4 Worst-Case Behavior
5.5 Experiments for Dynamic TDM-Based Arbitration Schemes
5.5.1 Experimental Setup
5.5.2 Results for Dynamic TDM-Based Arbitration Schemes
5.5.3 Results for Dynamic TDM with Initial Slack
5.5.4 Results for Varying Memory Access Latencies
5.6 Conclusion
6 Dynamic TDM-based Arbitration Hardware Design
6.1 Architecture Overview
6.2 Arbitration Logic
6.2.1 Deadline and Slack Computation
6.2.2 Update Rules
6.3 Control Signal Generation
6.4 Bit-Width Considerations
6.5 Worst-case Behavior w.r.t. the Hardware Design
6.6 Experiments
6.6.1 Evaluation Platform
6.6.2 Results for Hardware Synthesis
6.6.3 Results for Dynamic TDM with Round-Robin Arbitration
6.6.4 Results for Bit-Width Constrained Slack Counters
6.7 Conclusion
7 Multi-tasks and Preemption Support
7.1 Multi-task Preemptive Scheduling Policy
7.2 Preemption Costs for Dynamic TDM-based Arbitration
7.3 Arbitration-Aware Preemption Techniques
7.3.1 Scheduling with Request Waiting (SHDw)
7.3.2 Scheduling with Request Preemption (SHDp)
7.3.3 Scheduling with Criticality Inheritance (SHDi)
7.3.4 Misalignment Delays
7.3.5 Response-Time Analysis
7.4 Experiments
7.4.1 Experimental Setup
7.4.2 Results for Preemption Schemes
7.4.3 Results for (Preemptive) Arbitration Schemes
7.4.4 Results for Varying Memory Access Latencies
7.5 Conclusion
III Conclusion
8 Conclusion and Future Work 
8.1 Conclusion
8.2 Future Work
Bibliography
