
DTU Findit

Journal article

Analysis of preemption costs for the stack cache

From

Université Paris-Saclay

Department of Informatics and Mathematical Modeling, Technical University of Denmark

The design of tailored hardware has proven a successful strategy to reduce the timing analysis overhead for (hard) real-time systems. The stack cache is an example of such a design that was shown to provide good average-case performance, while remaining easy to analyze. So far, however, the analysis of the stack cache was limited to individual tasks, ignoring aspects related to multitasking.

A major drawback of the original stack cache design is that, due to its simplicity, it cannot hold the data of multiple tasks at the same time. Consequently, the entire cache content needs to be saved and restored when a task is preempted. We propose (a) an analysis exploiting the simplicity of the stack cache to bound the overhead induced by task preemption, (b) preemption mechanisms for the stack cache exploiting the previous analysis, and, finally, (c) an extension of the design that makes it possible to (partially) hide the overhead by virtualizing stack caches.
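The abstract notes that the preemption overhead comes from saving and restoring the stack cache contents. A minimal sketch of how such an overhead bound could be computed is shown below; this is an illustration only, not the paper's actual analysis, and all names and the cost model (one spill plus one fill per occupied block, bounded by the worst-case occupancy over potential preemption points) are assumptions.

```python
def preemption_overhead_bound(occupancy_per_point, block_transfer_cost):
    """Toy bound on stack-cache save/restore cost at a preemption.

    occupancy_per_point: occupied cache blocks at each potential
        preemption point in the task (hypothetical analysis result)
    block_transfer_cost: cycles to transfer one block to/from memory
    """
    # The cost at any preemption point is proportional to the blocks
    # occupied there, so the worst-case occupancy yields a safe bound.
    worst_occupancy = max(occupancy_per_point, default=0)
    # Each occupied block must be spilled on preemption and filled on
    # resumption, hence the factor of two.
    return 2 * worst_occupancy * block_transfer_cost
```

For example, if the occupancy at the possible preemption points is 3, 5, and 2 blocks and a block transfer takes 10 cycles, the bound is 2 × 5 × 10 = 100 cycles.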

Language: English
Publisher: Springer US
Year: 2018
Pages: 700-744
Journal subtitle: International Journal of Time-critical Computing Systems
ISSN: 1573-1383 and 0922-6443
Types: Journal article
DOI: 10.1007/s11241-018-9298-7
