The ESI Symposium took place on April 9 in the Auditorium of Eindhoven University of Technology. The theme this year was “Intelligence, the next challenge in system complexity?” and featured keynotes from Edward Lee (Professor, UC Berkeley) and Henk van Houten (CTO and Head of Research for Royal Philips). The event attracted some 300 participants, with a good balance between academia and industry. For those of you who could not attend, feel free to read about the program on the ESI website, and have a look at the video below for an impression of the event.
Book Chapter Published by Elsevier
I am pleased to announce that our chapter “Reducing Design Time and Promoting Evolvability using Domain-specific Languages in an Industrial Context” has been accepted for publication in the Elsevier book “Model Management and Analytics for Large Scale Systems”.
This work is the result of an industrial ESI project addressing the need for new methodologies to reduce development time, simplify customization, and improve evolvability of complex software systems. The chapter explains how these challenges are addressed by an approach to model-based engineering (MBE) based on domain-specific languages (DSLs). Applying the approach in industry has raised five technical research questions, namely how to: RQ1) achieve modularity and reuse in a DSL ecosystem, RQ2) achieve consistency between models and their realizations, RQ3) manage an evolving DSL ecosystem, RQ4) ensure model quality, and RQ5) ensure the quality of generated code. The five research questions are explored in the context of the published state-of-the-art, as well as practically investigated through a case study from the defense domain.
Paper Accepted at WMC 2018
A paper entitled “Decoupling Criticality and Importance in Mixed-Criticality Scheduling” has been accepted at the 6th International Workshop on Mixed Criticality Systems (WMC). The paper addresses the need for more expressive task models for mixed-criticality systems by presenting an extension to the well-known mode-based adaptive mixed-criticality model by Vestal. The proposed model allows a task’s criticality and its importance to be specified independently of each other. A task’s importance is the criterion that determines its presence in different system modes. Meanwhile, the task’s criticality (reflected in its Safety Integrity Level (SIL) and defining the rules for its software development process) prescribes the degree of conservativeness for the task’s estimated WCET during schedulability testing.
We indicate how such a task model can help resolve some of the perceived weaknesses of the Vestal model, in terms of how it is interpreted, and demonstrate how the existing scheduling tests for the classic variants of Vestal’s model can be mapped to the new task model essentially without changes.
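To make the distinction concrete, here is a minimal Python sketch (illustrative only, not the paper’s formal notation): a task’s importance decides whether it survives into a degraded system mode, while its criticality only determines how conservative a WCET estimate the schedulability test must use. All names, thresholds, and values below are assumptions for illustration.

```python
from dataclasses import dataclass

@dataclass
class Task:
    name: str
    period: float
    importance: int   # decides presence in degraded modes (higher = kept longer)
    criticality: str  # e.g. SIL level; decides how conservative the WCET estimate must be
    wcet: dict        # WCET estimates per assurance level, e.g. {"LO": 2.0, "HI": 5.0}

def tasks_in_mode(tasks, mode_importance_threshold):
    """In the sketched model, a mode keeps every task whose importance
    meets the mode's threshold, regardless of its criticality."""
    return [t for t in tasks if t.importance >= mode_importance_threshold]

def wcet_for_analysis(task, assurance_level):
    """The criticality (via the assurance level of the analysis) selects the
    WCET estimate; fall back to the most conservative one the task provides."""
    return task.wcet.get(assurance_level, max(task.wcet.values()))

# Illustrative use: a high-criticality but low-importance logging task is
# dropped in a degraded mode, while a low-criticality but important task stays.
tasks = [
    Task("control", period=10, importance=3, criticality="SIL3", wcet={"LO": 2.0, "HI": 4.0}),
    Task("logging", period=50, importance=1, criticality="SIL3", wcet={"LO": 1.0, "HI": 2.5}),
    Task("display", period=20, importance=3, criticality="SIL1", wcet={"LO": 3.0}),
]
print([t.name for t in tasks_in_mode(tasks, mode_importance_threshold=2)])
```

The point of the decoupling is visible in the example: the high-criticality but unimportant logging task can be dropped in a degraded mode, while the important but low-criticality display task is kept.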
Paper Accepted at RTCSA 2018
We celebrate the acceptance of our paper “Mixed-criticality Scheduling with Dynamic Memory Bandwidth Regulation” at RTCSA 2018. This paper is the next step in my research collaboration with CISTER on mixed-criticality systems.
The paper aims to safely reduce the cost of mixed-criticality multi-core systems by addressing inefficient usage of memory bandwidth. This is achieved by combining per-core memory access regulation with the well-established Vestal model, improving on the state-of-the-art in two respects: 1) we allow the memory access budgets of the cores to be dynamically adjusted when the system undergoes a mode change, reflecting the different needs in each mode, for better schedulability, and 2) we devise memory regulation-aware and stall-aware schedulability analysis for such systems, based on AMC-max. By comparison, the state-of-the-art offered no option of dynamic adjustment of core budgets, and only offered regulation-aware schedulability analysis based on AMC-rtb, which is inherently more pessimistic. Finally, we consider different task assignment and bandwidth allocation heuristics to assess the improvement from the dynamic memory budgets and the new analysis. Our results show improvements in schedulability ratio of up to 9.1% over the state-of-the-art.
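The dynamic-budget idea can be illustrated with a small sketch; the bandwidth values, mode names, and the capacity check below are assumptions for illustration, not the paper’s analysis.

```python
# Illustrative sketch: per-core memory-access budgets that are re-assigned
# when the system switches from low- to high-criticality mode. Budgets are
# expressed as memory accesses allowed per regulation period.

budgets = {
    # core: budget per mode (accesses per regulation period)
    0: {"LO": 400, "HI": 600},  # core running high-criticality tasks gets more in HI mode
    1: {"LO": 300, "HI": 150},  # core running only low-criticality tasks is throttled
    2: {"LO": 300, "HI": 250},
}

TOTAL_BANDWIDTH = 1000  # accesses per regulation period the memory can serve (assumed)

def check_budgets(budgets, total):
    """The budgets in each mode must never exceed what the memory can deliver."""
    for mode in ("LO", "HI"):
        used = sum(b[mode] for b in budgets.values())
        assert used <= total, f"over-allocation in {mode} mode: {used} > {total}"

def budget_after_mode_change(core, mode):
    """With static budgets both modes would use the LO allocation; the dynamic
    scheme simply looks up the allocation of the current mode."""
    return budgets[core][mode]

check_budgets(budgets, TOTAL_BANDWIDTH)
print(budget_after_mode_change(0, "HI"))
```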
Paper Accepted at ECRTS 2018
We are pleased to announce that our paper “Worst-case Stall Analysis for Multicore Architectures with Two Memory Controllers” has been accepted at ECRTS 2018. This paper represents another successful collaboration with my former colleagues from CISTER.
The paper addresses the problem that increasing bandwidth requirements have resulted in platform architectures with multiple memory controllers, for which existing analyses to compute worst-case memory stall time are not safe. This work presents a new worst-case memory stall analysis for a memory-regulated multi-core architecture with two memory controllers. This stall analysis can be integrated into the schedulability analysis of systems under fixed-priority partitioned scheduling. Five heuristics for assigning tasks and memory budgets to cores in a stall-cognisant manner are also proposed. We experimentally quantify the cost in terms of extra stall for letting all cores benefit from the memory space offered by both controllers, and also evaluate the five heuristics for different system characteristics.
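As a rough illustration of how such a stall bound is consumed, the sketch below inflates a standard fixed-priority response-time recurrence with a worst-case stall term. The placeholder stall function and all parameters are assumptions; the paper’s actual two-controller stall analysis is not reproduced here.

```python
import math

def response_time(task, higher_prio, stall_bound):
    """Classic fixed-priority response-time recurrence, inflated with a
    worst-case memory stall term. stall_bound(window) stands in for the
    paper's two-controller stall analysis. Tasks are (C, T) tuples:
    worst-case execution time and period (deadline == period assumed)."""
    C, T = task
    R = C
    while True:
        interference = sum(math.ceil(R / Tj) * Cj for (Cj, Tj) in higher_prio)
        R_next = C + interference + stall_bound(R)
        if R_next == R:
            return R            # converged within the deadline
        if R_next > T:
            return None         # deadline missed
        R = R_next

# Placeholder stall model: at most b accesses stall in every regulation
# period P, each costing latency L (purely an illustrative assumption).
stall = lambda window, P=10.0, b=5, L=0.1: math.ceil(window / P) * b * L

print(response_time((2.0, 10.0), higher_prio=[(1.0, 5.0)], stall_bound=stall))
```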
Paper Accepted at MOMA3N
A paper entitled “Pain-mitigation Techniques for Model-based Engineering using Domain-specific Languages” has been accepted at the Special Session on Model Management And Analytics (MOMA3N), a workshop co-located with MODELSWARD 2018. This paper is my first publication related to my work at TNO-ESI, which focuses on model-based engineering (MBE), virtual prototyping, and domain-specific languages (DSLs).
This paper is an experience report from an investigation into how to mitigate the pains associated with a transition to a model-based design flow using DSLs. The contributions of the paper are: 1) a list of 14 pains related to MBE as a technology that is representative of our industrial partners designing high-tech systems in different domains, 2) a positioning of a selected subset of six pains with respect to the state-of-the-practice, 3) practical experiences and pain-mitigation techniques from applying a model-based design process using DSLs to an industrial case study based on a Threat Ranking component of a Combat Management System, and 4) a list of three open issues that require further research.
Paper Accepted at DATE 2018
Another paper written with my former colleagues at CISTER has been accepted. The paper is entitled “Mixed-criticality Scheduling with Memory Bandwidth Regulation” and will appear at DATE 2018. The paper considers the problem that existing schedulability analyses for mixed-criticality multi-core systems do not consider task interference in shared platform resources, such as memories, potentially making them optimistic and unsafe. We address this issue by formulating a schedulability analysis for mixed-criticality fixed-priority-scheduled multi-core systems using per-core memory access regulation. We also propose multiple heuristics for memory bandwidth allocation and task-to-core assignment. The analysis and heuristics are implemented in a tool and evaluated through extensive experiments.
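For a flavour of what a task-to-core assignment heuristic looks like, here is a generic first-fit-decreasing sketch in Python. The admission test is stubbed out with a simple utilization bound; everything here is illustrative and not the specific heuristics proposed in the paper.

```python
from collections import namedtuple

Task = namedtuple("Task", ["name", "utilization"])

def first_fit_decreasing(tasks, num_cores, fits):
    """Assign tasks to cores, heaviest first. fits(core_tasks, task) -> bool
    is assumed to wrap a regulation-aware schedulability test for one core."""
    cores = [[] for _ in range(num_cores)]
    for task in sorted(tasks, key=lambda t: t.utilization, reverse=True):
        for core_tasks in cores:            # try cores in a fixed order
            if fits(core_tasks, task):
                core_tasks.append(task)
                break
        else:
            return None                     # heuristic found no feasible assignment
    return cores

# Stand-in admission test: keep per-core utilization below a bound. A real
# implementation would run the regulation-aware schedulability analysis here.
fits = lambda core_tasks, task: sum(t.utilization for t in core_tasks) + task.utilization <= 0.9

tasks = [Task("a", 0.5), Task("b", 0.4), Task("c", 0.3), Task("d", 0.3)]
print(first_fit_decreasing(tasks, num_cores=2, fits=fits))
```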
Article Accepted in IEEE Transactions on Computers
Anna Minaeva had an article entitled “Time-Triggered Co-Scheduling of Computation and Communication with Jitter Requirements” accepted in IEEE Transactions on Computers. The article considers the problem of efficiently co-scheduling task execution and communication in multi-core automotive platforms. Most existing works deal with zero-jitter scheduling, which results in lower resource utilization, but has lower memory requirements. In contrast, this article focuses on jitter-constrained scheduling, which puts constraints on the jitter of tasks, increasing schedulability over zero-jitter scheduling.
The contributions of this article are: 1) Integer Linear Programming (ILP) and Satisfiability Modulo Theory (SMT) models that exploit problem-specific information to reduce the complexity of the formulations when scheduling small applications. 2) A heuristic approach employing three levels of scheduling that scales to real-world use-cases with 10,000 tasks and messages. 3) An experimental evaluation of the proposed approaches on a case study and on synthetic data sets, showing the efficiency of both zero-jitter and jitter-constrained scheduling. It shows that up to 28% higher resource utilization can be achieved with relaxed jitter requirements, at the cost of up to 10 times longer computation time.
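The difference between zero-jitter and jitter-constrained scheduling can be sketched in a few lines of Python; the jitter definition and the numbers below are simplified assumptions, not the article’s formal model.

```python
# Illustrative check of a start-time jitter constraint on a time-triggered schedule.

def start_time_jitter(start_times, period):
    """Given one start time per period of a task over the hyper-period,
    measure how far the starts drift from a fixed offset within the period."""
    offsets = [s - i * period for i, s in enumerate(start_times)]
    return max(offsets) - min(offsets)

# Zero-jitter scheduling: the same offset every period (jitter == 0).
print(start_time_jitter([2, 12, 22, 32], period=10))   # -> 0

# Jitter-constrained scheduling: starts may drift within an allowed bound,
# which gives the scheduler more freedom and can raise utilization.
starts = [2, 13, 21, 33]
assert start_time_jitter(starts, period=10) <= 3
print(start_time_jitter(starts, period=10))             # -> 2
```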
Hazem Ali Defends Dissertation
Paper Accepted at SCOPES 2017
Hazem had a paper entitled “Combining Dataflow Applications and Real-time Task Sets on Multi-core Platforms” accepted at the 2017 Workshop on Software and Compilers for Embedded Systems (SCOPES). This paper is a short overview of his PhD dissertation, which will be defended in Porto on May 23, and explains an approach to map and schedule applications described both as traditional real-time task sets and as synchronous data-flow graphs on a multi-/many-core platform. Hazem’s approach is to convert the data-flow graph into a periodic real-time task set to unify the models before mapping, which enables him to leverage existing real-time analysis techniques and schedulers. However, converting a complex data-flow graph into a periodic task set may result in a large number of tasks, leading to long analysis times. To mitigate this problem, he proposes a slack-based merging algorithm that reduces the number of tasks by carefully sacrificing parallelism in the data-flow graph, subject to its latency and throughput constraints. Lastly, the resulting unified real-time task set is mapped to a multi-/many-core platform interconnected by a TDM NoC using a sensitive-path-first algorithm, which first allocates the tasks derived from the original data-flow graph that have the highest impact on its execution and schedulability. It is also able to exploit parallelism in the graph during mapping.
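As a rough sketch of the conversion step (heavily simplified, and not Hazem’s exact conversion or his slack-based merging algorithm): each actor of a synchronous data-flow graph fires a fixed number of times per graph iteration, so given a required iteration period from the throughput constraint, each actor can be turned into a periodic task. The repetition vector, WCETs, and period derivation below are illustrative assumptions.

```python
# Simplified sketch of turning SDF actors into periodic tasks.

def sdf_to_periodic_tasks(repetitions, wcet, iteration_period):
    """repetitions: actor -> firings per graph iteration (repetition vector)
    wcet: actor -> worst-case execution time of one firing
    iteration_period: time allowed for one full graph iteration."""
    tasks = []
    for actor, q in repetitions.items():
        tasks.append({
            "name": actor,
            "period": iteration_period / q,  # q evenly spaced firings per iteration
            "wcet": wcet[actor],
        })
    return tasks

tasks = sdf_to_periodic_tasks(
    repetitions={"src": 1, "filter": 4, "sink": 2},
    wcet={"src": 1.0, "filter": 0.5, "sink": 0.8},
    iteration_period=20.0,
)
print(tasks)
```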
We hope you enjoy the paper and wish Hazem all the best for his upcoming defense.