Paper Accepted at RTNS 2019

The paper “Response Time Analysis of Multiframe Mixed-Criticality Systems” has been accepted at RTNS 2019. This work is the next in our mixed-criticality research line, in collaboration with my former colleagues at CISTER. It continues our work on the multi-frame task model, also considered in our RTCSA paper this year. The multi-frame model describes tasks that have different worst-case execution times for each job, following a known pattern, which can be exploited to reduce the cost of the system. Existing schedulability analyses fail to leverage this characteristic, potentially resulting in pessimism and increased system cost.
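For intuition, here is a minimal sketch of the cumulative demand that a frame-aware analysis can charge for k consecutive jobs of a multi-frame task, compared with the frame-oblivious assumption that every job exhibits the peak WCET (all values are illustrative, not taken from the paper):

```python
# Illustrative multi-frame task: worst-case execution times repeat in a known
# pattern (the numbers are made up, not taken from the paper).
wcets = [8, 2, 2, 4]  # frame pattern C_0..C_3

def cumulative_demand(wcets, k):
    """Worst-case total execution time of any k consecutive jobs,
    maximised over all possible starting frames."""
    n = len(wcets)
    return max(sum(wcets[(s + j) % n] for j in range(k)) for s in range(n))

# A frame-oblivious analysis would instead charge k * max(wcets) for k jobs.
for k in range(1, 5):
    print(k, cumulative_demand(wcets, k), k * max(wcets))
```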

In this paper, we present a schedulability analysis for the multi-frame mixed-criticality model. Our work extends both the analysis techniques for Static Mixed-Criticality (SMC) and Adaptive Mixed-Criticality (AMC) scheduling, on the one hand, and the schedulability analysis for multi-frame task systems on the other. Our proposed worst-case response time (WCRT) analysis for multi-frame mixed-criticality systems is considerably less pessimistic than applying the SMC, AMC-rtb and AMC-max tests in a manner oblivious to the WCET variation patterns. Experimental evaluation with synthetic task sets demonstrates up to 63.8% higher scheduling success ratio compared to the best of the frame-oblivious tests.

Paper Accepted at RTCSA 2019

The paper “Memory Bandwidth Regulation for Multiframe Task Sets” has been accepted at RTCSA 2019. This paper aims to reduce the cost of real-time systems in which the worst-case execution times of tasks vary from job to job according to known patterns. This kind of execution behavior can be captured by the multi-frame task model. However, this model is optimistic and unsafe for multi-cores with shared memory controllers, since it ignores memory contention, and existing approaches to stall analysis based on memory regulation are very pessimistic if applied straightforwardly.

The paper remedies this by adapting existing stall analyses for memory-regulated systems to the multi-frame model. Experimental evaluations with synthetic task sets show up to 85% higher scheduling success ratio for our analysis compared to the frame-agnostic analysis, enabling higher platform utilization without compromising safety. We also explore implementation aspects, such as how to speed up the analysis and how to trade off accuracy against tractability.
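To make the frame-aware idea concrete for memory demand, here is a rough sketch under MemGuard-style regulation, where each core receives a budget of Q accesses per regulation period P. The stall term below is a deliberately coarse placeholder of my own, not the stall analysis adapted in the paper; all names and numbers are illustrative:

```python
import math

# Illustrative only: per-frame memory-access demands of a multi-frame task (made-up numbers).
mem_accesses = [120, 30, 30, 60]   # accesses issued by frames 0..3
Q, P, L = 20, 1000, 40             # budget (accesses/period), regulation period, access latency (cycles)

def frame_memory_demand(mem_accesses, k):
    """Worst-case number of accesses issued by any k consecutive jobs."""
    n = len(mem_accesses)
    return max(sum(mem_accesses[(s + j) % n] for j in range(k)) for s in range(n))

def coarse_stall_estimate(m):
    """Crude stand-in for a regulation stall term: charge the regulation slack
    (P - Q*L) once per regulation period needed to serve the m accesses.
    The analyses adapted in the paper are more careful than this."""
    return math.ceil(m / Q) * (P - Q * L)

k = 2
print(coarse_stall_estimate(frame_memory_demand(mem_accesses, k)),   # frame-aware demand
      coarse_stall_estimate(k * max(mem_accesses)))                  # frame-oblivious demand
```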

Book Chapter Published by Elsevier

I am pleased to announce that our chapter “Reducing Design Time and Promoting Evolvability using Domain-specific Languages in an Industrial Context” has been accepted for publication in the Elsevier book “Model Management and Analytics for Large Scale Systems”.

This work is the result of an industrial ESI project addressing the need for new methodologies to reduce development time, simplify customization, and improve evolvability of complex software systems. The chapter explains how these challenges are addressed by an approach to model-based engineering (MBE) based on domain-specific languages (DSLs). However, applying the approach in industry raises five technical research questions, namely how to: RQ1) achieve modularity and reuse in a DSL ecosystem, RQ2) achieve consistency between models and their realizations, RQ3) manage an evolving DSL ecosystem, RQ4) ensure model quality, and RQ5) ensure the quality of generated code. The five research questions are explored in the context of the published state-of-the-art, as well as practically investigated through a case study from the defense domain.

Paper Accepted at WMC 2018

A paper entitled “Decoupling Criticality and Importance in Mixed-Criticality Scheduling” has been accepted at the 6th International Workshop on Mixed Criticality Systems (WMC). The paper addresses the need for more expressive task models for mixed-criticality systems by presenting an extension to the well-known mode-based adaptive mixed-criticality model by Vestal. The proposed model allows a task’s criticality and its importance to be specified independently of each other. A task’s importance is the criterion that determines its presence in different system modes. Meanwhile, the task’s criticality (reflected in its Safety Integrity Level (SIL) and defining the rules for its software development process) prescribes the degree of conservativeness for the task’s estimated WCET during schedulability testing.
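A minimal sketch of the decoupling (field names, SIL labels and numbers are illustrative, not taken from the paper): importance alone decides which tasks remain after a mode change, while criticality only selects how conservative a WCET estimate the schedulability test must use for a task:

```python
from dataclasses import dataclass

@dataclass
class Task:
    name: str
    criticality: str   # e.g. a SIL level; selects how conservative the WCET estimate must be
    importance: int    # decides presence in degraded modes (higher = kept longer)
    wcet: dict         # WCET estimates per assurance level, e.g. {"LO": 2, "HI": 5}

def tasks_in_mode(tasks, mode_threshold):
    """In a degraded mode only tasks whose importance meets the threshold remain,
    regardless of their criticality."""
    return [t for t in tasks if t.importance >= mode_threshold]

tasks = [
    Task("airbag_ctrl", criticality="SIL-4", importance=3, wcet={"LO": 2, "HI": 5}),
    Task("logging",     criticality="SIL-1", importance=3, wcet={"LO": 1, "HI": 1}),
    Task("diagnostics", criticality="SIL-2", importance=1, wcet={"LO": 4, "HI": 6}),
]
print([t.name for t in tasks_in_mode(tasks, mode_threshold=2)])
```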

We indicate how such a task model can help resolve some of the perceived weaknesses of the Vestal model, in terms of how it is interpreted, and demonstrate how the existing scheduling tests for the classic variants of Vestal’s model can be mapped to the new task model essentially without changes.

Paper Accepted at RTCSA 2018

We celebrate the acceptance of our paper “Mixed-criticality Scheduling with Dynamic Memory Bandwidth Regulation” at RTCSA 2018. This paper is the next step in my research collaboration with CISTER on mixed-criticality systems.

The paper aims to safely reduce the cost of mixed-criticality multi-core systems by addressing inefficient usage of memory bandwidth. This is achieved by combining per-core memory access regulation with the well-established Vestal model, improving on the state-of-the-art in three respects: 1) We allow the memory access budgets of the cores to be dynamically adjusted when the system undergoes a mode change, reflecting the different needs in each mode, for better schedulability. 2) We devise memory-regulation-aware and stall-aware schedulability analysis for such systems, based on AMC-max. By comparison, the state-of-the-art offered no option of dynamically adjusting core budgets, and only offered regulation-aware schedulability analysis based on AMC-rtb, which is inherently more pessimistic. 3) We consider different task assignment and bandwidth allocation heuristics, to assess the improvement from the dynamic memory budgets and the new analysis. Our results show improvements in schedulability ratio of up to 9.1% over the state-of-the-art.
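The paper’s analysis is stall-aware and based on AMC-max; as background, here is a minimal, regulation-oblivious sketch of the simpler AMC-rtb recurrence that such analyses extend (task parameters are illustrative):

```python
import math

def fixed_point(const, interferers, start):
    """Solve R = const + sum_j ceil(R / T_j) * C_j by fixed-point iteration."""
    R = start
    while True:
        R_new = const + sum(math.ceil(R / T) * C for (T, C) in interferers)
        if R_new == R:
            return R
        R = R_new

# Illustrative higher-priority tasks of the task under analysis (made-up numbers):
hp_HI = [(10, 2, 4)]        # (T, C_LO, C_HI) of HI-criticality interferers
hp_LO = [(20, 3)]           # (T, C_LO)       of LO-criticality interferers
C_LO_i, C_HI_i = 5, 10      # LO and HI WCET estimates of the task under analysis

# LO-mode response time: all higher-priority tasks interfere with their LO WCETs.
R_LO = fixed_point(C_LO_i, [(T, C) for (T, C, _) in hp_HI] + hp_LO, start=C_LO_i)

# AMC-rtb-style bound after a mode change: HI interferers use HI WCETs and grow with R,
# while LO interference is capped at what fits inside R_LO (a constant term).
lo_cap = sum(math.ceil(R_LO / T) * C for (T, C) in hp_LO)
R_HI = fixed_point(C_HI_i + lo_cap, [(T, C_HI) for (T, _, C_HI) in hp_HI], start=R_LO)
print(R_LO, R_HI)   # 10 25 for these numbers
```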

Paper Accepted at ECRTS 2018

We are pleased to announce that our paper “Worst-case Stall Analysis for Multicore Architectures with Two Memory Controllers” has been accepted at ECRTS 2018. This paper represents another successful collaboration with my former colleagues from CISTER.

The paper addresses the problem that increasing bandwidth requirements have resulted in platform architectures with multiple memory controllers, for which existing analyses to compute worst-case memory stall time are not safe. This work presents a new worst-case memory stall analysis for a memory-regulated multi-core architecture with two memory controllers. This stall analysis can be integrated into the schedulability analysis of systems under fixed-priority partitioned scheduling. Five heuristics for assigning tasks and memory budgets to cores in a stall-cognisant manner are also proposed. We experimentally quantify the cost in terms of extra stall for letting all cores benefit from the memory space offered by both controllers, and also evaluate the five heuristics for different system characteristics.

Paper Accepted at MOMA3N

A paper entitled “Pain-mitigation Techniques for Model-based Engineering using Domain-specific Languages” has been accepted at the Special Session on Model Management And Analytics (MOMA3N), co-located with MODELSWARD 2018. This paper is my first publication related to my work at TNO-ESI, which focuses on model-based engineering (MBE), virtual prototyping, and domain-specific languages (DSLs).

This paper is an experience report from an investigation into how to mitigate the pains associated with a transition to a model-based design flow using DSLs. The contributions of the paper are: 1) a list of 14 pains related to MBE as a technology, representative of our industrial partners designing high-tech systems in different domains, 2) a positioning of a selected subset of six pains with respect to the state-of-the-practice, 3) practical experiences and pain-mitigation techniques from applying a model-based design process using DSLs to an industrial case study based on a Threat Ranking component of a Combat Management System, and 4) a list of three open issues that require further research.

Paper Accepted at DATE 2018

Another paper written with my former colleagues at CISTER has been accepted. The paper is entitled “Mixed-criticality Scheduling with Memory Bandwidth Regulation” and will appear at DATE 2018. The paper considers the problem that existing schedulability analyses for mixed-criticality multi-core systems do not consider task interference in shared platform resources, such as memories, potentially making them optimistic and unsafe. We address this issue by formulating a schedulability analysis for mixed-criticality fixed-priority-scheduled multi-core systems using per-core memory access regulation. We also propose multiple heuristics for memory bandwidth allocation and task-to-core assignment. The analysis and heuristics are implemented in a tool and evaluated through extensive experiments.

Article Accepted in IEEE Transactions on Computers

Anna Minaeva had an article entitled “Time-Triggered Co-Scheduling of Computation and Communication with Jitter Requirements” accepted in IEEE Transactions on Computers. The article considers the problem of efficiently co-scheduling task execution and communication in multi-core automotive platforms. Most existing works deal with zero-jitter scheduling, which results in lower resource utilization but has lower memory requirements. In contrast, this article focuses on jitter-constrained scheduling, which puts constraints on the tasks’ jitter, increasing schedulability over zero-jitter scheduling.

The contributions of this article are: 1) Integer Linear Programming and Satisfiability Modulo Theories models that exploit problem-specific information to reduce the complexity of the formulations when scheduling small applications. 2) A heuristic approach, employing three levels of scheduling, that scales to real-world use-cases with 10,000 tasks and messages. 3) An experimental evaluation of the proposed approaches on a case study and on synthetic data sets, showing the efficiency of both zero-jitter and jitter-constrained scheduling. It shows that up to 28% higher resource utilization can be achieved with relaxed jitter requirements, at the cost of up to 10 times longer computation time.
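To illustrate the distinction (under one common definition of start-time jitter; the article’s exact formulation may differ), zero-jitter scheduling forces every job of a task to start at the same offset within its period, whereas jitter-constrained scheduling only bounds the spread of those offsets:

```python
def start_jitter(start_times, period):
    """Spread of a task's start offsets within its period across consecutive jobs."""
    offsets = [s - k * period for k, s in enumerate(start_times)]
    return max(offsets) - min(offsets)

starts = [0, 12, 21, 33]                  # illustrative start times of 4 consecutive jobs
print(start_jitter(starts, period=10))    # 3; zero-jitter (strictly periodic) scheduling requires 0
```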

Paper Accepted at SCOPES 2017

Hazem had a paper entitled “Combining Dataflow Applications and Real-time Task Sets on Multi-core Platforms” accepted at the 2017 Workshop on Software and Compilers for Embedded Systems (SCOPES). This paper is a short overview of his PhD dissertation, which will be defended in Porto on May 23, and explains an approach to map and schedule applications described both as traditional real-time task sets and as synchronous data-flow graphs on a multi-/many-core system. Hazem’s approach is to convert the data-flow graph into a periodic real-time task set to unify the models before mapping, which enables him to leverage existing real-time analysis techniques and schedulers. However, converting a complex data-flow graph into a periodic task set may result in a large number of tasks, leading to long analysis times. To mitigate this problem, he proposes a slack-based merging algorithm that reduces the number of tasks by carefully sacrificing parallelism in the data-flow graph, subject to its latency and throughput constraints. Lastly, the resulting unified real-time task set is mapped to a multi-/many-core platform interconnected by a TDM NoC using a sensitive-path-first algorithm, which first allocates the tasks derived from the original data-flow graph that have the highest impact on its execution and schedulability. It is also able to exploit parallelism in the graph during mapping.
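As a rough sketch of the conversion idea only (Hazem’s algorithm and the merging step are more involved), a consistent SDF graph has a repetition vector q giving how often each actor fires per graph iteration; if the throughput constraint fixes an assumed iteration period H, each actor can be represented by a periodic task with period H / q[actor]:

```python
from math import gcd
from fractions import Fraction

# Illustrative SDF graph as edges: (producer, consumer, tokens_produced, tokens_consumed)
edges = [("src", "filter", 2, 3), ("filter", "sink", 1, 2)]

def repetition_vector(edges):
    """Solve the balance equations q[p] * prod == q[c] * cons,
    assuming a consistent, connected graph."""
    rates = {edges[0][0]: Fraction(1)}
    changed = True
    while changed:
        changed = False
        for p, c, prod, cons in edges:
            if p in rates and c not in rates:
                rates[c] = rates[p] * prod / cons
                changed = True
            elif c in rates and p not in rates:
                rates[p] = rates[c] * cons / prod
                changed = True
    # Scale to the smallest integer vector.
    lcm_den = 1
    for r in rates.values():
        lcm_den = lcm_den * r.denominator // gcd(lcm_den, r.denominator)
    return {a: int(r * lcm_den) for a, r in rates.items()}

q = repetition_vector(edges)          # e.g. {'src': 3, 'filter': 2, 'sink': 1}
H = 600                               # assumed period of one graph iteration (throughput constraint)
periods = {a: H // q_a for a, q_a in q.items()}
print(q, periods)                     # each actor becomes a periodic task with period H / q[actor]
```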

We hope you enjoy the paper and wish Hazem all the best for his upcoming defense.