Anna Minaeva Successfully Defends Dissertation

Today, Anna Minaeva successfully defended her PhD dissertation entitled “Scalable Scheduling Algorithms for Embedded Systems with Real-Time Requirements” and earned the right to call herself a doctor. The reviewers were pleased with the dissertation and she confidently answered their questions.

The dissertation considers applications with real-time requirements that share resources, such as memories, cores, and networks, in distributed systems. Scheduling this type of application subject to resource and precedence constraints, among others, while maximizing system performance is a challenging problem. Existing approaches either propose exact solutions that cannot solve industrial-sized instances, or propose heuristic algorithms without validating their efficiency against optimal solutions.

The dissertation addresses this problem through a three-stage approach, corresponding to three problems with gradually increasing model complexity and accuracy. The four main contributions are: 1) A comparison of three formalisms for solving the problems optimally, Integer Linear Programming (ILP), Satisfiability Modulo Theories (SMT), and Constraint Programming (CP), along with computation time improvements. To increase the scalability of the ILP approach, an optimal approach that wraps the ILP in a branch-and-price framework is presented. 2) For each problem, a scalable and efficient heuristic algorithm is presented that decomposes the problem to decrease its computation time. 3) The optimal and heuristic strategies are quantitatively and qualitatively compared in terms of efficiency. 4) The practical applicability of the proposed heuristic algorithms and optimal approaches is demonstrated on case studies of real systems in both the automotive and consumer electronics domains.
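To give a flavor of what an ILP formulation of such a scheduling problem can look like, here is a deliberately tiny, hypothetical sketch (using the PuLP library as an arbitrary illustrative choice, not the formulation or tooling from the dissertation): a few jobs with a precedence constraint compete for one shared resource, and the makespan is minimized.

```python
# Toy illustration only: jobs with durations, one precedence constraint,
# and a single shared resource; an ILP picks start times minimizing the
# makespan. All names and numbers are made up for this sketch.
from pulp import LpProblem, LpMinimize, LpVariable

durations = {"a": 2, "b": 3, "c": 1}       # processing times
precedences = [("a", "c")]                 # job a must finish before job c starts
H = sum(durations.values())                # trivial horizon, used as big-M

prob = LpProblem("toy_schedule", LpMinimize)
start = {j: LpVariable(f"start_{j}", lowBound=0) for j in durations}
makespan = LpVariable("makespan", lowBound=0)
prob += 1 * makespan                       # objective: minimize the makespan

for j, d in durations.items():
    prob += start[j] + d <= makespan       # the makespan covers every job

for a, b in precedences:
    prob += start[a] + durations[a] <= start[b]

# Jobs sharing the single resource must not overlap (big-M disjunction).
jobs = list(durations)
for i in range(len(jobs)):
    for k in range(i + 1, len(jobs)):
        a, b = jobs[i], jobs[k]
        order = LpVariable(f"order_{a}_{b}", cat="Binary")
        prob += start[a] + durations[a] <= start[b] + H * (1 - order)
        prob += start[b] + durations[b] <= start[a] + H * order

prob.solve()
print({j: start[j].value() for j in jobs}, makespan.value())
```

The formulations in the dissertation are, of course, much richer; the solver comparisons, computation time improvements, and the branch-and-price wrapper are what make them scale to industrial-sized instances.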

I wish Anna the best of luck in her future career and hope I will have the opportunity to work with her again.

Paper Accepted at ModComp 2019

Our paper “Towards Continuous Evolution through Automatic Detection and Correction of Service Incompatibilities” has been accepted at the 6th International Workshop on Interplay of Model-driven and Component-Based Software Engineering (ModComp). ModComp takes place in September and is co-located with the MODELS conference in Munich.

The paper describes applied research from an industrial ESI project with the goal of enabling continuous evolution of software in service-oriented architectures through automatic detection and correction of service incompatibilities. Towards this, the paper has three main contributions: 1) the state-of-the-art in the areas of specification of service interfaces and detection and correction of incompatible service interactions is surveyed, 2) directions for a methodology to detect and correct incompatible interactions, currently under development, are discussed, and 3) the methodology is discussed in the context of a simplified industrial case study from the defense domain.

Journal Article Presented at ECRTS 2019

Today, Ali presented our Real-Time Systems article “Uneven Memory Regulation for Scheduling IMA Applications on Multi-core Platforms” in the Journal-to-Conference (J2C) session at ECRTS.

This article addresses the problem of resource sharing in mixed-criticality systems through temporal isolation by extending the state-of-the-art Single-Core Equivalence (SCE) framework in three ways: 1) we extend the theoretical toolkit for the SCE framework by considering EDF and server-based scheduling, instead of partitioned fixed-priority scheduling, 2) we support uneven memory access budgets on a per-server basis, rather than just on a per-core basis, and 3) we formulate an Integer Linear Programming (ILP) model that is guaranteed to find a feasible mapping of a given set of servers to processors, including their execution time and memory access budgets, if such a mapping exists. Our experiments with synthetic task sets confirm that considerable improvement in schedulability can result from the use of per-server memory access budgets under the SCE framework.
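As a rough impression of what such a mapping ILP might look like (a minimal sketch with hypothetical names, numbers, and constraints, again using PuLP purely for illustration rather than the formulation from the article), consider assigning servers to cores while choosing an uneven per-server memory budget that fits within the platform's regulated bandwidth:

```python
# Purely illustrative: map servers to cores with binary variables and pick a
# per-server memory-access budget, subject to per-core CPU capacity and the
# total regulated bandwidth. Not the ILP from the article.
from pulp import LpProblem, LpMinimize, LpVariable, lpSum, LpStatus

servers = {"S1": 0.3, "S2": 0.5, "S3": 0.4}      # hypothetical CPU utilization per server
demand  = {"S1": 0.2, "S2": 0.1, "S3": 0.3}      # hypothetical minimum memory budget per server
cores = ["C0", "C1"]
total_bandwidth = 0.8                             # normalized regulated memory bandwidth

prob = LpProblem("server_mapping", LpMinimize)
x = {(s, c): LpVariable(f"x_{s}_{c}", cat="Binary") for s in servers for c in cores}
mem = {s: LpVariable(f"mem_{s}", lowBound=demand[s]) for s in servers}

prob += lpSum(mem.values())                       # e.g. claim as little bandwidth as possible

for s in servers:                                 # each server runs on exactly one core
    prob += lpSum(x[s, c] for c in cores) == 1
for c in cores:                                   # per-core CPU capacity
    prob += lpSum(servers[s] * x[s, c] for s in servers) <= 1.0
prob += lpSum(mem.values()) <= total_bandwidth    # uneven per-server budgets must fit

prob.solve()
print(LpStatus[prob.status])
print({s: next(c for c in cores if x[s, c].value() == 1) for s in servers})
print({s: mem[s].value() for s in servers})
```

The article's actual ILP additionally determines execution time budgets and guarantees that a feasible mapping is found whenever one exists.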

Overall, I greatly appreciate that key conferences in the real-time community are starting to allow journal articles to be presented. This increases the exposure of these works, which are often longer and better edited. It is also helpful for researchers at institutes where conference publications are not considered a relevant KPI. You can argue about the validity of this reasoning in areas of computer science where conferences are highly competitive with 20-30% acceptance rates, but it is the reality for some researchers. An interesting aspect of the MODELS conference is that it collaborates with the SOSYM journal, such that some articles accepted in the journal get a full slot at the conference. This is a nice way to highlight good articles and to appreciate the work done by both authors and reviewers.

Paper Accepted at EMSOFT 2019

Our collaboration with CISTER has been extremely fruitful this year, as yet another paper in our research line on mixed-criticality scheduling has been accepted. This latest paper is entitled “Techniques and Analysis for Mixed-criticality Scheduling with Mode-dependent Server Execution Budgets” and has been accepted at the International Conference on Embedded Software (EMSOFT).

The goal of this work, like many others in this research line, is to reduce the cost of mixed-criticality systems. This time, we achieve this by addressing the limitation that a server has only a single execution budget in all modes, even though its computational requirements may vary significantly between them. More specifically, the three main contributions of the paper are: 1) a scheduling arrangement for uni-processor systems employing fixed-priority scheduling within periodic servers, whose budgets are dynamically adjusted at run-time in the event of a mode change, 2) a new schedulability analysis for such systems, and 3) heuristic algorithms for assigning budgets to servers in different modes and ordering the execution of the servers. Experiments with synthetic task sets demonstrate considerable improvements (up to 52.8%).
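A minimal sketch of the mode-dependent budget idea (field names and numbers are hypothetical, not taken from the paper): each periodic server carries one execution budget per mode, and the active budget is swapped when the system changes mode.

```python
from dataclasses import dataclass

@dataclass
class PeriodicServer:
    name: str
    period: int                 # replenishment period
    budgets: dict[str, int]     # mode -> execution budget in that mode

    def budget(self, mode: str) -> int:
        return self.budgets[mode]

# Hypothetical servers: the critical server gets a larger budget in HI mode,
# while the non-critical server is throttled.
servers = [
    PeriodicServer("critical", period=10, budgets={"LO": 3, "HI": 6}),
    PeriodicServer("non_critical", period=10, budgets={"LO": 5, "HI": 1}),
]

def budgets_after_mode_change(new_mode: str) -> dict[str, int]:
    # On a mode change, the scheduler would reprogram each server's budget.
    return {s.name: s.budget(new_mode) for s in servers}

print(budgets_after_mode_change("LO"))   # {'critical': 3, 'non_critical': 5}
print(budgets_after_mode_change("HI"))   # {'critical': 6, 'non_critical': 1}
```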

Paper Accepted at RTNS 2019

The paper “Response Time Analysis of Multiframe Mixed-Criticality Systems” has been accepted at RTNS 2019. This work is the next in our mixed-criticality research line, in collaboration with my former colleagues at CISTER. It continues our work on the multi-frame task model, also considered in our RTCSA paper this year. The multi-frame model describes tasks that have different worst-case execution times for each job, following a known pattern, which can be exploited to reduce the cost of the system. Existing schedulability analyses fail to leverage this characteristic, potentially resulting in pessimism and increased system cost.

In this paper, we present a schedulability analysis for the multi-frame mixed-criticality model. Our work extends both the analysis techniques for Static Mixed-Criticality (SMC) and Adaptive Mixed-Criticality (AMC) scheduling, on the one hand, and the schedulability analysis for multi-frame task systems on the other. Our proposed worst-case response time (WCRT) analysis for multi-frame mixed-criticality systems is considerably less pessimistic than applying the SMC, AMC-rtb, and AMC-max tests in a manner that is oblivious to the WCET variation patterns. Experimental evaluation with synthetic task sets demonstrates up to 63.8% higher scheduling success ratio compared to the best of the frame-oblivious tests.
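To illustrate where the gain comes from (a toy sketch with made-up numbers, not the WCRT analysis from the paper): a frame-aware bound on the demand of k consecutive jobs can be much tighter than assuming the single largest WCET for every job.

```python
def max_demand(pattern, k):
    """Largest total execution time of any k consecutive jobs of a multi-frame task."""
    n = len(pattern)
    return max(sum(pattern[(i + j) % n] for j in range(k)) for i in range(n))

pattern = [8, 2, 2, 2]   # hypothetical per-job WCETs, repeating in this order

for k in range(1, 5):
    frame_oblivious = k * max(pattern)     # classic bound: every job takes the maximum WCET
    frame_aware = max_demand(pattern, k)   # exploits the known variation pattern
    print(f"{k} jobs: oblivious={frame_oblivious}, frame-aware={frame_aware}")
```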

Paper Accepted at RTCSA 2019

The paper “Memory Bandwidth Regulation for Multiframe Task Sets” has been accepted at RTCSA 2019. This paper aims to reduce the cost of real-time systems where the worst-case execution times of tasks vary from job to job, according to known patterns. This kind of execution behavior can be captured by the multi-frame task model. However, this model is optimistic and unsafe for multi-cores with shared memory controllers, since it ignores memory contention, and existing approaches to stall analysis based on memory regulation are very pessimistic if applied straightforwardly.

The paper remedies this by adapting existing stall analyses for memory-regulated systems to the multi-frame model. Experimental evaluations with synthetic task sets show up to 85% higher scheduling success ratio for our analysis compared to the frame-agnostic analysis, enabling higher platform utilization without compromising safety. We also explore implementation aspects, such as how to speed up the analysis and how to trade off accuracy against tractability.

Impressions from the ESI Symposium

The ESI Symposium took place on April 9 in the Auditorium of Eindhoven University of Technology. The theme this year was “Intelligence, the next challenge in system complexity?” and featured keynotes from Edward Lee (Professor, UC Berkeley) and Henk van Houten (CTO and Head of Research for Royal Philips). The event attracted some 300 participants, with a good balance between academia and industry. For those of you who could not attend, feel free to read about the program on the ESI website and look at the video below for an impression of the event.

Book Chapter Published by Elsevier

I am pleased to announce that our chapter “Reducing Design Time and Promoting Evolvability using Domain-specific Languages in an Industrial Context” has been accepted for publication in the Elsevier book “Model Management and Analytics for Large Scale Systems”.

This work is the result of an industrial ESI project addressing the need for new methodologies to reduce development time, simplify customization, and improve evolvability of complex software systems. The chapter explains how these challenges are addressed by an approach to model-based engineering (MBE) based on domain-specific languages (DSLs). However, applying the approach in industry has resulted in five technical research questions, namely how to: RQ1) achieve modularity and reuse in a DSL ecosystem, RQ2) achieve consistency between models and realizations, RQ3) manage an evolving DSL ecosystem, RQ4) ensure model quality, and RQ5) ensure the quality of generated code. The five research questions are explored in the context of the published state-of-the-art, as well as practically investigated through a case study from the defense domain.

Paper Accepted at WMC 2018

A paper entitled “Decoupling Criticality and Importance in Mixed-Criticality Scheduling” has been accepted at the 6th International Workshop on Mixed Criticality Systems (WMC). The paper addresses the need for more expressive task models for mixed-criticality systems by presenting an extension to the well-known mode-based adaptive mixed-criticality model by Vestal. The proposed model allows a task’s criticality and its importance to be specified independently of each other. A task’s importance is the criterion that determines its presence in different system modes. Meanwhile, the task’s criticality (reflected in its Safety Integrity Level (SIL) and defining the rules for its software development process) prescribes the degree of conservativeness of the task’s estimated WCET during schedulability testing.

We indicate how such a task model can help resolve some of the perceived weaknesses of the Vestal model, in terms of how it is interpreted, and demonstrate how the existing scheduling tests for the classic variants of Vestal’s model can be mapped to the new task model essentially without changes.
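A minimal sketch of how the decoupling could be represented (field names and values are hypothetical, not the paper's notation): importance decides whether a task remains present after a mode change, while criticality determines how conservative a WCET estimate the schedulability test uses for it.

```python
from dataclasses import dataclass

@dataclass
class Task:
    name: str
    criticality: str            # e.g. tied to the SIL; selects the WCET conservativeness
    importance: int             # decides presence in degraded system modes
    wcet: dict[str, int]        # assurance level -> WCET estimate

def tasks_in_mode(tasks, importance_threshold: int):
    # Only tasks that are important enough remain after the mode change.
    return [t for t in tasks if t.importance >= importance_threshold]

tasks = [
    Task("airbag", criticality="HI", importance=3, wcet={"LO": 2, "HI": 5}),
    Task("logging", criticality="LO", importance=1, wcet={"LO": 1, "HI": 1}),
]
print([t.name for t in tasks_in_mode(tasks, importance_threshold=2)])  # ['airbag']
```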

Paper Accepted at RTCSA 2018

We celebrate the acceptance of our paper “Mixed-criticality Scheduling with Dynamic Memory Bandwidth Regulation” at RTCSA. This paper is the next step in my research collaboration with CISTER on mixed-criticality systems.

The paper aims to safely reduce the cost of mixed-criticality multi-core systems by addressing inefficient usage of memory bandwidth. This is achieved by combining per-core memory access regulation with the well-established Vestal model, which improves on the state-of-the-art in two respects: 1) We allow the memory access budgets of the cores to be dynamically adjusted when the system undergoes a mode change, reflecting the different needs in each mode, for better schedulability. 2) We devise memory-regulation-aware and stall-aware schedulability analysis for such systems, based on AMC-max. By comparison, the state-of-the-art offered no option of dynamically adjusting core budgets and only offered regulation-aware schedulability analysis based on AMC-rtb, which is inherently more pessimistic. Finally, 3) we consider different task assignment and bandwidth allocation heuristics to assess the improvement from the dynamic memory budgets and the new analysis. Our results show improvements in schedulability ratio of up to 9.1% over the state-of-the-art.
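A minimal sketch of the dynamic regulation idea (core names and numbers are hypothetical, not from the paper): instead of one static memory budget per core, each mode carries its own budget assignment, which the regulation mechanism would apply when a mode change occurs.

```python
# Hypothetical per-core memory access budgets, expressed as fractions of the
# regulated memory bandwidth, for the low- (LO) and high-criticality (HI) modes.
MEMORY_BUDGETS = {
    "LO": {"core0": 0.5, "core1": 0.3, "core2": 0.2},
    "HI": {"core0": 0.7, "core1": 0.2, "core2": 0.1},  # the critical core gets more in HI mode
}

def budgets_for_mode(mode: str) -> dict[str, float]:
    budgets = MEMORY_BUDGETS[mode]
    # Sanity check: the budgets in any mode must not exceed the regulated bandwidth.
    assert sum(budgets.values()) <= 1.0
    return budgets

print(budgets_for_mode("LO"))
print(budgets_for_mode("HI"))
```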