Advancing Design Space Exploration: Literature Review Explores Network Delay Models for Distributed Cyber-Physical Systems

Another literature review has been completed in the context of the DSE2.0 research project. William Ford has completed his review, entitled “Network Delay Model Creation and Validation for Design Space Exploration of Distributed Cyber-Physical Systems”.

Design-space exploration (DSE) in the early design phases of a distributed cyber-physical system (dCPS) requires models. In the DSE2.0 project, we are particularly interested in models that capture the timing behavior of hardware and software, allowing the temporal performance of the system to be evaluated for different design points. One important part of the system to model is the network that connects the subsystems of the CPS. This study reviews previous work in the fields of analytical network modeling, network simulation, and network model validation. In addition, a recommended plan to create and validate such a network model for the DSE2.0 project is presented, based on this previous work. Two main directions are recommended, at different levels of abstraction. At the lower level of abstraction, we will build a model using the existing INET framework that models each network element explicitly. At the higher level of abstraction, we will use a latency-rate server to capture the behavior of the network using only two parameters: latency and rate.
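As background (standard network-calculus notation, not a formula taken from the review itself), a latency-rate server with latency Θ and rate R guarantees the service curve below, which lower-bounds the amount of service delivered in any busy interval of length t:

```latex
% Latency-rate service curve (standard definition from network calculus,
% shown here for illustration): a server with latency \Theta and rate R
% guarantees at least \beta(t) units of service in a busy interval of length t.
\begin{equation}
  \beta_{\Theta,R}(t) \;=\; R \cdot \max\bigl(0,\; t - \Theta\bigr)
\end{equation}
```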

Having delivered his literature review, William has started his master project to pursue this research along these directions. The team looks forward to working with him.

Literature Review on Scalable System-level Simulation

Herman Kelder has joined the DSE2.0 research project as a master student. DSE2.0 is a project that aims to propose a methodology for design-space exploration of complex distributed cyber-physical systems, like the lithography machines manufactured by ASML. One of the great challenges is to improve scalability to handle the complexity of such systems, a challenge that needs to be addressed both in terms of how the system (performance) is modelled and evaluated (simulated) for a particular design point, and in terms of how the design points to evaluate are chosen. Herman’s thesis will focus on how to improve the scalability of system-level simulation to allow more design points to be evaluated faster.

One of Herman’s first assignments was to put together a literature review on this topic. The literature review, entitled “Exploring Scalability in System-Level Simulation Environments for Distributed Cyber-Physical Systems”, investigates state-of-the-art scalability techniques for system-level simulation environments, namely Simulation Campaigns, Parallel Discrete Event Simulation (PDES), and Hardware Accelerators. The goal is to address the challenge of scalable Design Space Exploration (DSE) for dCPS, discussing the characteristics, applications, advantages, and limitations of these approaches. The conclusion recommends starting with simulation campaigns, as they provide increased throughput, adapt to the number of tasks and resources, and are already implemented by many state-of-the-art simulators. Nevertheless, further research is needed to define, implement, and test a sophisticated general workflow addressing the diverse sub-challenges of scaling system-level simulation environments for the exploration of industrial-size distributed cyber-physical systems.
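To make the idea of a simulation campaign concrete, the Python sketch below fans independent design-point simulations out over a pool of worker processes. It is an illustration only; the simulator call, the two design parameters, and the latency metric are hypothetical placeholders rather than anything from Herman’s review.

```python
# Minimal sketch of a simulation campaign (illustration only, not from the review):
# each design point is an independent simulation, so a campaign can simply run
# them in parallel across worker processes and collect the resulting metrics.
from concurrent.futures import ProcessPoolExecutor
from itertools import product

def simulate(design_point):
    """Placeholder for invoking a system-level simulator on one design point."""
    cores, network_bw = design_point
    # A real campaign would launch a simulator run here and parse its output;
    # we just compute a dummy 'latency' metric instead.
    latency = 1000.0 / (cores * network_bw)
    return design_point, latency

if __name__ == "__main__":
    # The design space: all combinations of two hypothetical parameters.
    design_space = list(product([2, 4, 8, 16], [1, 10, 40]))

    with ProcessPoolExecutor() as pool:  # adapts to the available CPU resources
        results = dict(pool.map(simulate, design_space))

    best = min(results, key=results.get)
    print(f"Evaluated {len(results)} design points; best: {best} -> {results[best]:.2f}")
```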

We look forward to working with Herman and seeing how his research develops along these directions.

Paper Accepted at PNSE 2022

I am happy to announce that the paper “Partial Specifications of Component-based Systems using Petri Nets” has been accepted for publication at the International Workshop on Petri Nets and Software Engineering (PNSE) 2022. This paper was first-authored by Bart-Jan Hilbrands, a (former) student in the Master of Software Engineering program at the University of Amsterdam, who did his master thesis project under the supervision of myself and my ESI colleague Debjyoti Bera. The master thesis project was conducted in the context of the DYNAMICS project, a bilateral research project between ESI and Thales that looked into specification, verification, and adaptation of software interfaces. This is a nice example of how a strong master thesis can be turned into a publication.

The paper addresses the problem of verifying correctness properties, such as the absence of deadlocks, livelocks, and buffer overflows, in software components with multiple inter-dependent interfaces. The paper proposes an approach based on a partial specification of the dependencies between interfaces, expressed as a set of functional constraints. It presents and formalizes three commonly occurring functional constraints and provides algorithms for encoding them into a Petri net representation of the interfaces, enabling interface verification through reachability analysis. The approach has been implemented and demonstrated using ComMA.
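As a rough illustration of the verification step (not the paper’s actual encoding or its ComMA implementation), the Python sketch below explores the reachable markings of a tiny, hypothetical Petri net and reports markings in which no transition is enabled, i.e. potential deadlocks:

```python
# Illustrative sketch only: explicit-state reachability analysis of a small
# Petri net, reporting dead markings (no enabled transitions) as potential
# deadlocks. This is not the encoding or tooling used in the paper.
from collections import deque

# A transition consumes tokens from 'pre' places and produces tokens in 'post' places.
TRANSITIONS = {
    "send_request":   {"pre": {"client_idle": 1}, "post": {"request_sent": 1}},
    "handle_request": {"pre": {"request_sent": 1, "server_idle": 1},
                       "post": {"reply_sent": 1}},
    "receive_reply":  {"pre": {"reply_sent": 1},
                       "post": {"client_idle": 1, "server_idle": 1}},
}
INITIAL_MARKING = {"client_idle": 1, "server_idle": 1, "request_sent": 0, "reply_sent": 0}

def enabled(marking, t):
    return all(marking.get(p, 0) >= n for p, n in TRANSITIONS[t]["pre"].items())

def fire(marking, t):
    m = dict(marking)
    for p, n in TRANSITIONS[t]["pre"].items():
        m[p] -= n
    for p, n in TRANSITIONS[t]["post"].items():
        m[p] = m.get(p, 0) + n
    return m

def reachability(initial):
    """Breadth-first search over markings; returns (reachable markings, dead markings)."""
    seen, dead = set(), []
    queue = deque([initial])
    while queue:
        marking = queue.popleft()
        key = tuple(sorted(marking.items()))
        if key in seen:
            continue
        seen.add(key)
        successors = [fire(marking, t) for t in TRANSITIONS if enabled(marking, t)]
        if not successors:
            dead.append(marking)  # no transition enabled: potential deadlock
        queue.extend(successors)
    return seen, dead

if __name__ == "__main__":
    reachable, dead = reachability(INITIAL_MARKING)
    print(f"{len(reachable)} reachable markings, {len(dead)} dead markings")
```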

Master Thesis on Formal Verification of Software Interfaces Defended

Today, Bart-Jan Hilbrands, a master student from UvA supervised by myself and my ESI colleague Debjyoti Bera, successfully defended his master thesis “Verification of Inter-Dependent Interfaces in Component-Based Architectures”. The thesis considers formal verification of ComMA components with multiple interfaces with inter-dependent behavior, caused by three different types of functional constraints. The four main contributions of the thesis are: 1) a formalization of each type of interface constraint, defining how it should restrict behavior, 2) a set of assumptions, describing properties that help ComMA users avoid creating specifications with termination issues, 3) methods for encoding the behavior of these constraints into existing Petri net representations of interfaces, and 4) methods for validating whether a given set of constraints is encoded correctly into a given Petri net. The theory is supported by a prototype implementation in ComMA.

Bart presented his thesis well and expertly answered questions from the committee. We thank Bart for his excellent work and wish him good luck in his future career. First off, he will continue working with me and Deb to publish his work as a paper.

Open-source ComMA v0.1.0 officially released

Last week, the open sourcing of ComMA (Component Modelling and Analysis) in the context of the Eclipse Foundation saw another milestone. The first version of Eclipse CommaSuite is now online in the form of Release 0.1.0. ComMA is a set of DSLs used to (partially) specify the behavior of components and their interfaces, including time and data constraints. From these specifications, a number of artifacts can be automatically generated, including run-time monitors that validate compliance with the specification, visualizations, timing statistics, documentation, test cases, and adapters. Many of these features will be included in later releases of ComMA, and some of them have yet to emerge from research projects as mature features.

ComMA was originally developed by ESI and Philips, but more recently in collaboration with a growing number of other companies. For example, in the DYNAMICS project, in which ESI works together with Thales, we are currently investigating how adapters can be semi-automatically generated to bridge differences between components implementing different versions of interfaces. This work has previously been mentioned in an article in Bits & Chips, as well as in a paper. Currently, three master students from my Embedded Software and Systems course at UvA are also doing their graduation projects in the context of the evolution of ComMA interfaces, looking into aspects of data dependencies, interface dependencies, and static impact analysis. We look forward to seeing the results of their work this summer.

You can read more about ComMA in this news article TNO published this week.
Update: The news article is now also published in Bits & Chips.

Embedded Software and Systems Course @ UvA Continues to Evolve

The fall semester of the very special academic year 2020/2021 is over. Most of the students following the Master of Software Engineering program at the University of Amsterdam have just completed my course Embedded Software and Systems (ESS). The ESS course changed in three important ways this year.

Firstly, a generic lecture about Petri Nets was changed into a series of two lectures, explaining how Petri Nets can be used to model and analyze software interfaces and components. Part of the material for these lectures was reused from the course Modelling and Analysis of Component-based Systems (MOANA-CBS), developed together with Thales for an industrial audience. These new lectures also prime students nicely for a lecture about the DYNAMICS project, a research collaboration between ESI and Thales. This allows me to show how these models and analyses can be used in practice to address problems related to software evolution by detecting incompatibilities and generating adapters when updating software interfaces. A generic lecture about the data-flow model of computation was removed to create room for this new material, but I am happy to teach fewer modelling formalisms and have more time to go in depth and show how they can be used to solve industrial problems. A nice result of this change to the course is that three master students have accepted thesis projects in the area of modelling and analysis of software components and interfaces, in collaboration with ESI, under the supervision of myself and my colleague Debjyoti Bera.

Secondly, the course project was redeveloped this year. Previously, students used Mathworks Stateflow to program Lego Mindstorms EV3 rovers to follow a line, avoid obstacles, and count objects. However, this project felt a bit too much like a toy, and there were technical problems with both rovers and tools that were hard to overcome and limited the educational experience. In particular, it was not easy to see or influence how code was generated for the Lego Mindstorms robots, which felt like a missed opportunity when teaching model-based engineering.

In spring, two bachelor students did their theses evaluating the suitability of the TurtleBot3 Burger robot for the course, both in reality and in simulation using Gazebo. In addition, Stateflow was exchanged for Yakindu Statechart Tools, which is easier to use and gives us the flexibility we need in code generation. The new application developed in the project is to use Yakindu to program the TurtleBot to autonomously drive through a maze and map it.
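To give an impression of the kind of state-machine logic involved in such a project (a hypothetical illustration in Python, not the actual Yakindu statechart or TurtleBot code used in the course), a simple right-wall follower for maze driving could look like this:

```python
# Hypothetical illustration of maze-driving logic as a simple state machine;
# in the course project this kind of behavior is modelled as a Yakindu
# statechart driving a TurtleBot3 instead.
from enum import Enum, auto

class State(Enum):
    FOLLOW_WALL = auto()
    TURN_LEFT = auto()
    TURN_RIGHT = auto()

def step(state, front_clear, right_clear):
    """One control step of a right-wall follower: returns (next_state, command)."""
    if state is State.FOLLOW_WALL:
        if right_clear:
            return State.TURN_RIGHT, "turn right towards the opening"
        if not front_clear:
            return State.TURN_LEFT, "turn left away from the wall ahead"
        return State.FOLLOW_WALL, "drive forward along the wall"
    # After a turn, resume following the wall on the next step.
    return State.FOLLOW_WALL, "drive forward"

if __name__ == "__main__":
    # Simulated sensor readings (front_clear, right_clear) for a few control steps.
    readings = [(True, False), (False, False), (True, False), (True, True)]
    state = State.FOLLOW_WALL
    for front, right in readings:
        state, command = step(state, front, right)
        print(f"{command} -> {state.name}")
```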

Lastly, the COVID-19 pandemic required the entire course to be taught online. As a result, I used a blended learning approach and prerecorded the lectures so that the students could watch them whenever they wanted. Online interactive sessions were added to the course, where the students could ask questions about the lectures and participate in quizzes and group discussions. Online teaching meant that the students did not have access to the four physical TurtleBots that we had purchased. Luckily, the newly developed course project could be done in simulation in Gazebo. Below is a demo from one of the groups that very successfully solved the assignment.

The ESS course is continuously evolving and maturing and next year will be no different. Most importantly, we hope that the pandemic will be over by then and that we can put our three physical TurtleBots to good use.

Jasper Kuijsten Graduates from the Memory Team

Another master student has graduated from the Memory Team. Jasper Kuijsten joined the team in March 2012 and has worked on predictable and composable reconfiguration of the memory controller front-end. His work has been very diverse, ranging from theoretical comparisons between different approaches to composability, in terms of efficiency and reconfiguration effort, to implementations of his concepts and ideas in both SystemC and VHDL. The Memory Team thanks Jasper for his hard work and good team spirit during the project and wishes him the best of luck in his future career.

Paper Accepted at ESTIMedia 2012

Andrew Nelson just had a paper “Power Versus Quality Trade-offs for Adaptive Real-Time Applications” accepted at ESTIMedia 2012. The paper is based on the work of Sjoerd te Pas, one of my graduated master students, and discusses how power consumption can be traded for application quality for adaptive real-time applications using existing DVFS techniques. The techniques are demonstrated for an H.263 application on an FPGA instance of the CompSOC platform. Stay tuned for the camera-ready version.
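As background on why DVFS enables this trade-off (standard CMOS power modelling, not a result from the paper), dynamic power scales roughly quadratically with supply voltage and linearly with frequency, so lowering voltage and frequency together saves power at the cost of longer execution times and, for adaptive applications, potentially lower output quality:

```latex
% Standard first-order model of CMOS dynamic power (textbook background,
% not taken from the paper): \alpha is the activity factor, C the switched
% capacitance, V_{dd} the supply voltage, and f the clock frequency.
\begin{equation}
  P_{\mathrm{dyn}} \;\approx\; \alpha \, C \, V_{dd}^{2} \, f
\end{equation}
```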

Update: The paper is now available online. Click here to read it.