Benny Akesson

Senior Research Fellow @ TNO-ESI | Endowed Professor @ University of Amsterdam

Final MasCot Program Day Highlights Results, Lessons Learned, and Future Collaborations

I had a fantastic time hosting the fourth Mastering Complexity (MasCot) Program Day on October 9 at TU Delft. MasCot, a €3M academic program co-funded by TNO-ESI and NWO, addresses the pressing need for new advanced engineering methodologies through four projects covering essential topics, such as design space exploration during early system design, scheduling, verification, and restructuring of evolving software.

The day started with an update from the four academic projects in the program, focusing on new results from the past year. It was interesting to hear a mix of positive results, e.g. new scheduling methods that outperform previous approaches, and negative results, e.g. a counter-example demonstrating that further attempts at proving a particular theory were not worthwhile.

In the afternoon, TNO-ESI and industry partners from the projects shared their user stories, in which they reflected on the value of the program and the knowledge and proof-of-concept implementations developed in it. The user stories were positive and included examples of planned and ongoing technology transfer from the projects.

There were also breakout sessions where TNO-ESI, academic staff, industry representatives, and PhD students separately discussed what went well during the organization and execution of the program, and what should be done differently in the future. This feedback will be consolidated in a document describing the lessons learned from the MasCot Program, which will be used as a basis for refining the method for academic collaborations at TNO-ESI. The feedback from all groups made clear that everyone appreciated the program and how TNO-ESI brought academia and industry together to solve relevant problems. A main challenge for the future is to better align stakeholders from industry and academia and their different goals, environments, and timelines.

The day program concluded with an interactive session, structured around our PMCs, where participants worked together to identify interesting research challenges and hot research topics for future academic collaborations. What stood out on the challenge side was a clear need to address testing and integration, also in the context of microservices. When looking at hot research topics and technological opportunities … you guessed it … safe, explainable, responsible, … , AI for Systems Engineering!

The day ended with a social program at Stadsbrouwerij De Koperen Kat: a short tour given by the owner and a BBQ buffet with beer tasting. That concluded the fourth and last MasCot Program Day.


Master’s Thesis Explores User Behavior’s Impact on Digital Service Energy Consumption

Just before the end of summer, Nsidibe Onoyom Bassey, a master's student at the Vrije Universiteit Amsterdam, successfully defended her thesis “Impact of Users’ Behavior on Digital Service Energy Consumption“. Congratulations on the defense and on completing your studies, Nsidibe!

This work was supervised by Ana Lucia Varbanescu and myself in the context of our research project Energy Labels for Digital Services, which studies the energy consumption of applications distributed over the compute continuum. In particular, the research addresses the growing concerns over energy consumption in the ICT sector, which poses challenges to achieving net-zero emissions. While ICT solutions are often seen as efficient and low-cost, their energy impact is significant, particularly due to the high demand for digital services, such as online shopping. Energy consumption in the digital domain is largely driven by hardware, software, and infrastructure, but the role of user behavior in influencing this consumption is often overlooked. The thesis focuses on understanding how user behavior affects energy consumption in digital services, using a commonly used open-source online shop implemented as microservices as a case study. The energy consumption on both the client and server side is studied, and experiments are conducted with different client browsers, user interactions, and numbers of users. Based on the experiments, an analytical model is proposed to estimate the energy impact of user behavior on the server side, and recommendations are made to both users and developers on how to limit energy consumption.
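
To give a feel for what an analytical server-side energy model can look like, here is a minimal Python sketch. The request types, energy coefficients, and the simple linear structure are illustrative assumptions of mine, not the model from the thesis.

```python
# Minimal sketch of a linear, per-request analytical energy model for the
# server side of a digital service. All coefficients and request types are
# illustrative assumptions, not values from the thesis.

# Assumed energy cost per request type in joules (hypothetical numbers).
ENERGY_PER_REQUEST_J = {
    "browse_page": 0.8,
    "search": 1.5,
    "add_to_cart": 0.6,
    "checkout": 2.4,
}

# Assumed idle power draw of the server in watts.
IDLE_POWER_W = 40.0


def server_energy_joules(session_requests: dict[str, int],
                         session_duration_s: float,
                         num_users: int) -> float:
    """Estimate server-side energy for a given user session profile.

    session_requests maps request type -> count per session,
    session_duration_s is the average session length, and
    num_users is the number of concurrent sessions.
    """
    # Dynamic energy: each request contributes its per-request cost.
    dynamic = sum(ENERGY_PER_REQUEST_J[req] * count
                  for req, count in session_requests.items())
    # Static energy: idle power integrated over the session, shared by users.
    static = IDLE_POWER_W * session_duration_s / max(num_users, 1)
    return num_users * (dynamic + static)


if __name__ == "__main__":
    # Example: 100 users browsing, searching, and checking out in 5-minute sessions.
    profile = {"browse_page": 12, "search": 3, "add_to_cart": 2, "checkout": 1}
    print(f"{server_energy_joules(profile, 300.0, 100):.0f} J")
```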

Paper on Multi-Application Energy Analysis in Edge Computing Accepted at FMEC 2024

Good news everyone! Our paper “Analysing Per-Application Energy Consumption in a Multi-Application Computing Continuum” was accepted at the 9th International Conference on Fog and Mobile Edge Computing (FMEC 2024). This paper was first-authored by Saeedeh Baneshi, a PhD student at the University of Amsterdam, and complements her earlier work “Estimating the Energy Consumption of Applications in the Computing Continuum with iFogSim“. Congratulations on another accepted paper Saeedeh!

The paper addresses the challenge of analyzing the energy consumption of applications distributed over edge devices and data centers in the compute continuum. The goal is to enable stakeholders, such as cloud providers, developers, users, and researchers, to improve energy efficiency, optimize resource usage, and reduce the environmental impact of such applications. To this end, the work proposes a fine-grained simulation approach for analyzing application energy behavior in edge/cloud environments, based on the iFogSim framework. The three main contributions of the work are: 1) an extension of iFogSim’s energy model that also considers the energy consumption of communication, 2) improved reporting in iFogSim that collects finer-grained data, an essential improvement for the analysis of multi-application scenarios, and 3) a demonstration of the effectiveness of the approach by evaluating different multi-application scenarios and configurations for a distributed video surveillance application.
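
To illustrate the kind of per-application accounting that contributions 1) and 2) enable, here is a minimal Python sketch that books computation and communication energy separately per application. The event format and the energy coefficients are assumptions of mine; this is not iFogSim code (iFogSim itself is a Java framework) nor the energy model from the paper.

```python
# Generic illustration of per-application energy accounting that separates
# computation and communication energy. Coefficients and event format are
# assumptions; this is not the iFogSim energy model or API.
from collections import defaultdict
from dataclasses import dataclass


@dataclass
class Event:
    app: str          # application the event belongs to
    kind: str         # "compute" or "comm"
    amount: float     # compute: CPU-seconds on a device; comm: bytes sent


# Assumed device and network energy coefficients.
ACTIVE_POWER_W = 8.0          # watts drawn per CPU-second of computation
ENERGY_PER_BYTE_J = 2e-7      # joules per byte transferred


def per_app_energy(events: list[Event]) -> dict[str, dict[str, float]]:
    """Aggregate compute and communication energy per application."""
    totals: dict[str, dict[str, float]] = defaultdict(
        lambda: {"compute_J": 0.0, "comm_J": 0.0})
    for ev in events:
        if ev.kind == "compute":
            totals[ev.app]["compute_J"] += ACTIVE_POWER_W * ev.amount
        elif ev.kind == "comm":
            totals[ev.app]["comm_J"] += ENERGY_PER_BYTE_J * ev.amount
    return dict(totals)


if __name__ == "__main__":
    trace = [
        Event("surveillance", "compute", 120.0),   # 120 CPU-seconds of video analysis
        Event("surveillance", "comm", 5_000_000),  # 5 MB of frames sent to the cloud
        Event("dashboard", "compute", 10.0),
    ]
    print(per_app_energy(trace))
```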

Call for Papers: Special Issue on Model-Driven System-Performance Engineering for CPS

I’m honored to serve as Guest Editor for a special issue of IET Cyber-Physical Systems: Theory and Applications focused on Model-Driven System-Performance Engineering for CPS. This issue is a collaboration with Twan Basten (Eindhoven University of Technology), Arvind Easwaran (Nanyang Technological University), and Marilyn Wolf (University of Nebraska-Lincoln).

We invite submissions from both academia and industry across various application domains. If you’re working in this area, consider contributing your research! The submission deadline is November 1, 2024. Feel free to reach out if you have any questions!


Model-Driven System-Performance Engineering for CPS

Submission deadline: Friday, 1 November 2024
Expected Publication Month: June 2025

System performance refers to the amount of useful work done by a system within predefined quality constraints. System performance often brings the competitive advantage for cyber-physical systems in domains like autonomous driving, chip manufacturing and production systems in general, healthcare, the smart grid, precision agriculture, and so on. To meet market demands for product and system quality, system customization, and a low total cost of ownership, systems need to meet ever more ambitious targets relating to system performance. Performance is a cross-cutting system-level concern, with intricate relations to other system-level concerns like quality, cost, energy efficiency, security, reliability, and customizability. Model-driven system-performance engineering (MD-SysPE) for CPS is essential to improve time-to-quality and the cost-performance ratio of complex systems.

This special issue invites contributions on model-driven system-performance engineering for CPS that are of interest to the academic and industrial CPS community at large. Original research papers, industrial applications and case studies, and surveys on relevant topics are welcome.

Topics for this call for papers include but are not restricted to:

  • Multi-domain modelling, analysis, and optimization of performance aspects
  • Performance views in system architecture
  • Modelling and analysis of trade-offs with other system qualities
  • Modelling and analysis across abstraction levels
  • Design-space exploration methods
  • Synthesis methods targeting performance
  • Scheduling and control in relation to performance
  • Time-predictable (software) execution
  • Data-driven performance analysis and optimization
  • AI methods for performance analysis, optimization, diagnostics
  • Performance monitoring
  • Run-time adaptation and optimization
  • Performance debugging and diagnostics
  • Model learning for performance
  • Performance validation, verification, and testing

Master’s Thesis Project Leads to Conference Publication on Microservice Architecture Anti-Patterns at SEAA 2024

I am delighted to announce that our paper, “Graph-based Anti-Pattern Detection in Microservice Applications,” has been accepted for publication at the 50th Euromicro Conference Series on Software Engineering and Advanced Applications (SEAA). This paper stems from Amund Lunke Røhne’s master thesis project, which he conducted as an internship with TNO-ESI under the supervision of myself and Ben Pronk. This achievement showcases how exceptional work by master students can lead to publications in established conferences.

Our paper addresses a significant challenge in the evolution of microservice applications: as the microservice architecture evolves, architectural anti-patterns may emerge. These anti-patterns are challenging to detect and manage due to their informal natural language definitions and the lack of automated tools. To tackle this, we propose an automated methodology for detecting architectural anti-patterns related to microservice dependencies. A key component of this methodology is the novel Granular Hardware Utilization-Based Service Dependency Graph (GHUBS) model, which is automatically inferred from telemetry data. We have formalized three commonly known anti-patterns and developed algorithms to detect them within the GHUBS model. This methodology is supported by an open-source tool that automatically identifies and visualizes these anti-patterns. We validated our approach using both synthetic data and a case study of a popular microservice benchmarking suite, demonstrating successful detection of the formalized anti-patterns.
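
To give a flavour of what graph-based detection can look like, the sketch below builds a small service dependency graph from observed call edges and flags cyclic dependencies, one commonly cited microservice anti-pattern. The graph construction and the anti-pattern chosen are my own illustrative assumptions; the GHUBS model and the three anti-patterns formalized in the paper are richer than this.

```python
# Minimal sketch of graph-based anti-pattern detection on a service dependency
# graph, using cyclic dependencies as an example anti-pattern. This is an
# illustration, not the GHUBS model or the detection algorithms from the paper.
import networkx as nx


def build_dependency_graph(calls: list[tuple[str, str]]) -> nx.DiGraph:
    """Build a directed service dependency graph from caller -> callee pairs."""
    graph = nx.DiGraph()
    graph.add_edges_from(calls)
    return graph


def detect_cyclic_dependencies(graph: nx.DiGraph) -> list[list[str]]:
    """Return all simple cycles, each a list of mutually dependent services."""
    return list(nx.simple_cycles(graph))


if __name__ == "__main__":
    # Hypothetical call edges, e.g. extracted from telemetry or tracing data.
    observed_calls = [
        ("frontend", "orders"),
        ("orders", "payments"),
        ("payments", "orders"),   # cycle: orders <-> payments
        ("orders", "inventory"),
    ]
    g = build_dependency_graph(observed_calls)
    for cycle in detect_cyclic_dependencies(g):
        print("Cyclic dependency:", " -> ".join(cycle))
```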

Congratulations to Amund on the acceptance of your paper! Your work has made both TNO-ESI and the Software Engineering program at the University of Amsterdam very proud!

Merrick Oost-Rosengren Successfully Defends Thesis on Early Component Verification using Colored Petri Nets

Just before the summer holidays, another master's student finished his project. This time, it is Merrick Oost-Rosengren who successfully defended his thesis “Formal Verification of Components through Mirroring of Coloured Petri Nets“. Parts of this work were done as an internship with TNO-ESI in collaboration with Thales.

This research addresses a challenge in distributed component-based systems, where different components are developed by different teams, perhaps even different organizations, over time. The problem is that when components are ultimately integrated, their interactions may cause deadlocks, livelocks, or unbounded memory behavior. Fixing such problems late in the development process is very costly. An alternative approach is to model components, or component interfaces, early in the design process and use model checking to verify the behavior of a component and its interactions. However, when modelling a component, we may not yet know which components it will interact with; perhaps they have not even been developed yet.

The thesis addresses this challenge by proposing a methodology and corresponding tool chain, where components are modelled as Colored Petri Nets from which a verification model, a mirror of the component that captures relevant possible behaviors of interacting components, is automatically generated. As a part of the methodology, the thesis proposes a new class of Colored Petri Nets called Mirrorable Open Colored Petri Nets. This class combines features of existing Colored Petri Nets and Open Petri Nets, and also adds extra semantics to allow the component to be mirrored. It also describes methods for mirroring such a net and fusing the mirror with the original component, such that the component and its interactions can be verified using reachability analysis.
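
To make the final verification step concrete, the sketch below runs a plain reachability analysis on an ordinary place/transition Petri net, reporting markings in which no transition is enabled. It deliberately ignores colours, open interfaces, and mirroring; the Mirrorable Open Colored Petri Nets of the thesis carry much more structure than this toy example.

```python
# Minimal reachability analysis over an ordinary place/transition Petri net.
# Colours, open interfaces, and mirroring from the thesis are deliberately
# omitted; this only illustrates the final verification step.
from collections import deque

# Each transition consumes tokens from input places (pre) and produces tokens
# in output places (post). The net below is a hypothetical request/response loop.
TRANSITIONS = {
    "send_request": ({"client_idle": 1}, {"request_sent": 1}),
    "handle_request": ({"request_sent": 1, "server_idle": 1}, {"response_sent": 1}),
    "receive_response": ({"response_sent": 1}, {"client_idle": 1, "server_idle": 1}),
}

INITIAL_MARKING = {"client_idle": 1, "server_idle": 1}


def enabled(marking, pre):
    """A transition is enabled if every input place holds enough tokens."""
    return all(marking.get(place, 0) >= tokens for place, tokens in pre.items())


def fire(marking, pre, post):
    """Fire a transition and return the successor marking (zero entries dropped)."""
    new = dict(marking)
    for place, tokens in pre.items():
        new[place] -= tokens
    for place, tokens in post.items():
        new[place] = new.get(place, 0) + tokens
    return {place: tokens for place, tokens in new.items() if tokens > 0}


def reachable_markings(initial):
    """Breadth-first exploration of the reachability graph, reporting dead markings."""
    seen = {frozenset(initial.items())}
    queue = deque([initial])
    while queue:
        marking = queue.popleft()
        successors = [fire(marking, pre, post)
                      for pre, post in TRANSITIONS.values() if enabled(marking, pre)]
        if not successors:
            print("Dead marking (potential deadlock):", marking)
        for succ in successors:
            key = frozenset(succ.items())
            if key not in seen:
                seen.add(key)
                queue.append(succ)
    return seen


if __name__ == "__main__":
    print(len(reachable_markings(INITIAL_MARKING)), "reachable markings")
```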

We congratulate Merrick on his successful defense and wish him a lovely summer!

Software variability is as relevant as ever as a driver of complexity in high-tech equipment

Earlier this week, TNO-ESI arranged a webinar with Jacob Krüger, Assistant Professor at Eindhoven University of Technology. In the presentation “Do We Still Need This? Managing Variability in Modern Software Systems” he presented his research on development and evolution of variant-rich software systems. The presentation explained how successful systems are often cloned to create new variants until managing the variability becomes too complex and expensive. It discussed the transition from cloning towards platform-based software architectures and compared the development costs for new features and new variants, respectively, for the two cases, based on empirical data from industry. These insights are valuable to inform decision-making about when adopting a platform-based approach is strategic. However, Jacob also made clear that moving to a platform-based approach introduces its own challenges, such as ensuring software comprehension and quality, analyzing variability, aligning software and hardware release schedules, and deprecation of variable features.

The webinar attracted an audience of approximately 40 people from TNO-ESI, ASML, Thales, Canon, Vanderlande, ThermoFisher, and Radboud University Nijmegen. This strongly suggests that variability is still a main concern in both systems and software engineering that affects all parts of system development, from early architecting to implementation, testing, and evolution. I was thrilled to see that there was a lively discussion with questions and remarks. In retrospect, I wish we had reserved more time to keep the conversation going. If you would like to discuss your particular variability challenges or ideas with Jacob, feel free to contact him.

TNO-ESI looks forward to arranging more webinars with experts from our ecosystem of academic and industry partners in the field of software and system engineering for high-tech equipment.

TNO-ESI and Academic Partners Deliver ASCI PhD Course on Design and Implementation of Real-time Systems

The Netherlands boasts a world-leading high-tech manufacturing industry renowned for constructing distributed real-time systems of continuously growing complexity. These systems must meet stringent timing requirements to ensure the delivery of mission-critical functionalities. To create interest in the high-tech equipment domain and prepare PhD students in Computer Science to address its performance challenges, TNO-ESI has co-created and delivered a one-week PhD course Design and Implementation of Real-time Systems together with academic partners from Eindhoven University of Technology, University of Twente, and University of Amsterdam. The course is given in the context of the Advanced School for Computing and Imaging (ASCI), a Dutch research school for high-quality research and education in computer systems and imaging systems. ASCI encompasses almost all Dutch universities with computer-science departments. The main goals of ESI's involvement in this course were to make participants aware of TNO and its role in society and industry, to position it as a possible future employer, and to create awareness of TNO-ESI's vision and work in the area of system performance engineering.

The course focuses on providing an overview of selected timing-sensitive applications and the current research landscape on real-time systems, and on explaining the rationale behind considering real-time requirements in system software design. Through a series of lectures and hands-on labs, the course covers selected topics in scheduling algorithms, priority assignment, resource sharing, and resource reservation, together with their implementation in real-time operating systems. It further discusses emerging challenges and practices in an industrial context, based on empirical surveys and experience from TNO-ESI's applied research on telemetry-based system performance engineering for purposes of performance optimization, verification, and diagnostics.

This first instance of the course was given at the Carlton President Hotel in Maarssen, outside Utrecht, from June 10 to 14. Fifteen PhD students from universities all over the Netherlands, researching a broad range of topics in computer science, participated in the course. TNO-ESI was in the spotlight during the last day of the course. In the morning, I introduced the high-tech equipment domain and its complexity drivers and explained how new model-based engineering methodologies were needed to address them. Next, my colleague Bram van der Sanden presented our view on the field of System Performance Engineering, along with its focus areas and best practices. This was followed by two concrete examples from our system performance research: Kostas Triantafyllidis presented his work on performance analysis and diagnosis with ASML, followed by a presentation by me about performance verification and conformance checking in microservice systems based on our work with Thales.

The course was well-received by the participants and the contents were rated 8.7/10 in the evaluation. We very much enjoyed the experience of creating and delivering this course together with our academic partners. Thank you Kuan-Hsun Chen (leader of the initiative), Mitra Nasri, and Geoffrey Nelissen for the excellent collaboration in organizing this course. Thanks to Kay Heider and Christian Hakert for leading the hands-on exercises. We are also thankful to invited speakers Bram van der Sanden and Kostas Triantafyllidis.

Faezeh Saadatmand Wins Best Paper Award at ICPS

Good news everyone! Our paper “Automated Derivation of Application Workload Models for Design Space Exploration of Industrial Distributed Cyber-Physical Systems” won the Best Paper Award at the 7th IEEE International Conference on Industrial Cyber-Physical Systems (ICPS). This is an impressive feat, especially considering that it is the first paper first-authored by Faezeh Saadatmand, PhD student at Leiden University. Congratulations Faezeh!

The paper tackles an urgent issue: the growing complexity of industrial cyber-physical systems, which is driving up development and maintenance costs. As these systems incorporate more functions, the number of hardware and software components increases rapidly, making it harder to analyze and optimize their performance. Model-based methodologies have been proposed as a means to manage complexity and increase productivity of engineers by using models as a base for specification, communication, analysis, and synthesis of artifacts like documentation, simulation models, and code. But who is going to make models of systems with tens of compute nodes and hundreds of software processes, especially when increased customization results in a unique configuration for each manufactured system? This research addresses this need by introducing an automatic method for deriving an application workload model. This model, based on trace analysis, captures computation and communication activities within an application in a timing-agnostic manner. The method was validated through a case study on an ASML Twinscan lithography machine, showing high accuracy in representing real application workloads.
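
The sketch below gives an impression of what a timing-agnostic workload model derived from traces might look like: per-process computation counts and per-pair communication counts, with all timestamps discarded. The trace format and the aggregation are my own assumptions for illustration, not the derivation method from the paper.

```python
# Sketch of deriving a timing-agnostic application workload model from a trace.
# The trace format and aggregation are illustrative assumptions, not the
# method from the paper.
from collections import Counter


def derive_workload_model(trace: list[dict]) -> dict:
    """Aggregate computation and communication activity, discarding timestamps."""
    compute = Counter()   # process -> number of computation activities
    comm = Counter()      # (sender, receiver) -> number of messages
    for event in trace:
        if event["type"] == "compute":
            compute[event["process"]] += 1
        elif event["type"] == "send":
            comm[(event["process"], event["to"])] += 1
    return {"computation": dict(compute), "communication": dict(comm)}


if __name__ == "__main__":
    # Hypothetical trace events; timestamps are present but deliberately ignored.
    trace = [
        {"t": 0.01, "type": "compute", "process": "metrology"},
        {"t": 0.02, "type": "send", "process": "metrology", "to": "control"},
        {"t": 0.03, "type": "compute", "process": "control"},
        {"t": 0.04, "type": "send", "process": "control", "to": "actuation"},
    ]
    print(derive_workload_model(trace))
```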

This paper is a result from the Design Space Exploration 2.0 (DSE2.0) project, one of four academic projects co-funded by TNO-ESI and NWO as a part of the Mastering Complexity (MasCot) program.


Reflections on RTAS 2024: A Successful Symposium in Hong Kong

The 30th IEEE Real-Time and Embedded Technology and Applications Symposium (RTAS 2024) is over. As I enjoy a last local beer at Hong Kong airport before getting on my flight home, it feels like a good opportunity to reflect on this year's successful edition of the conference.

The preparation of this conference has taken me, as the Program Chair, about one year. While it was a lot of work, I enjoyed it a lot because I got to work with dedicated, clever, and proactive people from the real-time systems community to make it happen. This year, we received 124 paper submissions from around the world, marking a 40% increase from last year. This suggests that the community is recovering well post-pandemic and there is a clear interest in the topics covered by the conference.

The Program Committee comprised 61 reviewers, supported by 87 sub-reviewers, blending expertise from a diverse group of experts in both academia and industry. Each submission was evaluated through at least four reviews, resulting in a total of nearly 500 double-anonymous expert reviews. Based on these reviews, a brief author response to clarify misunderstandings, and online discussions, 29 papers were accepted. This resulted in an acceptance rate of 23.3%, which means it was very competitive! The accepted papers formed the basis for the outstanding technical program.

Having spent so much time preparing the conference, I really wanted the execution to go smoothly, giving all 100 registered participants a good experience of the technical program. I was happy to see that the preparation had paid off and that there was very little work for me during the conference itself. The session chairs did an excellent job introducing the speakers and managing the sessions. The only curve ball was that one author did not get their visa on time, so we had to quickly improvise a setup for giving remote presentations. This was handled beautifully by the local organizers and I would like to thank Nan Guan and his colleagues for their hard work and attentiveness. From my perspective, the local arrangements worked perfectly!

I was impressed with the quality of the presentations at this edition. Despite the theoretical nature of much of the research, I was pleased to see that presenters managed to focus on their main messages, used lots of figures and animations to get high-level concepts across, and referred to the papers for the details. I am asking myself whether we, as a community, are getting better at presenting or whether this is a side-effect of reducing the presentation time per paper from 25 minutes to 18 minutes to fit the increased number of papers into the sessions. Whatever it was, I liked it and hope that this sets the bar for next time!

There were many excellent contributions in the technical program. From Marco Caccamo’s Outstanding Technical Achievement and Leadership Award lecture, we learned that there are many software-based memory management techniques and execution models that can improve the predictability of commercial off-the-shelf (COTS) multi-processor systems-on-chip and make them suitable for hard real-time or mixed-criticality applications. This is an area where I feel we are making good progress. COTS systems are getting increasingly configurable and observable, allowing our community to propose solutions for real-time systems that do not require custom hardware. This significantly lowers the threshold for transferring our research results to industry.

Looking at the topics addressed at the conference, I was surprised by the large number of papers looking at the intersection of real-time systems and security, so many that we needed two sessions to fit all of them! I particularly remember work considering how to ensure control-flow integrity when faced with malicious actors. Two papers looked into how this could be addressed by leveraging features recently introduced in COTS platforms. There were also works looking at the effects of performance interference, such as random delays, on cyber-physical systems and how they could be mitigated using robust control strategies in stochastic control systems.

Considering the technical solutions that were presented, I really enjoyed the work by Soni et al. that addressed the scalability of timing analysis of AFDX networks in the avionics domain. The paper proposed a hybrid approach that combined an exact analysis using model checking with a faster but more pessimistic analysis using network calculus. The key idea was to use the bounds provided by network calculus to prune the state space for the model checker and thereby reduce analysis time. I really liked that this hybrid approach worked both ways and allowed the exact analysis done so far by the model checker to be leveraged by the network calculus to reduce its pessimism. This allowed the proposed analysis to scale to large industrial use cases with more than 1000 network flows.
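
The core pruning idea can be sketched in a few lines: an exhaustive search over per-hop delay choices abandons any partial path whose accumulated delay already exceeds a bound obtained from a coarser analysis. This is only a toy illustration of the interplay, with hand-picked numbers standing in for the network-calculus bound; it is not the model-checking or network-calculus machinery of the paper.

```python
# Toy illustration of pruning an exhaustive delay exploration with a bound
# from a coarser analysis (a hand-picked number standing in for a
# network-calculus bound). Not the analysis from the paper.

# Possible per-hop delays (microseconds) for a flow crossing three switches.
PER_HOP_DELAYS = [
    [10, 40],      # hop 1
    [5, 25, 60],   # hop 2
    [15, 35],      # hop 3
]

COARSE_BOUND_US = 100  # assumed upper bound from the coarser analysis


def worst_case_delay(hops, bound):
    """Exhaustively explore delay combinations, pruning paths that exceed the bound."""
    explored = 0
    best = 0

    def explore(i, acc):
        nonlocal explored, best
        explored += 1
        if acc > bound:          # prune: the coarser analysis rules this path out
            return
        if i == len(hops):
            best = max(best, acc)
            return
        for delay in hops[i]:
            explore(i + 1, acc + delay)

    explore(0, 0)
    return best, explored


if __name__ == "__main__":
    wc, visited = worst_case_delay(PER_HOP_DELAYS, COARSE_BOUND_US)
    print(f"worst-case delay within bound: {wc} us, states visited: {visited}")
```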

There is of course a lot more to say about the conference and the papers featured in it, but it is time to fly home.
I want to conclude by thanking all the people who contributed to the organization of the conference. I also want to thank all authors who submitted their work to RTAS 2024. Lastly, I want to thank all conference participants for coming to Hong Kong to listen, learn, discuss, and network. That is what the community is all about!

For more information about RTAS 2024 and the papers featured in its program, please refer to the RTAS 2024 website.