Project Proposal about Energy Labels for Digital Services Granted

It is my pleasure to announce that our project proposal entitled “Zero-Waste Computing: Energy Labels for Digital Services” has been granted within the Science and Design PhD program at the University of Amsterdam. Ana Lucia Varbanescu is the main applicant for this project, with Anuj Pathania and myself as co-applicants. The project proposal was supported by SURF, ESI (TNO), Barcelona Supercomputing Center, and ASTRON.

The project addresses the issue that digital services are becoming increasingly prevalent in society and are vital to the Dutch economy, already accounting for 60% of GDP. However, they come with a significant, rapidly increasing energy cost, raising sustainability concerns, since a mid-size datacenter alone consumes as much energy as a small town. Moreover, datacenters are only the final link in a digital chain. Users interacting with devices — mobile phones, tablets, or laptops — trigger entire digital chains, combining multiple communicating computing layers and data transfers: from the device itself, through the edge, to the datacenter. Each layer has its own computing infrastructure (see figure). At each layer, decisions are made about how, where, and when applications run and/or data are transferred. These decisions have a significant impact on the user-perceived quality-of-service (QoS), but also on the energy consumption – per layer, and for the entire digital chain. The energy footprint of the different devices along the chain might be known, but the actual energy consumed by the application is unknown, because it depends on infrastructure choices, on user QoS requirements, and on mapping decisions made on the edge and in the datacenter. Thus, the energy efficiency, i.e., the amount of energy consumed to perform the actual task at hand, is largely unknown for most digital chains.

We argue that the first step to reduce waste in computing is to quantify the energy efficiency of end-to-end digital chains. Our project focuses on designing an integrated framework (i.e., the methods, metrics, and tools) for this quantification effort. Specifically, we aim to define a reference architecture of digital chains, use it to define an analytical digital-chain energy-efficiency model that exposes the factors that impact energy efficiency along the chain, and support it with a high-level functional simulator to assess different operational scenarios and parameters that affect the energy efficiency of digital chains.
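
To give a flavor of what such an analytical model could capture, here is a minimal, hypothetical sketch in Python: a chain of layers, each with per-operation and per-byte energy parameters, from which end-to-end energy and energy efficiency follow. All names and numbers are assumptions for the example, not project results.

```python
from dataclasses import dataclass

@dataclass
class Layer:
    """One computing layer in a digital chain (device, edge, or datacenter)."""
    name: str
    joules_per_op: float       # energy to execute one operation on this layer
    joules_per_byte_up: float  # energy to move one byte to the next layer

def chain_energy(layers, ops, bytes_up):
    """End-to-end energy of one request: per-layer compute plus transfers."""
    return sum(l.joules_per_op * o + l.joules_per_byte_up * b
               for l, o, b in zip(layers, ops, bytes_up))

# Illustrative three-layer chain; all numbers are made up for the example.
chain = [
    Layer("device",     joules_per_op=5e-9, joules_per_byte_up=1e-7),
    Layer("edge",       joules_per_op=2e-9, joules_per_byte_up=5e-8),
    Layer("datacenter", joules_per_op=1e-9, joules_per_byte_up=0.0),
]

ops      = [1e6, 5e6, 5e7]  # operations executed per layer (a mapping choice)
bytes_up = [2e5, 5e4, 0.0]  # data each layer sends up the chain

energy = chain_energy(chain, ops, bytes_up)
print(f"end-to-end energy: {energy:.4f} J")
print(f"energy efficiency: {sum(ops) / energy:.3e} ops/J")
```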

This is a small project, funding only a single PhD student. More momentum is required to further advance this area and to make the step from merely monitoring the energy consumption of digital chains to also actuating on it, e.g., minimizing energy through workload redistribution, subject to performance constraints. We are currently looking for interested parties to collaborate with us on this topic in future project proposals.
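
As a toy illustration of what such actuation could look like, the sketch below redistributes a workload between edge and datacenter to minimize energy under a latency budget, formulated as a linear program. The linear model and all coefficients are invented for the example, not results from the project.

```python
from scipy.optimize import linprog

W = 1e6                 # total operations to place
ENERGY = [5e-9, 1e-9]   # J/op on [edge, datacenter]
LATENCY = [2e-8, 1e-7]  # s/op on [edge, datacenter] (incl. transfer overhead)
BUDGET = 0.06           # end-to-end latency budget in seconds

res = linprog(
    c=ENERGY,                        # minimize total energy
    A_ub=[LATENCY], b_ub=[BUDGET],   # stay within the latency budget
    A_eq=[[1, 1]], b_eq=[W],         # place all operations somewhere
    bounds=[(0, None), (0, None)],
)
edge_ops, dc_ops = res.x
print(f"edge: {edge_ops:.0f} ops, datacenter: {dc_ops:.0f} ops, "
      f"energy: {res.fun:.2e} J")
```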

Official Project Kick-off for DSE 2.0

Today was the official project kick-off for the research project “Design Space Exploration 2.0: Towards Optimal Design of Complex, Distributed Cyber Physical Systems”. This project is part of the Partnership Program Mastering Complexity (MasCot), funded by NWO Domain Applied and Engineering Sciences (AES) together with ESI (TNO). The University of Amsterdam and Leiden University are the academic partners, spearheaded by Andy Pimentel and Todor Stefanov. The carrying industrial partner is ASML, with Philips, Siemens, and ESI participating in the user committee.

The main goal of the project is to extend existing methods for design-space exploration (DSE), often developed for on-chip systems, to cover complex distributed cyber-physical systems (dCPS), such as the lithography machines made by ASML. Designers of such systems need quick answers to so-called “what-if” questions with respect to possible design decisions/choices and their consequences for non-functional properties, such as system performance and cost. This calls for efficient and scalable system-level DSE methods that integrate appropriate application workload and system architecture models, simulation and optimization techniques, as well as supporting tools to facilitate the exploration of a wide range of design decisions. However, such DSE technology for complex dCPS does not currently exist. This project hence tries to answer the question of how to perform efficient and effective DSE for complex, distributed cyber-physical systems.
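
For illustration, the sketch below shows the simplest possible DSE loop: an exhaustive sweep over a toy design space with a Pareto filter on latency and cost. The parameters and analytical formulas are invented stand-ins for a real simulator; the point of the project is precisely that real dCPS design spaces are far too large for such brute-force exploration.

```python
from itertools import product

def evaluate(cores, freq_ghz, mem_channels):
    """Placeholder analytical model standing in for a real simulator:
    returns (latency, cost) for one design point."""
    latency = 100.0 / (cores * freq_ghz) + 10.0 / mem_channels  # seconds
    cost = 50 * cores + 80 * freq_ghz + 120 * mem_channels      # euros
    return latency, cost

def pareto_front(points):
    """Keep design points not strictly dominated in (latency, cost)."""
    return [p for p in points
            if not any(q[0] <= p[0] and q[1] <= p[1] and
                       (q[0] < p[0] or q[1] < p[1]) for q in points)]

# Exhaustive sweep over core count, clock frequency, and memory channels.
points = []
for cores, freq, mem in product([1, 2, 4, 8], [1.0, 2.0, 3.0], [1, 2, 4]):
    latency, cost = evaluate(cores, freq, mem)
    points.append((latency, cost, (cores, freq, mem)))

for latency, cost, cfg in sorted(pareto_front(points)):
    print(f"cores={cfg[0]:>2} freq={cfg[1]}GHz mem={cfg[2]}: "
          f"latency={latency:6.2f}s cost={cost:4.0f}")
```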

In today’s kick-off meeting, all stakeholders in the project had an opportunity to introduce themselves and refamiliarize themselves with the project and its goals. The two PhD students who will be working on the project, Marius and Faezeh, from UvA and Leiden, respectively, also gave a brief overview of the work they had done in the first three months of the project, which included a literature review and generation of high-level simulation models for different parameter settings.

I am directly involved in this project through my part-time appointment at UvA. As Marius’ second promotor, I will help him on his journey towards a PhD. I also have an interest in this project as an ESI Research Fellow and part of the MasCot Core Team. In this capacity, I am happy to help link this project to ESI’s applied research projects, in particular at ASML, to exploit possible synergies, and to stimulate exchanges with other projects in the MasCot program.

DYNAMICS Project in Keynote at Software-Centric Systems Conference

Two months ago, I mentioned that Bits & Chips had published an article about the ComMA (Component Modelling and Analysis) language and how it is being used at Philips and Thales to address challenges related to integration and evolution. The latter part, about semi-automatic detection and correction of interface incompatibilities as interfaces evolve, is the topic of the DYNAMICS project, a research project between ESI (TNO) and Thales. This joint story, in which two companies from different domains together presented their challenges and how they were addressed by technology developed by ESI, was much appreciated by Bits & Chips and has been invited as a keynote at the Software-Centric Systems Conference (SC2), which takes place on Thursday, November 5. If you are interested in hearing this keynote, please register for the event. All presentations are also available on demand after the event in case you cannot attend in real time.

Component Modelling and Analysis (ComMA) in Bits & Chips

Bits & Chips just published an article about ComMA (Component Modelling and Analysis). ComMA addresses key design and verification challenges for complex systems comprising many components developed by different parties, challenges that are frequently encountered in the high-tech industry across application domains. The challenges are tackled by allowing the structure and behavior of component interfaces to be formally specified using a set of domain-specific languages. From this specification, a number of artifacts are automatically generated, including system tests, run-time monitors that detect protocol violations, performance metrics, and documentation. Together, these artifacts reduce the time to design, integrate, and evolve complex high-tech systems, allowing the next generation of these systems to be developed faster and with higher quality.
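
To illustrate the role of one such generated artifact, the sketch below shows in plain Python what a run-time interface monitor conceptually does: replay an observed event trace against the protocol state machine and flag violations. The interface, states, and events are invented; this is neither ComMA syntax nor actual generated code.

```python
# Hypothetical interface protocol: state -> {allowed event -> next state}.
ALLOWED = {
    "Off":     {"switchOn": "Standby"},
    "Standby": {"start": "Running", "switchOff": "Off"},
    "Running": {"stop": "Standby"},
}

def monitor(trace, state="Off"):
    """Replay an observed event trace against the protocol and report
    the first violation, if any."""
    for i, event in enumerate(trace):
        next_state = ALLOWED.get(state, {}).get(event)
        if next_state is None:
            return f"violation at event {i}: '{event}' not allowed in state '{state}'"
        state = next_state
    return f"trace conforms to protocol (final state: {state})"

print(monitor(["switchOn", "start", "stop", "switchOff"]))  # conforms
print(monitor(["switchOn", "stop"]))                        # violation
```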

ComMA was developed by ESI (TNO) in applied research projects with Philips. Successfully proving the approach in an industrial context at Philips has sparked interest from other companies, including Thermo Fisher Scientific, Thales, and Kulicke & Soffa. This ecosystem of high-tech companies is expected to grow further as the ComMA tooling becomes open source as part of the Eclipse Foundation.

The article also mentions the applied research project DYNAMICS, for which I am the technical lead. Here, ESI and Thales have been looking at challenges and opportunities related to the evolution of interfaces. The strong point of interfaces is that they abstract from the component providing a particular functionality, allowing it to be changed or even replaced without compromising the overall functionality of the system. However, eventually the interfaces themselves need to be updated to prevent technical debt, and at that point all components relying on an interface are affected simultaneously. In the DYNAMICS project, we study how to automatically detect whether a change to the protocol of an interface is backwards compatible and, if this is not the case, how to semi-automatically generate adapters that bridge the differences with previous versions. The benefit of this approach is that it reduces the time and cost of interface updates, allowing interfaces to evolve faster and avoiding creative workarounds that ultimately lead to unreliable systems and lower software quality. If you are interested in reading more about this work and how it leverages ComMA and Petri net technology to achieve this, read this overview paper from last year.
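
As a much-simplified illustration of the idea (the project itself uses ComMA and Petri net technology, not this sketch), backward compatibility can be viewed as trace inclusion between two protocol versions, and an adapter as a shim that bridges the differences, here simply by renaming events. All interfaces and names below are invented.

```python
def traces(proto, state, depth):
    """All event sequences up to 'depth' accepted by a protocol table."""
    if depth == 0:
        return {()}
    result = {()}
    for event, nxt in proto.get(state, {}).items():
        for tail in traces(proto, nxt, depth - 1):
            result.add((event,) + tail)
    return result

# Two versions of a hypothetical interface; v2 renamed 'open'.
V1 = {"Idle": {"open": "Busy"},        "Busy": {"close": "Idle"}}
V2 = {"Idle": {"openSession": "Busy"}, "Busy": {"close": "Idle"}}

RENAME = {"open": "openSession"}  # adapter: translate v1 calls to v2

def backwards_compatible(old, new, depth=4):
    """The new version accepts every trace the old version accepted."""
    return traces(old, "Idle", depth) <= traces(new, "Idle", depth)

def adapt(trace):
    """Bridge a v1 client to the v2 interface by renaming its events."""
    return tuple(RENAME.get(e, e) for e in trace)

print(backwards_compatible(V1, V2))                       # False: rename broke it
print(adapt(("open", "close")) in traces(V2, "Idle", 4))  # True via the adapter
```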

Comma interfaces open the door to reliable high-tech systems

Four Projects Granted to Fight the Complexity of Cyber-Physical Systems

During the past two years, I have been involved in setting up the Partnership Program Mastering Complexity (MasCot), funded by NWO Domain Applied and Engineering Sciences together with ESI (TNO). After a long process of defining the key topics, writing the call, and aligning with applicants, four innovative research projects have finally been granted, allocating three million euros to research on software restructuring, testing, scheduling, and design of cyber-physical systems. Congratulations to Andy Pimentel, Twan Basten, Jan Tretmans, Eelco Visser, and their collaborators on the accepted projects. I am looking forward to seeing the results!

The full story is available on the ESI website.

H2020 Project HERCULES in Grant Agreement Preparation

The European Commission just notified us that our H2020 IA project HERCULES (High-pErformance Real-time arChitectUres for Low-power Embedded Systems) has reached the stage of grant agreement preparation. Earlier this year, I took the lead on this proposal on behalf of CTU Prague and also contributed more generally to its preparation. Given the competitive nature of H2020, I am pleased to see that the work was well received. Particular congratulations to Marko Bertogna and his team at the University of Modena for their hard work coordinating this proposal. Now let’s hope the negotiation phase goes well!

Project HERCULES has the ambitious goal of providing the technological infrastructure required to obtain an order-of-magnitude improvement in the cost and power consumption of next-generation real-time applications. It will develop an integrated framework for achieving predictable performance on top of cutting-edge heterogeneous COTS multi-core platforms, implementing real-time scheduling techniques and execution models recently proposed by the research community. The framework will be applied to two innovative industrial use cases: a pioneering autonomous driving system for the automotive domain, and a visual recognition system for the avionics domain.
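
As a classic, simple example of the kind of real-time analysis this area builds on (not one of HERCULES’s actual techniques), the sketch below implements the Liu and Layland utilization bound, a sufficient schedulability test for rate-monotonic scheduling; the task set is illustrative.

```python
# Rate-monotonic schedulability test (Liu & Layland, 1973).
def rm_schedulable(tasks):
    """tasks: list of (wcet, period) pairs. Sufficient (not necessary)
    test: total utilization <= n * (2^(1/n) - 1)."""
    n = len(tasks)
    utilization = sum(c / t for c, t in tasks)
    bound = n * (2 ** (1 / n) - 1)
    return utilization <= bound, utilization, bound

tasks = [(1, 4), (2, 8), (1, 10)]  # (WCET, period) in ms, illustrative values
ok, u, bound = rm_schedulable(tasks)
print(f"U={u:.3f}, bound={bound:.3f}, schedulable by sufficient test: {ok}")
```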

Two Articles Appeared in Journal of Systems Architecture

Two articles that were submitted to a Journal of Systems Architecture Special Issue on High-performance and Real-time Embedded Systems have now appeared online. The first article is called “T-CREST: Time-predictable Multi-Core Architecture for Embedded Systems” and summarizes the work done in the recently concluded FP7 STREP project T-CREST, in which my students and I worked on time-predictable memory controllers.

The second article is entitled “Dataflow Formalisation of Real-Time Streaming Applications on a Composable and Predictable Multi-Processor SOC” and shows how data-flow graphs can be used to model streaming applications mapped to the CompSoc platform and to predict their minimum throughput. The basic idea is to start from a data-flow graph of the application and add nodes and edges that capture the mapping and timing behavior of all hardware components, software libraries, and schedulers in the system. The approach is verified by comparing the predicted performance to the actual performance of an application executing on a CompSoc instance on an FPGA. The article clearly demonstrates the potential of modeling systems in which the behavior of all hardware and software components is known.
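
A standard result behind this kind of analysis is that the throughput of a homogeneous synchronous dataflow (HSDF) graph is bounded by the inverse of its maximum cycle mean (MCM): the maximum over all cycles of the summed execution time on the cycle divided by the initial tokens on the cycle. The brute-force sketch below computes this bound for a tiny, invented graph; the article’s actual models of CompSoc are of course far richer.

```python
from itertools import permutations

EXEC = {"src": 2.0, "dsp": 5.0, "sink": 1.0}  # node firing times
EDGES = {                                      # (src, dst): initial tokens
    ("src", "dsp"): 0, ("dsp", "sink"): 0,
    ("sink", "src"): 2,   # back edge modelling 2 buffer slots
    ("dsp", "dsp"): 1,    # self edge: no auto-concurrency
}

def cycles(nodes, edges):
    """Enumerate simple cycles by brute force (fine for tiny graphs)."""
    for length in range(1, len(nodes) + 1):
        for perm in permutations(nodes, length):
            ring = list(zip(perm, perm[1:] + (perm[0],)))
            if all(e in edges for e in ring):
                yield perm, ring

# Every cycle in a live HSDF graph carries at least one token,
# so the division below is safe for this example.
mcm = max(
    sum(EXEC[n] for n in perm) / sum(EDGES[e] for e in ring)
    for perm, ring in cycles(list(EXEC), EDGES)
)
print(f"MCM = {mcm} time units, throughput bound = {1 / mcm:.3f} firings/unit")
```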