Component Modelling and Analysis (ComMA) in Bits & Chips

Bits & Chips just published an article about ComMA (Component Modelling and Analysis). ComMA addresses key design and verification challenges for complex systems comprising many components developed by different parties, challenges that are frequently encountered in the high-tech industry across application domains. These challenges are tackled by allowing the structure and behavior of component interfaces to be formally specified using a set of domain-specific languages. From this specification, a number of artifacts are automatically generated, including system tests, run-time monitors that detect protocol violations, performance metrics, and documentation. Together, these artifacts reduce the time to design, integrate, and evolve complex high-tech systems, allowing the next generation of these systems to be developed faster and with higher quality.
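To give a feel for what such a generated run-time monitor does, the Python sketch below illustrates the idea by hand; it is not ComMA output, and the interface, states, and events are hypothetical. The monitor encodes the allowed protocol as a finite state machine and flags any observed event that the protocol does not permit.

```python
# Illustrative sketch (not generated by ComMA): a run-time monitor that checks
# whether an observed event trace follows an interface protocol expressed as a
# finite state machine. Interface, states, and events are hypothetical.

class ProtocolViolation(Exception):
    pass

class InterfaceMonitor:
    # Allowed transitions: (current state, event) -> next state
    TRANSITIONS = {
        ("Idle", "Initialize"): "Ready",
        ("Ready", "Start"): "Running",
        ("Running", "Stop"): "Ready",
        ("Ready", "Shutdown"): "Idle",
    }

    def __init__(self):
        self.state = "Idle"

    def observe(self, event):
        key = (self.state, event)
        if key not in self.TRANSITIONS:
            raise ProtocolViolation(
                f"event '{event}' not allowed in state '{self.state}'")
        self.state = self.TRANSITIONS[key]

# The second 'Start' violates the protocol and is reported.
monitor = InterfaceMonitor()
try:
    for event in ["Initialize", "Start", "Start"]:
        monitor.observe(event)
except ProtocolViolation as violation:
    print(violation)
```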

ComMA was developed by ESI (TNO) in applied research projects with Philips. Successfully proving the approach in an industrial context at Philips has sparked interest from other companies, including Thermo Fisher Scientific, Thales, and Kulicke & Soffa. This ecosystem of high-tech companies is expected to grow further as the ComMA tooling becomes open source as part of the Eclipse Foundation.

The article also mentions the applied research project DYNAMICS, for which I am the technical lead. Here, ESI and Thales have been looking at challenges and opportunities related to the evolution of interfaces. The strong point of interfaces is that they abstract from the component providing a particular functionality, allowing that component to be changed or even replaced without compromising the overall functionality of the system. However, eventually the interfaces themselves need to be updated to prevent technical debt, and at that point all components relying on the interface are affected simultaneously. In the DYNAMICS project, we study how to automatically detect whether a change to the protocol of an interface is backwards compatible and, if it is not, how to semi-automatically generate adapters that bridge the differences with previous versions. The benefit of this approach is that it reduces the time and cost of interface updates, allowing interfaces to evolve faster and avoiding creative workarounds that ultimately lead to unreliable systems and lower software quality. If you are interested in reading more about this work and how it leverages ComMA and Petri net technology, read this overview paper from last year.
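As a minimal illustration of what such an adapter does, the Python sketch below bridges a hypothetical interface whose new version splits a single operation into two calls; the interfaces and operations are made up and do not come from the DYNAMICS project or from ComMA.

```python
# Minimal sketch (hypothetical interfaces, not DYNAMICS output): an adapter
# that lets a client written against interface version 1 use a version 2
# implementation whose protocol has changed.

class MotionV2:
    """New interface version: the single 'move' operation was split into
    separate prepare and execute calls."""
    def prepare(self, x, y):
        self._target = (x, y)

    def execute(self):
        print(f"moving to {self._target}")


class MotionV1Adapter:
    """Exposes the old single-call 'move' operation on top of a MotionV2
    implementation, bridging the protocol difference so that existing
    version 1 clients keep working unchanged."""
    def __init__(self, impl):
        self._impl = impl

    def move(self, x, y):
        self._impl.prepare(x, y)  # map the old call onto the new two-step protocol
        self._impl.execute()


# A legacy client keeps calling move() exactly as it did against version 1.
legacy_view = MotionV1Adapter(MotionV2())
legacy_view.move(10, 20)
```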

Comma interfaces open the door to reliable high-tech systems

Four Projects Granted to Fight the Complexity of Cyber-Physical Systems

During the past two years, I have been involved in setting up the Partnership Program Mastering Complexity (MasCot), funded by the NWO Domain Applied and Engineering Sciences together with ESI (TNO). After a long process of defining the key topics, writing the call, and aligning with applicants, four innovative research projects have finally been granted, allocating three million euros to research on software restructuring, testing, scheduling, and design of cyber-physical systems. Congratulations to Andy Pimentel, Twan Basten, Jan Tretmans, Eelco Visser, and their collaborators on the accepted projects. I am looking forward to seeing the results!

The full story is available on the ESI website.

H2020 Project HERCULES in Grant Agreement Preparation

The European Commission just notified us that our H2020 IA project HERCULES (High-pErformance Real-time arChitectUres for Low-power Embedded Systems) has reached the stage of grant agreement preparation. Earlier this year, I took the lead on this proposal on behalf of CTU Prague and also contributed more generally to its preparation. Given the competitive nature of H2020, I am pleased to see that the work was well received. Particular congratulations go to Marko Bertogna and his team at the University of Modena for their hard work on coordinating this proposal. Now let’s hope the negotiation phase goes well!

Project HERCULES has the ambitious goal of providing the technological infrastructure required to obtain an order-of-magnitude improvement in the cost and power consumption of next-generation real-time applications. It will develop an integrated framework for achieving predictable performance on top of cutting-edge heterogeneous COTS multi-core platforms, implementing real-time scheduling techniques and execution models recently proposed in the research community. The framework will be applied to two innovative industrial use cases: a pioneering autonomous driving system for the automotive domain, and a visual recognition system for the avionic domain.
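As a rough illustration of what such an execution model can look like, the Python sketch below mimics a phased model in which each task first prefetches its data in a memory phase and then computes from local memory, and the memory phases of different cores are serialized so that shared-memory contention is eliminated by construction. This is a generic illustration of the idea, not the HERCULES framework, and all numbers are made up.

```python
# Illustrative sketch (hypothetical, not the HERCULES framework): tasks are
# split into a memory phase (prefetch everything needed) and a compute phase
# (run entirely from local memory). Memory phases are scheduled back to back,
# so no two cores ever contend for the shared memory at the same time.

tasks = [  # (core, memory phase length, compute phase length), times in ms
    ("core0", 2, 6),
    ("core1", 3, 5),
    ("core2", 1, 7),
]

memory_free_at = 0  # the shared memory serves one memory phase at a time
for core, mem, comp in tasks:
    mem_start = memory_free_at
    memory_free_at = mem_start + mem
    finish = memory_free_at + comp  # compute phase needs no shared-memory access
    print(f"{core}: memory {mem_start}-{memory_free_at} ms, done at {finish} ms")
```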

Two Articles Appeared in Journal of Systems Architecture

Two articles that were submitted to the Journal of Systems Architecture Special Issue on High-performance and Real-time Embedded Systems have now appeared online. The first article is called “T-CREST: Time-predictable Multi-Core Architecture for Embedded Systems” and summarizes the work done in the recently concluded FP7 STREP project T-CREST, where my students and I worked on time-predictable memory controllers.
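The article itself is not summarized here, but a common way to make a memory controller time-predictable is to arbitrate requestors using a time-division multiplexing (TDM) table, which bounds the latency of any request independently of what the other requestors do. The small Python sketch below illustrates that bound; the model and numbers are illustrative and not taken from the article.

```python
# Illustrative only (model and numbers are not from the T-CREST article):
# under TDM arbitration, a requestor owning a single slot in a table of
# n_slots slots of slot_length cycles may, in the worst case, just have
# missed its slot and must wait one full table revolution to be served.

def worst_case_latency(n_slots, slot_length):
    """Upper bound (in cycles) on the service latency of one request."""
    return n_slots * slot_length

# Example: a table of 8 slots of 40 cycles each bounds the latency at
# 320 cycles, no matter how the other seven requestors behave.
print(worst_case_latency(n_slots=8, slot_length=40))
```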

The second article is entitled “Dataflow Formalisation of Real-Time Streaming Applications on a Composable and Predictable Multi-Processor SOC” and shows how data-flow graphs can be used to model streaming applications mapped to the CompSOC platform and to predict their minimum throughput. The basic idea is to start from a data-flow graph of the application and add nodes and edges that capture the mapping and timing behavior of all hardware components, software libraries, and schedulers in the system. The approach is verified by comparing the predicted performance to the actual performance of an application executing on a CompSOC instance on an FPGA. The article clearly demonstrates the potential of modeling systems in which the behavior of all hardware and software components is known.
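For readers unfamiliar with this type of analysis: the throughput of such a (homogeneous) data-flow graph is bounded by the inverse of its maximum cycle mean, the largest ratio over all cycles of the total actor execution time on the cycle to the number of initial tokens on it. The Python sketch below computes this bound for a tiny made-up graph by brute-force cycle enumeration; the graph and numbers are illustrative and not taken from the article.

```python
# Illustrative sketch (graph and numbers are made up, not from the article):
# for a homogeneous data-flow graph, attainable throughput is bounded by
# 1 / MCM, where MCM = max over all cycles of
# (sum of actor execution times on the cycle) / (initial tokens on the cycle).

from itertools import permutations

# Actor execution times (e.g. in clock cycles), including any mapping overhead.
exec_time = {"src": 2, "dsp": 5, "sink": 1}

# Edges: (producer, consumer, initial tokens). Self-edges with one token model
# that an actor cannot start a new firing before its previous one completed;
# the back edge from sink to src models a buffer of two tokens.
edges = [
    ("src", "dsp", 0), ("dsp", "sink", 0), ("sink", "src", 2),
    ("src", "src", 1), ("dsp", "dsp", 1), ("sink", "sink", 1),
]

def maximum_cycle_mean(exec_time, edges):
    """Enumerate simple cycles (fine for tiny graphs) and return the maximum
    cycle mean: execution time on the cycle divided by its initial tokens."""
    actors = list(exec_time)
    tokens_on = {(u, v): tok for u, v, tok in edges}
    mcm = 0.0
    for length in range(1, len(actors) + 1):
        for cycle in permutations(actors, length):
            pairs = list(zip(cycle, cycle[1:] + cycle[:1]))
            if all(p in tokens_on for p in pairs):
                tokens = sum(tokens_on[p] for p in pairs)
                if tokens > 0:  # cycles without tokens would deadlock
                    mcm = max(mcm, sum(exec_time[a] for a in cycle) / tokens)
    return mcm

mcm = maximum_cycle_mean(exec_time, edges)
print(f"MCM = {mcm} cycles, throughput bound = {1 / mcm:.2f} firings/cycle")
```

In this example the bottleneck is the self-edge of the dsp actor (5 cycles per firing with a single token), so the throughput bound is 0.20 firings per cycle.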