Please check out this short video for a quick introduction to my research on Design Methodologies for Cyber-physical Systems.
Problem Statement
Cyber-physical systems across application domains are getting increasingly complex, driven by five technological and market trends: 1) key design parameters, e.g. the number of interfaces and lines of code, are increasing by an order of magnitude, 2) increased customization of systems at design time, 3) continuous evolution of systems after deployment, 4) increased system autonomy, and 5) integration into complex, connected, and distributed systems-of-systems of which nobody is in complete control. The consequences of this increasing complexity are visible in daily practice, where industry experiences major setbacks in its attempts to efficiently develop well-performing cyber-physical systems.
Design Methodologies for Cyber-Physical Systems
To manage this increasing complexity, new design methodologies are required. The goal of my research is to develop and apply new methodologies to address this challenge. This is done through a combination of my academic work at the Systems and Networking Lab at the University of Amsterdam and the applied research done at ESI (TNO). My chair Design Methodologies for Cyber-physical Systems at the University of Amsterdam combines two research areas, described below, to address the stated complexity problem.
The first area considers design methodologies for cyber-physical systems in which abstraction, provided by models used for specification, analysis, simulation, or synthesis, plays an essential role. While this area applies to cyber-physical systems in general, the second area focuses on design aspects of real-time systems. Together, these two areas capture much of my existing work in both academic (TU/e, CTU Prague, CISTER) and applied research (ESI) across the application domains and industries in which I have worked, e.g. avionics (Airbus), consumer electronics (Philips & NXP Semiconductors), and defense (Thales).
More information about the two research areas is provided below. This text primarily targets a technical audience and contains links to relevant publications.
Model-based Design Methodologies for Cyber-Physical Systems
The trends towards rapidly growing design parameters and increasing customization have resulted in complex systems with many variants that take a long time to develop and are difficult to evolve during development in response to changing requirements, business needs, or technology. For systems with long lifetimes, it is furthermore important to be able to continuously evolve after deployment. Lacking this ability not only prevents systems from quickly reacting to the previously mentioned changes, but also increases risk, as many small updates are collected into big, infrequent upgrades. New methodologies are hence required to reduce development time, simplify customization for a particular customer, and improve evolvability both during development and after deployment.
My research in this area draws primarily on my experiences with applied research at ESI (TNO), in particular work done in collaboration with Thales. We address the problem by inventing design methodologies based on Model-Based Engineering (MBE). Challenges related to reducing development time, and improving customization and evolvability, have been addressed using Domain-Specific Languages (DSLs), which are custom-made languages for a particular application domain. The benefit of these languages is that they allow domain experts to quickly and independently specify instances of the language, e.g. a system configuration, and to generate, and regenerate in response to changes, consistent artifacts, such as code and documentation. When applying this approach in industry, we have investigated important aspects, such as how to achieve modularity in a DSL eco-system, achieve consistency between models and their realizations, manage an evolving DSL eco-system, ensure model quality, and ensure quality of generated code. These aspects have been explored in the context of the published state-of-the-art, as well as practically investigated through case studies from the defense domain.
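To make the DSL idea concrete, below is a minimal sketch of such a workflow, assuming a hypothetical "sensor" configuration language: a domain expert writes a short specification, and a C header and a documentation fragment are (re)generated from the same model, keeping both artifacts consistent. The language, names, and generated artifacts are purely illustrative; real DSL eco-systems are built with dedicated language workbenches and much richer generators.

```python
# Minimal illustration of the DSL idea: a domain expert writes a short,
# domain-specific specification, and consistent artifacts (here a C header
# and a documentation fragment) are (re)generated from it automatically.
# The "sensor" language below is hypothetical and serves only as an example.

import re

SPEC = """
sensor radar  { rate_hz: 10  priority: 1 }
sensor camera { rate_hz: 30  priority: 2 }
"""

def parse(spec):
    """Parse DSL instances of the form: sensor <name> { key: value ... }."""
    sensors = []
    for name, body in re.findall(r"sensor\s+(\w+)\s*\{([^}]*)\}", spec):
        fields = {k: int(v) for k, v in re.findall(r"(\w+)\s*:\s*(\w+)", body)}
        sensors.append({"name": name, **fields})
    return sensors

def generate_header(sensors):
    """Generate a C header so that code stays consistent with the model."""
    lines = ["/* Generated from the sensor model -- do not edit by hand. */"]
    for s in sensors:
        lines.append(f"#define {s['name'].upper()}_RATE_HZ {s['rate_hz']}")
    return "\n".join(lines)

def generate_docs(sensors):
    """Generate a documentation fragment from the same model."""
    return "\n".join(f"- {s['name']}: {s['rate_hz']} Hz, priority {s['priority']}"
                     for s in sensors)

model = parse(SPEC)
print(generate_header(model))
print(generate_docs(model))
```

Changing a rate in the specification and rerunning the generators updates the code and the documentation together, which is exactly the consistency argument made above.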
My work on continuous evolution after deployment has focused on service-oriented architectures, which decouple the application from a particular product, technology, and implementation using service interfaces that hide the component implementing the service. However, this arrangement results in a large number of possible interactions between different components of different versions in different products and their variants, making it difficult and time-consuming to detect and correct incompatibilities caused by updating service interfaces.
Initial steps towards enabling continuous evolution in service-oriented architectures include: 1) a survey of the state-of-the-art in the areas of specification of service interfaces and detection and correction of incompatible service interactions, 2) a methodology to (semi-)automatically detect and correct incompatible interactions, which is currently under development, and 3) an evaluation of the feasibility of the methodology in the context of a simplified industrial case study from the defense domain.
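To give a flavor of what detecting incompatible interactions involves, the sketch below compares two versions of a service interface, modelled here simply as a mapping from field name to type: removing a field or changing its type breaks existing consumers. The interface, field names, and rules are hypothetical; the methodology under development works on actual interface specifications and also addresses correction, not just detection.

```python
# Hypothetical sketch of detecting incompatibilities between two versions of a
# service interface, modelled as a mapping from field name to type. Removing a
# field or changing its type breaks existing consumers of the old interface.

def breaking_changes(old, new):
    """Return a list of (field, problem) pairs that would break consumers."""
    problems = []
    for field, ftype in old.items():
        if field not in new:
            problems.append((field, "removed"))
        elif new[field] != ftype:
            problems.append((field, f"type changed from {ftype} to {new[field]}"))
    return problems

track_v1 = {"id": "int", "position": "float[3]", "speed": "float"}
track_v2 = {"id": "int", "position": "float[3]", "velocity": "float[3]"}

for field, problem in breaking_changes(track_v1, track_v2):
    print(f"incompatible: {field} ({problem})")   # e.g. 'speed (removed)'
```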
Design of Real-time Cyber-Physical Systems
My research in this area focuses on cyber-physical systems with real-time requirements. The work primarily targets two industrial application domains. The first is consumer electronics and relates to my experiences from Philips Research and NXP Semiconductors in Eindhoven, and the second is safety-critical automotive/avionics systems, which incorporates my experiences from Airbus Group Innovations. Systems in both of these application domains are getting increasingly complex, as a growing number of applications are integrated. These applications consist of communicating tasks mapped on (heterogeneous) multi-core platforms with distributed memory hierarchies that strike a good balance between performance, cost, power consumption, and flexibility. The platforms exploit an increasing amount of application-level parallelism by enabling concurrent execution of more and more applications. To reduce cost, platform resources, such as processors, hardware accelerators, interconnect, and memories, are shared between applications. However, resource sharing causes interference between applications, making their temporal behaviors inter-dependent and challenging to verify. This challenge contributes to making the integration and verification process a dominant part of system development, both in terms of time and money.
My research addresses this problem by using two complexity-reducing concepts: predictability and composability. Predictability provides bounds on system performance, enabling formal verification of applications using a performance analysis framework. Composability provides a complementary verification approach by temporally isolating applications, enabling independent development and verification by simulation without relying on availability of formal models. Predictability and composability both solve important parts of the verification problem, and provide a complete solution when combined. The concepts of predictability and composability are applied both at the higher level of applications and the lower level of platform resources, as detailed next.
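A simple way to see how both properties can be obtained at the level of a shared resource is a time-division multiplexing (TDM) arbiter with a fixed slot table: the worst-case time until an application is served follows directly from the table (predictability), and that bound is independent of what the other applications do (composability). The sketch below is a generic illustration with a made-up slot table and slot duration, not one of the specific arbiters from my publications.

```python
# Illustrative TDM arbiter: each slot in a repeating table is owned by one
# application. The worst-case number of slots until an application is served
# follows from the table alone, independent of the other applications' traffic.

SLOT_NS = 100                     # assumed duration of one TDM slot
TABLE = ["A", "B", "A", "C"]      # repeating slot table: owner of each slot

def worst_case_service_ns(table, app, slot_ns):
    """Worst-case time from any slot boundary until 'app' has had a slot."""
    n = len(table)
    owned = [i for i, owner in enumerate(table) if owner == app]
    assert owned, f"application {app} owns no slots"
    # For every possible arrival slot, count the slots up to and including the
    # next slot owned by the application, and take the maximum over arrivals.
    worst_slots = max(min((o - start) % n for o in owned) + 1 for start in range(n))
    return worst_slots * slot_ns

for app in "ABC":
    print(app, worst_case_service_ns(TABLE, app, SLOT_NS), "ns")  # A: 200, B/C: 400
```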
Application-level Research
At application level, the goal of my work is to reduce development time and cost of real-time systems based on commercial off-the-shelf platforms, while guaranteeing their timeliness. Such systems can be found in a number of domains, including safety-critical ones, such as avionics and automotive, where timeliness and safety go hand in hand. The goal is achieved by researching new design methodologies that enable efficient sharing of platform resources, such as CPU cycles, memory bandwidth, and cache lines, among tasks that may have diverse resource requirements, which can change over time. Our work in this area covers mapping and scheduling on multi-core or many-core platforms that may be both heterogeneous and distributed, like the car in the figure below.
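As a simplified illustration of the mapping part of such methodologies, the sketch below assigns tasks with given CPU utilizations to cores with a first-fit-decreasing heuristic so that no core is overloaded. The task set, utilizations, and heuristic are made up for illustration; the actual methodologies are communication-aware and also budget memory and cache resources.

```python
# Toy mapping problem: assign tasks (with CPU utilizations) to cores so that
# no core exceeds its utilization capacity, using first-fit decreasing.
# Task names and numbers are invented for the example.

TASKS = {"radar": 0.4, "tracking": 0.35, "hmi": 0.3, "logging": 0.2, "bit": 0.1}
NUM_CORES = 2
CAPACITY = 1.0   # assumed utilization capacity per core

def first_fit_decreasing(tasks, num_cores, capacity):
    cores = [{"load": 0.0, "tasks": []} for _ in range(num_cores)]
    for name, util in sorted(tasks.items(), key=lambda t: -t[1]):
        for core in cores:                      # first core with enough room
            if core["load"] + util <= capacity:
                core["load"] += util
                core["tasks"].append(name)
                break
        else:
            raise RuntimeError(f"no core can fit task {name}")
    return cores

for i, core in enumerate(first_fit_decreasing(TASKS, NUM_CORES, CAPACITY)):
    print(f"core {i}: {core['tasks']} (utilization {core['load']:.2f})")
```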
There are three main directions of this work, described below in order from higher to lower level of abstraction:
Mixing Models of Computation
The first direction considers that applications sharing a platform may have different models of computation, some specified as synchronous data-flow (SDF) graphs and others as independent periodic tasks. The main contribution in this direction is a unified design approach, illustrated below, for two very different application models with different assumptions on the system. The unified design approach comprises four steps: 1) the synchronous data-flow graph is transformed into an equivalent homogeneous synchronous data-flow (HSDF) graph, 2) a reduced-size HSDF graph is generated that respects all latency and throughput requirements of the original HSDF graph. This is done through slack-based merging, which clusters nodes in the HSDF graph where possible, sacrificing parallelism to reduce complexity, 3) the HSDF graph is turned into a set of arbitrary-deadline periodic tasks with offsets through timing parameter extraction, and 4) communication-aware mapping maps the converted periodic tasks along with the original set of periodic tasks on a homogeneous multi-core platform.
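To give a flavor of step 1, the sketch below solves the SDF balance equations to obtain the repetition vector, which determines how many copies of each actor appear in the equivalent HSDF graph. The example graph and rates are invented, and this consistency check is standard SDF theory rather than a contribution of the work itself.

```python
# Solving the SDF balance equations q[src] * prod == q[dst] * cons for every
# edge. The smallest integer solution (the repetition vector) gives the number
# of copies of each actor in the equivalent HSDF graph. Example rates invented.

from fractions import Fraction
from math import lcm

# Edges: (source actor, production rate, consumption rate, destination actor)
EDGES = [("A", 2, 3, "B"), ("B", 1, 2, "C")]

def repetition_vector(edges):
    rates = {edges[0][0]: Fraction(1)}   # relative firing rate per actor
    changed = True
    while changed:                       # propagate rates along the edges
        changed = False
        for src, prod, cons, dst in edges:
            if src in rates and dst not in rates:
                rates[dst] = rates[src] * prod / cons
                changed = True
            elif dst in rates and src not in rates:
                rates[src] = rates[dst] * cons / prod
                changed = True
            elif src in rates and dst in rates:
                # Inconsistent graphs have no bounded-memory periodic schedule.
                assert rates[src] * prod == rates[dst] * cons, "inconsistent SDF graph"
    scale = lcm(*(r.denominator for r in rates.values()))
    return {actor: int(r * scale) for actor, r in rates.items()}

print(repetition_vector(EDGES))          # {'A': 3, 'B': 2, 'C': 1}
```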
Mixed-criticality Systems
Much of our work in this area considers mixed-criticality real-time systems, where applications with different safety levels share resources. Different avionics applications and their corresponding design assurance levels (DAL), indicating their criticality, are shown below.
Our research in this direction contributes mapping and scheduling techniques for CPUs, memories (one or two memory controllers), and caches, along with automatic resource budgeting methods that reduce development time, and schedulability analyses that allow timing requirements of tasks to be analytically verified. These techniques promote efficient resource usage by considering and managing variations in supply and demand of resources during execution, e.g. in response to a mode-switch in an executing task, or in the system as a whole. This may reduce the cost of the platform required to schedule a given task set, or allow more tasks to be scheduled on a given platform. Enforcing resource budgets furthermore promotes safety by temporally isolating accesses from different applications, which may reduce the time and cost of safety certification.
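As a stand-in for the schedulability analyses mentioned above, the sketch below performs textbook response-time analysis for independent periodic tasks under fixed-priority preemptive scheduling on a single core. The task set is invented, and the published analyses go considerably further, e.g. accounting for memory, cache, and budget interference.

```python
# Textbook response-time analysis: iterate R_i = C_i + sum_j ceil(R_i / T_j) * C_j
# over all higher-priority tasks j until a fixed point is reached or the
# deadline (assumed equal to the period) is exceeded. Task numbers are invented.

import math

# (name, worst-case execution time, period), highest priority first
TASKS = [("control", 1.0, 5.0), ("tracking", 2.0, 10.0), ("logging", 4.0, 20.0)]

def response_time(index, tasks):
    c_i, t_i = tasks[index][1], tasks[index][2]
    r = c_i
    while True:
        interference = sum(math.ceil(r / t_j) * c_j for _, c_j, t_j in tasks[:index])
        r_next = c_i + interference
        if r_next > t_i:
            return None               # deadline miss: task not schedulable
        if r_next == r:
            return r                  # fixed point: worst-case response time
        r = r_next

for i, (name, _, period) in enumerate(TASKS):
    r = response_time(i, TASKS)
    print(name, "unschedulable" if r is None else f"R = {r} <= T = {period}")
```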
Better Than Worst-case Design
The third direction is mostly relevant to the consumer electronics and high-performance domains and involves investigating how process variation during chip manufacturing impacts the number of chips on which the applications satisfy their timing requirements (the timing yield). The main contribution of this work is a framework for designing streaming applications constrained by a throughput requirement with reduced design margins, referred to as better than worst-case design. Key aspects of this framework are variation-aware algorithms for mapping and voltage-frequency island partitioning that maximize the timing yield or highlight the trade-off between timing yield and design cost.
An application constrained by a throughput requirement is allocated to a multiprocessor platform under (a) worst-case (WCD) and (b) better than worst-case designs. Each hardware component is operated at its (a) worst-case maximum supported (target) frequency, and (b) actual maximum supported frequency. Better than worst-case design (reduced guard-bands) results in more good dies due to a combination of smaller die area and high application yield (c).
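To illustrate the notion of timing yield, the sketch below uses a naive Monte Carlo estimate: the actual maximum supported frequency of each die is sampled from an assumed variation model, and the fraction of dies on which the application still meets its throughput requirement is counted, once for worst-case design and once for better than worst-case design. All numbers and the variation model are invented, and the published framework relies on variation-aware mapping and voltage-frequency island partitioning rather than this brute-force estimate.

```python
# Naive Monte Carlo illustration of timing yield. Under worst-case design a die
# is only good if it supports the guard-banded target frequency; under better
# than worst-case design a die is good if the application meets its throughput
# requirement at the die's actual maximum frequency. All numbers are invented.

import random

TARGET_MHZ = 500            # guard-banded target frequency (worst-case design)
SIGMA_MHZ = 40              # assumed spread of the actual maximum frequency
CYCLES_PER_FRAME = 4_000_000
REQUIRED_FPS = 100          # application throughput requirement
DIES = 100_000

def meets_throughput(freq_mhz):
    return freq_mhz * 1e6 / CYCLES_PER_FRAME >= REQUIRED_FPS

def timing_yield(better_than_worst_case):
    good = 0
    for _ in range(DIES):
        actual_max_mhz = random.gauss(TARGET_MHZ, SIGMA_MHZ)
        if better_than_worst_case:
            good += meets_throughput(actual_max_mhz)     # run at the die's own max
        elif actual_max_mhz >= TARGET_MHZ:
            good += meets_throughput(TARGET_MHZ)         # die must reach the target
    return good / DIES

print("worst-case design yield:            ", timing_yield(False))
print("better than worst-case design yield:", timing_yield(True))
```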