Published: 13th December 2016 DOI: 10.4204/EPTCS.232 ISSN: 2075-2180
A CPS is an integration of networked computational and physical processes with meaningful inter-effects; the former monitors, controls, and affects the latter, while the latter also impacts the former. CPSs have applications in a wide range of systems spanning robotics, transportation, communication, infrastructure, energy, and manufacturing. Many safety-critical systems, such as chemical processes, medical devices, aircraft flight control, and automotive systems, are indeed CPSs. The advanced capabilities of CPSs require complex software and synthesis algorithms, which are hard to verify; in fact, many problems in this area are undecidable. A major step, therefore, is to find abstractions of such systems that are algorithmically verifiable with respect to specific properties describing the partial or overall behaviors of CPSs.
The reviewing process of the workshop involved 12 program committee members. Each submission received at least two reviews. In total, 6 submissions were accepted by the program committee for oral presentation. The workshop program included a keynote talk on analysis-based timing predictability by Sanjoy Baruah (University of North Carolina at Chapel Hill, USA); 4 invited talks by Alessandro Abate (University of Oxford, UK), Sriram Sankaranarayanan (University of Colorado, Boulder, USA), Georgios Fainekos (Arizona State University, USA), and Jyotirmoy Deshmukh (Toyota Technical Center, Gardena, CA, USA); and the 6 contributed presentations. Out of the 6 oral presentations, 5 were selected by the program committee to be included in the proceedings.
V2CPS 2016 would not have been possible without the hard work of a number of people involved in the workshop organization. They include the organizing committee, program committee, additional reviewers, session chairs, the keynote speaker, and the invited speakers. The workshop could not have taken place without the support of Reykjavik University and the iFM 2016 local organizers. Many thanks to the iFM General Chair Marjan Sirjani and her team of local organizers, and to the workshop chairs Marcel Kyas (Reykjavik University, Iceland) and Wojciech Mostowski (Halmstad University, Sweden). We would also like to thank the iFM Steering Committee, as well as EasyChair and EPTCS for their support.
Finally, we would like to thank all of the authors who submitted their work to V2CPS 2016; without them, this workshop would not have been possible. We hope you had an interesting and exciting workshop and also an enjoyable stay in Reykjavik.
Mehdi Kargahi (University of Tehran, Iran)
Ashutosh Trivedi (University of Colorado Boulder, USA)
V2CPS Program Co-Chairs
Organization of the 1st International Workshop on Verification and Validation of Cyber-Physical Systems (V2CPS 2016):
General chair
Marjan Sirjani (Reykjavik University, Iceland)
Program chairs
Mehdi Kargahi (University of Tehran, Iran)
Ashutosh Trivedi (University of Colorado Boulder, USA)
Organizing committee
Ebrahim Ardeshir-Larijani (Institute for Research in Fundamental Sciences (IPM), Iran)
Sadegh Soudjani (University of Oxford, UK)
Program Committee
In order to enable the effective validation and verification of run-time properties of safety-critical cyber-physical systems, it is necessary that these properties be predictable prior to run-time. Such predictability has traditionally been achieved via determinism: (i) implement the systems as deterministic programs; (ii) enforce deterministic behavior on the platforms upon which these programs execute; and (iii) model the interaction between the implementation and the environment via deterministic models.
Such an approach is indeed effective in the sense that predictability is achieved, but this predictability comes at a cost. Enforcing deterministic behavior upon the implementation platforms, and modeling the interaction of the implementation with the physical world using deterministic models, often leads to system implementations that under-utilize platform resources at run time. (Such under-utilization is particularly severe if the implementation platform includes commercial off-the-shelf components, which are typically designed to optimize average-case performance rather than provide deterministic worst-case guarantees.) The systems we seek to build are becoming increasingly more computationally demanding and must therefore be implemented upon ever more powerful platforms; their functionalities are becoming increasingly more complex and they must therefore interact with the physical world in increasingly complex ways. As a consequence, the fraction of the computing capacity of the implementation platform that remains un-utilized at run time because determinism is enforced is reaching unacceptable levels; alternative approaches to establishing system predictability in cyber-physical systems are desirable.
One possible alternative approach is beginning to emerge from ongoing research in the real-time scheduling theory community. This approach recognizes that not all run-time properties of a system are equally critical to establishing its overall correctness, and that the less critical properties may have their correctness established to lower levels of assurance than the more critical ones.
Preliminary evaluations of such an approach indicate that it offers the promise of more resource-efficient implementation than is achieved under the current practice of enforcing determinism.
This contribution discusses a new, formal, measurement-driven and model-based automated verification and synthesis technique, applied to quantitative properties of systems with partly unknown dynamics. We focus on typical CPSs, driven by external inputs and accessed through noisy measurements. This setup is formulated as a data-driven Bayesian model inference problem, formally embedded within a quantitative, model-based verification procedure. The contribution further zooms in on systems represented via stochastic hybrid models, which are probabilistic models with heterogeneous dynamics. With a focus on model-based verification procedures, we provide the characterisation of general temporal specifications. The computation of these specifications, and the synthesis of related control architectures optimising properties of interest, are attained via the development of abstraction techniques based on quantitative approximations. Theory is complemented by algorithms, all packaged in a software tool that is freely available to users.
The design of complex, high-tech, safety-critical systems such as autonomous vehicles, intelligent robots, and cyber-physical infrastructures demands guarantees on their correct and reliable behaviour. Correct functioning and reliability can be attained over models of such systems by the use of formal methods, such as model checking. The formal verification of computer systems allows for the quantification of their properties and for guarantees on their correct functioning. Within computer science, the formal verification of software and hardware has successfully led to industrially relevant and impactful applications, becoming standard practice in the avionics, automotive, and railway industries.
Whilst verification has classically focused on finite-state models, with the ever more ubiquitous embedding of digital components into physical systems, richer models are needed, as correct functioning can only be expressed over the combined behaviour of a digital computer and its surrounding physical system.
The strength of formal techniques, such as model checking, is bound to a fundamental requirement: having access to a model of the system of interest. However, in practice, the dynamical behaviour of most physical systems is known only in part.
We have developed a learning approach to enable reasoning on cyber-physical systems via formal methods [7]. Data-based inference, used to learn a model of the physical system, is combined with logic-based deduction, leading to a credibility level for a formal specification defined over the complex system. This credibility characterises the confidence that the user can put in, for example, a specification expressing system safety.
Additionally, based on the gathered knowledge of the physical system, further experiments can be proposed to efficiently gain more information on the satisfaction of the specifications of interest [8, 9]. More precisely, in order to increase the knowledge of the system at hand, it is often possible to collect measurements of its behaviour before deployment. Such measurements of physical systems are usually available as time series of input and noisy output signals, and carry information about the system dynamics. By exploiting inference-based learning methods, the information in these samples can be added to the available knowledge of the system at hand, allowing the confidence in the validity of the specification of interest to be refined.
In the literature, only limited work within the formal methods community deals with the verification of models with partly unknown dynamics. The new approach, which allows both partly unknown dynamics over uncountable (continuous) variables and noisy output measurements, uses a Bayesian framework relating the credibility (confidence) in the asserted validity of a formal property to the uncertainty built over a model from data. This approach provides a new integration of model-based verification methods (for reasoning) and data-driven inference techniques (for learning) [7].
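As a rough, self-contained illustration of this idea (a minimal Python sketch of our own, not the formulation or notation of [7]): for a scalar system x[k+1] = a·x[k] + w[k] with an unknown gain a, a conjugate Bayesian update over noisy state measurements yields a posterior over a, and the posterior mass on the region |a| < 1 plays the role of the confidence that the stability property holds.

```python
# Minimal sketch (not the method of [7]): Bayesian inference of an unknown
# gain `a` in x[k+1] = a*x[k] + w[k]; the posterior probability that the
# system is stable (|a| < 1) serves as a confidence level in that property.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Simulate noisy measurements from a system whose true gain is unknown to us.
a_true, noise_std = 0.8, 0.2
x = np.empty(100)
x[0] = 1.0
for k in range(99):
    x[k + 1] = a_true * x[k] + noise_std * rng.standard_normal()

# Conjugate Gaussian update for `a` given the pairs (x[k], x[k+1]).
prior_mean, prior_var = 0.0, 10.0
xs, ys = x[:-1], x[1:]
post_var = 1.0 / (1.0 / prior_var + np.sum(xs**2) / noise_std**2)
post_mean = post_var * (prior_mean / prior_var + np.sum(xs * ys) / noise_std**2)

# Confidence in the property "the system is stable", i.e. P(|a| < 1 | data).
post = stats.norm(post_mean, np.sqrt(post_var))
confidence = post.cdf(1.0) - post.cdf(-1.0)
print(f"posterior gain estimate: {post_mean:.3f} (sd {np.sqrt(post_var):.3f})")
print(f"confidence in stability: {confidence:.4f}")
```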
While emphasising the generality of the approach over a number of diverse model classes, this talk zooms in on systems represented via stochastic hybrid models (SHS) [4], which are probabilistic models with heterogeneous dynamics (continuous/discrete, i.e. hybrid, as well as nonlinear). As such, SHS are quite a natural framework for CPS [1]. With focus on model-based verification procedures, this line of research provides the characterisation of general temporal specifications based on Bellman's dynamic programming [4, 11]. The computation of such properties and the synthesis of related control architectures optimising properties of interest, is attained via the development of abstraction techniques based on quantitative approximations [2, 3]. This abstraction approach employs methods and concepts from the formal verification area, such as that of (approximate probabilistic) bisimulation [5], over models and problems known in the field of systems and control [10]. Theory is complemented by algorithms, all packaged in a software tool — FAUST2 [6] — that is freely available to users.
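The abstraction step can be illustrated in drastically simplified form (a toy sketch of our own, far simpler than the algorithms implemented in FAUST2 [6]): a one-dimensional stochastic system is gridded into a finite Markov chain, and a bounded-horizon probabilistic safety property is computed by the dynamic-programming recursion mentioned above.

```python
# Toy sketch of quantitative abstraction (not FAUST2 [6]): the 1-D stochastic
# system x' = 0.9*x + w, w ~ N(0, 0.1), is gridded into a finite Markov chain,
# and P(stay in the safe set [-1, 1] for N steps) is computed by the
# dynamic-programming recursion for probabilistic invariance.
import numpy as np
from scipy import stats

cells = np.linspace(-1.0, 1.0, 41)        # cell centres covering the safe set
width = cells[1] - cells[0]
sigma = 0.1

# Transition matrix: probability of jumping from cell i into cell j;
# probability mass leaving the safe set is implicitly absorbed (unsafe).
P = np.zeros((cells.size, cells.size))
for i, x in enumerate(cells):
    nxt = stats.norm(0.9 * x, sigma)
    P[i] = nxt.cdf(cells + width / 2) - nxt.cdf(cells - width / 2)

# Value iteration for the bounded-horizon safety probability.
N = 10
V = np.ones(cells.size)                   # V_N = 1 inside the safe set
for _ in range(N):
    V = P @ V                             # V_k(x) = E[V_{k+1}(x'); x' safe]
print(f"P(remain in [-1,1] for {N} steps | x0 = 0) is approx. {V[cells.size // 2]:.4f}")
```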
This contribution introduces key challenges in the verification and validation of industrial-scale embedded control systems, such as those found in the domain of automotive powertrain. Even some of the most basic models of such systems contain features such as nonlinear and hybrid dynamics, which makes the verification problem intractable. A common abstraction for models of such complexity is to treat them as black-box systems, i.e. systems that we can stimulate with a given set of inputs, and whose behaviors we can observe over a given set of outputs. We introduce past work that uses a simulation-guided approach for testing and analysis of such black-box models and review some recent progress. We also introduce a logical formalism for specifying behavioral requirements, and review some recent progress on making such requirement formalisms accessible to control designers.
At the heart of an automobile is its powertrain, and the operation of a powertrain component such as an engine is controlled by embedded software deployed on electronic control units. The paradigm of model-based development (MBD) has become the de facto standard for designing such control software. Typically, the MBD paradigm uses two kinds of models: plant models that use dynamical equations to represent the continuously evolving physical aspects of a vehicle, and controller models that represent the discrete software controlling the plant. The advantage of MBD is in its ability to design, validate, and analyze the closed-loop model of the plant and the controller, often well before the actual hardware components become available. Unfortunately, even the simplest closed-loop model of an automotive powertrain subsystem is a complex cyber-physical system with highly nonlinear and hybrid dynamics, and reasoning about the correctness of such closed-loop models is a formidable task. We introduce two key challenges in reasoning about industrial-scale closed-loop control models: (1) scaling falsification techniques to closed-loop engine control systems, and (2) formalisms to express correctness and performance requirements for such models. We mention some of the most relevant existing work done to answer these challenges in the context of industrial-scale embedded control models, and present some recent progress on each item.
As the verification problem for closed-loop embedded control models is formidable, the prevalent practice in industry is extensive model-based testing. Here, control designers use heuristics and previous experience to pick a set of test input signals and system parameters to perform a numerical simulation of the system-under-test. Designers typically perform a finite number of such simulations, and identify undesirable behaviors by manual inspection of the resulting outputs. These techniques are incomplete in the sense that they provide no guarantee that a bug will be found if one exists; however, they significantly increase the likelihood of identifying problems early in the system design. Falsification techniques [1, 3] seek to automate this process in two ways. First, they allow designers to express a set of machine-checkable requirements on the system behaviors using the logical formalism of real-time temporal logics. Second, they formulate the search for errors as an optimization problem by encoding satisfaction of the requirements with a function that maps a property and a simulation trace to a real number.
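The optimization view can be sketched as follows (a minimal, hypothetical Python example; the toy system, the requirement "output stays below 120", and the choice of optimizer are ours and are not taken from [1, 3]):

```python
# Minimal sketch of falsification-as-optimization (not the internals of
# S-TaLiRo [1] or Breach [3]): the requirement "output stays below 120" is
# given the cost min_t (120 - y(t)); a negative cost means the requirement is
# violated, so the search simply minimizes this cost over the input space.
import numpy as np
from scipy.optimize import differential_evolution

def simulate(throttle_level):
    """Toy 'black-box' system: first-order response to a constant throttle."""
    t = np.linspace(0.0, 10.0, 200)
    return 130.0 * throttle_level * (1.0 - np.exp(-t / 2.0))

def cost(params):
    y = simulate(params[0])
    return np.min(120.0 - y)          # robustness of "always (y < 120)"

result = differential_evolution(cost, bounds=[(0.0, 1.0)], seed=0)
if result.fun < 0:
    print(f"falsified: throttle = {result.x[0]:.3f}, robustness = {result.fun:.2f}")
else:
    print("no falsifying input found")
```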
Existing falsification tools have two shortcomings. First, tools such as S-TaLiRo [1] and Breach [3] offer a number of global optimizers to perform the search for falsifying inputs; however, when the cost surface induced by the property satisfaction function is a nonconvex or discontinuous surface, global optimizers are often limited in their efficacy.
The second shortcoming is the way a user is required to specify the search space in the falsification tools. As the space of input signals is infinite-dimensional, to make the search tractable these tools typically use a finite parameterization of the input signals, where the designer specifies an input signal using a fixed number of control points. Control points are essentially time-points at which the global optimizer is allowed to pick signal values. The actual input signal is constructed by interpolating the signal values between the control points. For most falsification problems, the right number of control points for parameterizing the input space is difficult to judge. As a result, users have to manually try different numbers of control points in a somewhat ad-hoc fashion.
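The following sketch illustrates such a control-point parameterization (our own illustration; the function and parameter names are hypothetical):

```python
# Sketch of the control-point parameterization described above (not the exact
# scheme of any particular tool): the optimizer picks k values at fixed
# control times, and the concrete input signal is obtained by interpolating
# between them on the fine time grid used for simulation.
import numpy as np

def signal_from_control_points(values, t_end=10.0, n_samples=1000):
    """Piecewise-linear input signal defined by k control points."""
    control_times = np.linspace(0.0, t_end, len(values))
    sim_times = np.linspace(0.0, t_end, n_samples)
    return sim_times, np.interp(sim_times, control_times, values)

# A 4-control-point throttle profile; the optimizer would search over `values`.
times, u = signal_from_control_points(values=[0.2, 0.9, 0.4, 0.7])
```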
To address both these shortcomings, we propose a new falsification tool called Sitar [2], which builds on an optimization heuristic drawn from the discrete optimization literature. We allow the user to specify a discrete grid (possibly non-uniform) over the input signals. An input signal is then obtained by picking one of the discrete signal values at each discrete time-point on the grid. Interpolation is used to obtain values at the intermediate time-points. This way of specifying signals makes the search space finite (in contrast to the search spaces for S-TaLiRo or Breach). We then use a local search procedure (employing a stochastic gradient descent) to find a signal in this finite space that has the lowest cost, and use a Tabu list to avoid revisiting points that have been previously visited. The method also permits refinement of the discrete grid across iterations. This permits the method to discover the parameterization required to find a falsifying input, through iterative refinements. We allow the algorithm to randomly restart from some new input signal if the local search procedure is deemed to be converging to a local optimum. This heuristic often allows the optimization procedure to effectively search the nonconvex and possibly discontinuous cost surface.
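A highly simplified sketch of such a grid-based local search with a Tabu list is given below (our own illustration; it omits the stochastic gradient descent, grid refinement, and other details of the actual Sitar [2] procedure):

```python
# Bare-bones grid-based local search with a Tabu list and random restarts.
# Inputs live on a discrete value grid; a move shifts one control point up or
# down by one grid level; the search stops once the cost (robustness) goes
# negative, i.e. the requirement is falsified.
import random

def local_search_falsify(cost, levels, n_points, max_iters=2000, seed=0):
    rng = random.Random(seed)
    current = tuple(rng.randrange(len(levels)) for _ in range(n_points))
    best = current
    tabu = {current}
    for _ in range(max_iters):
        if cost([levels[i] for i in best]) < 0:      # negative robustness: done
            break
        # Neighbours: move one control point up or down by one grid level.
        neighbours = [current[:i] + (current[i] + d,) + current[i + 1:]
                      for i in range(n_points) for d in (-1, 1)
                      if 0 <= current[i] + d < len(levels)]
        candidates = [n for n in neighbours if n not in tabu]
        if not candidates:                           # stuck: random restart
            current = tuple(rng.randrange(len(levels)) for _ in range(n_points))
        else:
            current = min(candidates, key=lambda n: cost([levels[i] for i in n]))
        tabu.add(current)
        if cost([levels[i] for i in current]) < cost([levels[i] for i in best]):
            best = current
    return [levels[i] for i in best]

# Example: 4 control points on a 5-level throttle grid, against the
# (hypothetical) robustness of "always (y < 120)" for a toy gain-130 system.
falsifier = local_search_falsify(cost=lambda u: 120.0 - 130.0 * max(u),
                                 levels=[0.0, 0.25, 0.5, 0.75, 1.0], n_points=4)
```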
Machine-checkable formal requirements are not commonplace in the automotive industry. There are several reasons for this; first, traditional metrics used to evaluate control system performance, such as overshoot and settling time, may be difficult to formulate in the languages typically used to create formal requirements. This is because formal requirement languages are traditionally used to specify behaviors of finite state systems, rather than the continuous behaviors considered by control engineers. The second reason is that formal requirements take time to develop, and automotive design cycles are usually tight, meaning that the time available for activities that lie outside of the traditional automotive development processes is quite limited. The final issue is that of cross-domain knowledge: automotive engineers are not typically familiar with the languages used to create formal requirements. Such languages and associated logics are typically better known to computer scientists, while automotive engineers are typically trained as mechanical and electrical engineers.
We would thus ideally like control designers to use formal machine-checkable requirements in the model-based development process without requiring extensive knowledge of logical formalisms. To this end, we developed ST-Lib (signal template library) [5], a library of formal requirements for behavior classes that are often the focus of testing processes performed by automotive control engineers. Each signal template is a parameterized requirement, which can be instantiated to specific usage scenarios by specifying the values for the template parameters. For example, a requirement on the settling behavior of a signal is specified using parameters to define a settling region (the region within which the signal settles) and the settling time, i.e. the time after which the signal never leaves the settling region. A key advantage of using ST-Lib is that each template corresponds to a parametric Signal Temporal Logic formula. Signal Temporal Logic (STL) is a real-time temporal logic, a formalism used by tools such as Breach [3]. Techniques such as requirement mining [4], which identify concrete requirements from system models given requirement templates, have been developed for parametric STL formulas, and can be used to automatically identify worst-case requirements from system models.
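As an illustration, the following sketch instantiates a settling-style template and checks it with a naive Boolean monitor (the parameter names and the monitor are ours; the actual ST-Lib [5] templates are parametric STL formulas and may differ in detail):

```python
# Sketch of an ST-Lib-style settling template: the parametric requirement
# "always after t_settle, |y - y_ref| <= tol" is instantiated by fixing the
# template parameters and checked here with a naive Boolean monitor.
import numpy as np

def settling_requirement_holds(t, y, y_ref, tol, t_settle):
    """Naive monitor for: always_[t_settle, end] (|y - y_ref| <= tol)."""
    after = t >= t_settle
    return bool(np.all(np.abs(y[after] - y_ref) <= tol))

# Instantiation of the template: settle to 100 +/- 2 within 5 seconds.
t = np.linspace(0.0, 10.0, 500)
y = 100.0 * (1.0 - np.exp(-t))            # toy step response
print(settling_requirement_holds(t, y, y_ref=100.0, tol=2.0, t_settle=5.0))
```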
We conclude by highlighting some of the challenges for future work. A key challenge is to enrich the library of signal templates to further include requirements that are of interest to control designers, such as those based on frequency domain analysis of signals. Also of interest is requirement elicitation, or the process by which requirements can be identified by a designer using examples of good or bad signal behaviors. Possible directions of research include application of machine learning approaches to such requirement engineering tasks.
In formal verification of Cyber-Physical Systems (CPS), a system is verified with respect to formal specifications. However, developing formal specifications using formal logics is a challenging and error-prone task even for experts who have formal mathematical training. To assist in the elicitation of formal specifications, in [6], we presented a graphical formalism and the accompanying tool VISPEC that can be utilized by both expert and non-expert users. Namely, a user-developed graphical input is translated to a Metric Interval Temporal Logic (MITL) formula [2]. The formal specifications in MITL can be used for testing and verification with tools such as S-TALIRO [3].
In [6], the tool was evaluated through a usability study which showed that both expert and non-expert users were able to use the tool to elicit formal specifications. The usability study results also indicated that in many cases the developed specifications were incorrect. In addition, using an on-line survey (available at: http://goo.gl/forms/YW0reiDtgi), we tested how well formal-methods experts can translate natural-language requirements to MITL. The preliminary results indicate that even experts can make errors in their specifications. For example, for the natural-language requirement "At some time in the first 30 seconds, the vehicle speed (v) will go over 100 and stay above 100 for 20 seconds", the specification φ = ◊[0,30]((v > 100) ⇒ □[0,20](v > 100)) was provided as an answer by an expert user. Here, ◊[0,30] stands for "eventually within 30 time units" and □[0,20] for "always from 0 to 20 time units". However, the specification φ is a tautology, i.e., it evaluates to true no matter what the system behavior is: if v ≤ 100 at some time within the first 30 seconds, the implication holds vacuously at that time, and if v > 100 throughout the first 30 seconds, the implication holds at time 0. Thus, the requirement φ is invalid.
This indicates that specification correctness is a major issue in testing and verification, since effort can be wasted in checking incorrect requirements or, even worse, the system can pass the incorrect requirements. Clearly, this can lead to a false sense of system correctness. In [5], we developed a specification analysis framework that enables the debugging of specifications. The specification debugging algorithm identifies invalid and erroneous specifications.
In this extended abstract, which accompanies the invited talk at V2CPS 2016 by Georgios Fainekos, we briefly review the results in [5] along with some newer results on trace (trajectory) based specification debugging.
We first provide a framework that helps users detect specification errors, so that requirement issues can be corrected before any testing and verification process is initiated.
Problem 1 (System Independent MITL Analysis) Given an MITL formula φ, find whether φ has any of the following specification issues: (1) validity, (2) redundancy, (3) vacuity.
In particular, issues 2 and 3 usually indicate some misunderstanding in the requirements. The overview of our proposed solution to Problem 1 is provided in Fig. 1. We present a specification debugging algorithm for a fragment of MITL specifications. This framework appeared in [5], and it can solve the specification correctness problem for VISPEC [6] requirements. The user of VISPEC can benefit from our feedback and fix any reported issues.
In order to find any issues in MITL specifications, we need to utilize the MITL satisfiability solver [4]. For checking the validity issue, we can easily use the MITL satisfiability solver. To find the redundancy and vacuity issues, we need to apply syntactic manipulations on the MITL formulas. In [5], we proved that for the fragments of MITL where only eventually (◊) and always (□) operators are used, we can apply the syntactic manipulations for detecting the redundancy and vacuity of specifications (Problem 1). Therefore, we can find redundancy and vacuity issues using MITL satisfiability checking on the manipulated MITL formulas [5]. As a next step, we implemented the whole framework of Fig. 1 using the MITL satisfiability solver [4]. As a case study, we checked the correctness of the MITL specifications provided in the usability study of [6]. The experimental results of applying Fig. 1's framework are reported in [5]. The application of our specification debugging tool on user derived requirements shows that all three types of specification issues are common [5].
However, some specification issues cannot be detected unless we consider the system and test its behaviors against the specification. Thus, we provide algorithms to detect specification issues with respect to signals, in order to help CPS developers find deeper specification issues during system testing. For example, consider the MITL specification φ = □[0,5](req ⇒ ◊[0,10] ack). This formula passes the MITL specification debugging method of Fig. 1. However, any signal μ that does not satisfy req at any point in time during the test vacuously satisfies φ. We refer to signals that do not satisfy the antecedent (precondition) of a subformula as vacuous signals.
Problem 2 (System Dependent Vacuity Checking) Given an MITL formula φ, and signal μ, check whether μ satisfies the antecedent failure mutation of φ.
An antecedent failure mutation is a new formula created with the assertion that the precondition (for example, req in the case of φ = □[0,5](req ⇒ ◊[0,10] ack)) never happens. To detect the antecedent failure of signals, we extract all the antecedent subformulas of ⇒ and, for each antecedent, create a new MITL formula φ' asserting that the antecedent never happens. We then run the monitoring algorithm in S-TALIRO [3] to check whether the signal μ satisfies φ'. If μ satisfies φ', then μ is a vacuous signal and it is reported to the user.
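The check itself can be sketched as follows (our own illustration in Python, not the S-TALIRO [3] implementation):

```python
# Sketch of antecedent-failure (vacuity) checking: for the specification
#     always_[0,5] (req -> eventually_[0,10] ack),
# the mutation asserts that `req` never becomes true over the trace; a trace
# satisfying the mutation is vacuous with respect to the original formula.
def antecedent_never_holds(trace, antecedent):
    """True iff the antecedent proposition is false at every sample."""
    return all(not antecedent(state) for state in trace)

# Toy trace: (req, ack) pairs sampled over time; here req is never raised.
trace = [(False, False)] * 20
is_vacuous = antecedent_never_holds(trace, antecedent=lambda s: s[0])
print("vacuous signal" if is_vacuous else "non-vacuous signal")
```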
In order to use vacuity detection (Problem 2) in CPS testing, we provide our framework in Fig. 2. The input generator creates initial conditions and inputs for the system under test. An example of a test generation technology that implements the architecture in Fig. 2 is presented in [1]. The system is executed or simulated to generate an output trace. Then, a monitor checks the trace with respect to the specification and reports to the user whether the system trace satisfies or falsifies the specification. Signal vacuity checking is also conducted, and vacuous traces (signals) are reported to the user for further inspection (see Fig. 2). Our experimental results showed that, for CPS applications, antecedent failure is a serious problem that needs to be addressed in the testing and monitoring of CPS.
We introduced a framework that helps developers correct their specifications and avoid wasting effort on checking incorrect requirements. Currently, we are working on using vacuous signals to improve the counterexample generation process and on system debugging using signal vacuity.
This research was partially funded by NSF awards CNS 1350420, CNS 1319560, IIP-1361926 and the NSF I/UCRC Center for Embedded Systems.
We present an approach for synthesizing switched controllers for continuous-time plants, so that the resulting closed-loop system satisfies specifications expressed in temporal logic. Our approach uses a combination of control certificates such as control Lyapunov functions and control barriers. Such a certificate provides a controller switching strategy and associated property proof in one shot. However, the difficulty of our approach lies in synthesizing such certificates automatically from the given plant description and property specification. We present solutions to this synthesis problem by combining SMT solvers commonly used in program verification/synthesis applications with techniques for relaxing polynomial optimization problems commonly used in control literature. We illustrate our approach on a series of challenging benchmark problem instances.
Automatic synthesis of controllers from specifications has had a long history of research both within computer science and control theory. A recent convergence of techniques from both areas has given rise to techniques for the formal synthesis of control software from high level behavioral specifications using dynamical models of the plant's behaviors [5, 4, 12, 16, 8, 6].
The formal approach to controller synthesis promises many advantages including (a) the ability to handle nonlinear plant models without requiring linearization, (b) the ability to incorporate constraints on the control inputs and (c) the ability to handle temporal specifications including multiple requirements inside the synthesis procedure.
At the same time, achieving controller synthesis as a push-button technology is hard due to the high complexity of the algorithms involved. As a result, current approaches are restricted to a limited class of plant models, state spaces with low dimensions, or a restricted class of specifications. Much research has focused on tackling these limitations to enable truly practical approaches to the synthesis of control software.
The dominant research theme has been the construction of discrete abstractions from the original continuous plant specification and the use of automata-based control synthesis from high-level specifications expressed in suitable logics such as LTL [4, 5, 16]. Whereas this approach has been shown to be promising using tools such as Pessoa and TuLiP, the main bottleneck is the construction of the abstraction, which involves a gridding of the state space into finitely many cells followed by a discrete controller synthesis problem that typically involves solving games over automata. As such, the complexity of these algorithms is high: discretization yields automata whose sizes scale exponentially in the number of dimensions of the plant model. Synthesis of winning strategies over finite state machines can also exhibit exponential complexity in the size of the specification, though efficient symbolic algorithms have been shown to scale to large systems.
Our work presents an approach that restricts the class of specifications to commonly encountered classes such as safety, stability, reachability and reach-while-stay specifications [11, 10, 9]. Rather than construct an abstraction, we search for control certificates in the form of Lyapunov-like functions whose existence directly yields a proof that the plant model can be controlled to satisfy the specification of interest. The controller itself is extracted directly from the resulting control certificate.
Formulating Control Certificates: A control certificate takes the form of a Lyapunov-like function V(x) over states x ∈ X that provides a proof that at each state x there is a control input u that can be used to enforce the property of interest.
Consider the problem of stabilizing a plant model described by an ODE ẋ = f(x, u) to an equilibrium x* = 0, wherein x ∈ X and u ∈ U. The corresponding control certificate takes the form of a control Lyapunov function V(x) defined over X; a standard formulation of the required conditions is sketched below.
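For reference, a textbook statement of these conditions is (our sketch, following Artstein [1] and Sontag [15]; the exact side conditions used in [11, 10, 9] may differ):

```latex
% Standard control Lyapunov function (CLF) conditions -- a sketch, not
% necessarily the exact formulation used in this line of work.
V(0) = 0, \qquad V(x) > 0 \ \text{ for all } x \in X \setminus \{0\},
\qquad\text{and}\qquad
\forall x \in X \setminus \{0\}\ \ \exists u \in U:\ \nabla V(x) \cdot f(x, u) < 0 .
```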
Given a plant model, if such a control certificate is found, the system can be stabilized. In fact, the controller that stabilizes the system can be extracted from the control Lyapunov function. Control Lyapunov functions were first proposed by Artstein [1]; Sontag showed the correspondence between control Lyapunov functions and corresponding feedback laws [15].
Dimitrova and Majumdar introduced the idea of proof certificates for dealing with ATL* properties of control systems [2]. Our work has formulated specialized certificates for simpler classes of properties that are more amenable to the techniques for automatically discovering certificates that we address subsequently [11].
Finding Control Certificates: A key problem is the discovery of control certificates from which to synthesize the controllers. A standard approach is to fix an ansatz or template form such as V(x) := ∑_α c_α m_α(x), wherein the c_α range over unknown coefficients and the m_α(x) range over a finite set of basis functions. Typically, the basis functions are chosen to be monomials x^α with |α| ≤ D for some chosen degree bound D. The conditions for the control certificates are encoded as constraints over the unknowns c_α. However, it is well known that the resulting constraints are nonlinear and involve quantifier alternation. As a result, they are hard to solve in general.
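A template of this form can be constructed symbolically, for example as follows (our own sketch; the names and the use of sympy are illustrative assumptions):

```python
# Sketch of a degree-bounded polynomial template: the candidate certificate
# V(x) = sum_alpha c_alpha * x^alpha with 0 < |alpha| <= D is built
# symbolically; the coefficients c_alpha are the unknowns that the synthesis
# procedure must solve for.
import itertools
import sympy as sp

def polynomial_template(state_vars, degree_bound):
    monomials, coeffs = [], []
    for alpha in itertools.product(range(degree_bound + 1), repeat=len(state_vars)):
        if 0 < sum(alpha) <= degree_bound:       # skip the constant term so V(0) = 0
            monomials.append(sp.Mul(*[v**a for v, a in zip(state_vars, alpha)]))
            coeffs.append(sp.Symbol("c_" + "_".join(map(str, alpha))))
    return sum(c * m for c, m in zip(coeffs, monomials)), coeffs

x1, x2 = sp.symbols("x1 x2")
V, unknowns = polynomial_template([x1, x2], degree_bound=2)
print(V)   # e.g. c_0_1*x2 + c_0_2*x2**2 + c_1_0*x1 + c_1_1*x1*x2 + c_2_0*x1**2
```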
Our approach uses a procedure called Counter-Example Guided Inductive Synthesis (CEGIS), originally proposed for the problem of program sketching by Solar-Lezama et al. [14, 13]. This procedure alternates between solving quantifier-free formulas using existing decision procedures. We have investigated the use of the CEGIS procedure combining a linear arithmetic decision procedure [7] with a nonlinear arithmetic procedure [3]. Recent work has used linear matrix inequalities (LMIs) as efficient relaxations for the nonlinear decision procedure involved [10]. As a result, we have successfully solved a series of benchmarks obtained from related work on nonlinear control design, ranging from 2 to 6 state variables.
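The overall alternation can be sketched generically as follows (a bare-bones outline; the actual procedures of [14, 13, 10] issue SMT queries and LMI relaxations rather than the abstract callbacks used here):

```python
# Bare-bones sketch of the CEGIS alternation: a synthesizer proposes candidate
# template coefficients consistent with the witness states collected so far,
# and a verifier searches for a counterexample state; the loop ends when no
# counterexample exists (certificate found) or no candidate remains.
def cegis(propose, find_counterexample, max_rounds=50):
    witnesses = []
    for _ in range(max_rounds):
        candidate = propose(witnesses)        # quantifier-free synthesis query
        if candidate is None:
            return None                       # no candidate exists for this template
        cex = find_counterexample(candidate)  # quantifier-free verification query
        if cex is None:
            return candidate                  # verified: control certificate found
        witnesses.append(cex)                 # refine with the counterexample state
    return None
```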
Open Challenges: Despite some progress, many open challenges remain that are topics of current and future work. (A) To be practical, the synthesis approach needs to scale to plant models with 10 or more state variables and beyond polynomial differential equations. We are working on techniques that can potentially scale to large systems by decomposing them structurally into smaller subsystems. (B) Many approaches focus on state-feedback controllers, wherein the feedback is computed as a function of the state x of the plant model. However, output feedback is often more desirable than state feedback. Directly synthesizing output feedback controllers is an open challenge in this area. (C) Currently, the synthesis stops after it finds a single feasible control implementation. Often, we encounter performance criteria that need to be optimized. Incorporating such criteria will enable us to design controllers that guarantee correctness specifications while also optimizing performance.
We thank the organizers of V2CPS for inviting us to present our work. This work was supported by the US National Science Foundation (NSF) under CAREER award # 0953941 and CCF-1527075. All opinions expressed are those of the authors and not necessarily of the NSF.