Published: 7th July 2016 DOI: 10.4204/EPTCS.217 ISSN: 2075-2180
These proceedings contain the papers that were presented at the Workshop on FORmal methods for the quantitative Evaluation of Collective Adaptive SysTems (FORECAST), which took place on 8 July 2016 in Vienna, Austria, as a satellite event of the 4th federated event on Software Technologies: Applications and Foundations (STAF 2016).
FORECAST's primary goal is to raise awareness in the software engineering and formal methods communities of the particularities of Collective Adaptive Systems (CAS) and the design and control problems that they bring. The workshop is sponsored by the FP7-ICT-600708 European project QUANTICOL (A Quantitative Approach to Management and Design of Collective and Adaptive Behaviours), which runs from 2013 to 2017.
CAS consist of a large number of spatially distributed heterogeneous entities with decentralised control and varying degrees of complex autonomous behaviour that may be competing for shared resources even when collaborating to reach common goals. It is important to carry out thorough quantitative modelling, analysis, and verification of their design to investigate all aspects of their behaviour before they are put into operation. This requires combinations of formal methods (e.g. stochastic process algebras and associated verification techniques, such as quantitative model checking) and applied mathematics (e.g. mean-field/continuous approximation and control theory) that, moreover, scale to large-scale CAS.
ICT-based CAS are at the core of the envisioned smart cities of the future, which are on the research agenda of many EU and other international institutions and think-tanks. In fact, case studies in the QUANTICOL project concern smart urban transport (e.g. bike-sharing systems) and smart grids, i.e. systems of heterogeneous components with competing goals that must manage resources in a fair and efficient way. This is particularly challenging when designing for behaviour that is emergent and spatially inhomogeneous, but must nevertheless be guaranteed to satisfy operational requirements.
As a satellite event of STAF 2016, FORECAST is co-located with the 14th International Conference on Software Engineering and Formal Methods (SEFM 2016), thus providing an excellent opportunity for exchanging results and experiences on applying quantitative modelling and analysis techniques from formal methods and software engineering and technologies.
The workshop received five regular paper submissions, and we invited two active researchers from the QUANTICOL project, Diego Latella (ISTI-CNR, Italy) and Mirco Tribastone (IMT Lucca, Italy), to submit extended abstracts of invited talks presenting their latest results from the project in a broader context. No fewer than four Program Committee members reviewed these submissions. Furthermore, the workshop included three keynote talks by renowned experts on the workshop topics, at the crossroads of SEFM and QUANTICOL, who are not participating in the project: Adelinde Uhrmacher (University of Rostock, Germany), Gul Agha (University of Illinois at Urbana-Champaign, USA), and Mirko Viroli (University of Bologna, Italy). We thank all these invited speakers for accepting our invitation.
Thanks are also due to the Program Committee members, listed below, for their careful and swift reviewing. We are grateful to the STAF Workshop Chairs, Manuel Wimmer (TU Wien, Austria), Dániel Varró (Budapest University of Technology and Economics, Hungary), and Paolo Milazzo (University of Pisa, Italy), for accepting FORECAST as a satellite event at STAF 2016 and to the STAF organisers, General Chair Gerti Kappel (TU Wien, Austria) and Organization Chair Tanja Mayerhofer (TU Wien, Austria) for the smooth organisation and the pleasant interaction concerning organisational matters. We would also like to take this opportunity to thank Andrei Voronkov for his excellent EasyChair system that automates most of the tasks involved in organising and chairing a workshop. Finally, we thank EPTCS and its editor-in-chief, Rob van Glabbeek, for agreeing to publish the proceedings of FORECAST 2016.
We interpret collective adaptive systems as systems that operate on different organizational levels and whose structures, in terms of components and interactions, change over time. These systems pose specific challenges for modeling as well as simulation methods. First, modeling languages have to be sufficiently expressive to capture the essentials of collective adaptive systems. We present the multi-level modeling languages ML-Rules, ML-Space, and ML3, and discuss the features that facilitate developing and maintaining compact models of collective adaptive systems. Second, experimentation languages like SESSL can facilitate specifying, executing, and documenting complex "in-silico" experiments, e.g., parameter scans, statistical model checking, or optimization experiments comprising multiple executions and analysis steps.
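To illustrate the kind of experiment such languages describe, the following sketch runs a parameter scan with multiple replications per configuration over a toy bike-sharing station model. The simulator, its parameters, and the served-fraction statistic are all invented for illustration; this is not SESSL's actual syntax, which is a Scala-internal DSL.

```python
import random
from statistics import mean

def simulate(initial_bikes, demand_prob, seed):
    """Hypothetical stochastic simulator of a single bike-sharing station:
    returns the fraction of rental requests that could be served."""
    rng = random.Random(seed)
    stock, served, arrivals = initial_bikes, 0, 0
    for _ in range(200):                  # 200 random events
        if rng.random() < demand_prob:    # a rental request arrives
            arrivals += 1
            if stock > 0:
                stock -= 1
                served += 1
        else:                             # a bike is returned
            stock += 1
    return served / max(arrivals, 1)

# Parameter scan: several replications per configuration, then aggregation.
for bikes in (2, 5, 10):
    replications = [simulate(bikes, 0.7, seed) for seed in range(20)]
    print(bikes, round(mean(replications), 3))
```

An experimentation language additionally records the configuration, random seeds, and results alongside the runs, so that the whole experiment remains reproducible and documentable.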
However, executing experiments with complex models comes at a price. For example, in the case of ML-Rules, allowing arbitrary functions not only on attributes and content but also for specifying reaction kinetics on the fly prevents us from taking shortcuts that simulators for less expressive languages can take. A family of execution algorithms, from which a suitable algorithm can be selected and configured on demand before or during simulation, supports efficient execution. Intelligent strategies for automatically selecting and configuring algorithms come in handy not only for executing models, but also for identifying and configuring suitable methods for the diverse analysis steps involved in simulation experiments. A prerequisite for this kind of self-adaptation is a clear separation of concerns and a component-based software design, as realized in the modeling and simulation framework James II.
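A minimal sketch of such on-demand algorithm selection is given below: execution algorithms are registered together with applicability predicates, and a suitable one is picked per model. The algorithm names, the model representation, and the predicates are hypothetical, chosen only to convey the idea; this is not the actual James II API.

```python
# Hypothetical execution algorithms for illustration only.
def exact_ssa(model):
    return "exact stochastic simulation"

def tau_leaping(model):
    return "approximate tau-leaping"

# Registry of (applicability predicate, algorithm) pairs, checked in order.
REGISTRY = [
    (lambda m: m["species"] <= 100, exact_ssa),  # exact method for small models
    (lambda m: True,                tau_leaping),  # approximate fallback
]

def select_algorithm(model):
    """Return the first registered algorithm whose predicate accepts the model."""
    for applicable, algorithm in REGISTRY:
        if applicable(model):
            return algorithm

model = {"species": 5000}
print(select_algorithm(model)(model))  # prints "approximate tau-leaping"
```

In a component-based framework, each algorithm would be a plug-in and the predicates could be replaced by learned selection strategies, which is what enables selection and reconfiguration even during a running simulation.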
Thus, just as domain-specific languages support users in developing and maintaining models of collective adaptive systems and in specifying, executing, and documenting experiments, flexible, self-adaptive simulation environments contribute to more efficient and effective experimentation with these models.
In many real-world applications, such as those involving biological systems or big data, only probabilistic "guarantees" of approximate behavior are feasible. Moreover, we are often interested in the aggregate behavior of such systems. The talk will describe probabilistic programming and estimation methods that can guide the execution of probabilistic programs. Such methods are based on sampling the behavior of a concurrent system and are closely connected to statistical model checking. I will then present Euclidean model checking, a method for analyzing quantitative properties of systems. The approach will be illustrated with applications such as pharmacokinetics and sensor networks.
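The sampling idea behind statistical model checking can be sketched as follows: instead of exploring the full state space, estimate the probability that a property holds by checking it on many independently sampled runs. The toy system and property below are invented purely for illustration.

```python
import random

def sample_run(p_fail, steps, rng):
    """Toy system: each step fails independently with probability p_fail.
    The run satisfies the property 'no failure within `steps` steps'
    iff no step fails."""
    return all(rng.random() >= p_fail for _ in range(steps))

def smc_estimate(property_holds_on_run, n_samples, rng):
    """Core of statistical model checking: the fraction of independent
    sampled runs on which the property holds estimates the probability
    that the property holds; the accuracy improves with n_samples
    (quantifiable, e.g., via Chernoff-Hoeffding bounds)."""
    return sum(property_holds_on_run(rng) for _ in range(n_samples)) / n_samples

rng = random.Random(42)
est = smc_estimate(lambda r: sample_run(0.01, 10, r), 10_000, rng)
# For this toy system the true probability is 0.99**10, roughly 0.904,
# and est lies close to it.
```

Because only simulation runs are needed, the same recipe applies to systems far too large for exhaustive model checking, at the cost of statistical rather than absolute guarantees.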