Published: 18th April 2017
DOI: 10.4204/EPTCS.248
ISSN: 2075-2180


Proceedings 8th Workshop on
Developments in Implicit Computational complExity
and 5th Workshop on
FOundational and Practical Aspects of Resource Analysis
Uppsala, Sweden, April 22-23, 2017

Edited by: Guillaume Bonfante and Georg Moser

Guillaume Bonfante and Georg Moser
Invited Talk: Resource Analysis of Distributed and Concurrent Programs
Elvira Albert
Invited Talk: Whole Systems Energy Transparency: More Power to Software Developers!
Kerstin Eder
Invited Talk: Challenges for Timing Analysis of Multi-Core Architectures
Jan Reineke
Invited Talk: On Resource Analysis of Imperative Programs
Lars Kristiansen
Automated Sized-Type Inference and Complexity Analysis
Martin Avanzini and Ugo Dal Lago
GUBS Upper Bound Solver (Extended Abstract)
Martin Avanzini and Michael Schaper
Towards Practical, Precise and Parametric Energy Analysis of IT Controlled Systems
Bernard van Gastel and Marko van Eekelen
Computability in the Lattice of Equivalence Relations
Jean-Yves Moyen and Jakob Grue Simonsen
Loop Quasi-Invariant Chunk Motion by peeling with statement composition
Jean-Yves Moyen, Thomas Rubiano and Thomas Seiller


These proceedings collect the accepted regular papers and selected extended abstracts presented at the first joint international workshop on Developments in Implicit Computational complExity (DICE) and FOundational and Practical Aspects of Resource Analysis (FOPARA), held in Uppsala, Sweden, on April 22-23, 2017, as part of ETAPS.

The DICE workshop explores the area of Implicit Computational Complexity (ICC), which grew out of several proposals to use logic and formal methods to provide languages for complexity-bounded computation (e.g. PTIME or LOGSPACE computation). It aims at studying the computational complexity of programs without referring to external measuring conditions or a particular machine model, but only by considering language restrictions or logical/computational principles entailing complexity properties.

The FOPARA workshop serves as a forum for presenting original research results that are relevant to the analysis of resource (e.g. time, space, energy) consumption by computer programs. The workshop aims to bring together researchers who work on foundational issues with those who focus more on practical results. Therefore, both theoretical and practical contributions are encouraged, as are papers that combine theory and practice.

Given the complementarity and the synergy between these two communities, and following the successful co-location of DICE and FOPARA at ETAPS 2015 in London, we hold these two workshops as a single joint event for the first time at ETAPS 2017.

DICE-FOPARA thus serves as a forum for presenting original and established research results relevant both to implicit computational complexity theory and to the analysis of resource (e.g. time, space, energy) consumption by computer programs, bringing together researchers working on foundational issues with those focusing on practical results. Both theoretical and practical contributions are encouraged, as well as papers that combine theory and practice.

We acknowledge the support of the ANR project ELICA (ANR-14-CE25-0005-3) and of DARPA/AFRL contract number FA8750-17-C-088, as well as the help provided by the ETAPS organisation. Last but not least, we want to thank EPTCS for their continuing support in publishing these proceedings.

Resource Analysis of Distributed and Concurrent Programs

Elvira Albert

Distributed and concurrent systems are composed of distributed nodes that communicate and coordinate their actions, and concurrent tasks that interleave their execution within the distributed nodes. Resource analysis of distributed concurrent systems needs to consider the distribution, communication and task interleaving aspects of the systems. In this talk, we will describe the basic framework proposed for the resource analysis of distributed concurrent systems, together with the new notions of cost that arise in this context. In particular, we will discuss the notions of peak cost, which captures the maximum amount of resources that each distributed node might require along the whole execution, and parallel cost, which corresponds to the maximum cost of the execution, taking into account that, when distributed tasks run in parallel, only the cost of the most expensive one needs to be accounted for.
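To give the flavour of the parallel-cost notion, here is a minimal sketch (the task representation and function names are hypothetical illustrations, not the formalism from the talk): sequentially composed work adds up, whereas tasks running in parallel contribute only the cost of the most expensive branch.

```python
# Illustrative sketch of the "parallel cost" idea: sequential work is summed,
# while for tasks that run in parallel only the most expensive one counts.
# The task representation here is hypothetical, not the talk's formalism.

def parallel_cost(task):
    """task is ("seq", children) | ("par", children) | ("work", cost)."""
    kind, payload = task
    if kind == "work":
        return payload
    costs = [parallel_cost(child) for child in payload]
    return sum(costs) if kind == "seq" else max(costs)

# A task that does 2 units of work, spawns two parallel branches costing
# 5 and 3, and then does 1 more unit: cost 2 + max(5, 3) + 1 = 8.
t = ("seq", [("work", 2),
             ("par", [("work", 5), ("work", 3)]),
             ("work", 1)])
```

The peak cost of a node would instead track the maximum amount of resources held simultaneously at that node, a different aggregation over the same structure.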

Whole Systems Energy Transparency: More Power to Software Developers!

Kerstin Eder

Energy efficiency is now a major, if not the major, constraint in electronic systems engineering. Significant progress has been made in low-power hardware design for more than a decade; the potential for savings is now far greater at the higher levels of abstraction in the system stack. The greatest savings are expected from energy-consumption-aware software [1]. Designing software for energy efficiency requires visibility of energy consumption from the hardware, where the energy is consumed, all the way through to the programs that ultimately control what the hardware does. This visibility is termed energy transparency [3].

My lecture emphasizes the importance of energy transparency from hardware to software as a foundation for energy-aware software engineering. Energy transparency enables a deeper understanding of how algorithms and coding impact the energy consumption of a computation when executed on hardware. It is a key prerequisite for informed design-space exploration and helps system designers find the optimal trade-off between performance, accuracy and energy consumption of a computation [2]. Promoting energy efficiency to a first-class software design goal is an urgent research challenge that must be addressed to achieve truly energy-efficient systems.

I will start by illustrating how to measure energy consumption of software for embedded platforms [7]. This enables monitoring energy consumption of software at runtime, providing insights into the power, energy and timing behaviour of algorithms with some seemingly surprising results [4].

Energy models can be built to predict the energy consumption of programs based on execution statistics or trace data obtained from a simulator, or statically by employing advanced static resource consumption analysis techniques. I will introduce our approach to energy consumption modelling at the Instruction Set Architecture (ISA) level [8], comparing energy profiles of different instructions as well as exposing the impact of data width on energy consumption.

We then focus on two approaches to static analysis for energy consumption estimation: one based on solving recurrence equations, the other based on implicit path enumeration (IPET). With the former it is possible to extract parameterized energy consumption functions [10,6,9], while the latter produces absolute values [5]. Analysis can be performed either directly at the ISA level [10,5] or at the Intermediate Representation (IR) of the compiler, here the LLVM IR [9,6,5]. The latter is enabled through a novel mapping technique that associates costs at the ISA level with entities at the LLVM IR level [5].
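To give the flavour of the recurrence-based approach, here is a toy sketch (not the analyses cited above; the instruction names and per-instruction energy costs are made-up constants): a loop's energy satisfies E(0) = E_exit and E(n) = E_body + E(n-1), which solves to the parameterized function E(n) = n * E_body + E_exit.

```python
# Toy sketch of recurrence-based energy estimation.  Each instruction is
# assigned a (made-up) energy cost; a loop's total energy is then expressed
# as a closed-form function of the iteration count n:
#   E(0) = E_exit,  E(n) = E_body + E(n-1)  =>  E(n) = n * E_body + E_exit.

# Hypothetical per-instruction costs in nanojoules (illustrative only):
COST_NJ = {"load": 1.2, "add": 0.4, "store": 1.1, "branch": 0.6}

def loop_energy_bound(body_instrs, n):
    """Closed-form energy bound for a loop running n iterations."""
    e_body = sum(COST_NJ[i] for i in body_instrs)   # cost of one iteration
    e_exit = COST_NJ["branch"]                      # final exit branch
    return n * e_body + e_exit

# Loop body: load, add, store, branch -> 3.3 nJ per iteration.
bound = loop_energy_bound(["load", "add", "store", "branch"], n=100)
```

The cited analyses derive such functions automatically from ISA- or LLVM-IR-level code; the IPET-style alternative would instead maximise total cost over execution paths for a concrete input size, yielding an absolute value rather than a function of n.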

A critical review of the assumptions underlying both analysis approaches, in particular an investigation into the impact of using a cost model that assigns a single energy cost to each instruction, leads to a better understanding of how the limitations of static resource-bound analysis techniques affect the safety and tightness of the retrieved bounds. The fact that energy consumption is inherently data dependent will be illustrated for a selected set of instructions, some with rather beautiful heat maps. Conclusions will be drawn on how energy consumption analysis differs from execution time analysis, together with an intuition into why analysing for worst-case dynamic energy is infeasible in general [11]. This leads me to discuss new research challenges for energy consumption modelling as well as for static energy consumption analysis.

I will close with a call to give "more power" to software developers so that "cooler" programs can be written in the future.


  1. (2010): A Conversation with Steve Furber. Queue 8(2), pp. 1–8, doi:10.1145/1716383.1716385.
  2. Kerstin Eder & John P. Gallagher (2017): Energy-Aware Software Engineering. In: Giorgos Fagas, Luca Gammaitoni, John P. Gallagher & Douglas J. Paul: ICT - Energy Concepts for Energy Efficiency and Sustainability, chapter 05. InTech, Rijeka, doi:10.5772/65985.
  3. Kerstin Eder, John P. Gallagher, Pedro López-García, Henk Muller, Zorana Banković, Kyriakos Georgiou, Rémy Haemmerlé, Manuel V. Hermenegildo, Bishoksan Kafle, Steve Kerrison, Maja Kirkeby, Maximiliano Klemen, Xueliang Li, Umer Liqat, Jeremy Morse, Morten Rhiger & Mads Rosendahl (2016): ENTRA. Microprocess. Microsyst. 47(PB), pp. 278–286, doi:10.1016/j.micpro.2016.07.003.
  4. Hayden Field, Glen Anderson & Kerstin Eder (2014): EACOF: A Framework for Providing Energy Transparency to Enable Energy-aware Software Development. In: Proceedings of the 29th Annual ACM Symposium on Applied Computing, SAC '14. ACM, New York, NY, USA, pp. 1194–1199, doi:10.1145/2554850.2554920.
  5. Kyriakos Georgiou, Steve Kerrison, Zbigniew Chamski & Kerstin Eder (2017): Energy Transparency for Deeply Embedded Programs. ACM Trans. Archit. Code Optim. 14(1), pp. 8:1–8:26, doi:10.1145/3046679.
  6. Neville Grech, Kyriakos Georgiou, James Pallister, Steve Kerrison, Jeremy Morse & Kerstin Eder (2015): Static Analysis of Energy Consumption for LLVM IR Programs. In: Proceedings of the 18th International Workshop on Software and Compilers for Embedded Systems, SCOPES '15. ACM, New York, NY, USA, pp. 12–21, doi:10.1145/2764967.2764974.
  7. Steve Kerrison, Markus Buschhoff, Jose Nunez-Yanez & Kerstin Eder (2017): Measuring Energy. In: Giorgos Fagas, Luca Gammaitoni, John P. Gallagher & Douglas J. Paul: ICT - Energy Concepts for Energy Efficiency and Sustainability, chapter 03. InTech, Rijeka, pp. 59–82, doi:10.5772/65989.
  8. Steve Kerrison & Kerstin Eder (2015): Energy Modeling of Software for a Hardware Multithreaded Embedded Microprocessor. ACM Trans. Embed. Comput. Syst. 14(3), pp. 56:1–56:25, doi:10.1145/2700104.
  9. Umer Liqat, Kyriakos Georgiou, Steve Kerrison, Pedro Lopez-Garcia, John P. Gallagher, Manuel. V. Hermenegildo & Kerstin Eder (2016): Inferring Parametric Energy Consumption Functions at Different Software Levels: ISA vs. LLVM IR, pp. 81–100. Springer International Publishing, Cham, doi:10.1007/978-3-319-46559-3_5.
  10. Umer Liqat, Steve Kerrison, Alejandro Serrano, Kyriakos Georgiou, Pedro Lopez-Garcia, Neville Grech, Manuel V. Hermenegildo & Kerstin Eder (2014): Energy Consumption Analysis of Programs Based on XMOS ISA-Level Models, pp. 72–90. Springer International Publishing, Cham, doi:10.1007/978-3-319-14125-1_5.
  11. Jeremy Morse, Steve Kerrison & Kerstin Eder (2016): On the infeasibility of analysing worst-case dynamic energy. CoRR abs/1603.02580.

Challenges for Timing Analysis of Multi-Core Architectures

Jan Reineke (Saarland University, Saarbrücken, Germany)

In real-time systems, timely computation of outputs is as important as computing the correct output values. Timing analysis is a fundamental step in proving that all timing constraints of an application are met. Given a program and a microarchitecture, the task of timing analysis is to determine an upper bound on the response time of the program on the given microarchitecture under all possible circumstances. While this task is in general undecidable, it can be solved approximately using sound abstractions. Current microarchitectural developments make timing analysis increasingly difficult: contemporary processor architectures employ deep pipelines, branch predictors, and caches to improve performance. Further, multi-core processors share buses, caches, and other resources, introducing interference between logically independent programs. I will discuss three challenges arising from these developments, and approaches to overcome them:

  1. Modeling: How can we obtain faithful models of the microarchitecture, the basis of any static analysis? [1–3]
  2. Analysis: How can we precisely and efficiently bound a program's timing on a particular microarchitecture? [4, 9, 6, 5]
  3. Design: How can we design microarchitectures that enable precise and efficient timing analysis without sacrificing average-case performance? [12, 13, 11, 10, 14, 8, 7]
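As a minimal illustration of the bounding task (a textbook-style sketch, not the analyses cited above; block names and cycle costs are hypothetical), a safe timing bound for a loop-free control-flow graph with known per-block costs is the longest path from the entry block; real analyses must additionally model pipelines, caches and shared resources.

```python
# Toy sketch of a WCET-style bound: for a loop-free control-flow graph with
# (hypothetical) per-block cycle costs, a safe bound is the longest path
# from the entry block.  Real timing analyses additionally account for
# pipeline, cache and shared-resource effects.

def wcet_bound(cfg, costs, entry):
    """cfg: block -> list of successor blocks; costs: block -> cycles."""
    memo = {}
    def longest(block):
        if block not in memo:
            succs = cfg.get(block, [])
            memo[block] = costs[block] + (max(longest(s) for s in succs)
                                          if succs else 0)
        return memo[block]
    return longest(entry)

# Diamond-shaped CFG: the entry branches into a cheap and an expensive path,
# so the bound is 2 + max(10 + 1, 4 + 1) = 13 cycles.
cfg   = {"entry": ["then", "else"], "then": ["exit"], "else": ["exit"]}
costs = {"entry": 2, "then": 10, "else": 4, "exit": 1}
```

The gap between such a structural bound and the actual worst case is exactly where microarchitectural modeling, analysis precision and predictable hardware design come into play.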


  1. Andreas Abel & Jan Reineke (2013): Measurement-based Modeling of the Cache Replacement Policy. In: 19th IEEE Real-Time and Embedded Technology and Applications Symposium, RTAS 2013, Philadelphia, PA, USA, April 9-11, 2013, pp. 65–74, doi:10.1109/RTAS.2013.6531080.
  2. Andreas Abel & Jan Reineke (2014): Reverse Engineering of Cache Replacement Policies in Intel Microprocessors and Their Evaluation. In: 2014 IEEE International Symposium on Performance Analysis of Systems and Software, ISPASS 2014, pp. 141–142, doi:10.1109/ISPASS.2014.6844475.
  3. Andreas Abel & Jan Reineke (2016): Gray-box Learning of Serial Compositions of Mealy Machines. In: NASA Formal Methods - 8th International Symposium, NFM 2016, Minneapolis, MN, USA, June 7-9, 2016, Proceedings, pp. 272–287, doi:10.1007/978-3-319-40648-0_21.
  4. Sebastian Altmeyer, Robert I. Davis, Leandro Soares Indrusiak, Claire Maiza, Vincent Nélis & Jan Reineke (2015): A generic and compositional framework for multicore response time analysis. In: Proceedings of the 23rd International Conference on Real Time and Networks Systems, RTNS 2015, Lille, France, November 4-6, 2015, pp. 129–138, doi:10.1145/2834848.2834862.
  5. Tobias Blass, Sebastian Hahn & Jan Reineke (2017): Write-back Caches in WCET Analysis. In: Proceedings of the 29th Euromicro Conference on Real-Time Systems, ECRTS 2017.
  6. Sebastian Hahn, Michael Jacobs & Jan Reineke (2016): Enabling Compositionality for Multicore Timing Analysis. In: Proceedings of the 24th International Conference on Real-Time Networks and Systems, RTNS 2016, Brest, France, October 19-21, 2016, pp. 299–308, doi:10.1145/2997465.2997471.
  7. Sebastian Hahn, Jan Reineke & Reinhard Wilhelm (2015): Toward Compact Abstractions for Processor Pipelines. In: Roland Meyer, André Platzer & Heike Wehrheim: Correct System Design - Symposium in Honor of Ernst-Rüdiger Olderog on the Occasion of His 60th Birthday, Oldenburg, Germany, September 8-9, 2015. Proceedings, pp. 205–220, doi:10.1007/978-3-319-23506-6_14.
  8. Sebastian Hahn, Jan Reineke & Reinhard Wilhelm (2015): Towards compositionality in execution time analysis: definition and challenges. SIGBED Review 12(1), pp. 28–36, doi:10.1145/2752801.2752805.
  9. Wen-Hung Huang, Jian-Jia Chen & Jan Reineke (2016): MIRROR: symmetric timing analysis for real-time tasks on multicore platforms with shared resources. In: Proceedings of the 53rd Annual Design Automation Conference, DAC 2016, Austin, TX, USA, June 5-9, 2016, pp. 158:1–158:6, doi:10.1145/2897937.2898046.
  10. Jan Reineke (2014): Randomized Caches Considered Harmful in Hard Real-Time Systems. Leibniz Transactions on Embedded Systems 1(1), pp. 03:1–03:13, doi:10.4230/LITES-v001-i001-a003.
  11. Jan Reineke, Sebastian Altmeyer, Daniel Grund, Sebastian Hahn & Claire Maiza (2014): Selfish-LRU: Preemption-aware caching for predictability and performance. In: 20th IEEE Real-Time and Embedded Technology and Applications Symposium, RTAS 2014, Berlin, Germany, April 15-17, 2014, pp. 135–144, doi:10.1109/RTAS.2014.6925997.
  12. Jan Reineke, Daniel Grund, Christoph Berg & Reinhard Wilhelm (2007): Timing Predictability of Cache Replacement Policies. Real-Time Systems 37(2), pp. 99–122, doi:10.1007/s11241-007-9032-3.
  13. Jan Reineke, Isaac Liu, Hiren D. Patel, Sungjun Kim & Edward A. Lee (2011): PRET DRAM Controller: Bank Privatization for Predictability and Temporal Isolation. In: 9th International Conference on Hardware/Software Codesign and System Synthesis, CODES+ISSS 2011, Taipei, Taiwan, October 9-14, 2011, pp. 99–108, doi:10.1145/2039370.2039388.
  14. Jan Reineke & Alejandro Salinger (2015): On the Smoothness of Paging Algorithms. In: Approximation and Online Algorithms - 13th International Workshop, WAOA 2015, Patras, Greece, September 17-18, 2015. Revised Selected Papers, pp. 170–182, doi:10.1007/978-3-319-28684-6_15.

On Resource Analysis of Imperative Programs

Lars Kristiansen

To what extent is it possible to estimate a program's resource requirements by analyzing the program code? I started my research on resource analysis of imperative programs about 17 years ago, and I published my last paper on the subject about five years ago. I will try to share some of the insights I gained during those years. I will also reflect a little bit upon the nature of this type of research: Is this pure theoretical computer science? Should we aim for real-life applications? To what extent can we expect such applications? Recently it has turned out that the theory I developed for resource analysis (together with Amir Ben-Amram and Neil Jones) may have real-life applications when re-used as compiler theory. Towards the end of my talk I will explain these applications.


  1. Kristiansen, L. & Niggl, K.-H. (2004): On the computational complexity of imperative programming languages. Theoretical Computer Science 318, pp. 139–161, doi:10.1016/j.tcs.2003.10.016.
  2. Jones, N.D. & Kristiansen, L. (2009): A flow calculus of mwp-bounds for complexity analysis. ACM Transactions on Computational Logic 10, doi:10.1145/1555746.1555752.
  3. Ben-Amram, A.M., Jones, N.D. & Kristiansen, L. (2008): Linear, Polynomial or Exponential? Complexity Inference in Polynomial Time. In: CiE'08: Logic and Theory of Algorithms, Springer LNCS 5028, pp. 67–76, doi:10.1007/978-3-540-69407-6_7.
  4. Ben-Amram, A.M. & Kristiansen, L. (2012): On the edge of decidability in complexity analysis of loop programs. International Journal of Foundations of Computer Science 23, pp. 1451–1464, doi:10.1142/S0129054112400588.