Award given to outstanding papers published at ETAPS more than ten years ago.
The ETAPS Test of Time Award, instituted in 2017, recognises outstanding papers published more than 10 years in the past in one of the constituent conferences of ETAPS. The Award honours the lasting impact of excellent research results that have been published at ETAPS.
The Test of Time Award is selected by an Award Committee consisting of a representative of each of the constituent ETAPS conferences, the ETAPS Steering Committee Chair, the General Chair of the current ETAPS, and a Chair appointed by the ETAPS Steering Committee Chair.
The Award Committee is expected to select 1–2 papers each year, though it may choose to select no paper in a given year. The winners of the ETAPS Test of Time Award receive a recognition plaque at ETAPS and a cash award of 1200€, which is shared among the authors. A paper can receive the ETAPS Test of Time Award only once.
Nominations should be sent by Monday 5 February 2024 to the chair of the award committee, Don Sannella.
Nominations for the 2024 ETAPS Test of Time Award are solicited from the ETAPS community. A nomination should include the title and publication details of the nominated paper, explain the influence it has had since publication, and say why it merits the award. The nomination should be phrased in terms that are understandable by the members of the award committee and suitable for use in the award citation, and should be endorsed by at least 2 people other than the person submitting the nomination. Self-nominations are not allowed.
The core of software engineering is to integrate different methods, techniques, and tools to produce better software. In this work, the ideas of abstraction, CEGAR, lazy abstraction refinement, and interpolation are integrated into a simple, standard explicit-value analysis. Combining refinement-based explicit-value analysis with predicate analysis improved both performance and precision, exploiting the complementary advantages of the two methods. As a result, this approach can compete with the world’s leading symbolic model checkers without the need for sophisticated SMT solvers during abstraction refinement, and it has had significant influence on the architecture of modern automated software verification tools. A recent survey article on Software Model Checking states that a quarter of all verification tools that have ever participated in the Competition on Software Verification (SV-COMP) implement this combination of techniques. More significantly, two-thirds of the category winners (places one to three) and all the overall winners in 2022 use these techniques.
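A deliberately simplified sketch of the CEGAR loop around an explicit-value analysis may help convey the idea. The program, variable names, and the refinement step below are all invented for this illustration; a real tool derives refinements from interpolants rather than hard-coding them.

```python
# Hypothetical miniature of CEGAR with an explicit-value analysis: the
# "precision" is the set of variables whose values the analysis tracks;
# all other variables are abstracted to "unknown" (None).

PROGRAM = [
    ("assign", "x", 1),
    ("assign", "y", 1),
    ("assert", lambda env: env.get("x") == env.get("y")),
]

def run_abstract(precision):
    """Return the index of a possibly-failing assert, or None if safe."""
    env = {}
    for i, op in enumerate(PROGRAM):
        if op[0] == "assign":
            _, var, val = op
            env[var] = val if var in precision else None
        elif None in env.values() or not op[1](env):
            return i  # abstract counterexample (may be spurious)
    return None

def cegar():
    precision = set()
    while True:
        failing = run_abstract(precision)
        if failing is None:
            return "safe", precision
        # Replay the path concretely to see whether the bug is real.
        env = {op[1]: op[2] for op in PROGRAM[:failing]}
        if not PROGRAM[failing][1](env):
            return "bug", precision
        precision |= {"x", "y"}  # spurious: refine the tracked variables

status, precision = cegar()
print(status, sorted(precision))  # safe ['x', 'y']
```

The first abstract run tracks nothing, reports a spurious violation, and the refinement adds just enough precision to prove the assertion, mirroring the lazy-refinement idea at toy scale.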
Luis Caires, Rance Cleaveland, Ugo Dal Lago, Marieke Huisman, Fabrice Kordon, Don Sannella (chair), and Gabriele Taentzer.
Computational effects – mutable memory, non-deterministic and probabilistic choice, exceptions, and so on – have long been the awkward squad of semantic modelling and mathematically structured programming. This paper builds on a line of work connecting the computational theory of monads, used to model computational effects, with its universal algebraic facets, making formal connections between a language’s programming constructs and algebraic operations that present the language’s monadic semantics. It proposes an algebraic treatment of exception handlers and shows that the mechanism underlying exception handling generalises to other kinds of computational effects, going far beyond exceptions. This discovery enabled a new paradigm: by keeping the theories free and exposing handlers syntactically, programmers as well as semanticists could investigate the connections between programming languages and equational theories.
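As a rough analogy (not the paper's formalism), Python generators can mimic the resumable character of effect handlers: the computation performs an effect by yielding it, and a handler interprets the effect and resumes the computation with a result, whereas an exception handler never resumes. The effect names and the handler below are invented for this sketch.

```python
# Sketch: a computation performs "ask" effects; the handler decides how
# to interpret each effect and resumes the computation with an answer.

def computation():
    a = yield ("ask", "x")   # perform an effect, then continue from here
    b = yield ("ask", "y")
    return a + b

def handle(gen, interpret):
    """Run `gen`, interpreting each performed effect with `interpret`."""
    try:
        effect = next(gen)
        while True:
            effect = gen.send(interpret(effect))  # resume with a result
    except StopIteration as done:
        return done.value

result = handle(computation(), lambda eff: {"x": 2, "y": 3}[eff[1]])
print(result)  # 5
```

Swapping in a different `interpret` function reinterprets the same computation, which is the modularity that makes handlers attractive as a programming abstraction.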
This paper initiated a fruitful line of research into the semantics, implementation, design, and logic of programs with computational effects. Effect handlers have proven to be a versatile, modular, expressive, efficient, and intuitive programming abstraction. They have made their way into some of the main functional programming languages, and are crucial to the design of several large-scale programming platforms, well outside the functional programming communities in which they originated. These include GitHub (underlying its semantic analysis of every pull request on more than 50% of its open-source repositories) and the React GUI web framework (underlying the “React Hooks” feature).
Rance Cleaveland, Ugo Dal Lago, Marieke Huisman, Jan Křetínský, Don Sannella (chair), Gabriele Taentzer and Peter Thiemann.
This paper is noteworthy for its combined use of symbolic execution and model checking both to verify software applications and to generate exhaustive test cases for them. At the time of the work’s publication, the predominant approaches to conducting these validation tasks on actual software involved applying user-defined abstractions to the source code in question to derive models that could then be fed into existing model-checking and test-generation tools. This paper showed that by using symbolic execution, the abstraction step could be omitted, and that constraint solvers could be used in conjunction with symbolic execution to undertake both full verification and automated test generation. The resulting technology has been implemented in the widely used Java PathFinder tool and is still a key piece of that technology’s functionality, almost 20 years later. In addition, the combination of symbolic execution and model checking is used nowadays in many different applications such as the verification of implementations of communication protocols.
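The recipe, path constraints gathered by symbolic execution and discharged by a constraint solver to yield one concrete test per feasible path, can be caricatured in a few lines. The example program, its path constraints, and the brute-force "solver" are all invented for illustration; real tools call an actual constraint or SMT solver.

```python
# Paths of:  if x > 10: (if x % 2 == 0: A else: B) else: C
# Each path is represented by the constraints symbolic execution collects.
paths = {
    "A": [lambda x: x > 10, lambda x: x % 2 == 0],
    "B": [lambda x: x > 10, lambda x: x % 2 != 0],
    "C": [lambda x: x <= 10],
}

def solve(constraints, domain=range(100)):
    """Brute-force stand-in for a constraint solver: one model or None."""
    return next((x for x in domain if all(c(x) for c in constraints)), None)

tests = {label: solve(cs) for label, cs in paths.items()}
print(tests)  # {'A': 12, 'B': 11, 'C': 0}
```

Each solved path constraint yields a concrete input exercising exactly that path, which is the essence of exhaustive path-based test generation.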
Rance Cleaveland, Ugo Dal Lago, Marieke Huisman, Peter Y. A. Ryan, Don Sannella (chair), Gabriele Taentzer, Peter Thiemann
Smith’s paper represents a cornerstone in the field of Secure Information Flow. Before this paper, the community widely relied on Shannon entropy to represent the (lack of) knowledge of the adversary about the secret. In his paper Smith showed, in his lucid and crystal-clear style, that a realistic notion of adversary with limited attack opportunities is more suitably modeled by Rényi min-entropy. The message, through its simplicity and the force of its arguments, completely convinced the community. The paper became the starting point of a systematic study of entropies and vulnerabilities for quantitative information flow, and has had a deep influence on the field, for instance in the work of Mario Alvim, Gilles Barthe, Michele Boreale, Kostas Chatzikokolakis, Yusuke Kawamoto, Boris Köpf, Annabelle McIver, Carroll Morgan, Catuscia Palamidessi and Tachio Terauchi. Indeed it is fair to say that it caused a shift in the community: most of the subsequent works on quantitative information flow in this area have adopted the new model, and the paper is highly cited.
The paper was also the starting point of the so-called g-leakage framework, which is on the cutting edge of the research on Quantitative Information Flow. In a sense, the g-leakage approach is the natural evolution of the min-entropy approach.
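The min-entropy quantities at the heart of this line of work can be computed directly. The 2-bit secret and the "is the secret zero?" observation below are a standard textbook-style example, invented here for illustration rather than taken from the paper.

```python
from math import log2

# Uniform prior on a 2-bit secret; the adversary observes one bit.
secrets = [0, 1, 2, 3]
prior = {s: 1 / 4 for s in secrets}

def observe(s):
    return s == 0          # deterministic channel: "is the secret zero?"

# Prior vulnerability V(X) = max_x p(x); min-entropy = -log2 V(X).
v_prior = max(prior.values())          # 0.25
h_prior = -log2(v_prior)               # 2.0 bits of initial uncertainty

# Posterior vulnerability: for each observation, the best posterior guess.
v_post = sum(
    max(prior[s] for s in secrets if observe(s) == y)
    for y in {observe(s) for s in secrets}
)                                      # 0.25 + 0.25 = 0.5

leakage = log2(v_post / v_prior)       # min-entropy leakage: 1.0 bit
print(h_prior, leakage)  # 2.0 1.0
```

The one bit of leakage measures how much the observation multiplies the adversary's chance of guessing the secret in one try, which is exactly the operational reading that made the min-entropy model compelling.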
Rance Cleaveland, Ugo Dal Lago, Marieke Huisman, Tiziana Margaria, Don Sannella (chair), Gabriele Taentzer, Peter Thiemann
This is one of the seminal papers on session types. Session types are type-theoretic specifications of communication protocols, allowing protocol implementations to be verified by static type-checking. During the last 20 years, the research topic of session types has grown steadily, and has been applied to programming languages including Haskell, OCaml, Java, Python and Go. Connections have been made to software contracts, model-checking, logic, runtime verification and hardware specification. Session types regularly feature at major programming language conferences: for example, there are 5 papers on session types at ETAPS 2018. Session types have formed the basis for numerous funded research projects, including COST Action IC1201 (www.behavioural-types.eu), a network of more than 120 researchers in 22 countries.
This is the paper that is most frequently read and cited (832 in Google Scholar, up from 749 last year) as an original reference for session types. It made two key advances with respect to earlier papers by Honda et al. First, it covered the whole of pi-calculus by introducing delegation, the ability to transfer a session as a communication channel from one process to another, so that the receiver continues the protocol started by the sender. Second, it included a broad range of examples of the use of session types, covering functional programming, concurrent objects, and network protocols, thus anticipating implementations and applications of session types that are now widely used.
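The duality at the core of session types, that the two endpoints of a channel must follow complementary protocols with every send matched by a receive, can be hinted at in a small sketch. The encoding below is an invented illustration, not the paper's type system.

```python
from dataclasses import dataclass

# Session types as data: End, Send(payload, continuation),
# Recv(payload, continuation). Duality swaps Send and Recv throughout.

@dataclass(frozen=True)
class End:
    pass

@dataclass(frozen=True)
class Send:
    payload: type
    cont: object

@dataclass(frozen=True)
class Recv:
    payload: type
    cont: object

def dual(t):
    """The protocol the other endpoint must follow."""
    if isinstance(t, End):
        return End()
    if isinstance(t, Send):
        return Recv(t.payload, dual(t.cont))
    return Send(t.payload, dual(t.cont))

client = Send(int, Recv(str, End()))   # send an int, then receive a str
print(dual(client) == Recv(int, Send(str, End())))  # True
```

A session-typed language checks each process against its endpoint's type statically; delegation corresponds to sending a value whose payload type is itself a session type.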
Bruno Blanchet, Rance Cleaveland, Ugo Dal Lago, Joost-Pieter Katoen, Jan Kofron, Don Sannella (chair), Gabriele Taentzer, Peter Thiemann
This paper introduces the Z3 “satisfiability modulo theories” (SMT) solver and describes some of its uses within Microsoft Research at the time of the paper’s writing. The so-called SMT problem solved by Z3 is easy to state: given a list of quantifier-free first-order formulas, determine if there exists an assignment of values to free variables in the formulas that makes all the formulas true. Z3 combines SAT-solving with constraint solvers for specific theories to accomplish this task. The tool is structured in an elegant way, making it relatively easy to extend the theories that the tool supports. It is also well-engineered, making it very efficient, and supporting materials that have been developed over the years make it straightforward to learn and use.
For these reasons, Z3 has become the standard SMT solver used in program-analysis and system verification engines, from static analyzers for programming languages, to test-case generators for modeling languages such as Simulink, to model checkers for various system-modeling tools in which data features prominently. The tool is now available under an open-source license. It has bindings for a number of programming languages, including C, C++, Java, OCaml and Python, and has become the default SMT solver used in research projects worldwide.
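The SMT problem statement can be made concrete in a few lines. The formulas and the brute-force enumeration below are invented for illustration and bear no resemblance to Z3's actual machinery, which combines SAT solving with dedicated theory solvers.

```python
from itertools import product

# The SMT problem in miniature: find values for the free variables x, y
# (over linear integer arithmetic) that make every formula true.
formulas = [
    lambda x, y: x + y == 10,
    lambda x, y: x > y,
    lambda x, y: y > 0,
]

def solve(domain):
    """Return one satisfying assignment (x, y), or None if none exists."""
    for x, y in product(domain, repeat=2):
        if all(f(x, y) for f in formulas):
            return x, y
    return None

print(solve(range(11)))  # (6, 4)
```

An SMT solver answers the same question over unbounded domains and rich theories, without enumeration, which is what makes it usable as a backend for verification engines.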
Despite the short length (4 pages!) of this paper, it is the standard reference for the Z3 tool. It is also almost certainly the most heavily cited ETAPS paper to date, at least according to Google Scholar.
Bruno Blanchet, Rance Cleaveland, Joost-Pieter Katoen, Panagiotis Katsaros, Luke Ong, Don Sannella (chair), Gabriele Taentzer, Hongseok Yang
This paper made two fundamental and high-impact contributions to the field of model checking and automated verification.
The first involves the use of general-purpose SAT solvers to replace specialized decision procedures based on Binary Decision Diagrams (BDDs) during the model-checking process. Before this paper, BDDs had been the cornerstone of so-called symbolic model checkers, which use implicit rather than explicit representations of the sets of system states they manipulate during their execution. BDDs had revolutionized model checking by enabling the handling of much larger systems than was previously possible. This paper was a revolutionary advance beyond BDDs, as the authors showed how the sets of states in question could also be viewed as propositional formulas that can be handled using general-purpose SAT solvers. This simple idea has had profound impact in the field of automated verification, as it is also the basis for the use of SMT solvers for model-checking of systems with, for example, infinite state spaces. Most modern model checkers now rely on a SAT-solving backend as a result of this paper.
The second fundamental contribution of the paper is the invention of the so-called bounded model checking technique for system verification. Before this paper, the goal of model checking was to prove systems correct; model checkers either found such a proof, or detected that no such proof was possible. This paper instead advocated the idea that the purpose of model checking was to detect bugs, and to give simplest-possible explanations for these bugs when they are uncovered. This change in perspective had a massive impact on model checking: it opened researchers up to the possibility of model-checking-inspired verification techniques that could give incremental information during the checking procedure, even if the procedure might not terminate in reasonable time with the classical “yes, there is a proof / no, no such proof exists” answer. Later “bug-search” techniques, such as statistical model checking and run-time verification, owe much of their inspiration to the bounded model-checking idea in this paper, even as this idea on its own continues to be a thriving area of research.
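A deliberately tiny illustration of the bounded-checking idea: a toy 3-bit counter, with explicit path exploration standing in for the propositional unrolling that a real bounded model checker hands to a SAT solver. The system and property below are invented for this sketch.

```python
# Toy bounded model checking: a 3-bit counter; the property under check
# is "the counter never reaches 7". Real BMC encodes the k-step unrolling
# of the transition relation as one SAT formula instead of executing it.

INIT = 0

def step(s):
    """Transition relation: increment modulo 8."""
    return (s + 1) % 8

def bad(s):
    """Property violation: the counter equals 7."""
    return s == 7

def bmc(bound):
    """Return a counterexample path of length <= bound, or None."""
    path = [INIT]
    for _ in range(bound):
        if bad(path[-1]):
            return path
        path.append(step(path[-1]))
    return path if bad(path[-1]) else None

print(bmc(5))  # None
print(bmc(7))  # [0, 1, 2, 3, 4, 5, 6, 7]
```

With bound 5 the checker reports nothing, while bound 7 yields a shortest counterexample trace: incremental, bug-oriented information even when full verification is out of reach.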
Parosh Aziz Abdulla, Bruno Blanchet, Rance Cleaveland, Joost-Pieter Katoen, Luke Ong, Don Sannella (chair), Zhong Shao, Gabriele Taentzer