TACAS 2018
24th International Conference on Tools and Algorithms for the Construction and Analysis of Systems (TACAS)
Proceedings: TACAS (Vol.1) - TACAS (Vol.2)
Artifacts: figshare.com
TACAS is a forum for researchers, developers and users interested in rigorously based tools and algorithms for the construction and analysis of systems. The conference aims to bridge the gaps between different communities with this common interest and to support them in their quest to improve the utility, reliability, flexibility and efficiency of tools and algorithms for building systems.
Theoretical papers with clear relevance for tool construction and analysis as well as tool descriptions and case studies with a conceptual message are all encouraged. The topics covered by the conference include, but are not limited to:
- specification and verification techniques;
- software and hardware verification;
- analytical techniques for real-time, hybrid, or stochastic systems;
- analytical techniques for safety, security, or dependability;
- SAT and SMT solving;
- theorem proving;
- model checking;
- static and dynamic program analysis;
- testing;
- abstraction techniques for modeling and verification;
- compositional and refinement-based methodologies;
- system construction and transformation techniques;
- machine-learning techniques for synthesis and verification;
- tool environments and tool architectures;
- applications and case studies.
Important dates and submission
See the ETAPS 2018 joint call for papers. Submit your paper via the TACAS 2018 author interface of EasyChair.
TACAS 2018 will not have a rebuttal phase.
TACAS paper categories
TACAS accepts four types of submissions: research papers, case-study papers, regular tool papers, and tool demonstration papers. Papers of all four types will appear in the proceedings and have presentations during the conference.
- Research papers clearly identify and justify a principled advance to the theoretical foundations for the construction and analysis of systems. Where applicable, they are supported by experimental validation. Research papers can have a maximum of 15 pages (excluding a bibliography of at most 2 pages).
- Case-study papers report on case studies, preferably in a real-world setting. They should provide information about the following aspects: the system being studied and the reasons it is of interest, the goals of the study, the challenges the system poses to automated analysis, the research methodologies and approaches used, the degree to which the goals were attained, and how the results can be generalized to other problems and domains. Case-study papers can have a maximum of 15 pages (excluding a bibliography of at most 2 pages).
- Regular tool papers present a new tool, a new tool component, or novel extensions to an existing tool. They should provide a short description of the theoretical foundations with relevant citations, and emphasize the design and implementation concerns, including software architecture and core data structures. A regular tool paper should give a clear account of the tool's functionality, discuss the tool's practical capabilities with reference to the type and size of problems it can handle, describe experience with realistic case studies, and, where applicable, provide a rigorous experimental evaluation. Papers that present extensions to existing tools should clearly focus on the improvements or extensions with respect to previously published versions of the tool, preferably substantiated by data on enhancements in terms of resources and capabilities. Authors are strongly encouraged to make their tools publicly available, preferably on the web, even if only for the evaluation process. Regular tool papers can have a maximum of 15 pages (excluding a bibliography of at most 2 pages).
- Tool-demonstration papers focus on the usage aspects of tools. As with regular tool papers, authors are strongly encouraged to make their tools publicly available, preferably on the web. Theoretical foundations and experimental evaluation are not required; however, a motivation as to why the tool is interesting and significant should be provided. Tool-demonstration papers can have a maximum of 6 pages.
Submission and evaluation criteria
Evaluation: All papers will be evaluated by the program committee, coordinated by the PC chairs for research papers, by the case-study chair for case-study papers, and by the tools chair for regular tool papers and tool demonstration papers. All papers will be judged on novelty, significance, correctness, and clarity.
Replicability of results: Replicability of results is of the utmost importance for the TACAS community. We therefore encourage all authors of submitted papers to include support for replicating the results of their papers: for theorems, provide proofs; for algorithms, include evidence of correctness and acceptable performance, either through theoretical analysis or through experimentation; for experiments, provide access to the artifacts used to generate the experimental data. Material that does not fit into the paper or an appendix may be provided on a supplementary web site, with access appropriately enabled and license rights made clear. For example, the supplementary material for reviewing case-study papers and papers with experimental results can be classified as reviewer-confidential if necessary (e.g., if proprietary data are investigated or the software is not open source).
Limit of 3 submissions: Each individual author is limited to a maximum of three submissions as an author or co-author. Authors of co-authored submissions are jointly responsible for respecting this policy. In case of a violation, all submissions by that (co-)author will be desk-rejected.
Artifact evaluation
Authors of all accepted research papers, case-study papers, regular tool papers, and tool-demonstration papers will be invited (but not required) to submit the relevant artifact for evaluation by the artifact evaluation committee (AEC). The AEC will read the paper and evaluate the artifact according to the following criteria:
- consistency with and replicability of results in the paper,
- completeness,
- documentation, and
- ease of use.
More information can be found on the artifact webpage: https://tacas.info/artifacts.php
Competition on software verification
TACAS 2018 hosts the 7th Competition on Software Verification (SV-COMP), whose goal is to evaluate technology transfer and to compare state-of-the-art software verifiers with respect to effectiveness and efficiency. More information can be found on the webpage of the competition: https://sv-comp.sosy-lab.org/2018/
Program chairs
Dirk Beyer (LMU Munich, Germany)
Marieke Huisman (Universiteit Twente, The Netherlands)
Tools chair
Goran Frehse (Université Grenoble Alpes, France)
Artifact evaluation chairs
Arnd Hartmanns (Universiteit Twente, The Netherlands)
Philipp Wendler (LMU Munich, Germany)
Case study chair
Holger Hermanns (Universität des Saarlandes, Germany)
Competition chair
Tomáš Vojnar (Brno University of Technology, Czechia)
Program committee
Wolfgang Ahrendt (Chalmers University of Technology, Sweden)
Dirk Beyer (LMU Munich, Germany)
Armin Biere (Johannes-Kepler-Universität, Austria)
Lubos Brim (Masaryk University, Czechia)
Franck Cassez (Macquarie University, Australia)
Alessandro Cimatti (Fondazione Bruno Kessler, Italy)
Rance Cleaveland (University of Maryland, USA)
Goran Frehse (Université Grenoble Alpes, France)
Jan Friso Groote (Technische Universiteit Eindhoven, The Netherlands)
Gudmund Grov (FFI, Norway)
Orna Grumberg (Technion, Israel)
Arie Gurfinkel (University of Waterloo, Canada)
Klaus Havelund (Jet Propulsion Laboratory, USA)
Matthias Heizmann (Universität Freiburg, Germany)
Holger Hermanns (Universität des Saarlandes, Germany)
Falk Howar (Technische Universität Clausthal, Germany)
Marieke Huisman (Universiteit Twente, The Netherlands)
Laura Kovács (Technische Universität Wien, Austria)
Jan Kretinsky (Technische Universität München, Germany)
Kim G. Larsen (Aalborg University, Denmark)
Salvatore La Torre (Università di Salerno, Italy)
Axel Legay (INRIA Rennes, France)
Yang Liu (Nanyang Technological University, Singapore)
Rupak Majumdar (MPI-SWS Kaiserslautern, Germany)
Tiziana Margaria (LERO, Ireland)
Rosemary Monahan (NUI Maynooth, Ireland)
David Parker (University of Birmingham, UK)
Corina Pasareanu (NASA Ames, USA)
Alexander K. Petrenko (Institute for System Programming, Russia)
Zvonimir Rakamaric (University of Utah, USA)
Kristin Yvonne Rozier (University of Cincinnati, USA)
Natasha Sharygina (Università della Svizzera italiana, Switzerland)
Stephen F. Siegel (University of Delaware, USA)
Bernhard Steffen (Technische Universität Dortmund, Germany)
Stavros Tripakis (University of California at Berkeley, USA)
Frits Vaandrager (Radboud University Nijmegen, The Netherlands)
Tomáš Vojnar (Brno University of Technology, Czechia)
Heike Wehrheim (Universität Paderborn, Germany)
Thomas Wies (New York University, USA)
Damien Zufferey (MPI-SWS, Germany)