TACAS

29th International Conference on Tools and Algorithms for the Construction and Analysis of Systems

TACAS is a forum for researchers, developers and users interested in rigorously based tools and algorithms for the construction and analysis of systems. The conference aims to bridge the gaps between different communities with this common interest and to support them in their quest to improve the utility, reliability, flexibility and efficiency of tools and algorithms for building systems.

Theoretical papers with clear relevance for tool construction and analysis, as well as tool descriptions and case studies with a conceptual message, are all encouraged. The topics covered by the conference include, but are not limited to:

  • specification and verification techniques;
  • software and hardware verification;
  • analytical techniques for real-time, hybrid, or stochastic systems;
  • analytical techniques for safety, security, or dependability;
  • SAT and SMT solving;
  • theorem proving;
  • model checking;
  • static and dynamic program analysis;
  • testing;
  • abstraction techniques for modeling and verification;
  • compositional and refinement-based methodologies;
  • system construction and transformation techniques;
  • machine-learning techniques for synthesis and verification;
  • tool environments and tool architectures;
  • applications and case studies.

Important dates, paper submission and review

The details about paper submission can be found in the ETAPS joint Call for Papers.

For the important dates, please check the important dates page.

TACAS supports artifact evaluation.

The review process of TACAS 2023 is double-blind, with a rebuttal phase for selected papers.

The double-blind review process applies to all paper categories (see the list below). Authors should follow the usual rules for anonymizing their papers, e.g., referring to prior work in the third person. Tool and tool demonstration papers should be anonymous, but they should still use the original tool name and explain the relation to previous versions (if any) of the tool. For example, a successor tool called “xyz2” of some tool “xyz” may still be called “xyz2” in the paper submission. The submission may also cite (without anonymization) all published works related to the tool, such as papers describing the algorithms the tool implements or papers that already use the tool. Tool and tool demonstration papers will not be rejected if this leads to a possible loss of anonymity.

Paper categories

TACAS accepts four types of submissions: research papers, case-study papers, regular tool papers, and tool demonstration papers.

Research papers clearly identify and justify a principled advance to the theoretical foundations for the construction and analysis of systems. Where applicable, they are supported by experimental validation.

Case-study papers report on case studies, preferably in a real-world setting. They should provide information about the following aspects: the system being studied and the reasons why it is of interest, the goals of the study, the challenges the system poses to automated analysis/testing/synthesis, research methodologies and approaches used, the degree to which the goals were met, and how the results can be generalized to other problems and domains.

Regular tool papers present a new tool, a new tool component, or novel extensions to an existing tool, and are subject to an artifact submission requirement (see below). They should provide a short description of the theoretical foundations with relevant citations, and emphasize the design and implementation concerns, including software architecture and core data structures. A regular tool paper should give a clear account of the tool’s functionality, discuss the tool’s practical capabilities with reference to the type and size of problems it can handle, describe experience with realistic case studies, and where applicable, provide a rigorous experimental evaluation. Papers that present extensions to existing tools should clearly focus on the improvements or extensions with respect to previously published versions of the tool, preferably substantiated by data on enhancements in terms of resources and capabilities.

Tool demonstration papers focus on the usage aspects of tools and are also subject to the artifact submission requirement. Theoretical foundations and experimental evaluation are not required; however, a motivation as to why the tool is interesting and significant should be provided. Furthermore, the paper should describe aspects such as the assumptions about the application domain and/or the extent of potential generality, demonstrate the tool's workflow(s), explain integration and/or human interaction, and evaluate the tool's overall role and its impact on the development process.

The length of research, case-study, and regular tool papers is limited to 16 pages in LNCS format (llncs.cls), excluding the bibliography. Additional material can be placed in a clearly marked appendix; the reviewers are, however, not obliged to read such appendices. If the paper is accepted, a full version including the appendix should be made publicly available via arXiv or a similar platform. The length of tool demonstration papers is limited to 6 pages in LNCS format (llncs.cls), excluding the bibliography. Tool demonstration papers must also include a mandatory appendix, limited to 6 pages, that provides the details of the demonstration (e.g., screenshots of the relevant steps of the demo showing the input to and output from the tool).

Artifact submission and evaluation

Regular tool papers and tool demonstration papers must be accompanied by an artifact submitted by the artifact submission deadline, and the results of the artifact evaluation will be taken into account by the program committee during the paper's discussion. Artifact evaluation for accepted research papers and case-study papers is optional; in this case, the artifact is submitted after the paper's acceptance.

The submitted artifact must not be anonymized (in contrast to the paper submission, which must be anonymous).

Refer to the specific instructions and guidelines for artifact evaluation on the TACAS-specific website.

Competition on software verification

TACAS 2023 will host the 12th edition of the Competition on Software Verification, SV-COMP 2023.

Program chairs

Natasha Sharygina (Università della Svizzera Italiana)
Sriram Sankaranarayanan (University of Colorado, Boulder, CO)

Program committee

Christel Baier (TU Dresden)
Haniel Barbosa (Universidade Federal de Minas Gerais)
Ezio Bartocci (TU Wien)
Dirk Beyer (LMU Munich)
Armin Biere (University of Freiburg)
Nikolaj Bjørner (Microsoft)
Roderick Bloem (Graz University of Technology)
Ahmed Bouajjani (IRIF, University Paris Diderot)
Hana Chockler (King's College London)
Alessandro Cimatti (Fondazione Bruno Kessler)
Rance Cleaveland (University of Maryland)
Javier Esparza (Technical University of Munich)
Chuchu Fan (MIT)
Bernd Finkbeiner (CISPA Helmholtz Center for Information Security)
Martin Fränzle (Carl von Ossietzky Universität Oldenburg)
Khalil Ghorbal (INRIA)
Laure Gonnord (Grenoble-INP / LCIS)
Orna Grumberg (Technion - Israel Institute of Technology)
Kim Guldstrand Larsen (Aalborg University)
Arie Gurfinkel (University of Waterloo)
Ranjit Jhala (University of California San Diego)
Bettina Könighofer (Graz University of Technology)
Laura Kovacs (Vienna University of Technology)
Alexander Kulikov (St. Petersburg Department of Steklov Institute of Mathematics)
Wenchao Li (Boston University)
Peter Müller (ETH Zurich)
Kedar Namjoshi (Nokia Bell Labs)
Aina Niemetz (Stanford University)
Corina Pasareanu (CMU, NASA, KBR)
Nir Piterman (University of Gothenburg)
Philipp Ruemmer (University of Regensburg)
Krishna Shankara Narayanan (IIT Bombay)
Cesar Sanchez (IMDEA Software Institute)
Sharon Shoham (Tel Aviv University)
Fabio Somenzi (University of Colorado, Boulder, CO)
Cesare Tinelli (The University of Iowa)
Stavros Tripakis (Northeastern University)
Frits Vaandrager (Radboud University)
Yakir Vizel (The Technion)
Tomas Vojnar (Brno University of Technology)
Naijun Zhan (Institute of Software, Chinese Academy of Sciences)
Lijun Zhang (Institute of Software, Chinese Academy of Sciences)
Florian Zuleger (Vienna University of Technology)


Tools and artifact evaluation committee chairs

Grigory Fedyukovich (Florida State University)
Sergio Mover (Ecole Polytechnique)

Competition chair

Dirk Beyer (Ludwig-Maximilians-Universität München, Germany)

