TACAS 2025

31st International Conference on Tools and Algorithms for the Construction and Analysis of Systems

Scope

TACAS is a forum for researchers, developers and users interested in rigorously based tools and algorithms for the construction and analysis of systems.

The conference aims to bridge the gaps between different communities with this common interest and to support them in their quest to improve the utility, reliability, flexibility and efficiency of tools and algorithms for building systems.

Theoretical papers with clear relevance for tool construction and analysis as well as tool descriptions and case studies with a conceptual message are all encouraged. The topics covered by the conference include, but are not limited to:

  • specification and verification techniques;
  • software and hardware verification;
  • analytical techniques for real-time, hybrid, or stochastic systems;
  • analytical techniques for safety, security, or dependability;
  • SAT and SMT solving;
  • theorem proving;
  • model checking;
  • static and dynamic program analysis;
  • testing;
  • abstraction techniques for modeling and verification;
  • compositional and refinement-based methodologies;
  • system construction and transformation techniques;
  • machine-learning techniques for synthesis and verification;
  • tool environments and tool architectures;
  • applications and case studies.

Important Dates

The important dates are available in the Joint Call for Papers.

Submission Categories

TACAS accepts four types of submissions: research papers, case-study papers, regular tool papers, and tool demonstration papers.

Research Papers identify and justify a principled advance to the theoretical foundations for the construction and analysis of systems. Where applicable, they are supported by experimental validation. Artifact evaluation for regular research papers is not mandatory but strongly encouraged.

We do not discourage authors of regular research papers from posting their submissions on arXiv (or similar repositories). However, we strongly encourage authors not to post their work there within roughly two weeks before or after the submission deadline, because potential reviewers may be subscribed to receive updates on recently posted papers.

Case-Study Papers are full-length case studies that describe the application of techniques developed by the community to a single problem, or a set of problems, of practical importance, preferably in a real-world setting. A case-study paper must describe the problem of interest and its novelty, provide a detailed description of the case study (or studies) and of the related techniques used to solve the problem, and discuss the modeling aspects of the case study. It must present detailed results on the application of formal techniques and lead to a conclusion that could be of interest beyond TACAS (to the research community pertaining to the case study). Artifact evaluation is currently not mandatory for case-study papers, but undergoing artifact evaluation is strongly encouraged.

Regular Tool Papers are full-length papers that describe either a tool of interest to the community or a new version of a tool built using novel algorithmic and engineering techniques. Regular tool papers must describe tools of broad interest and utility to the TACAS community. The paper must clearly describe the problem to be solved and its importance, related work, the techniques used in the tool and their novelty, the construction of the tool and its unique features, and how the tool is used; it must also present benchmarks of the tool, including comparisons with other tools and with previous versions of the tool. For tools from industry, the description should allow community members to reproduce some of the key techniques or “tricks” in their own tools. All tool papers must undergo mandatory artifact evaluation.

Tool-Demonstration Papers are short papers that demonstrate a new tool, or the application of an existing tool to a significant and novel case study. Tool-demonstration papers will be accepted if the PC decides that they are sufficiently interesting and novel to the community. Artifact evaluation is mandatory for these papers.

Paper Submission

Submissions must follow the formatting guidelines of Springer’s LNCS (use the llncs.cls class) and be submitted electronically in PDF through the EasyChair author interface of TACAS 2025.

The review process of TACAS 2025 is:

  • double-blind for regular research papers. Authors of regular research papers must omit their names and institutions, refer to their own prior work in the third person (just as they would refer to prior work by others), and not include acknowledgments that might identify them.
  • single-blind for case study papers, regular tool papers, and tool-demonstration papers.

Regular research papers, case-study papers, and regular tool papers are limited to 16 pages in llncs.cls format (excluding the bibliography and appendices). Tool-demonstration papers are limited to 6 pages in llncs.cls format (excluding the bibliography and appendices). Additional material can be placed in a clearly marked appendix at the end of the paper. The reviewers are, however, not obliged to read such appendices. If the paper is accepted, a full version including the appendix should be made publicly available via arXiv or a similar platform.

TACAS 2025 allows at most four submissions per author.

Submissions not adhering to the formatting and length requirements specified above will be rejected immediately.

Conflict of Interest

Upon paper submission, authors are asked to declare their Conflicts of Interest (COI) with TACAS 2025 PC members. The following situations constitute a COI:

  • Family member or close friend.
  • Ph.D. advisor or advisee (no time limit), or postdoctoral or undergraduate mentor or mentee within the past five years.
  • Person with the same affiliation.
  • Involved in an alleged incident of harassment. (It is not required that the incident be reported.)
  • Reviewer owes author a favour (e.g., recently requested a reference letter).
  • Frequent or recent collaborator (within the last 5 years) who cannot objectively review your work.

If an author believes that they have a valid reason for a COI not listed above, they can contact the TACAS 2025 PC chairs. Falsely declared conflicts (i.e., conflicts that do not satisfy one of the listed reasons) risk rejection without consideration of merit. If an author is uncertain about a COI, they are encouraged to email the TACAS 2025 PC chairs.

Paper Evaluation

All papers will be evaluated by the program committee (PC), coordinated by the PC chairs and aided by the artifact evaluation chairs. All papers will be judged on novelty, significance, correctness, and clarity.

Reproducibility of results is of the utmost importance for TACAS. Therefore, we encourage all authors to include support for replicating the results of their papers. For theorems, this means providing proofs; for algorithms, this means including evidence of correctness and acceptable performance, either by a theoretical analysis or by experimentation; and for experiments, it means providing access to the artifacts used to generate the experimental data. Material that does not fit into the paper page limit may be provided as an appendix and/or on a supplementary website (without violating the double-blind submission requirements for regular research papers), with access appropriately enabled and license rights made clear. For example, the supplementary material for reviewing case-study papers and papers with experimental results could be classified as reviewer-confidential if necessary (e.g., if proprietary data are investigated or the software is not open source).

TACAS 2025 will use rebuttal for selected submissions (those in the grey zone).

Artifact Evaluation

TACAS 2025 implements mandatory artifact evaluation for regular tool paper and tool demonstration paper submissions. Authors of these submissions must submit the respective artifacts by October 24, 2024 (AoE). The artifact will be evaluated, and the outcome will be considered in the paper’s acceptance decision.

TACAS 2025 also implements voluntary artifact evaluation for regular research paper and case-study paper submissions. Authors of these submissions are encouraged to submit their artifacts. The artifacts will be evaluated after paper acceptance, provided the submission has been accepted as a regular or case-study paper at TACAS 2025. Authors of accepted regular and case-study papers may revise and submit their artifact for evaluation shortly after the paper has been accepted (within one week, and no later than January 9, 2025). The outcome of the voluntary artifact evaluation will not alter the paper acceptance decision.

Details on artifacts and their evaluation criteria are available on the artifacts page.

Artifact Submission

Artifact submission is handled via EasyChair. After the paper submission deadline, we will copy the paper submission from the main track to the artifact evaluation track. In this track, you should amend your submission’s abstract and upload a ZIP archive containing your artifact. Do not attempt to create a new submission, and do not change your submission’s other details such as authors or title. Because of this copying process, you can upload or update your artifact either in the main track before the paper submission deadline, or in the AE track during a window before the deadline. The TACAS PC or AEC chairs will inform authors when the AE track is open for the different paper categories.

Competition on Software Verification (SV-COMP)

TACAS 2025 will host the 14th instance of the Competition on Software Verification, SV-COMP.

Program Committee

PC Chairs

  • Arie Gurfinkel
  • Marijn Heule

PC members

  • Erika Ábrahám (RWTH Aachen University, Germany)
  • Elvira Albert (Complutense University of Madrid, Spain)
  • Cyrille Artho (KTH Royal Institute of Technology, Sweden)
  • Haniel Barbosa (Universidade Federal de Minas Gerais, Brazil)
  • Clark Barrett (Stanford, USA)
  • Dirk Beyer (LMU Munich, Germany)
  • Armin Biere (Freiburg University, Germany)
  • Nikolaj Bjorner (Microsoft Research, USA)
  • Roderick Bloem (TU Graz, Austria)
  • Rose Bohrer (Worcester Polytechnic Institute, USA)
  • Borzoo Bonakdarpour (Michigan State University, USA)
  • Mingshuai Chen (Zhejiang University, China)
  • Mila Dalla Preda (University of Verona, Italy)
  • Rayna Dimitrova (CISPA, Germany)
  • Grigory Fedyukovich (Florida State University, USA)
  • Hadar Frenkel (CISPA – Helmholtz Center for Information Security, Germany)
  • Alberto Griggio (Fondazione Bruno Kessler, Italy)
  • Ichiro Hasuo (National Institute of Informatics, Japan)
  • Holger Hermanns (Saarland University, Germany)
  • Alan Hu (University of British Columbia, Canada)
  • Sebastian Junges (Radboud University, Netherlands)
  • Temesghen Kahsai (Amazon, USA)
  • Joost-Pieter Katoen (RWTH Aachen University, Germany)
  • Guy Katz (The Hebrew University of Jerusalem, Israel)
  • Umang Mathur (National University of Singapore, Singapore)
  • Anastasia Mavridou (KBR / NASA Ames Research Center, USA)
  • Kuldeep Meel (University of Toronto, Canada)
  • Magnus Myreen (Chalmers University of Technology, Sweden)
  • Nina Narodytska (VMware, USA)
  • Jorge Navas (Certora, USA)
  • Laura Nenzi (University of Trieste, Italy)
  • Aina Niemetz (Stanford, USA)
  • Andre Platzer (Karlsruhe Institute of Technology, Germany)
  • Elizabeth Polgreen (University of Edinburgh, Scotland)
  • Kristin Rozier (Iowa State, USA)
  • Christian Schilling (Aalborg University, Denmark)
  • Anne-Kathrin Schmuck (Max Planck Institute, Germany)
  • Natasha Sharygina (USI, Switzerland)
  • Sharon Shoham (Tel-Aviv University, Israel)
  • Xujie Si (University of Toronto, Canada)
  • Mihaela Sighireanu (ENS Paris-Saclay, France)
  • Tanel Tammet (Tallinn University of Technology, Estonia)
  • Yong Kiam Tan (Institute for Infocomm Research, A*STAR, Singapore)
  • Silvia Lizeth Tapia Tarifa (University of Oslo, Norway)
  • Cesare Tinelli (The University of Iowa, USA)
  • Hazem Torfah (Chalmers University of Technology, Sweden)
  • Yakir Vizel (Technion, Israel)
  • Tomas Vojnar (Brno University of Technology, Czechia)
  • Jingbo Wang (University of Southern California, USA)
  • Georg Weissenbacher (TU Wien, Austria)
  • Anton Wijs (Eindhoven University of Technology, Netherlands)
  • Sarah Winkler (Free University Bolzano, Italy)
  • Haoze Wu (Amherst College, USA)

Artifact Evaluation Committee

Artifact Evaluation Chairs

Artifact Evaluation Committee (AEC)

  • The complete list of the AEC is available here.