TACAS 2026

32nd International Conference on Tools and Algorithms for the Construction and Analysis of Systems

Scope

TACAS is a forum for researchers, developers and users interested in rigorously based tools and algorithms for the construction and analysis of systems.

The conference aims to bridge the gaps between different communities with this common interest and to support them in their quest to improve the utility, reliability, flexibility and efficiency of tools and algorithms for building systems.

Theoretical papers with clear relevance for tool construction and analysis, as well as tool descriptions and case studies with a conceptual message, are all encouraged. The topics covered by the conference include, but are not limited to:

  • specification and verification techniques;
  • software and hardware verification;
  • analytical techniques for real-time, hybrid, or stochastic systems;
  • analytical techniques for safety, security, or dependability;
  • SAT and SMT solving;
  • theorem proving;
  • model checking;
  • static and dynamic program analysis;
  • testing;
  • abstraction techniques for modeling and verification;
  • compositional and refinement-based methodologies;
  • system construction and transformation techniques;
  • machine-learning techniques for synthesis and verification;
  • tool environments and tool architectures;
  • applications and case studies.

Important Dates

The important dates are available in the Joint Call for Papers. Note that all deadlines are strict.

Submission Categories

TACAS accepts four types of submissions: research papers, case-study papers, regular tool papers, and tool demonstration papers.

Research Papers identify and justify a principled advance to the theoretical foundations for the construction and analysis of systems. Where applicable, they are supported by experimental validation. Artifact evaluation for regular research papers is not mandatory but strongly encouraged.

We do not discourage authors of regular research papers from posting their submissions on arXiv (or similar repositories). However, we strongly encourage authors not to post their work on arXiv (or similar repositories) within roughly two weeks before or after the submission deadline, because potential reviewers may be subscribed to receive updates on recently posted papers.

Case-Study Papers are full-length case studies that describe the application of techniques developed by the community to a single problem, or a set of problems, of practical importance, preferably in a real-world setting. Case-study papers must describe the problem of interest and its novelty, provide a detailed description of the case study (or studies) and of the related techniques used to solve the problem, and discuss the modeling aspects of the case study. The paper must provide detailed results on the application of formal techniques and lead to a conclusion that could be of interest beyond TACAS (i.e., to the research community pertaining to the case study). Artifact evaluation is currently not mandatory for case-study papers, but undergoing artifact evaluation is strongly encouraged.

Regular Tool Papers are full-length papers that describe a tool of interest to the community, or a new version of a tool built using novel algorithmic and engineering techniques. Regular tool papers must describe tools of broad interest and utility to the TACAS community. The papers must clearly describe the problem to be solved, its importance, related work, the techniques used in the tool and their novelty, the construction of the tool, and its unique features; they must also discuss how the tool is used and present benchmarking results, including comparisons with other tools and with previous versions of the tool. For tools from industry, the description should be useful in allowing community members to reproduce some of the key techniques or “tricks” in their own tools. All tool papers must undergo mandatory artifact evaluation.

Tool-Demonstration Papers are short papers that demonstrate a new tool or the application of an existing tool to a significant and novel case study. Tool demo papers will be accepted if the PC decides that they are sufficiently interesting and novel to the community. Artifact evaluation is mandatory for these papers.

Paper Submission

Submissions must follow the formatting guidelines of Springer’s LNCS (use the llncs.cls class) and be submitted electronically in PDF through the EasyChair author interface of TACAS 2026.

The review process of TACAS 2026 is:

  • double-blind for regular research papers. As such, authors of regular research papers must omit their names and institutions, refer to their own prior work in the third person (just as they would refer to prior work by others), and not include acknowledgments that might identify them.
  • single-blind for case study papers, regular tool papers, and tool-demonstration papers.

The length of regular research papers, case-study papers, and regular tool papers is limited to 16 pages in llncs.cls style (excluding the bibliography and appendices). The length of tool-demonstration papers is limited to 6 pages in llncs.cls style (excluding the bibliography and appendices). Additional material can be placed in a clearly marked appendix at the end of the paper. The reviewers are, however, not obliged to read such appendices. If the paper is accepted, a full version including the appendix should be made publicly available via arXiv or a similar platform.

TACAS 2026 allows at most four submissions per author.

Submissions not adhering to the formatting and length requirements specified above will be rejected immediately.

Conflict of Interest

Upon paper submission, authors are asked to declare their conflicts of interest (COI) with the TACAS 2026 PC members listed below. A COI includes the following sources:

  • Family member or close friend.
  • Ph.D. advisor or advisee (no time limit), or postdoctoral or undergraduate mentor or mentee within the past five years.
  • Person with the same affiliation.
  • Involved in an alleged incident of harassment. (It is not required that the incident be reported.)
  • Reviewer owes author a favour (e.g., recently requested a reference letter).
  • Frequent or recent collaborator (within the last 5 years) who cannot objectively review your work.

If an author believes that they have a valid reason for a COI not listed above, the author can contact the TACAS PC chairs. Falsely declared conflicts (i.e., conflicts that do not satisfy one of the listed reasons) risk rejection without consideration of merit. If an author is uncertain about a COI, the author is encouraged to email the TACAS PC chairs.

Paper Evaluation

All papers will be evaluated by the program committee (PC), coordinated by the PC chairs and aided by the artifact evaluation chairs. All papers will be judged on novelty, significance, correctness, and clarity.

Reproducibility of results is of the utmost importance for TACAS. Therefore, we encourage all authors to include support for replicating the results of their papers. For theorems, this means providing proofs; for algorithms, it means including evidence of correctness and acceptable performance, either by theoretical analysis or by experimentation; and for experiments, it means providing access to the artifacts used to generate the experimental data. Material that does not fit into the paper page limit may be provided as an appendix and/or on a supplementary website (without violating the double-blind submission requirements for regular research papers), with access appropriately enabled and license rights made clear. For example, the supplemental material for reviewing case-study papers and papers with experimental results could be classified as reviewer-confidential if necessary (e.g., if proprietary data are investigated or the software is not open source).

All papers submitted to TACAS 2026 will be given the opportunity to submit a rebuttal response.

Artifact Evaluation

TACAS 2026 will include an artifact evaluation (AE).

  • For regular tool papers and tool demonstration papers, AE is mandatory and artifacts must be submitted by the end of October 30, 2025, “Anywhere on Earth” (AoE, UTC-12).
  • For accepted research and case study papers, AE is optional and artifacts may be submitted by the end of January 8, 2026, “Anywhere on Earth” (AoE, UTC-12).

Important Dates

Tool and Tool Demonstration Papers:

  • Oct 16 - Paper submission deadline
  • Oct 30 - Artifact submission deadline
  • Nov 10-21 - Smoke test for selected artifacts (in case of technical issues)
  • Dec 22 - Paper and artifact notification

Research and Case Study Papers:

  • Jan 8 - Artifact submission deadline
  • Jan 16-26 - Smoke test for selected artifacts (in case of technical issues)
  • Feb 12 - Artifact notification

Artifacts and Evaluation Criteria

An artifact is any additional material (software, data sets, machine-checkable proofs, etc.) that substantiates claims made in the paper and ideally renders them fully replicable. For example, an artifact might consist of a tool and its documentation, input files used for tool evaluation in the paper, and configuration files or documents describing parameters used in the experiments. The Artifact Evaluation Committee (AEC) will read the corresponding paper and evaluate the artifact according to the following criteria:

  • Consistency with and replicability of results presented in the paper
  • Completeness
  • Documentation and ease of use
  • Availability in a permanent online repository (e.g., Zenodo)

The evaluation will be based on the ETAPS Artifact Badge guidelines and the AEC will decide which of the badges — among Available, Functional, and Reusable — will be assigned to a given artifact and added to the title page of the paper in case of acceptance.

We summarize the main evaluation criteria for each badge below.

Available

The artifact is permanently archived with a persistent identifier (DOI) and publicly accessible.

Example: The tool’s source code and its dependencies are publicly available on Zenodo with a permanent DOI. (The artifact may additionally be hosted on, e.g., GitHub, but hosting it only on GitHub is not sufficient.)

Functional

The artifact works as described and can reproduce all (or almost all) key results from the paper.

Example: A tool submission compiles, runs, and reproduces the results from the paper (e.g. correctly verifies all benchmark models from the result tables).

Reusable

The artifact is well-documented, modular, and easy to adapt for future research.

Example: A tool is accompanied by clear APIs, code documentation, example scripts, and instructions for adding and running new benchmarks (i.e., it can easily be run on additional inputs).

Compulsory AE for Tool and Tool Demonstration Papers

Regular tool papers and tool demonstration papers are required to submit an artifact for evaluation by October 30, 2025. These artifacts will be expected to satisfy the requirements for the Functional and Available badges. Results of the AE will be taken into consideration in the paper reviewing and rebuttal phase of TACAS 2026. Exemption from some aspects of AE is possible in exceptional circumstances.

Optional AE for Research and Case Study Papers

Authors of accepted research and case-study papers are invited to submit an artifact no later than January 8, 2026.

Artifact submission is optional, and the artifact will be reviewed only after paper acceptance.

Artifact Submission

Artifact submission is handled via HotCRP (link will be available soon). After the paper submission deadline, we will copy the paper submission from the main track to the artifact evaluation track. You should supply your submission’s artifact abstract, a copy of the paper, and a ZIP archive uploaded as supplementary material. Do not create a new submission, and do not change your submission’s other details, such as authors or title. Due to this copying process, you can upload and update your artifact until the artifact submission deadline. The AEC chairs will inform authors when the AE track is open for the different paper categories. If further clarification is required after the deadline (e.g., a requested author response to the smoke test), you will be asked to answer in HotCRP.

An artifact submission must contain:

  • an abstract summarizing the artifact and its relation to the paper (which was uploaded to EasyChair)
  • the paper submission number as stated in the EasyChair submission
  • a ZIP archive (uploaded to HotCRP) containing:
    • a license document (LICENSE) for the artifact. The license must at least allow the AEC to evaluate the artifact.
    • instructions (README; a brief sketch follows this list) including:
      • a hyperlink to the artifact
      • additional requirements for the artifact, such as installation of proprietary software or particular hardware resources
      • detailed instructions for an early smoke-test review that allow reviewers to: (1) verify that the artifact can properly run; and (2) perform a short evaluation of the artifact before the full evaluation and detect any difficulties (see Rebuttal)
      • detailed instructions for use of the artifact and replication of the results in the paper, including estimated resource use and required time, if non-trivial
      • the list of badges you are applying for in your submission; in particular, indicate if you are not applying for the Reusable badge
      • the platforms you tested the artifact on (in particular, the host architecture (amd64 vs. arm64), the operating system, etc.)
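
For concreteness, a minimal README covering the items above might be outlined as follows. This is an illustrative sketch only: the tool name, script names, resource figures, and DOI are placeholders, not a required format.

  MyTool: TACAS 2026 artifact
  Artifact hyperlink: https://doi.org/10.5281/zenodo.0000000
  Requirements:  TACAS 2026 VM (amd64), 16 GB RAM, ca. 2 h total runtime
  Smoke test:    ./install.sh && ./run_smoke.sh   (ca. 10 minutes)
  Full results:  ./run_all.sh                     (reproduces Tables 1-3)
  Badges:        applying for Available, Functional, and Reusable
  Tested on:     amd64, Ubuntu 25.04 (TACAS 2026 VM)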

The artifact hyperlink should point to a self-contained archive that allows the AEC to evaluate the artifact either on the TACAS virtual machine or via a Docker image (see below for more information). Authors should test their artifact prior to submission and include all relevant instructions. The instructions should be clear and specify every step required to build and run the artifact, assuming no specific knowledge and including steps that the authors might consider trivial. Ideally, there will be a single command to build the tool and a single command to run the experiments.
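
As an illustration of this ideal, the entire workflow might reduce to two scripts along the following lines; the script names are placeholders, not a required convention:

  ./install.sh    # build the tool and install all dependencies inside the VM
  ./run_all.sh    # rerun every experiment and regenerate the paper's tables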

Guidelines for Artifacts

Artifacts can be submitted in two formats: runnable on the TACAS Virtual Machine, or as a Docker image.

TACAS Virtual Machine

We expect artifact authors to package their artifact and write instructions such that AEC members can evaluate the artifact using the TACAS 2026 Artifact Evaluation Virtual Machine (hereafter VM) for VirtualBox. The VM runs Ubuntu 25.04 and supports Python, C++, Java, Mono, Ruby, and Rust; it has the following packages installed: build-essential, git, curl, cmake, clang, mono-complete, default-jdk, python3-full, python3-pip, and ruby. Note that there are two VM image versions (amd64 and arm64). Furthermore, the VirtualBox guest additions are installed on the VM; it is therefore possible to connect a shared folder from the host computer.

Docker image

Instead of the VM, authors may also provide a Docker image that can be run on the AEC’s machine. Do not assume that all reviewers are familiar with Docker. Please provide explicit instructions on how to run the Docker image, including complete command line arguments.
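
A minimal sketch of such instructions, assuming a hypothetical image archive mytool.tar (the image name, tag, and script are placeholders):

  docker load -i mytool.tar
  docker run --rm -v "$PWD/results:/results" mytool:tacas2026 ./run_all.sh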

Network access

Your artifact is allowed reasonable network access, following a wider trend of software builds requiring network access. Authors can therefore decide whether to (a) supply a completely self-contained artifact, (b) rely on network access for external resources, or (c) combine both. However, the main artifact described in the paper must be included in the archive, and the use of external resources is at the authors’ own risk. If you are using external resources, please pin their versions to ensure long-term replicability. We anticipate typical usage of this relaxation to be the installation of third-party dependencies. If the artifact requires additional software or libraries that are not part of the virtual machine, the instructions must include all necessary steps for their installation and setup on a “clean” VM. In particular, authors should not rely on instructions provided by external tools. Again, the ideal setup is a single command to install the tool and all dependencies and a single command to run the experiments.
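
For example, a dependency fetched over the network should name an exact version or commit rather than the latest release; the package and repository below are hypothetical placeholders:

  pip install somesolver==1.2.3
  git clone https://github.com/example/somesolver.git && git -C somesolver checkout a1b2c3d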

Tips

It is to the advantage of authors to prepare an artifact that is easy to evaluate by the AEC. Some guidelines:

  • Your artifact need not be anonymized. In particular, you need not rename your tool in the paper and artifact, even if this entails a potential loss of anonymity. For instance, if you submit a paper/artifact on a new tool called XYZ5 that has a clear connection to a tool called XYZ4 (and the authors of XYZ4 are publicly known), then you should keep calling your tool XYZ5 and need not hide its relationship with XYZ4. Your paper/artifact will not be rejected because of this.
  • Keep the evaluation process simple (e.g. through the use of scripts), and provide detailed documentation assuming minimum user expertise.
  • Document in detail how to replicate most, or ideally all, of the experimental results of the paper using the artifact.
  • State resource requirements and/or the environment in which you successfully tested the artifact.
  • For experiments requiring a large amount of resources, we strongly recommend providing a way to replicate a representative subset of the results, such that the results can be reproduced on various hardware platforms in reasonable time (within about 8 hours on a modern notebook). Do include the full set of experiments for reviewers with sufficient hardware or time, alongside an explanation of how to reproduce individual results.
  • If you use the virtual machine, do not submit a virtual machine; only submit your files, which AEC members will copy into the provided virtual machine.
  • Reviewers will likely use a machine different from yours. It may be wise to test your artifact in the virtual machine / Docker image running on other platforms and architectures available to you.
  • For the Available badge, you must upload your artifact to a permanent repository (e.g. Zenodo, figshare, or Dryad) that provides a Digital Object Identifier (DOI), and use that link in your submission. This excludes other repositories such as an institutional or personal website, source code hosting services, or cloud storage.

Members of the AEC will use the submitted artifact for the sole purpose of AE. We do, however, encourage authors to make their artifacts publicly and permanently available.

Exemption

Under particular conditions tool papers and tool demonstration papers may be exempted from submitting the artifact, using the provided VM, or acquiring both the Functional and Available badges. Possible reasons for such exemptions include the need for special hardware (GPUs, compute clusters, robots, etc.), extreme resource requirements, or licensing issues. Artifacts should nonetheless be as complete as possible. Contact the AEC chairs as soon as possible if you anticipate the need for exemption.

Rebuttal

There will be no formal rebuttal procedure for AE. However, there will be an early light review (a.k.a. smoke test) of artifacts to ensure basic functionality. In case of technical issues at this stage, and at the discretion of the AEC chairs, a single round of rebuttal may be applied to some artifacts shortly after submission. A single set of questions from the reviewers will be put to the artifact authors, who may reply with a single set of answers to address the issues and facilitate further evaluation. Updates to the submitted files or further communication will not be allowed.

Program Committee

PC Chairs

  • Sebastian Junges (Radboud University, Netherlands)
  • Guy Katz (The Hebrew University of Jerusalem, Israel)

Area Chairs

  • Roderick Bloem (Graz University of Technology, Austria)
  • Dana Drachsler Cohen (Technion, Israel)
  • Anastasia Mavridou (KBR / NASA Ames Research Center, USA)
  • Dave Parker (University of Oxford, UK)
  • Natasha Sharygina (Università della Svizzera italiana, Switzerland)
  • Yoni Zohar (Bar-Ilan University, Israel)

PC Members

  • Erika Abraham (RWTH Aachen University, Germany)
  • S. Akshay (IIT Bombay, India)
  • Elvio Gilberto Amparore (Università di Torino, Italy)
  • Étienne André (Université Sorbonne Paris Nord, France)
  • Christel Baier (TU Dresden, Germany)
  • Haniel Barbosa (Universidade Federal de Minas Gerais, Brazil)
  • Clark Barrett (Stanford University, USA)
  • Dirk Beyer (LMU Munich, Germany)
  • Armin Biere (University of Freiburg, Germany)
  • Roderick Bloem (Graz University of Technology, Austria)
  • Luca Bortolussi (University of Trieste, Italy)
  • Milan Ceska (Brno University of Technology, Czechia)
  • Supratik Chakraborty (IIT Bombay, India)
  • Yu-Fang Chen (Academia Sinica, Taiwan)
  • Mingshuai Chen (Zhejiang University, China)
  • Wei-Ngan Chin (National University of Singapore, Singapore)
  • Alessandro Cimatti (Fondazione Bruno Kessler, Italy)
  • Rayna Dimitrova (CISPA Helmholtz Center for Information Security, Germany)
  • Dana Drachsler Cohen (Technion, Israel)
  • Dana Fisman (Ben-Gurion University, Israel)
  • Hadar Frenkel (Ben-Gurion University, Israel)
  • Florian Frohn (RWTH Aachen University, Germany)
  • Mirco Giacobbe (University of Birmingham, UK)
  • Arie Gurfinkel (University of Waterloo, Canada)
  • Arnd Hartmanns (University of Twente, Netherlands)
  • Marijn Heule (Carnegie Mellon University, USA)
  • Falk Howar (TU Clausthal / IPSSE, Germany)
  • Omri Isac (The Hebrew University of Jerusalem, Israel)
  • Nils Jansen (Ruhr University Bochum, Germany)
  • Daniela Kaufmann (TU Wien, Austria)
  • Orna Kupferman (The Hebrew University of Jerusalem, Israel)
  • Alfons Laarman (Leiden University, Netherlands)
  • Anastasia Mavridou (KBR / NASA Ames Research Center, USA)
  • Aina Niemetz (Stanford University, USA)
  • Petr Novotný (Masaryk University, Czechia)
  • Dave Parker (University of Oxford, UK)
  • Andrea Pferscher (University of Oslo, Norway)
  • Guillermo Perez (University of Antwerp, Belgium)
  • Kristin Yvonne Rozier (Iowa State University, USA)
  • Ocan Sankur (Mitsubishi Electric R&D Centre Europe, France)
  • Christian Schilling (Aalborg University, Denmark)
  • Anne-Kathrin Schmuck (Max-Planck-Institute for Software Systems, Germany)
  • Natasha Sharygina (Università della Svizzera italiana, Switzerland)
  • Mihaela Sighireanu (ENS Paris-Saclay, LMF, France)
  • Sadegh Soudjani (Newcastle University, UK)
  • Jiri Srba (Aalborg University, Denmark)
  • Armando Tacchella (Università di Genova, Italy)
  • Hazem Torfah (Chalmers University of Technology, Sweden)
  • Yakir Vizel (Technion, Israel)
  • Bow-Yaw Wang (Academia Sinica, Taiwan)
  • Naijun Zhan (Institute of Software, Chinese Academy of Sciences, China)
  • Lijun Zhang (Chinese Academy of Sciences, China)
  • Đorđe Žikelić (Singapore Management University, Singapore)
  • Yoni Zohar (Bar-Ilan University, Israel)

Artifact Evaluation Committee

Artifact Evaluation Chairs

Artifact Evaluation Committee Members

  • Abha Chaudhary (Binghamton University)
  • Abhishek Singh (IIIT Hyderabad)
  • Aditya Senthilnathan (Cornell University)
  • Alberto Bombardelli (Fondazione Bruno Kessler)
  • Alejandro Hernández-Cerezo (Complutense University of Madrid)
  • Alexander Stekelenburg (University of Twente)
  • Andy Oertel (Lund University)
  • Aron Ricardo Perez-Lopez (Stanford University)
  • Aosen Xiong (University of Waterloo)
  • Arshia Rafieioskouei (Michigan State University)
  • Asmund Aqissiaq Arild Kløvstad (University of Oslo)
  • Ayaka Yorihiro (Cornell University)
  • Benjamin Przybocki (Carnegie Mellon University)
  • Bob Rubbens (University of Twente)
  • Bruno Andreotti (Universidade Federal de Minas Gerais)
  • Charles de Haro (École Normale Supérieure | Université PSL)
  • Chris Johannsen (Iowa State University)
  • Christoph Wernhard (University of Potsdam)
  • Clara Rodriguez Nuñez (Universidad Complutense de Madrid)
  • Dapeng Zhi (East China Normal University, Jiangsu University of Technology)
  • David Perera (UFMG)
  • Divyesh Unadkat (Synopsys Inc)
  • Edward Wang (MIT)
  • Elizaveta Pertseva (Stanford University)
  • Étienne André (Université Sorbonne Paris Nord)
  • Evgenii Vinarskii (Télécom SudParis / Institut Polytechnique de Paris)
  • Filip Cano (Institute of Science and Technology Austria)
  • Filip Macák (Brno University of Technology)
  • Geoff Sutcliffe (University of Miami)
  • Hanna Lachnitt (Stanford University)
  • Hassan Mousavi (University of Tehran and Tehran Institute for Advanced Studies)
  • Hichem Rami Ait-El-Hara (OCamlPro)
  • Ian Dardik (Carnegie Mellon University)
  • Idan Refaeli (The Hebrew University of Jerusalem)
  • Jiong Yang (Georgia Institute of Technology)
  • Jonathan Spiegelman (Hebrew University of Jerusalem)
  • Juliane Päßler (University of Oslo)
  • Kartik Sabharwal (University of Iowa)
  • Karthik Nukala (SRI International)
  • Kevin Van de Glind (Eindhoven University of Technology)
  • Leyi Cui (Carnegie Mellon University)
  • Luca Marzari (University of Verona)
  • Mara Miulescu (Eindhoven University of Technology)
  • Marco Campion (INRIA & ENS Paris | Université PSL)
  • Marco Lewis (INRIA)
  • Mariem Hammami (UPEC)
  • Mark Peyrer (Johannes Kepler University)
  • Martin Kristjansen (Aalborg University)
  • Matthias Hetzenberger (TU Wien)
  • Max Fan (Cornell University)
  • Mahboubeh Samadi (Tehran Institute for Advanced Studies (TeIAS))
  • Maria Belen Rodriguez (University of Twente)
  • Maurice Laveaux (Eindhoven University of Technology)
  • Miguel Isabel (Universidad Complutense de Madrid)
  • Mingkai Miao (Hong Kong University of Science and Technology)
  • Mohammad Afzal (TCS Research Pune and IIT Bombay)
  • Neea Rusch (Augusta University)
  • Nicolas Amat (ONERA - The French Aerospace Lab)
  • Nils Lommen (RWTH Aachen)
  • Oyendrila Dobe (Amazon Web Services)
  • Pablo Gordillo (Complutense University of Madrid)
  • Philipp Schlehuber-Caissier (Telecom SudParis / Samovar)
  • Raya Elsaleh (Hebrew University)
  • Reza Soltani (University of Twente)
  • Robert Modderman (University of Twente)
  • Rui Ge (University of British Columbia)
  • Samuel Akinwande (Stanford University)
  • Sarat Chandra Varanasi (GE Aerospace Research)
  • Sergei Novozhilov (The Hong Kong University of Science and Technology)
  • Simon Robillard (LIRMM, Université de Montpellier & CNRS)
  • Syyeda Zainab Fatmi (University of Oxford)
  • Tobias Seufert (University of Freiburg)
  • Tom Yaacov (Ben-Gurion University of the Negev)
  • Tudor Braicu (Cornell University)
  • Vaibhav Mehta (Cornell University)
  • Vlad-Constantin Craciun (Bitdefender)
  • Weiyi Chen (Purdue University)
  • Wietze Koops (Lund University and University of Copenhagen)
  • Xuyang Li (Purdue University)
  • Yonghui Liu (Australian National University)
  • Yusen Su (University of Waterloo)
  • Yun Chen Tsai (National Institute of Informatics, Japan)
  • Zvika Berger (Bar Ilan University)