32nd International Conference on Tools and Algorithms for the Construction and Analysis of Systems
TACAS is a forum for researchers, developers and users interested in rigorously based tools and algorithms for the construction and analysis of systems.
The conference aims to bridge the gaps between different communities with this common interest and to support them in their quest to improve the utility, reliability, flexibility and efficiency of tools and algorithms for building systems.
Theoretical papers with clear relevance for tool construction and analysis as well as tool descriptions and case studies with a conceptual message are all encouraged. The topics covered by the conference include, but are not limited to:
The important dates are available in the Joint Call for Papers. Note that all deadlines are strict and firm.
TACAS accepts four types of submissions: research papers, case-study papers, regular tool papers, and tool demonstration papers.
Research Papers identify and justify a principled advance to the theoretical foundations for the construction and analysis of systems. Where applicable, they are supported by experimental validation. Artifact evaluation for regular research papers is not mandatory but strongly encouraged.
We do not discourage authors of regular research papers from posting their submissions on arXiv (or similar repositories). However, we strongly encourage authors not to post their work there within roughly two weeks before or after the submission deadline, because potential reviewers may be subscribed to receive updates on recently posted papers.
Case-Study Papers are full-length case studies that describe the application of techniques developed by the community to a single problem, or a set of problems, of practical importance, preferably in a real-world setting. Case-study papers must describe the problem of interest and its novelty, provide a detailed description of the case study (or studies) and of the related techniques used to solve it, and discuss the modeling aspects of the case study. They must provide detailed results on the application of formal techniques and lead to a conclusion that could be of interest beyond TACAS (to the research community pertaining to the case study). Artifact evaluation is currently not mandatory for case-study papers, but undergoing artifact evaluation is strongly encouraged.
Regular Tool Papers are full-length papers that describe a tool of interest to the community, or a new version of a tool built using novel algorithmic and engineering techniques. Regular tool papers must describe tools of broad interest and utility to the TACAS community. The papers must clearly describe the problem to be solved and its importance, related work, the techniques used in the tool and their novelty, the construction of the tool, and its unique features; they must also discuss how the tool is used and present benchmarks, including comparisons with other tools and with previous versions of the tool. For industrial tools, the description should allow community members to reproduce some of the key techniques or “tricks” in their own tools. All tool papers must undergo mandatory artifact evaluation.
Tool-Demonstration Papers are short papers that demonstrate a new tool or the application of an existing tool to a significant and novel case study. Tool demo papers will be accepted if the PC decides that they are sufficiently interesting and novel to the community. Artifact evaluation is mandatory for these papers.
Submissions must follow the formatting guidelines of Springer’s LNCS (use the llncs.cls class) and be submitted electronically in PDF through the EasyChair author interface of TACAS 2026.
The review process of TACAS 2026 is:
The length of regular research papers, case-study papers, and regular tool papers is limited to 16 pages in the llncs.cls format (excluding the bibliography and appendices). The length of tool demonstration papers is limited to 6 pages in the llncs.cls format (excluding the bibliography and appendices). Additional material can be placed in a clearly marked appendix at the end of the paper. The reviewers are, however, not obliged to read such appendices. If the paper is accepted, a full version including the appendix should be made publicly available via arXiv or a similar platform.
TACAS 2026 allows at most four submissions per author.
Submissions not adhering to the formatting and length requirements specified above will be rejected immediately.
Upon paper submission, authors are asked to declare their Conflict of Interest (COI) with TACAS 2026 PC members as listed below. A COI includes the following sources:
All papers will be evaluated by the program committee (PC), coordinated by the PC chairs and aided by the artifact evaluation chairs. All papers will be judged on novelty, significance, correctness, and clarity.
Reproducibility of results is of the utmost importance for TACAS. Therefore, we encourage all authors to include support for replicating the results of their papers. For theorems, this would mean providing proofs; for algorithms, this would mean including evidence of correctness and acceptable performance, either by a theoretical analysis or by experimentation; and for experiments, one should provide access to the artifacts used to generate the experimental data. Material that does not fit into the paper page limit may be provided as an appendix and/or on a supplementary website (without violating the double-blind submission requirements for regular research papers), with access appropriately enabled and license rights made clear. For example, the supplemental material for reviewing case-study papers and papers with experimental results could be classified as reviewer-confidential if necessary (e.g., if proprietary data are investigated or the software is not open source).
All papers submitted to TACAS 2026 will be given the opportunity to submit a rebuttal response.
TACAS 2026 will include an artifact evaluation (AE).
An artifact is any additional material (software, data sets, machine-checkable proofs, etc.) that substantiates claims made in the paper and ideally renders them fully replicable. For example, an artifact might consist of a tool and its documentation, input files used for tool evaluation in the paper, and configuration files or documents describing parameters used in the experiments. The Artifact Evaluation Committee (AEC) will read the corresponding paper and evaluate the artifact according to the following criteria:
The evaluation will be based on the ETAPS Artifact Badge guidelines, and the AEC will decide which of the badges (Available, Functional, and Reusable) will be assigned to a given artifact and added to the title page of the paper in case of acceptance.
We summarize the main evaluation criteria for each badge in the following.
Available
The artifact is permanently archived with a persistent identifier (DOI) and publicly accessible.
Example: The tool’s source code and its dependencies are publicly available on Zenodo with a permanent DOI (the artifact may additionally be hosted on, e.g., GitHub, but hosting it only on GitHub is not sufficient).
Functional
The artifact works as described and can reproduce all (or almost all) key results from the paper.
Example: A tool submission compiles, runs, and reproduces the results from the paper (e.g. correctly verifies all benchmark models from the result tables).
Reusable
The artifact is well-documented, modular, and easy to adapt for future research.
Example: A tool is accompanied by clear APIs, code documentation, example scripts, and instructions for adding and running new benchmarks (i.e., it can easily be run on additional inputs).
Regular tool papers and tool demonstration papers are required to submit an artifact for evaluation by October 30, 2025. These artifacts will be expected to satisfy the requirements for the Functional and Available badges. Results of the AE will be taken into consideration in the paper reviewing and rebuttal phase of TACAS 2026. Exemption from some aspects of AE is possible in exceptional circumstances.
Authors of accepted research and case-study papers are invited to submit an artifact no later than January 8, 2026.
Artifact submission is optional, and the artifact will be reviewed only after paper acceptance.
Artifact submission is handled via HotCRP (link will be available soon). After the paper submission deadline, we will copy the paper submission from the main track to the artifact evaluation track. You should supply your submission’s artifact abstract and a copy of the paper, and upload a ZIP archive as supplementary material. Do not attempt to create a new submission, and do not change your submission’s other details such as authors or title. Because of this copying process, you can upload and update your artifact until the artifact submission deadline. The AEC chairs will inform authors when the AE track is open for the different paper categories. If further clarifications are required after the deadline (e.g., a requested author response to the smoke test), you will be asked to answer in HotCRP.
An artifact submission must contain:
The artifact hyperlink should point to a self-contained archive that allows the AEC to evaluate the artifact on either the TACAS virtual machine or the Docker image (see below for more information). Authors should test their artifact prior to submission and include all relevant instructions. The instructions should be clear and specify every step required to build and run the artifact, assuming no specific knowledge and including steps that the authors might consider trivial. Ideally, there will be a single command to build the tool and a single command to run the experiments.
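To make this concrete, the following is a minimal sketch of what such a single-command driver could look like in Python (which is available on the evaluation VM). All file, binary, and script names here are hypothetical placeholders, not a required layout.

#!/usr/bin/env python3
"""Hypothetical single-command driver (python3 run_experiments.py):
rebuilds the tool and reruns the experiments reported in the paper.
All paths and names below are placeholders."""
import subprocess
from pathlib import Path

ROOT = Path(__file__).resolve().parent


def run(cmd):
    """Echo a command and abort the whole run on the first failure."""
    print("+", " ".join(cmd))
    subprocess.run(cmd, check=True, cwd=ROOT)


def main():
    # Build step: a single command sequence (here: a placeholder CMake project).
    run(["cmake", "-S", ".", "-B", "build"])
    run(["cmake", "--build", "build"])

    # Experiment step: rerun every benchmark listed in benchmarks.txt
    # and record pass/fail results in results.csv.
    results = ROOT / "results.csv"
    with results.open("w") as out:
        out.write("benchmark,status\n")
        for bench in (ROOT / "benchmarks.txt").read_text().splitlines():
            proc = subprocess.run([str(ROOT / "build" / "tool"), bench],
                                  cwd=ROOT, capture_output=True, text=True)
            out.write(f"{bench},{'ok' if proc.returncode == 0 else 'fail'}\n")
    print(f"Wrote {results}")


if __name__ == "__main__":
    main()

A driver of this shape lets reviewers reproduce the results with one invocation and makes deviations from the paper easy to spot.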
Artifacts can be submitted in two formats: runnable on the TACAS Virtual Machine, or as a Docker image.
We expect artifact authors to package their artifact and write instructions such that AEC members can evaluate the artifact using the TACAS 2026 Artifact Evaluation Virtual Machine (hereafter VM) for VirtualBox. The VM runs Ubuntu 25.04 and supports Python, C++, Java, Mono, Ruby, and Rust; it has the following packages installed: build-essential, git, curl, cmake, clang, mono-complete, default-jdk, python3-full, python3-pip, and ruby. Note that there are two VM image versions (amd64 and arm64). Furthermore, VirtualBox guest additions are installed on the VM; it is therefore possible to connect a shared folder from the host computer.
Instead of the VM, authors may also provide a Docker image that can be run on the AEC’s machine. Do not assume that all reviewers are familiar with Docker. Please provide explicit instructions on how to run the Docker image, including complete command line arguments.
Your artifact is allowed reasonable network access, following a wider trend of software builds requiring network access. Authors can therefore decide whether to (a) supply a completely self-contained artifact, (b) rely on network access for external resources, or (c) combine both. However, the main artifact described in the paper must be included in the archive, and use of external resources is at the authors’ own risk. If you use external resources, please pin their versions to ensure long-term replicability. We anticipate that this relaxation will typically be used to install third-party dependencies. If the artifact requires additional software or libraries that are not part of the virtual machine, the instructions must include all necessary steps for their installation and setup on a “clean” VM. In particular, authors should not rely on instructions provided by external tools. Again, the ideal setup is a single command to install the tool and all dependencies and a single command to run the experiments.
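As an illustration of version pinning, the sketch below installs third-party dependencies on a clean VM with exact versions and checks out an external repository at a fixed tag. The package names, versions, and repository URL are placeholders, not recommendations.

#!/usr/bin/env python3
"""Hypothetical one-command dependency setup for a clean TACAS VM.
Every external resource is pinned to an exact version for replicability."""
import subprocess

# Placeholder Python dependencies, pinned with '==' so the same versions
# used for the paper's experiments are always installed.
PINNED_PYTHON_DEPS = ["z3-solver==4.13.0.0", "networkx==3.3"]

# Placeholder external repository, pinned to a fixed tag instead of a branch.
EXTERNAL_REPO = "https://github.com/example/solver-backend.git"
EXTERNAL_TAG = "v2.1.0"


def main():
    # Use a local virtual environment so system Python packages stay untouched.
    subprocess.run(["python3", "-m", "venv", "venv"], check=True)
    subprocess.run(["venv/bin/pip", "install", *PINNED_PYTHON_DEPS], check=True)
    # Shallow-clone the external dependency at the pinned tag.
    subprocess.run(["git", "clone", "--branch", EXTERNAL_TAG, "--depth", "1",
                    EXTERNAL_REPO, "solver-backend"], check=True)


if __name__ == "__main__":
    main()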
It is to the advantage of authors to prepare an artifact that is easy for the AEC to evaluate. Some guidelines:
If your tool is called XYZ5, which has a clear connection to a tool called XYZ4 (and the authors of XYZ4 are publicly known), then you should keep calling your tool XYZ5 and should not hide its relationship with XYZ4. Your paper/artifact will not be rejected because of this.
Members of the AEC will use the submitted artifact for the sole purpose of AE. We do, however, encourage authors to make their artifacts publicly and permanently available.
Under particular conditions tool papers and tool demonstration papers may be exempted from submitting the artifact, using the provided VM, or acquiring both the Functional and Available badges. Possible reasons for such exemptions include the need for special hardware (GPUs, compute clusters, robots, etc.), extreme resource requirements, or licensing issues. Artifacts should nonetheless be as complete as possible. Contact the AEC chairs as soon as possible if you anticipate the need for exemption.
There will be no formal rebuttal procedure for AE. However, there will be an early light review (a.k.a. smoke test) of artifacts to ensure basic functionality. In case of technical issues at this stage, and at the discretion of the AEC chairs, a single round of rebuttal may be applied to some artifacts shortly after submission. A single set of questions from reviewers will be put to the artifact authors, who may reply with a single set of answers to address the issues and facilitate further evaluation. Updates to the submitted files or further communication will not be allowed.