Artifact Evaluation


FormaliSE 2022 includes, for the first time in the history of the conference, an optional artifact evaluation (AE) procedure. There will be a single round of AE, in which all papers containing methodological or application content are strongly encouraged to participate. Accepted papers with accepted artifacts will receive an honorific badge.



Artifacts and Artifact Evaluation

An artifact is any additional material (software, data sets, machine-checkable proofs, etc.) that supports the claims made in the paper and, ideally, makes them fully reproducible. In the case of a tool, a typical artifact consists of the binary or source code of the tool, its documentation, the input files (e.g., models analysed or input data) used for the tool evaluation in the paper, and a configuration file or document describing the parameters used to obtain the results. The AE Committee will read the corresponding paper and evaluate the submitted artifact w.r.t. the following criteria:

  • consistency with and reproducibility of results in the paper,
  • completeness,
  • documentation and ease of use. 



Authors of all papers may submit an artifact to substantiate the results presented; this is especially encouraged for tool and methodological papers. The results of the artifact evaluation will be taken into consideration in the paper reviewing discussion; note, however, that they do not determine the acceptance or rejection of the paper. We are aware that some parts of a paper may not be reproducible (e.g., due to computational demands or technical difficulties). The primary goal of the artifact evaluation is to give positive feedback to the authors and reward reproducible research. Authors of successful artifacts will receive a badge that can be shown on the title page of the accepted paper.



Artifact Submission

An artifact submission consists of

  • an abstract that summarizes the artifact and its relation to the paper,
  • a .pdf file of the paper (uploaded via EasyChair), and
  • a link to the artifact itself (see the Guidelines for Artifacts below).


The artifact itself should contain

  • a text file named LICENSE that contains the license for the artifact (the license must at least allow the Artifact Evaluation Committee to evaluate the artifact w.r.t. the criteria mentioned above),
  • a text file named README that contains step-by-step instructions on how to use the artifact to replicate the results in the paper, and
  • information about the host platform on which you prepared and tested your VM image (OS, RAM, number of cores, CPU frequency) and the expected execution time.


Artifact submission is handled via EasyChair: Artifacts have to be submitted in the AE track (select "FormaliSE 2022 Artifact Evaluation" when making a new submission), with the same title and authors as the submitted paper.



Guidelines for Artifacts

To submit an artifact, please prepare a virtual machine (VM). The image must be kept accessible via a working web link throughout the entire evaluation process. The URL of the image must be submitted within the artifact submission in the EasyChair Artifact Evaluation track (see Artifact Submission above).

As the OS of the VM image, please choose a commonly used Linux distribution that has been tested with the virtual machine software. For preparation of the VM image please use VirtualBox and save the VM image as an Open Virtual Appliance (OVA) file.
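As a sketch, an existing VirtualBox VM can be exported to an OVA file from the command line (the VM name "formalise-artifact" below is a placeholder; use the name your VM has in the VirtualBox manager):

```shell
# Export a VirtualBox VM as an Open Virtual Appliance (OVA) file.
# "formalise-artifact" is a placeholder VM name.
VBoxManage export formalise-artifact -o formalise-artifact.ova
```

The same export is also available in the VirtualBox GUI via File > Export Appliance.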

Please include the prepared URL in the field "Link to Artifact File" of the artifact submission, and verify that it is publicly accessible. To ensure the integrity of the submitted artifact, please compute a SHA-1 checksum of the artifact file and provide it within the artifact submission. The checksum can be obtained by running the sha1sum command (Linux, macOS) or the File Checksum Integrity Verifier tool (Microsoft Windows). Please also indicate the memory requirements of the artifact in the submission, e.g. 4 GB or 8 GB.
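For example, on Linux the checksum can be computed and later verified as follows ("artifact.ova" is a placeholder filename; the first line creates a stand-in file purely so the example is self-contained):

```shell
# Create a stand-in file so the example runs as-is; use your real OVA instead.
printf 'placeholder VM image contents\n' > artifact.ova

# Compute the SHA-1 checksum and record it for the submission form.
sha1sum artifact.ova > artifact.ova.sha1

# Reviewers can verify the downloaded file against the recorded checksum.
sha1sum -c artifact.ova.sha1   # prints "artifact.ova: OK" on a match
```

Submitting the checksum alongside the download link lets reviewers detect truncated or corrupted downloads before starting the evaluation.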

Finally, we ask the authors to consider the following guidelines when preparing the artifact:

  • Document how to reproduce most, or ideally all, of the (experimental) results of the paper using the artifact.
  • Also provide clear documentation on how to use the artifact, e.g. which files to read and which commands to execute, assuming minimal expertise of the user.
  • The evaluation process can be kept simple (and documentation reduced) by providing easy-to-use scripts, which favour artifact usability.
  • The artifact should not require the user to install additional software before running; that is, all required packages etc. have to be preinstalled on the provided VM image.
  • For the special case of experiments that require a large amount of resources (hardware or time), it is recommended to provide a way to replicate a subset of the results of the paper with reasonably modest resources (RAM, number of cores), so that the results can be reproduced on common laptop hardware in a reasonable amount of time. Do include the full set of experiments as well (for those reviewers with sufficient hardware or time), just make it optional. Please indicate in the EasyChair submission form how much memory a reviewer will need to run your artifact (at least to replicate the chosen subset).


Members of the Artifact Evaluation Committee and the PC are asked to use the artifact for the sole purpose of evaluating the contribution associated with the artifact.



Possibility for exemption

If you intend to submit an artifact but cannot comply with the guidelines above (for example, because the VM would have to contain restrictively licensed software such as Matlab), please contact the Artifact Evaluation Chair in advance of the artifact submission deadline.



Important dates

  • 2022-01-27 AoE: Artifact submission deadline
  • before 2022-02-08: Communication with authors in case of technical problems with the artifact
  • 2022-03-04: Notification of AE reviews as part of the FormaliSE author notification



Artifact Evaluation Committee


Carlos E. Budde   University of Trento, IT
Arnab Sharma      University of Oldenburg, DE
Depenge Liu       Chinese Academy of Sciences, CN
Jaeyoung Lee      University of Waterloo, CA
Jaime Arias       University Sorbonne Paris Nord, FR
Larisa Safina     INRIA, FR
Laura Bussi       ISTI-CNR, IT
Maik Wiesner      TU Darmstadt, DE
Robert Müller     University of Siegen, DE