

Call for Artifact Evaluations 2025

It's becoming increasingly difficult to reproduce results from systems and ML papers. Voluntary Artifact Evaluation (AE) has been successfully introduced at systems conferences and tournaments (ReQuEST, PPoPP, CGO, and Supercomputing) to validate experimental results by an independent AE Committee, share unified Artifact Appendices, and assign reproducibility badges.

MLSys also promotes reproducibility of experimental results and encourages code and data sharing to help the community quickly validate and compare alternative approaches. Authors of accepted MLSys'25 papers are invited to formally describe their supporting material (code, data, models, workflows, results) using the standard Artifact Appendix template and submit it to the Artifact Evaluation (AE) process. Note that this submission is voluntary and will not influence the final decision regarding the papers. The goal is to help authors validate the experimental results from their accepted papers with an independent AE Committee in a collaborative way, and to help readers find articles with available, functional, and validated artifacts. For example, the ACM Digital Library already allows one to search for papers with available artifacts and reproducible results.

You need to prepare your artifacts and Artifact Appendix using the guidelines below. You can then submit your paper with the Artifact Appendix via the dedicated MLSys AE website before March 7th, 2025. Be sure to include a link to the GitHub (or similar) repository so reviewers can access your code. Your submission will then be reviewed according to the guidelines in the Artifact Evaluation process. Please do not forget to list the hardware, software, benchmark, and data set dependencies in your artifact abstract (e.g., required GPUs, operating system, frameworks, and public data sets) - this is essential for finding appropriate evaluators!

AE is run by a separate committee whose task is to assess how well submitted artifacts support the work described in the accepted papers, based on the standard ACM Artifact Review and Badging policy. Since full validation of AI/ML experiments can be very time-consuming and may require expensive computational resources, we decided to validate only whether submitted artifacts are "available" and "functional" at MLSys'25. You still need to provide a small sample data set so that evaluators can test the functionality of your artifact. Depending on the evaluation results, camera-ready papers will include the Artifact Appendix and will receive at most two ACM stamps of approval printed on their first page: Available and Functional.
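To give a rough sense of what a "functional" check might involve, here is a minimal smoke-test sketch an evaluator could run against an artifact on its sample data set. Everything in it is hypothetical and not an MLSys requirement: the run_experiment.py entry point, the sample_data/ directory, and the results/output.json file are placeholders for whatever your artifact actually provides.

```python
#!/usr/bin/env python3
"""Hypothetical smoke test for a "functional" artifact check.

Assumes the artifact ships a small sample data set and a command-line
entry point; all names below are placeholders, not MLSys requirements.
"""
import pathlib
import subprocess
import sys

SAMPLE_DATA = pathlib.Path("sample_data")          # small data set shipped with the artifact
OUTPUT_FILE = pathlib.Path("results/output.json")  # expected result of a successful run


def main() -> int:
    if not SAMPLE_DATA.is_dir():
        print("FAIL: sample data set is missing", file=sys.stderr)
        return 1

    # Run the artifact's (hypothetical) entry point on the sample data.
    proc = subprocess.run(
        [sys.executable, "run_experiment.py", "--data", str(SAMPLE_DATA)],
        capture_output=True,
        text=True,
    )
    if proc.returncode != 0:
        print(f"FAIL: run exited with {proc.returncode}\n{proc.stderr}", file=sys.stderr)
        return 1

    if not OUTPUT_FILE.is_file():
        print("FAIL: expected output file was not produced", file=sys.stderr)
        return 1

    print("OK: artifact ran end-to-end on the sample data")
    return 0


if __name__ == "__main__":
    sys.exit(main())
```

A script along these lines, documented in the Artifact Appendix, makes it much easier for an evaluator with limited time and hardware to confirm that the artifact runs end-to-end.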

We are looking for members of the Artifact Evaluation Committee (AEC), who will contribute to the MLSys'25 Artifact Evaluation (AE) process by reviewing submitted artifacts. AEC membership is especially suitable for researchers early in their career, such as PhD students. Even as a first-year PhD student, you are welcome to join the AEC, provided you are working in a topic area covered by MLSys. For a given artifact, you will be asked to evaluate its public availability and functionality by following the instructions in its Artifact Appendix. You will be able to discuss with other AEC members and anonymously interact with the authors as necessary, for instance if you are unable to get the artifact to work as expected. Finally, you will write a review that gives constructive feedback to the authors, discuss the artifact with fellow reviewers, and help award the artifact evaluation badges. Please ensure that you have sufficient time and availability for the AEC during the period from March 7th to March 28th, 2025. If you are interested in becoming part of the AEC, please complete the online self-nomination form before March 7th.

Dates

  • Submission deadline: Mar 7, 2025 (GMT)
  • AE result notification: Mar 28, 2025

Links