VisioMel Challenge: Predicting Melanoma Relapse

Use digitized microscopic slides to predict the likelihood of melanoma relapse within the next five years. #health

€25,000 in prizes
May 2023
541 joined

Code submission format

Rather than submitting your predicted labels, you'll package everything needed to perform inference and submit that for containerized execution. The runtime repository contains the complete specification of the execution environment.

All submissions for inference must run on Python 3.10. No other languages or versions of Python are supported.

What to submit


Your final submission should be a zip archive (for example, submission.zip). The root level of the archive must contain a main.py that performs inference on the test images and writes the predictions to a file named submission.csv in the same directory as main.py. You can see an example of this submission setup in the runtime repository.

Here's an example:

submission root directory
├── assets          # Example of configuration and weights for the trained model
│   ├── model.json
│   └── weights.h5
└── main.py         # Inference script

Note: main.py must sit at the root level of the zip archive, not inside a subfolder. When you unzip your submission, main.py should appear directly in the directory where you unzipped it.
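To make the expected structure concrete, here is a minimal sketch of what a main.py could look like. The load_model and predict_slide functions are hypothetical placeholders for your own code, the assets folder simply mirrors the example above, and the data folder it reads from is described in the next section; nothing is prescribed beyond reading inputs from data and writing submission.csv next to main.py.

# main.py -- minimal inference skeleton (a sketch, not a reference implementation).
# load_model and predict_slide are hypothetical stand-ins for your own code.
from pathlib import Path

import pandas as pd

DATA_DIR = Path("data")
ASSETS_DIR = Path("assets")


def load_model(assets_dir: Path):
    """Load your trained model from files shipped inside the submission."""
    raise NotImplementedError  # replace with your framework's loading code


def predict_slide(model, slide_path: Path) -> float:
    """Return a relapse probability in [0.0, 1.0] for one pyramidal tif."""
    raise NotImplementedError  # replace with your inference code


def main():
    model = load_model(ASSETS_DIR)

    # submission_format.csv fixes the set and order of filenames to predict.
    submission = pd.read_csv(DATA_DIR / "submission_format.csv", index_col="filename")

    for filename in submission.index:
        submission.loc[filename, "relapse"] = predict_slide(model, DATA_DIR / filename)

    # Write predictions next to main.py, matching the submission format exactly.
    submission.to_csv("submission.csv")


if __name__ == "__main__":
    main()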

During code execution, your submission will be unzipped and run in our cloud compute cluster. The script will have access to the following directory structure:

submission root directory
├── data
│   ├── submission_format.csv
│   ├── test_metadata.csv
│   ├── ...
│   ├── <all of the test images as pyramidal tifs>
│   ├── ...
│   ├── inl4skus.tif
│   └── aeg15vcz.tif
├── main.py
└── <additional assets included in the submission archive>
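Because the test images are pyramidal tifs, a common pattern is to read a low-resolution page first rather than decoding the full gigapixel image. The sketch below uses pyvips; check the runtime repository to confirm which imaging libraries are actually installed, and note that the filename and page index here are purely illustrative.

# Sketch: open a low-resolution page of a pyramidal tif instead of the full image.
# Assumes pyvips is available in the runtime image -- verify this against the
# runtime repository before relying on it.
import numpy as np
import pyvips

slide_path = "data/example_slide.tif"                    # hypothetical filename
thumb = pyvips.Image.new_from_file(slide_path, page=5)   # higher pages are smaller levels
pixels = np.ndarray(                                     # standard pyvips -> numpy conversion
    buffer=thumb.write_to_memory(),                      # assumes 8-bit bands
    dtype=np.uint8,
    shape=(thumb.height, thumb.width, thumb.bands),
)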

Submission checklist

  • Submission includes main.py at the root level of the zip archive. Additional files containing supporting code may be included (see the assets folder in the example).
  • Submission contains any model weights that need to be loaded. There will be no network access.
  • Submission does not print or log any information about the test metadata or test images, including specific data values and/or aggregations such as sums, means, or counts. Doing so may be grounds for disqualification.
  • Script loads the data for inference from the data folder in the root directory. All images for inference are in the root level of the data folder. This folder is read-only.
  • Script writes submission.csv to the root directory when inference is finished. This file must match the submission format exactly (a quick self-check is sketched after this list).
  • Use the versions of submission_format.csv and test_metadata.csv from the data folder provided to the container. Do not include these files with your submission, and do not read them from other locations. We need to be able to run your submission on other images by replacing these metadata files. See the problem description for details on the test set metadata.
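Before zipping things up, a self-check along these lines can catch format problems early. It is only a sketch and assumes you have a locally staged data folder containing submission_format.csv.

# Sketch: confirm submission.csv lines up with submission_format.csv.
import pandas as pd

fmt = pd.read_csv("data/submission_format.csv", index_col="filename")
sub = pd.read_csv("submission.csv", index_col="filename")

assert list(sub.columns) == list(fmt.columns), "column names must match"
assert list(sub.index) == list(fmt.index), "filenames must match and be in the same order"
assert sub["relapse"].notna().all(), "every row needs a prediction"
assert sub["relapse"].between(0.0, 1.0).all(), "predictions must be in [0.0, 1.0]"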

Testing your submission locally

If you'd like to replicate how your submission will run online, you can test it locally before submitting. This is a great way to work out bugs and make sure your model runs quickly enough.
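The runtime repository documents the full containerized test workflow; as a rougher sanity check you can also mirror the execution layout yourself. The sketch below assumes you have unzipped your submission into the current directory and staged a data folder with a few training slides plus the metadata CSVs.

# Sketch: dry-run main.py against a locally staged data/ folder. This approximates,
# but does not replace, the containerized test described in the runtime repository.
import pathlib
import subprocess

assert pathlib.Path("main.py").exists(), "main.py must sit at the root of the submission"
assert pathlib.Path("data/submission_format.csv").exists(), "stage a data/ folder first"

subprocess.run(["python", "main.py"], check=True)
print(pathlib.Path("submission.csv").read_text().splitlines()[:5])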

Submission format

Your predictions must be in a CSV file with an index column called filename and a column called relapse. The relapse values must be floating point numbers between 0.0 and 1.0, representing the probability of relapse (1.0 means 100% likelihood of relapse). Your submission.csv must match this format exactly.

For example, if your predictions for the first five rows look like this:

filename       relapse
1u4lhlqb.tif   0.91
rqumqnfp.tif   0.02
bu5xt1xm.tif   0.55
dibvu7wk.tif   0.34
qsza4coh.tif   0.10

Your submission.csv file would look like:

filename,relapse
1u4lhlqb.tif,0.91
rqumqnfp.tif,0.02
bu5xt1xm.tif,0.55
dibvu7wk.tif,0.34
qsza4coh.tif,0.10
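If you assemble your predictions in pandas, writing the DataFrame with filename as the index produces exactly this layout. The values below are just the ones from the example above.

# Sketch: write predictions in the required layout (filename index, relapse column).
import pandas as pd

preds = pd.DataFrame(
    {"relapse": [0.91, 0.02, 0.55, 0.34, 0.10]},
    index=pd.Index(
        ["1u4lhlqb.tif", "rqumqnfp.tif", "bu5xt1xm.tif", "dibvu7wk.tif", "qsza4coh.tif"],
        name="filename",
    ),
)
preds.to_csv("submission.csv")  # produces the comma-separated file shown above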

Runtime


Your code is executed within a container that is defined in our runtime repository. The limits are as follows:

  • Your submission must be written in Python and run Python 3.10 using the packages defined in the runtime repository.
  • The submission must complete execution in 8 hours or less. We expect most submissions to complete much more quickly; computation time per participant will be monitored to prevent abuse. If you find yourself requiring more time than this limit allows, open a GitHub issue in the repository to let us know.
  • The container runtime has access to a single GPU. All of your code should run within the GPU environments in the container, even if actual computation happens on the CPU. (CPU environments are provided within the container for local debugging only.)
  • The container has access to 6 vCPUs powered by an Intel Xeon E5-2690 chip and 56GB RAM.
  • The container has 1 Tesla K80 GPU with 12GB of memory.
  • The container will not have network access. All necessary files (code and model assets) must be included in your submission.
  • The container execution will not have root access to the filesystem.

The GPUs for executing your inference code are a shared resource across competitors, so we ask that you be conscientious in your use of them. Please add progress information to your logs and cancel jobs that will run longer than the time limit. Canceled jobs won't count against your submission limit, and canceling frees up resources to score submissions that will complete on time.
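One lightweight pattern is to log elapsed time and a simple loop counter, and nothing derived from the slide contents or metadata values, per the checklist above. The helper below is a sketch.

# Sketch: coarse progress logging -- a loop counter and wall-clock time only,
# nothing derived from the slides or their metadata.
import sys
import time


def log_progress(done: int, total: int, start_time: float) -> None:
    elapsed = time.time() - start_time
    print(f"processed {done}/{total} slides in {elapsed:.0f}s", file=sys.stderr, flush=True)

# Example usage inside your inference loop:
#     start = time.time()
#     for i, filename in enumerate(filenames, start=1):
#         ...
#         if i % 10 == 0 or i == len(filenames):
#             log_progress(i, len(filenames), start)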

As discussed, one of the hardest challenges when working with WSI data is that the data is extremely large and the relevant areas are relatively small. One of your goals should be to make model inference execute as quickly and efficiently as possible. Progress here will be an enormous contribution to the use of machine learning in pathology.

Requesting package installations


Since the Docker container will not have network access, all packages must be pre-installed. We are happy to add packages as long as they do not conflict and can build successfully. Packages must be available through conda for Python 3.10. To request that an additional package be added to the Docker image, follow the instructions in the runtime repository.

Happy building! Once again, if you have any questions or issues, you can always head on over to the user forum!