• Algorithm outputs will be submitted to our challenge website. (If a team uses pretrained models or other public datasets, it will be required to submit an additional set of algorithm outputs produced without pretrained models or public datasets.)

  • Each team is required to submit a Docker container to the organizing team for reproducing the results. Each participating team will also need to submit a solution paper describing the algorithm to the organizers.

  • To be eligible for awards, top-ranked teams are required to make their code publicly available.

For algorithm design: the input can be skull-stripped ADC maps, Z_ADC maps, or a combination of the two; the output is the lesion prediction.
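
The choice of inputs is left to each team. As a minimal, non-authoritative sketch (assuming SimpleITK is available; the file names are hypothetical placeholders), the snippet below reads both maps for one case and stacks them into a two-channel array that a multi-channel model could consume:

```python
import numpy as np
import SimpleITK as sitk

def load_case(adc_path: str, zadc_path: str) -> np.ndarray:
    """Read the ADC and Z_ADC maps and stack them channel-first."""
    adc = sitk.GetArrayFromImage(sitk.ReadImage(adc_path)).astype(np.float32)
    zadc = sitk.GetArrayFromImage(sitk.ReadImage(zadc_path)).astype(np.float32)
    return np.stack([adc, zadc], axis=0)  # shape: (2, D, H, W)

# Hypothetical file names for one case.
inputs = load_case("case001_adc.mha", "case001_zadc.mha")
print(inputs.shape)
```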

GitHub repositories for BONBID-HIE 2023 & 2024:

https://github.com/baorina/BONBID-HIE-MICCAI2023/tree/main

https://github.com/baorina/BONBID-HIE-MICCAI-MICCAI2024/tree/main

Lesion Segmentation Docker: demo algorithm Docker

Outcome Prediction Docker: demo algorithm Docker

Eval Stage

Participating teams can submit Docker containers of their algorithms for a sanity check.

For each case, the output prediction should be stored in the same format as the files in 3LABLE and saved as *.mha.
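
A minimal sketch of writing one case's prediction in MetaImage (*.mha) format, assuming SimpleITK and a NumPy prediction array; the reference path is a hypothetical placeholder used to copy the input case's geometry:

```python
import numpy as np
import SimpleITK as sitk

def save_prediction(pred: np.ndarray, reference_path: str, out_path: str) -> None:
    """Write a binary lesion prediction as *.mha with the input case's geometry."""
    ref = sitk.ReadImage(reference_path)
    out = sitk.GetImageFromArray(pred.astype(np.uint8))
    out.CopyInformation(ref)        # copy spacing, origin and direction (sizes must match)
    sitk.WriteImage(out, out_path)  # the .mha extension selects the MetaImage writer

# Hypothetical usage: 'pred' must match the reference image's voxel grid.
# save_prediction(pred, "case001_adc.mha", "predictions/case001.mha")
```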

When submitting outputs, please compress the predictions for all cases into a single zip file, for example, test.zip.
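
A minimal packaging sketch, assuming the per-case *.mha predictions sit in a local predictions/ folder (a hypothetical name) and are expected at the archive root:

```python
import zipfile
from pathlib import Path

def zip_predictions(pred_dir: str = "predictions", out_zip: str = "test.zip") -> None:
    """Collect every *.mha prediction into one zip archive, files at the root."""
    with zipfile.ZipFile(out_zip, "w", zipfile.ZIP_DEFLATED) as zf:
        for path in sorted(Path(pred_dir).glob("*.mha")):
            zf.write(path, arcname=path.name)

zip_predictions()
```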

This stage is designed only as a sanity check of algorithm Dockers; performance in this stage is not used for ranking.

Test Stage

Participating teams can submit Docker containers of their algorithms to be evaluated on the held-out test cases.

The test set is hidden on the server, so participating teams are required to submit algorithm Dockers. The final ranking is based on performance in this stage.