MICCAI FLARE 2022 Playground

(AbdomenCT-1K: Fully Supervised Learning Benchmark)

Abdominal organ segmentation plays an important role in clinical practice, and to some extent it appears to be a solved problem because state-of-the-art methods have achieved inter-observer performance on several benchmark datasets [1-3]. However, it is unclear whether this excellent performance generalizes to more diverse datasets. Moreover, to alleviate the dependency on annotations, semi-supervised learning, weakly supervised learning, and continual learning have become active research topics, but there are still no segmentation benchmarks for these tasks.

To address these limitations, we establish four benchmarks for multi-organ segmentation: fully supervised, semi-supervised, weakly supervised, and continual learning.

This homepage hosts the fully supervised abdominal organ segmentation benchmark, for which we set up two subtasks.

  • Subtask 1: The training set is adapted from MSD Pancreas (281 cases) [4] and NIH Pancreas (80 cases) [5-7], where all 361 CT scans are from the portal phase. 
  • Subtask 2: The training set is adapted from MSD Pancreas (281 cases) [4], LiTS (40 cases) [1], and KiTS (40 cases) [3], where the cases are from different phases. 

All cases in the training set have four labels: liver (label 1), kidney (label 2), spleen (label 3), and pancreas (label 4). Both subtasks share the same testing set of 100 cases. In each subtask, the training set and the testing set have no overlap.
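
As a quick sanity check of this label convention, the following minimal Python sketch loads one training segmentation and lists the organ labels it contains (the file name is a placeholder, not the official naming):

    import numpy as np
    import SimpleITK as sitk

    # Label convention shared by both subtasks.
    LABELS = {1: "liver", 2: "kidney", 3: "spleen", 4: "pancreas"}

    # "train_0001_seg.nii.gz" is a hypothetical file name; use a file from
    # the downloaded training set.
    seg = sitk.GetArrayFromImage(sitk.ReadImage("train_0001_seg.nii.gz"))
    for value in np.unique(seg):
        if value != 0:  # label 0 is background
            print(int(value), LABELS.get(int(value), "unexpected label"))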


 News

2023.03.15: This benchmark is serving as a playground for the MICCAI FLARE23 Challenge. To obtain the entry qualification, participants should demonstrate basic segmentation skills and the ability to encapsulate their methods in Docker. Specifically, participants should 

  • Verify your grand-challenge account and click on the "Join" button.
  • Develop any segmentation method (e.g., U-Net) based on the Subtask 1 training data and encapsulate the method in Docker (a minimal inference-script sketch is given after this list).
  • Use the Docker container to predict the testing set and record 5-10 minutes of the prediction process as a video (mp4 format).

  • Submit the segmentation results here and upload your Docker image to Docker Hub. Send (1) the Docker Hub link, (2) a download link to the recorded inference mp4 video, and (3) a screenshot of your playground leaderboard results (mean DSC > 0.8) to MICCAI.FLARE@aliyun.com. Email subject: Apply FLARE Entry Number_your name_grand challenge user name.

  • After reviewing your submission, we will get back to you with an Entry Number; you can then join the FLARE23 Challenge.
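
As a rough sketch of what the Docker step involves, the script below reads CT volumes from an input folder, runs a segmentation model, and writes the predicted masks to an output folder. The folder layout, file naming, and entry-point name are assumptions for illustration, not an official FLARE specification:

    # predict.py -- hypothetical container entry point.
    from pathlib import Path

    import numpy as np
    import SimpleITK as sitk

    INPUT_DIR = Path("/workspace/inputs")    # assumed mount point for test CTs
    OUTPUT_DIR = Path("/workspace/outputs")  # assumed mount point for results

    def segment(volume: np.ndarray) -> np.ndarray:
        """Placeholder for your trained model; should return labels 0-4."""
        raise NotImplementedError

    def main() -> None:
        OUTPUT_DIR.mkdir(parents=True, exist_ok=True)
        for ct_path in sorted(INPUT_DIR.glob("*.nii.gz")):
            image = sitk.ReadImage(str(ct_path))
            mask = segment(sitk.GetArrayFromImage(image)).astype(np.uint8)
            out = sitk.GetImageFromArray(mask)
            out.CopyInformation(image)  # preserve spacing/origin/direction
            sitk.WriteImage(out, str(OUTPUT_DIR / ct_path.name))

    if __name__ == "__main__":
        main()

A container built around such a script could then be run with, for example, docker run --rm -v $PWD/inputs:/workspace/inputs -v $PWD/outputs:/workspace/outputs your-team/flare-playground python predict.py (image name and mounts again being placeholders); this is the prediction process you would record for the video.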

If you won an award in MICCAI FLARE21 or published an LNCS paper in MICCAI FLARE22, you are exempt from this step (Step 0) and can go directly to Step 1. Your Entry Number is the certificate number on your award certificate or the link to your paper in LNCS.

If you ranked in the top 30% in another Docker-based challenge during MICCAI 2021-22, you can also be exempted from Step 0. Please send supporting materials to MICCAI.FLARE@aliyun.com, and we will get back to you with an Entry Number.


 How to Participate

  1. Click on the Join button. Please make sure that your grand-challenge profile is complete (e.g., Name, Institution, Department, and Location).
  2. Download the training and testing data on the Dataset page.
  3. Develop your solution and make a complete submission (including a zip file of segmentation results and a short paper); a packaging sketch follows this list. 
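
For the zip file of segmentation results, a minimal packaging sketch in Python, assuming one .nii.gz mask per test case in a local predictions folder (the folder and archive names are placeholders):

    import zipfile
    from pathlib import Path

    # Package all predicted masks into a flat zip archive for submission.
    pred_dir = Path("predictions")  # hypothetical local results folder
    with zipfile.ZipFile("submission.zip", "w", zipfile.ZIP_DEFLATED) as zf:
        for f in sorted(pred_dir.glob("*.nii.gz")):
            zf.write(f, arcname=f.name)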

 Evaluation Metrics

  1. Dice Similarity Coefficient (DSC)
  2. Normalized Surface Distance (NSD)

The implementation is available here.
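
As an unofficial illustration of what the two metrics measure, here is a simplified voxel-based sketch in NumPy/SciPy. The official NSD is computed on surface elements rather than boundary voxels, and the 1 mm tolerance below is an assumption:

    import numpy as np
    from scipy import ndimage

    def dsc(pred: np.ndarray, gt: np.ndarray) -> float:
        """Dice Similarity Coefficient between two binary masks."""
        pred, gt = pred.astype(bool), gt.astype(bool)
        denom = pred.sum() + gt.sum()
        return 2.0 * np.logical_and(pred, gt).sum() / denom if denom else 1.0

    def _surface(mask: np.ndarray) -> np.ndarray:
        """Boundary voxels: foreground voxels with a background neighbor."""
        return mask & ~ndimage.binary_erosion(mask)

    def nsd(pred, gt, spacing, tol_mm=1.0):
        """Voxel-based NSD: fraction of boundary voxels of each mask that
        lie within tol_mm of the other mask's boundary."""
        pred, gt = pred.astype(bool), gt.astype(bool)
        sp, sg = _surface(pred), _surface(gt)
        if sp.sum() == 0 or sg.sum() == 0:
            return 1.0 if sp.sum() == sg.sum() else 0.0
        # Distance (in mm) from every voxel to the nearest boundary voxel.
        d_to_gt = ndimage.distance_transform_edt(~sg, sampling=spacing)
        d_to_pred = ndimage.distance_transform_edt(~sp, sampling=spacing)
        close = (d_to_gt[sp] <= tol_mm).sum() + (d_to_pred[sg] <= tol_mm).sum()
        return float(close) / (sp.sum() + sg.sum())

Both metrics are computed per organ by binarizing the multi-label masks (e.g., pred == 3 for the spleen) before averaging.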


 Rules

  1. All participants should register for this challenge with their real names, affiliations, and affiliation e-mails. Incomplete and redundant registrations will be ignored without notice.
  2. For a fair comparison, participants are not allowed to use any additional data or pre-trained models.
  3. Participants are not allowed to register multiple teams and accounts.

References

[1] P. Bilic, P. F. Christ, E. Vorontsov, G. Chlebus, H. Chen, Q. Dou, C.-W. Fu, X. Han, P.-A. Heng, J. Hesser et al., "The liver tumor segmentation benchmark (LiTS)," arXiv preprint arXiv:1901.04056, 2019. 

[2] F. Isensee, P. F. Jaeger, S. A. A. Kohl, J. Petersen, and K. H. Maier-Hein, "nnU-Net: a self-configuring method for deep learning-based biomedical image segmentation," Nature Methods, vol. 18, no. 2, pp. 203–211, 2021. 

[3] N. Heller, F. Isensee, K. H. Maier-Hein, X. Hou, C. Xie, F. Li, Y. Nan, G. Mu, Z. Lin, M. Han et al., "The state of the art in kidney and kidney tumor segmentation in contrast-enhanced CT imaging: Results of the KiTS19 challenge," Medical Image Analysis, vol. 67, p. 101821, 2021.

[4] A. L. Simpson, M. Antonelli, S. Bakas, M. Bilello, K. Farahani, B. Van Ginneken, A. Kopp-Schneider, B. A. Landman, G. Litjens, B. Menze et al., “A large annotated medical image dataset for the development and evaluation of segmentation algorithms,” arXiv preprint arXiv:1902.09063, 2019.

[5] H. R. Roth, A. Farag, E. B. Turkbey, L. Lu, J. Liu, and R. M. Summers, “Data from pancreas-CT,” The Cancer Imaging Archive, 2016.

[6] H. R. Roth, L. Lu, A. Farag, H.-C. Shin, J. Liu, E. B. Turkbey, and R. M. Summers, “Deeporgan: Multi-level deep convolutional networks for automated pancreas segmentation,” in International Conference on Medical Image Computing and Computer-assisted Intervention, 2015, pp. 556–564.

[7] K. Clark, B. Vendt, K. Smith, J. Freymann, J. Kirby, P. Koppel, S. Moore, S. Phillips, D. Maffitt, M. Pringle et al., "The Cancer Imaging Archive (TCIA): maintaining and operating a public information repository," Journal of Digital Imaging, vol. 26, no. 6, pp. 1045–1057, 2013.