Digital Pathology: Segmentation of Nuclei in Images

Organized by cpm.organizing.committee

First phase

Training
Start: June 15, 2018, 11:59 p.m. UTC

Competition Ends
Aug. 17, 2018, 11:59 p.m. UTC

Overview

Grading and diagnosis of tumors in cancer patients have traditionally been done by expert pathologists examining tissue specimens under a powerful microscope. While this process continues to be widely applied in clinical settings, it is not scalable to translational and clinical research studies involving hundreds or thousands of tissue specimens. State-of-the-art digitizing microscopy instruments are capable of rapidly capturing high-resolution images of whole slide tissue specimens. Computer-aided segmentation and classification has the potential to improve the tumor diagnosis and grading process, as well as to enable quantitative studies of the mechanisms underlying disease onset and progression.

The objective of this challenge is to evaluate and compare segmentation algorithms and to encourage the biomedical imaging community to design and implement more accurate and efficient algorithms. The challenge will evaluate the performance of algorithms for detection and segmentation of nuclei in a tissue image. Participants are asked to detect and segment all the nuclear material in a given set of image tiles extracted from whole slide tissue images.

This challenge uses image tiles from whole slide tissue images to reduce computational and memory requirements. The image tiles are rectangular regions extracted from a set of Glioblastoma and Lower Grade Glioma whole slide tissue images. Nuclei in each image tile in the training set have been manually segmented. Note that the tiles are not of the same size.

Evaluation

At the end of the test phase, each participant is required to submit a zip file containing mask files. Each participant may submit up to 5 entries; the entry with the highest score will be used as that participant's final score. The mask files are text files with the following format:

width height     // N M
pixel_label_id  // pixel (0,0)
pixel_label_id  // pixel (1,0)
pixel_label_id  // pixel (2,0)
…
pixel_label_id  // pixel (N-1,M-1)

For example, assume two nuclei were segmented in a tile of 5x4 pixels (5 wide, 4 tall):

00000
01100
11200
02220

The mask file would have the following content:

5 4
0
0
0
0
0
0
1
1
0
0
1
1
2
0
0
0
2
2
2
0

The filename of each mask file must be the same as the image file prefix followed by a “_mask” suffix. For example, the mask file for image01.png will be image01_mask.txt.
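The mask format above is simple to generate programmatically. The following is a minimal sketch in Python of how a labeled tile could be written out; the function name `write_mask` and the use of a NumPy array are illustrative choices, not part of the challenge specification:

```python
import numpy as np

def write_mask(labels, path):
    """Write a label mask in the challenge's text format:
    first line is 'width height', followed by one label per
    line in row-major order (pixel (0,0), (1,0), ..., (N-1,M-1))."""
    height, width = labels.shape
    with open(path, "w") as f:
        f.write(f"{width} {height}\n")
        for row in labels:        # rows from top to bottom
            for value in row:     # pixels from left to right
                f.write(f"{int(value)}\n")

# The 5x4 example tile from the text: 0 is background,
# 1 and 2 are the two segmented nuclei.
tile = np.array([
    [0, 0, 0, 0, 0],
    [0, 1, 1, 0, 0],
    [1, 1, 2, 0, 0],
    [0, 2, 2, 2, 0],
])
write_mask(tile, "image01_mask.txt")
```

Running this reproduces the example mask file shown above, with the header line "5 4" followed by the twenty pixel labels.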

The scoring for this sub-challenge will be done using two variants of the Dice coefficient:

1. Traditional Dice coefficient (DICE_1), to measure the overall overlap between the reference/human segmentation and the participant segmentation.

2. An "Ensemble Dice" (DICE_2), to capture mismatches in how the segmented region is split into individual nuclei, even when the overall region is very similar.

The two Dice coefficients will be computed for each image tile in the test dataset. The score for an image tile is the average of the two Dice coefficients, and the score for the entire test dataset is the average of the scores for the image tiles.
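The traditional Dice coefficient (DICE_1) is 2|A∩B| / (|A| + |B|) over the binarized nuclear masks. A short Python sketch, assuming any nonzero label counts as nuclear material and that two empty masks score 1.0 (the Ensemble Dice definition is not fully specified here, so only DICE_1 is shown):

```python
import numpy as np

def dice_1(ref, pred):
    """Traditional Dice: 2*|A & B| / (|A| + |B|), computed on
    binarized masks where any nonzero label is nuclear material."""
    a = ref > 0
    b = pred > 0
    denom = a.sum() + b.sum()
    if denom == 0:
        return 1.0  # both masks empty: treat as perfect agreement
    return 2.0 * np.logical_and(a, b).sum() / denom

# Tiny illustrative masks: 3 foreground pixels each, 2 in common.
ref  = np.array([[0, 1, 1], [0, 1, 0]])
pred = np.array([[0, 1, 0], [0, 1, 2]])
print(dice_1(ref, pred))  # 2*2 / (3+3) = 0.666...
```

Note that because the masks are binarized first, DICE_1 ignores how the foreground is split into individual nuclei; that is exactly the gap DICE_2 is meant to cover.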

Terms and Conditions

By participating in this challenge, each participant agrees to

1. Submit an extended abstract (max 8 pages) before the end of the test phase describing their algorithm. Submission instructions are available in the "Abstract Submissions" section in the "Learn the Details" tab. Upload your extended abstract/short paper to the dropbox folder linked in the Abstract Submissions section.

2. Present their algorithm and challenge results at the challenge meeting at MICCAI 2018, if the participant's entry scores in the top three among all the participants.

Based on agreement with the organizers of the BrainLes workshop, we plan to include abstract submissions to this challenge in the BrainLes collection of the Springer LNCS series.

Short Paper submission deadline (Aug 16).

Participants will have to evaluate their methods on the training and validation datasets and submit a short paper (<8 LNCS pages) describing their method and results to a dropbox folder shared with the organizers. This unified scheme should allow for appropriate preliminary comparisons and the creation of the pre-conference proceedings. Participants who wish to submit a significantly longer version to the MICCAI 2018 BrainLes Workshop (http://www.brainlesion-workshop.org/), of which BraTS'18 will be a part at MICCAI in Granada, Spain, can submit that longer manuscript instead. BraTS papers will be part of the BrainLes workshop proceedings distributed by LNCS.

Post-conference LNCS paper (Nov 1).

Authors of participating methods will be invited to extend their papers to 12 pages for inclusion in the LNCS proceedings of the BrainLes Workshop.

Submit your paper according to the deadline through the CMT submission system (https://cmt3.research.microsoft.com/BrainLes2018/) of the BrainLes workshop (http://www.brainlesion-workshop.org/).

Papers will initially be submitted as 8-page manuscripts that will be peer-reviewed; after the workshop, we will ask the authors to extend their manuscripts (including figures and tables) up to 12 pages without additional peer review. We also allow the submission of extended papers FROM THE CHALLENGES as post-proceedings, which are peer-reviewed. The format of the workshop is mainly based on double-blind peer-reviewed papers. We limit manuscripts to 8 pages to be in line with the main conference. Please use the LNCS LaTeX template (ftp://ftp.springer.de/pub/tex/latex/llncs/latex2e/llncs2e.zip).

No guidelines on the dataset are given, so as to give authors the freedom to report results on their current work. However, authors can also use the available data from previous and current BraTS challenges. Due to time constraints, only the top papers can have oral presentations; the remaining papers will be presented during the workshop's poster session.

Plans for dissemination:

Extended versions of all accepted papers will be published as LNCS proceedings by Springer-Verlag (http://www.springer.com/lncs).


Training

Start: June 15, 2018, 11:59 p.m.

Test

Start: July 31, 2018, 11:59 p.m.
