2013 Challenge Overview

Overview

The 2013-2014 Challenge is on sub-Nyquist reconstruction of dynamic imaging.

The MRI field has seen an explosive growth of accelerated imaging methods that can reconstruct dynamic images using less data than specified by the Nyquist criterion. These "sub-Nyquist" methods include sliding window reconstruction, parallel imaging, k-t acceleration, and compressed sensing, among many other approaches.

We believe that the time is ripe to push the envelope of sub-Nyquist dynamic imaging by establishing a test bed for an open competition based on common datasets and clear quality criteria. The methods developed from this Challenge have the potential for a substantial real-world impact on cardiovascular imaging in the clinic.

Our aim for this Challenge is to engage the MRI community in developing the best sub-Nyquist reconstruction algorithm that:

  • Reconstructs high-quality dynamic cardiac images from arbitrary sampling patterns
  • Is robust across wide ranges of cardiac imaging examples without the need for manual adjustment on a case-by-case basis

We provide you with real-life cases of dynamic cardiac MRI from different anatomical views. You apply your technique to these cases and submit reconstructed images, which will be scored by an automated algorithm for accuracy. You can resubmit results at most once per hour.

Phase I submissions will close on March 15th at 11:00 pm UTC (2013-03-15 23:00 UTC).

In the first quarter of 2014, we will invite the top five (5) teams to enter Phase II of the competition, where they will run through an expanded set of cases without any case-by-case adjustment. Teams will have two weeks to package and submit code for judging. The resulting dynamic image series will be ranked by several blinded expert radiologists to determine a winner.

The winner will be announced at the ISMRM annual meeting in Milan. To earn the bragging rights, the winning team must disclose its algorithm at the end of the Challenge.


Who can participate?

Anyone can join by forming a team of one to six persons. However, each team must have at least one ISMRM member in good standing.

Participants may choose to represent themselves individually, or be affiliated with an academic institution or a company. It is the sole responsibility of participants to ensure that their participation is permitted by their affiliations. We reserve the right to disqualify participants whose participation violates the policies of their affiliations.

Members of the organizing committee with access to reference data are ineligible to enter the Challenge as individuals or part of teams.


Registration

To participate in the Challenge, teams must register with accurate information: a team name, a team leader, all other team members, and, for each participant, full name, e-mail address, ISMRM membership status, and, optionally, affiliation.

We recommend that participating teams choose team names that serve as aliases and cannot easily be traced to the participants' identities. This allows the leaderboard to be used for open competition while respecting the privacy of participants. (Added July 7, 2013)

Each individual may belong to more than one team, but two teams may not have the same team leader or identical membership. Only the identities of the winning team are disclosed at the end of the Challenge.


Data Description and Format


Scoring

In Phase I of the Challenge, submissions will be scored automatically. We recognize that current imaging metrics may not correspond exactly to diagnostic image quality. However, a wide range of candidate metrics was evaluated for this specific application, resulting in the selection of a weighted Besov norm for automatic image evaluation. Reconstructions are scored according to the formula 10000*(1-RE), where RE is the reconstruction error relative to the reference images as measured by the weighted Besov norm. Thus, a perfect reconstruction receives a score of 10,000, and any reconstruction with a relative error greater than or equal to 1 receives a score of 0 (submitting an all-zero image will return a score of 0).
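The scoring formula above can be sketched as a one-line function. This is an illustrative sketch only (the function name is our own, not part of the Challenge's software); it assumes scores are floored at 0, consistent with the rule that any relative error of 1 or more scores 0.

```python
def challenge_score(relative_error):
    """Map a relative reconstruction error RE to the score 10000*(1-RE),
    floored at 0 so that RE >= 1 (e.g. an all-zero submission) scores 0.
    Illustrative sketch; not the official scoring code."""
    return max(0.0, 10000.0 * (1.0 - relative_error))
```

For example, a perfect reconstruction (RE = 0) scores 10,000, while a reconstruction with RE = 0.25 scores 7,500.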

The weighted Besov norm was chosen because it is known to imitate, to a certain degree, the human visual system (HVS) in evaluating the quality of edges and texture in an image (1), whereas the more standard L2 norm (RMS error) works well for evaluating smooth images (2). The Besov norm computes the error in k-space for each frequency band in terms of a weighted L1 norm (chosen to minimize the impact of a small number of outliers), with the weighting function determined by the temporal standard deviation of the reference image series in the heart region. Summation across frequency bands is done using the L4 norm with a penalty term of 2^{(3/4)j} for the j-th frequency band. The L4 norm was chosen for this summation because it was shown to be a close match to the HVS in psychophysical experiments (3).
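The band-wise structure described above can be sketched as follows. This is a hypothetical illustration only: the actual frequency-band decomposition, weighting maps, and normalization used by the organizers are not specified here, and the function and argument names are our own.

```python
import numpy as np

def besov_style_error(err_bands, weights, band_indices):
    """Hypothetical sketch of the scoring norm's structure.
    err_bands    : list of k-space error arrays, one per frequency band
    weights      : matching list of weighting maps (in the Challenge these
                   derive from the temporal std. dev. of the reference
                   series in the heart region; here they are just inputs)
    band_indices : the band index j for each entry
    Per-band error is a weighted L1 norm; bands are combined with an
    L4 norm, each band scaled by the penalty term 2^{(3/4) j}."""
    band_errs = []
    for err, w, j in zip(err_bands, weights, band_indices):
        l1 = np.sum(np.abs(w * err))               # weighted L1 within the band
        band_errs.append((2.0 ** (0.75 * j)) * l1)  # per-band penalty 2^{(3/4)j}
    return np.sum(np.array(band_errs) ** 4) ** 0.25  # L4 across bands
```

The L1-within-band / L4-across-bands split mirrors the rationale in the text: L1 limits the influence of a few outlier k-space samples within a band, while the L4 combination across bands weights the worst bands more heavily, as the cited psychophysical results suggest the HVS does.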

In Phase II, images will be evaluated by radiologists to provide a clinically relevant ranking. (Added Oct 18, 2013)

Literature Citations:

  1. R. A. DeVore, B. Jawerth, and B. J. Lucier, "Image compression through wavelet transform coding," IEEE Trans. Inf. Theory, vol. 38, pp. 719-746, Mar. 1992.
  2. M. P. Eckert and A. P. Bradley, "Perceptual quality metrics applied to still image compression," Signal Process., vol. 70, no. 3, pp. 177-200, Nov. 1998.
  3. A. B. Watson, "Summation of grating patches indicates many types of detectors at one retinal location," Vision Res., vol. 22, pp. 17-25, 1982.

2013 Challenge Organizing Committee

Special thanks to
  • Michael Hansen
  • Peter Kellman
  • Sebastian Kozerke