Big Data DR Challenge

The International AI Diabetic Retinopathy (DR) Algorithm Challenge



In the face of the rapidly growing burden of diabetes worldwide, the need for automated techniques for the assessment of diabetic retinopathy (DR) has long been recognized. The recent emergence of deep learning-based artificial intelligence for DR grading builds on advances in artificial neural networks that permit improved accuracy of disease classification (sensitivity and specificity >90%) from raw image data. These systems offer great promise to increase the efficiency, accessibility and affordability of DR screening programs. The goal of this competition is to encourage multi-disciplinary collaboration and to promote artificial intelligence as a novel application in telemedicine screening for eye diseases among the medical and technology communities.



Five hundred (500) high-resolution retinal images, taken under a variety of imaging conditions (mydriatic vs. non-mydriatic), with different camera models, and from patients of different ethnicities, will constitute the dataset for this challenge. To minimize annotation errors, each retinal image has been graded independently by two ophthalmologists, with any disagreements adjudicated by a third ophthalmologist. A retinopathy severity score was assigned to each image according to the NHS diabetic eye screening guidelines: R0 (no DR), R1 (background), R2 (pre-proliferative), R3 (proliferative), M0 (no visible maculopathy) and M1 (maculopathy). Approximately 30% of the images within the dataset are classified as referable DR. To simulate real-world screening conditions, images may contain artifacts associated with small pupils, media opacities, image contrast/focus issues, or other operator-related problems. Furthermore, to ensure the dataset is sufficiently challenging, images displaying signs of retinal pathology other than DR (e.g. branch retinal vein occlusion, BRVO) are also included.

The Challenge

Your task is to create an algorithm capable of assigning a score based on a binary classification of referable DR vs. non-referable DR, whereby referable DR is defined as ≥R2 (i.e. moderate non-proliferative DR or worse) and/or diabetic macular edema (DME).
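The referability rule above can be sketched as a simple function over the NHS grade pairs listed earlier. This is an illustration only, under the assumption that DME is indicated by an M1 maculopathy grade:

```python
def is_referable(r_grade: str, m_grade: str) -> bool:
    """Return True if a (retinopathy, maculopathy) grade pair counts as referable DR.

    r_grade: one of "R0", "R1", "R2", "R3" (NHS retinopathy severity)
    m_grade: one of "M0", "M1" (NHS maculopathy grade)

    Referable DR = R2 or worse, and/or maculopathy (assumed proxy for DME).
    """
    return int(r_grade[1]) >= 2 or m_grade == "M1"

print(is_referable("R1", "M0"))  # background DR only -> False
print(is_referable("R2", "M0"))  # pre-proliferative -> True
print(is_referable("R0", "M1"))  # maculopathy alone -> True
```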


The challenge will run during the Opening Session of APTOS 2019 in Chennai, India.


On the day of the Challenge, the algorithm owner will be responsible for uploading their executable algorithm file and team name to a virtual private cloud. One hour prior to the challenge, an independent representative who holds the image dataset will upload the images to a single folder on the virtual private cloud. The algorithm owner will be unable to see the 500-image database, nor will the challenge host be able to access the pre-trained algorithms that are submitted. During the Opening Session, each algorithm will run concurrently for a period of ONE (1) minute. The session moderator will be responsible for starting and ending the challenge, and a countdown clock will be viewable to conference attendees.


The evaluation metric used for scoring and ranking submissions will be overall accuracy, defined as:

Accuracy = (True Positives + True Negatives) / (True Positives + True Negatives + False Positives + False Negatives)

Images labelled as ‘Ungradable’ by an automated algorithm will be treated as ‘positive’ for referable DR. Only algorithms that grade ≥100 images in the allocated time will be valid. A leaderboard presenting the number of images completed, the proportion of true positive and true negative images graded, and the accuracy of each algorithm will be displayed to conference attendees; however, each team’s name will be coded (e.g. Team 1, Team 2, etc.) to remain anonymous to the public. Only the name of the challenge winner will be displayed on the leaderboard. For each algorithm, the misclassified retinal images will be grouped and displayed in a limited view, but will not be downloadable by competitors.
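The scoring rules above can be sketched as follows. This is an illustrative example, not the official scoring code; it assumes predictions arrive as the strings "referable", "non-referable", or "ungradable", with "ungradable" counted as positive per the rule above:

```python
def accuracy(predictions, labels):
    """Overall accuracy: (TP + TN) / (TP + TN + FP + FN).

    predictions: list of "referable", "non-referable", or "ungradable"
    labels: list of booleans (True = referable DR, per the reference grading)

    Per the challenge rules, "ungradable" predictions are counted as
    positive for referable DR. (A valid submission must also grade at
    least 100 images within the one-minute window.)
    """
    pred_positive = [p in ("referable", "ungradable") for p in predictions]
    correct = sum(p == y for p, y in zip(pred_positive, labels))
    return correct / len(labels)

preds = ["referable", "non-referable", "ungradable", "non-referable"]
truth = [True, False, False, True]
print(accuracy(preds, truth))  # 2 of 4 correct -> 0.5
```

Note that an "ungradable" call on a truly non-referable image counts as a false positive, so over-using the ungradable label lowers accuracy on this metric.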

Rules & Regulations
  1. Competitors can be a single or multiple investigator(s), a company or hospital.
  2. Each competitor can only submit one entry.
  3. The providers of the image database are not eligible to participate in the Challenge.
  4. APTOS Council Members may not compete in the Challenge, to avoid conflicts of interest; they may, however, run their algorithms against the dataset for testing purposes.



US$10,000 will be awarded to the winning entry. In the case of a tie between two or more valid and identically ranked submissions, the team whose algorithm graded the most images in the one-minute period will be declared the winner.


The Asia Pacific Tele-Ophthalmology Society would like to thank Aravind Eye Care System for providing images with multiple labels.


Attention Developers! Please stay tuned for details of the APTOS Hackathon!