The National Institutes of Health’s Clinical Centre has made a large-scale dataset of CT images publicly available to help the scientific community improve detection accuracy of lesions. While most publicly available medical image datasets contain fewer than a thousand lesions, this dataset, named DeepLesion, has over 32,000 annotated lesions identified on CT images.
The images, which have been thoroughly anonymised, represent 4,400 unique patients, who are partners in research at the NIH.
Once a patient steps out of a CT scanner, the corresponding images are sent to a radiologist to interpret. Radiologists at the Clinical Centre then measure and mark clinically meaningful findings with an electronic bookmark tool. Like a physical bookmark, it lets radiologists save their place and mark significant findings so they can return to them later. These bookmarks are complex: they include arrows, lines, diameters, and text that record the exact location and size of a lesion, so experts can identify growth or new disease.
The bookmarks, abundant with retrospective medical data, are what scientists used to develop the DeepLesion dataset. Unlike most lesion image datasets currently available, which contain only a single type of lesion, DeepLesion has great diversity: it contains all kinds of critical radiology findings from across the body, such as lung nodules, liver tumours, and enlarged lymph nodes.
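Annotations in a dataset like this are typically distributed as a metadata table mapping each CT slice to one or more 2-D lesion bounding boxes. The sketch below shows one way to load such a table; the column names (`File_name`, `Bounding_boxes`, `Coarse_lesion_type`) follow DeepLesion's `DL_info.csv` metadata file, but they are an assumption here and should be verified against the released data.

```python
import csv
from collections import Counter


def load_lesion_boxes(csv_path):
    """Parse a DeepLesion-style annotation CSV into per-slice lesion boxes.

    Assumed columns (check against the actual metadata file):
      File_name          - key slice the lesion was marked on
      Bounding_boxes     - "x1, y1, x2, y2" in pixel coordinates
      Coarse_lesion_type - integer code for the lesion category
    """
    boxes = {}            # file name -> list of (x1, y1, x2, y2) tuples
    type_counts = Counter()  # how many lesions of each coarse type
    with open(csv_path, newline="") as f:
        for row in csv.DictReader(f):
            x1, y1, x2, y2 = (float(v) for v in row["Bounding_boxes"].split(","))
            boxes.setdefault(row["File_name"], []).append((x1, y1, x2, y2))
            type_counts[row["Coarse_lesion_type"]] += 1
    return boxes, type_counts
```

Because several findings are often bookmarked on the same exam, grouping boxes per slice (rather than one box per row) is the natural structure for training a detector.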
Conventional methods for collecting image labels, such as mining search-engine results, cannot be applied in the medical image domain, because medical image annotation requires extensive clinical experience. But that could change. The released dataset is large enough to train a deep neural network, so it could enable the scientific community to create a large-scale universal lesion detector with one unified framework.
With the release of the dataset, researchers hope others will be able to:

- Develop a universal lesion detector that helps radiologists find all types of lesions. It could serve as an initial screening tool, sending its detections to specialist systems trained on particular types of lesions.
- Mine and study the relationships between different types of lesions. In DeepLesion, multiple findings are often marked in one CT exam image, so researchers can analyse their relationships to make new discoveries.
- Measure the sizes of all of a patient’s lesions more accurately and automatically, enabling whole-body assessment of cancer burden.
In 2017, the research hospital released anonymised chest x-ray images and their corresponding data.
In the future, the NIH Clinical Centre hopes to keep improving the DeepLesion dataset by collecting more data, which should improve the accuracy of detectors trained on it. Universal lesion detection will become more reliable once researchers are able to leverage 3-D context and lesion-type information. It may also be possible to extend DeepLesion to other imaging modalities, such as MRI, and to combine data from multiple hospitals.
Extracting, harvesting, and building large-scale annotated radiological image datasets is an important yet challenging problem. Meanwhile, vast amounts of clinical annotations have been collected and stored in hospitals’ picture archiving and communication systems (PACS). These annotations, known as bookmarks in PACS, are usually marked by radiologists during their daily workflow to highlight significant image findings that may serve as references for later studies. We propose to mine and harvest these abundant retrospective medical data to build a large-scale lesion image dataset. Our process is scalable and requires minimal manual annotation effort. We mine bookmarks in our institute to develop DeepLesion, a dataset with 32,735 lesions in 32,120 CT slices from 10,594 studies of 4,427 unique patients. There are a variety of lesion types in this dataset, such as lung nodules, liver tumors, enlarged lymph nodes, and so on. It has the potential to be used in various medical image applications. Using DeepLesion, we train a universal lesion detector that can find all types of lesions with one unified framework. In this challenging task, the proposed lesion detector achieves a sensitivity of 81.1% with five false positives per image.
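The reported operating point (81.1% sensitivity at five false positives per image) comes from a free-response style of detection evaluation: a predicted box counts as a true positive when it overlaps a ground-truth lesion above an IoU threshold, and sensitivity is read off at a fixed average false-positive rate. The following is a minimal sketch of that metric; the 0.5 IoU threshold and greedy highest-score-first matching are common detection-evaluation conventions, not necessarily the paper's exact protocol.

```python
from typing import List, Tuple

Box = Tuple[float, float, float, float]  # (x1, y1, x2, y2)


def iou(a: Box, b: Box) -> float:
    """Intersection-over-union of two axis-aligned boxes."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    union = ((a[2] - a[0]) * (a[3] - a[1])
             + (b[2] - b[0]) * (b[3] - b[1]) - inter)
    return inter / union if union > 0 else 0.0


def sensitivity_at_fp(
    detections: List[List[Tuple[Box, float]]],  # per image: (box, score)
    ground_truth: List[List[Box]],              # per image: true lesion boxes
    fp_per_image: float = 5.0,
    iou_thresh: float = 0.5,
) -> float:
    """Sensitivity at a fixed average number of false positives per image.

    Sweeps score thresholds; at each one, detections are greedily matched
    (highest score first) to unmatched ground-truth lesions with
    IoU >= iou_thresh.  Returns the best sensitivity among thresholds whose
    average false-positive count per image stays within fp_per_image.
    """
    scores = sorted({s for img in detections for _, s in img}, reverse=True)
    total_gt = sum(len(g) for g in ground_truth)
    best = 0.0
    for thresh in scores:
        tp = fp = 0
        for dets, gts in zip(detections, ground_truth):
            matched = [False] * len(gts)
            for box, score in sorted(dets, key=lambda d: -d[1]):
                if score < thresh:
                    continue
                # Match against the best-overlapping unmatched lesion.
                best_iou, best_j = 0.0, -1
                for j, gt in enumerate(gts):
                    if not matched[j]:
                        o = iou(box, gt)
                        if o > best_iou:
                            best_iou, best_j = o, j
                if best_iou >= iou_thresh:
                    matched[best_j] = True
                    tp += 1
                else:
                    fp += 1
        if fp / len(detections) <= fp_per_image:
            best = max(best, tp / total_gt if total_gt else 0.0)
    return best
```

Sweeping the allowed false-positive rate instead of fixing it traces out the full free-response (FROC) curve, which is how operating points like this one are usually compared.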
Ke Yan; Xiaosong Wang; Le Lu; Ronald M Summers