The New York University Tests ‘Radiology Assistant’ Powered by Artificial Intelligence

Combination of AI & Radiologists More Accurately Identified Breast Cancer

Radiologists face several challenges when reading screening mammograms. For instance, some women are recalled for additional tests, such as MRI and ultrasound, even when no cancer is ultimately found. This makes diagnosis not only costly but also adds tremendous stress for patients.

Deploying AI in Radiology

A team of researchers introduced an AI-powered ‘radiology assistant’ in an attempt to tackle these challenges in screening mammography. The team was led by Dr. Krzysztof J. Geras, an assistant professor in the Department of Radiology at the NYU School of Medicine.

According to Dr. Geras, the team’s goal was to use artificial intelligence to decrease the number of additional, unnecessary imaging exams. He adds that radiologists also miss a small fraction of cancers during regular screening. The team believes their AI tool will help radiologists catch such false-negative cases, which may improve the prognosis of the disease.

ResNet-22 Technology

The proposed technology, a type of deep convolutional neural network, is named ResNet-22. ResNet-22 works by learning from a huge number of image/label pairs. According to Dr. Geras, the lead author, the network was trained on more than 200,000 exams comprising more than 1 million images. Training the algorithm took approximately three weeks on a powerful computer with a GPU (graphics processing unit).
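The defining idea behind ResNet-style networks is the identity “skip” connection: each block adds a learned correction to its input rather than replacing it, which makes very deep networks trainable. The sketch below illustrates that idea only; it is a toy residual block in NumPy, not the actual ResNet-22 code, which the article does not show.

```python
import numpy as np

def relu(x):
    return np.maximum(x, 0.0)

def residual_block(x, w1, w2):
    """One toy residual block: output = x + F(x).

    x:  (n,) input feature vector
    w1: (n, n) weights of the first linear layer
    w2: (n, n) weights of the second linear layer
    """
    f = relu(x @ w1)   # first transformation + non-linearity
    f = f @ w2         # second transformation: the learned residual F(x)
    return x + f       # identity skip connection

rng = np.random.default_rng(0)
x = rng.standard_normal(8)
w1 = rng.standard_normal((8, 8)) * 0.01
w2 = rng.standard_normal((8, 8)) * 0.01

y = residual_block(x, w1, w2)
# With small weights the block starts out close to the identity map,
# which is what keeps gradients flowing in very deep residual networks.
print(y)
```

Because the block computes `x + F(x)`, a freshly initialized deep stack behaves almost like the identity function, and each layer only has to learn a small refinement.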

AI to Help Radiologists

The creators of ResNet-22 expect the technology to soon become an assistant to radiologists: they can read the images as they do now and, if necessary, seek a second opinion from the model. The system can provide radiologists with a predicted probability that the patient has cancer and, in addition, can point out the regions of the image that appear most suspicious. This may make radiologists more confident in their diagnoses and may reduce the number of additional tests.


The Reader Study

To evaluate the model, the researchers conducted a reader study in which 14 radiologists each read 720 screening mammogram exams; the AI model was presented with the same data. They also evaluated a hybrid reading that combines the radiologists’ predictions with the model’s. Interestingly, the hybrid was more accurate than either of the two separate predictions, likely because the AI and the radiologists rely on different features of the data.
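One simple way to form such a hybrid is to average the radiologist’s probability estimate with the model’s for each exam. The toy example below uses made-up scores (not the study data) and a hand-rolled AUC function to show how two readers that err on different cases can combine into a better predictor:

```python
import numpy as np

def auc(labels, scores):
    """Area under the ROC curve: the probability that a randomly chosen
    positive case is scored above a randomly chosen negative case."""
    pos = scores[labels == 1]
    neg = scores[labels == 0]
    wins = (pos[:, None] > neg[None, :]).sum()   # positive ranked higher
    ties = (pos[:, None] == neg[None, :]).sum()  # ties count half
    return (wins + 0.5 * ties) / (len(pos) * len(neg))

# Made-up example: 1 = cancer, 0 = no cancer.
labels      = np.array([1, 1, 1, 0, 0, 0, 0, 0])
radiologist = np.array([0.9, 0.4, 0.7, 0.5, 0.2, 0.1, 0.6, 0.3])
model       = np.array([0.6, 0.8, 0.5, 0.3, 0.4, 0.2, 0.1, 0.7])

hybrid = (radiologist + model) / 2  # average the two probability estimates

print(auc(labels, radiologist))  # each reader alone is imperfect
print(auc(labels, model))
print(auc(labels, hybrid))       # averaging cancels their different errors
```

Because the radiologist and the model misrank different cases in this example, their averaged scores separate positives from negatives better than either reader alone, mirroring the study’s finding.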

A Pilot Study

The good news is that the technology can be integrated into the existing clinical pipeline fairly easily, and the team is currently considering a pilot study at NYU to validate it in a clinical setting.

Interestingly, the new technology raised the accuracy of radiologists’ diagnoses, measured as the area under the ROC curve (AUC), from 0.8 to 0.895 in predicting the presence of breast cancer. “A random predictor achieves an AUC of 0.5 and a perfect predictor achieves an AUC of 1.0,” Dr. Geras explained.

Dr. Geras explained that the neural network was trained for three weeks, and the team hopes to keep accumulating data to further improve its performance.

Technical Advances Behind the Model

According to the paper’s abstract, the researchers attribute the model’s high accuracy to a few technical advances: (i) a novel two-stage architecture and training procedure, which lets a high-capacity patch-level network learn from pixel-level labels alongside a network learning from macroscopic, breast-level labels; (ii) a custom ResNet-based network used as a building block of the model, whose balance of depth and width is optimized for high-resolution medical images; (iii) pretraining the network on screening BI-RADS classification, a related task with noisier labels; and (iv) combining the multiple input views in an optimal way among a number of possible choices.
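The article does not spell out how the four standard screening views are combined, so purely as an illustration of the design space behind point (iv), here are two common fusion strategies sketched in NumPy. The view names and weight shapes are hypothetical, not the paper’s architecture:

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy stand-ins for the four standard screening views of one exam
# (left/right craniocaudal and mediolateral-oblique); in a real model
# each would be a feature vector from a convolutional backbone.
views = {name: rng.standard_normal(16)
         for name in ["L-CC", "R-CC", "L-MLO", "R-MLO"]}

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def fuse_concat(views, w):
    """Option A: concatenate all view features, then one classifier."""
    x = np.concatenate([views[k] for k in sorted(views)])
    return sigmoid(x @ w)

def fuse_average(views, w_view):
    """Option B: score each view with a shared classifier, then average."""
    probs = [sigmoid(views[k] @ w_view) for k in sorted(views)]
    return sum(probs) / len(probs)

w = rng.standard_normal(64) * 0.1       # 4 views x 16 features
w_view = rng.standard_normal(16) * 0.1  # shared per-view weights

p_concat = fuse_concat(views, w)
p_average = fuse_average(views, w_view)
print(p_concat, p_average)  # both are probabilities in (0, 1)
```

Concatenation lets the classifier learn cross-view interactions (e.g., asymmetry between breasts), while per-view averaging is simpler and shares parameters; choosing among such options is the kind of decision point (iv) refers to.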

The link to the paper:

The link to the model: