
How AI can improve medical imaging
Using AI-based algorithms to identify health conditions in medical imaging may be the most widely known use case for AI in healthcare. It is easy to understand why: AI not only offers the possibility of better detection of a tumor, a skin lesion or some other finding, but can also improve radiologists' accuracy and efficiency. Here are four current use cases.
Breast cancer detection on mammography
Breast cancer screening with mammography has been shown to improve prognosis and reduce mortality by detecting disease at an earlier, more treatable stage. However, many cancers are missed on screening mammography, and suspicious findings often turn out to be benign.
The authors of a study published in Radiology: Artificial Intelligence have shown that AI can enhance the performance of radiologists in reading breast cancer screening mammograms. The researchers used MammoScreen, AI software from Therapixel that can be used alongside mammography to aid in cancer detection. The AI system is designed to identify regions suspicious for breast cancer on 2D digital mammograms and assess their likelihood of malignancy. The system takes as input the complete set of four views composing a mammogram and outputs a set of image positions with a related suspicion score.
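Conceptually, a detector of this kind maps the four standard mammographic views to a list of suspicious regions, each with an image position and a suspicion score. The sketch below illustrates that interface only; it is not Therapixel's actual API, and all names, the view labels and the stub model output are assumptions.

```python
from dataclasses import dataclass

@dataclass
class Finding:
    view: str         # which of the four views the finding is on
    x: int            # position in pixels (assumed convention)
    y: int
    suspicion: float  # 0.0 (benign-looking) to 1.0 (highly suspicious)

def run_model_stub(image):
    # stand-in for a trained neural network's per-view output
    return [(120, 340, 0.82)] if image is not None else []

def detect(views: dict) -> list:
    """Hypothetical detector: takes the four standard views
    (e.g. LCC, LMLO, RCC, RMLO) and returns scored findings."""
    findings = []
    for name, image in views.items():
        for (x, y, score) in run_model_stub(image):
            findings.append(Finding(name, x, y, score))
    # sort so readers see the most suspicious regions first
    return sorted(findings, key=lambda f: f.suspicion, reverse=True)
```

In practice the suspicion score, rather than a hard yes/no label, is what lets a radiologist prioritize which regions to scrutinize first.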

Fourteen radiologists assessed a dataset of 240 2D digital mammography images acquired between 2013 and 2016 that included different types of abnormalities. In a first session, half of the dataset was read without AI and the other half with AI support; in a second session, the reading conditions were reversed.
Average sensitivity for cancer increased slightly when using AI support. AI also helped reduce the rate of false negatives, or findings that look normal even though cancer is present. "The results show that MammoScreen may help to improve radiologists' performance in breast cancer detection," said Serena Pacilè, Ph.D., clinical research manager at Therapixel.
The improved diagnostic performance of radiologists in the detection of breast cancer was achieved without prolonging their workflow. In cases with a low likelihood of malignancy, reading time decreased in the second reading session. This reduced reading time could increase radiologists' overall efficiency, allowing them to focus their attention on the more suspicious examinations, the researchers said.
Brain aneurysms on CT angiography
For a study published in the journal Radiology, Dr. Xi Long, Ph.D., from the Department of Radiology at Tongji Medical College's Union Hospital in Wuhan, China, and colleagues showed that a deep learning system can help physicians detect potentially life-threatening cerebral aneurysms on CT angiography.
The scientists developed a fully automated, highly sensitive algorithm for the detection of cerebral aneurysms on CT angiography images. They used CT angiograms from more than 500 patients to train the deep learning system, and then they tested it on another 534 CT angiograms that included 649 aneurysms. The algorithm detected 633 of the 649 cerebral aneurysms for a sensitivity of 97.5%. It also found eight new aneurysms that were overlooked on the initial assessment.
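The reported sensitivity follows directly from those counts: sensitivity is the fraction of actual aneurysms the algorithm detected.

```python
detected = 633  # aneurysms flagged by the algorithm
total = 649     # aneurysms present in the 534 test angiograms

sensitivity = detected / total
print(f"{sensitivity:.1%}")  # → 97.5%
```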
Statistical analysis revealed that deep learning assistance enhanced radiologists' performance. The improvement was most pronounced in the less experienced radiologists. "The developed deep learning system has shown excellent performance in detecting aneurysms," Dr. Long said. "We found some aneurysms that were overlooked by the human readers on the initial reports, but they were successfully depicted by the deep learning system."

The results suggest that the algorithm has promise as a supportive tool for detecting cerebral aneurysms, with potential clinical use as a second opinion during interpretation of head CT angiography images. It has a number of advantages in this setting, Dr. Long said, primarily because the computer is not influenced by factors such as level of experience, working time and mood that affect human performance.
However, the system also has some limitations, Dr. Long noted. It can miss very small aneurysms or aneurysms located close to structures of similar density, such as bone. It also produces false positive results, meaning that it mistakenly identifies aneurysm-like structures as aneurysms, which necessitates careful review of the system's suggestions by human readers. "Simply put, the deep learning system is intended to assist human readers, not to replace them," Dr. Long said.
Perfecting MRI images of the brain
Researchers at Vanderbilt University and Vanderbilt University Medical Center have created a technique that corrects distortions in MRI images, which helps researchers and radiologists to better interpret brain scans.

Distorted images are common—an image of a three-dimensional object will get squashed or pulled in ways that don't reflect what the object truly looks like—but this is especially important to fix when the image is of a brain and its purpose is to understand disease or disorder. "Incorrect images can distort the image's intensity, understanding of brain size volume or interpretation of connections of brain pathways. If we don't have a true image, we cannot accurately observe or describe brain connections, which will negatively affect neurological research," said Bennett Landman, professor of electrical engineering and computer science and radiology and radiological sciences, the project's lead researcher.
The new algorithm, Synb0-DisCo, developed by the core faculty at the Vanderbilt Institute for Surgery and Engineering and the Vanderbilt University Institute of Imaging Science, synthesizes what the MRI image should look like from anatomically correct images and uses that data to correct the MRI scan that was acquired. "We've been able to use deep learning to synthesize contrasts," said Kurt Schilling, research assistant professor of radiology and radiological sciences, the paper's lead author. "The idea of using information from huge datasets and applying it to a single dataset was a black box before we developed this technique. We've been able to learn things about the brain that we never would have been able to, working with only our datasets. That has been a remarkable outcome of this work."
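At a high level, the correction described here has two stages: synthesize an undistorted reference from the anatomically correct image, then warp the acquired scan to match that reference. The following is a deliberately crude 1-D toy, not the authors' implementation; real susceptibility distortion is not a simple integer shift, the synthesis step here is a trivial stand-in for a deep network, and every function name is invented for illustration.

```python
def synthesize_reference(anatomical):
    # stand-in for the deep-learning synthesis step: in this toy model
    # the anatomical image already has the correct geometry
    return list(anatomical)

def estimate_shift(distorted, reference):
    # toy "registration": find the integer shift that best aligns the two
    best, best_err = 0, float("inf")
    n = len(reference)
    for s in range(-3, 4):
        err = sum((distorted[(i + s) % n] - reference[i]) ** 2
                  for i in range(n))
        if err < best_err:
            best, best_err = s, err
    return best

def correct(distorted, anatomical):
    # stage 1: synthesize what the undistorted image should look like;
    # stage 2: warp the acquired image to match it
    ref = synthesize_reference(anatomical)
    s = estimate_shift(distorted, ref)
    n = len(distorted)
    return [distorted[(i + s) % n] for i in range(n)]

# toy demo: a 1-D "image" whose acquired copy is shifted by 2 voxels
anatomical = [0, 0, 1, 2, 1, 0, 0, 0]
acquired = anatomical[-2:] + anatomical[:-2]  # simulated distortion
print(correct(acquired, anatomical))  # → [0, 0, 1, 2, 1, 0, 0, 0]
```

The point of the toy is the structure: an anatomically trustworthy reference lets the algorithm recover geometry that the distorted acquisition alone cannot.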
CT technology produces spectral images

Bioimaging technologies are the eyes that allow doctors to see inside the body in order to diagnose, treat, and monitor disease. In research published in Patterns, a team of engineers led by Ge Wang, an endowed professor of biomedical engineering at Rensselaer Polytechnic Institute, demonstrated how a deep learning algorithm can be applied to a conventional CT scan in order to produce images that would typically require a higher level of imaging technology known as dual-energy CT. "With traditional CT, you take a grayscale image, but with dual-energy CT you take an image with two colors," Wang said. "With deep learning, we try to use the standard machine to do the job of dual-energy CT imaging."
In this research, Wang and his team demonstrated how their neural network was able to produce those more complex images using single-spectrum CT data. The researchers used images produced by dual-energy CT to train their model and found that it was able to produce high-quality approximations with a relative error of less than 2%. "We hope that this technique will help extract more information from a regular single-spectrum X-ray CT scan, make it more quantitative, and improve diagnosis," said Wang.
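The quality figure quoted above is a relative-error metric. One common convention, shown below, divides the norm of the difference between the synthesized and reference images by the norm of the reference; this is an illustrative definition, not necessarily the exact metric the authors used.

```python
import math

def relative_error(predicted, reference):
    """Relative error: L2 norm of the difference divided by the
    L2 norm of the reference (a common convention)."""
    diff = math.sqrt(sum((p - r) ** 2 for p, r in zip(predicted, reference)))
    ref = math.sqrt(sum(r ** 2 for r in reference))
    return diff / ref

# toy example: a "synthesized" image (flattened) off by a uniform 1%
truth = [1.0] * 4096
synth = [1.01] * 4096
print(relative_error(synth, truth))  # ≈ 0.01, within the reported 2%
```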