AI algorithm detects when medical images will be difficult for radiologists or AI to diagnose effectively

Radiologist diagnosing x-ray scan

A new AI algorithm can identify when medical images are likely to be difficult for either a radiologist or an AI system to diagnose effectively. The algorithm could potentially be used to triage medical scans, highlighting cases that warrant further in-depth clinical evaluation or additional tests to support a definitive diagnosis.

The algorithm, called UDC by AI healthcare company Presagen, was originally designed to automatically detect errors in medical data, particularly data that cannot be reliably verified by experts. When the algorithm was applied to x-ray images used to detect pneumonia, it found that errors by radiologists were rare when the images had clear features. However, for several x-ray images UDC found the diagnosis (or label) to be neither clearly correct nor clearly an error. Removing the poor-quality images identified by UDC from the training dataset improved AI accuracy for diagnosing pneumonia in x-ray images by over 10%, as measured on a held-out blind test set, and the resulting AI proved more scalable. The accuracy also exceeded benchmarks reported in the current literature for that public dataset.
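The data-cleaning step described above can be sketched in code. This is a minimal, hypothetical illustration of filtering flagged samples before training, not Presagen's actual UDC implementation; all function and variable names are assumptions.

```python
# Illustrative sketch only: assume some upstream error-detection step has
# produced a per-sample "flagged" boolean marking poor-quality images.
# Training then proceeds on the filtered subset.

def filter_flagged_samples(images, labels, flagged):
    """Keep only samples whose quality flag is False."""
    kept = [
        (img, lab)
        for img, lab, bad in zip(images, labels, flagged)
        if not bad
    ]
    kept_images = [img for img, _ in kept]
    kept_labels = [lab for _, lab in kept]
    return kept_images, kept_labels

# Toy example: five "images" (stand-in identifiers), two flagged as
# poor quality by the hypothetical detector.
images = ["xr1", "xr2", "xr3", "xr4", "xr5"]
labels = [1, 0, 1, 1, 0]
flagged = [False, True, False, False, True]

clean_images, clean_labels = filter_flagged_samples(images, labels, flagged)
# A classifier would then be trained on clean_images / clean_labels only.
```

The point of the sketch is simply that cleaning happens before training: the flagged samples never reach the model, mirroring the article's report that removing them improved held-out accuracy.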

The AI scientist who led the project, Dr. Milad Dakka, said: “Our results suggest these poor-quality images are uninformative, counter-productive or confusing when used in training AI. The ability to identify when new images are poor-quality is important to prevent an inaccurate AI clinical assessment, but also to alert the radiologist when the scan is likely to be difficult to diagnose or when a new scan should be taken.”
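The inference-time use Dr. Dakka describes, alerting a clinician when a new scan looks difficult, amounts to a triage rule. The sketch below assumes a per-scan difficulty score in [0, 1] from some quality model; the function name, score, and threshold are illustrative assumptions, not Presagen's API.

```python
# Hypothetical triage rule: scans whose difficulty score exceeds a
# threshold are routed for radiologist review (or a repeat scan) rather
# than an automated AI assessment.

def triage(scan_id, difficulty_score, threshold=0.7):
    """Return (scan_id, routing decision) for a new scan."""
    if difficulty_score >= threshold:
        return (scan_id, "refer")   # flag for review / consider re-scan
    return (scan_id, "auto")        # proceed with AI assessment

decisions = [triage(s, d) for s, d in [("scan-A", 0.2), ("scan-B", 0.9)]]
```

The threshold would in practice be tuned on validation data to balance how many scans are referred against how many difficult cases slip through.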

Dr. Don Perugini, Co-Founder and Chief Strategy Officer of Presagen, said: “Many AI practitioners believe that AI performance and scalability can be solved with more data. This is not true, and we call it the AI data fallacy. Even 1% of poor-quality data can impact the performance of the AI. Building accurate and scalable AI is about using the right data.”