AI in Mammography: Won’t Be Replacing Humans Just Yet

Artificial intelligence (AI) has “a long way” to go before it’s ready for routine use in breast cancer screening, says a team that reviewed recent studies of the technology.

“AI systems are not sufficiently specific to replace radiologist double reading in screening programmes,” the study authors conclude.

The review was published online on September 2 in The BMJ.

There’s been a lot of hype about the use of AI in radiology, including opinion pieces suggesting that “the replacement of radiologists by AI is imminent” because computers don’t get sleepy or have bad days, and because their performance is continually improving, explained the reviewers, led by Karoline Freeman, a senior research fellow at the University of Warwick in Coventry, United Kingdom.

However, “current evidence is a long way from having the quality and quantity required for its implementation” in routine mammography, the authors say.

The team extracted data from 12 studies that reported test accuracy of AI algorithms alone or in combination with radiologists. These studies were culled from an extensive literature review as the most methodologically sound.

Overall, the studies included 131,822 women who underwent screening from 2010 to May 2021.

Among other analyses, the researchers compared AI and radiologist performance against biopsy results.
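As a rough illustration of what “test accuracy” means in this context, the sketch below scores a set of screening reads (human or AI) against biopsy-confirmed outcomes. The function name and the toy data are invented for illustration and are not drawn from the BMJ review.

```python
# Hypothetical sketch: scoring screening reads against biopsy outcomes.
# The data below are invented for illustration, not from the BMJ review.

def test_accuracy(recalls, cancers):
    """Return (sensitivity, specificity) of a set of screening reads.

    recalls: list of bools, True if the reader flagged the exam for recall
    cancers: list of bools, True if biopsy/follow-up confirmed cancer
    """
    tp = sum(r and c for r, c in zip(recalls, cancers))
    fn = sum((not r) and c for r, c in zip(recalls, cancers))
    tn = sum((not r) and (not c) for r, c in zip(recalls, cancers))
    fp = sum(r and (not c) for r, c in zip(recalls, cancers))
    sensitivity = tp / (tp + fn)  # share of cancers correctly flagged
    specificity = tn / (tn + fp)  # share of cancer-free women correctly cleared
    return sensitivity, specificity

# Invented example: 10 screens, 2 biopsy-confirmed cancers
cancers = [True, True] + [False] * 8
ai_reads = [True, False, True, False, False,
            True, False, False, False, False]
print(test_accuracy(ai_reads, cancers))  # -> (0.5, 0.75)
```

Both numbers matter: a system can look impressive on sensitivity while its lower specificity generates extra recalls, which is why the reviewers flag specificity as the sticking point for replacing double reading.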

They found that in the three largest retrospective studies, which included 79,910 women screened in Europe and the United States, most of the AI systems (34 of 36, 94%) were less accurate than a single radiologist, and all of them were less accurate than screening by two or more radiologists.

In addition, promising results in smaller studies were not replicated in larger ones, and there were no real-world prospective trials that measured test accuracy with AI.

The evidence so far “does not yet allow judgement of [AI] accuracy in breast cancer screening programmes, and it is unclear where on the clinical pathway AI might” help most, the reviewers comment.

“Complementing rather than competing with radiologists is a potentially promising way forward,” perhaps by using “AI to pre-screen easy normal mammograms for no further review,” they suggest.

Separating Wheat From Chaff

Randomized trials are crucial at this point to find the proper role for AI in breast cancer screening, including trials that pit one AI system against another, the reviewers say.

Constance Lehman, MD, PhD, chief of breast imaging at Massachusetts General Hospital, Boston, Massachusetts, told Medscape Medical News that she agrees.

Reviews such as this one are “critical for careful assessment of implementation of AI into routine practice,” but “rigorous trials as the authors propose will move us closer to understanding the risks and benefits of currently available” AI systems, she said.

The reviewers suggest that the reason their findings are less optimistic than previous reports and opinion pieces is probably “because of our emphasis” on study quality and “on comparisons with the accuracy of radiologists.”

Also, much of what has been published drew on “the ‘simulation’ parts of studies, which were often used as the headline numbers in papers, and often estimated higher accuracy for AI than the empirical data” of those studies.

For the new analysis, the researchers didn’t do that; instead, they say, they relied on real-world data.

Five small studies included in The BMJ report concluded that AI is more accurate than readings by a single radiologist, but “these studies were examining the mammographic images” of women in a laboratory setting, “which is not generalizable to clinical practice.”

In a sixth small study, the comparison was against a single reader in the United States “with an accuracy below that expected in usual clinical practice.”

The reviewers hope that “highlighting the shortcomings” will inspire “decision makers to press for high quality evidence on test accuracy” before “integration of AI into breast cancer screening programmes.”

They note, however, that AI systems are constantly improving, so reviews such as theirs might be out of date by the time they’re published.

The study was commissioned by the UK National Screening Committee. The investigators and Lehman have disclosed no relevant financial relationships.

BMJ. Published online September 2, 2021.

M. Alexander Otto is a physician assistant with a master’s degree in medical science. He is an award-winning medical journalist who worked for several major news outlets before joining Medscape. He is an MIT Knight Science Journalism fellow.
