Artificial intelligence (AI) technology developed by the RIKEN Center for Advanced Intelligence Project (AIP) in Japan has successfully found features in pathology images from human cancer patients, without annotation, that could be understood by human doctors. Further, the AI identified features relevant to cancer prognosis that pathologists had not previously noted, leading to more accurate prediction of prostate cancer recurrence than pathologist-based diagnosis. Combining the predictions made by the AI with predictions by human pathologists led to even greater accuracy.
According to Yoichiro Yamamoto, the first author of the study published in Nature Communications, “This technology could contribute to personalized medicine by making highly accurate prediction of cancer recurrence possible by acquiring new knowledge from images. It could also contribute to understanding how AI can be used safely in medicine by helping to resolve the issue of AI being seen as a ‘black box.’”
The research group led by Yamamoto and Go Kimura, in collaboration with a number of university hospitals in Japan, adopted an approach called “unsupervised learning.” As long as humans teach the AI, it cannot acquire knowledge beyond what is already known. Rather than being “taught” medical knowledge, the AI was therefore asked to learn on its own, using unsupervised deep neural networks known as autoencoders. The researchers then developed a method for translating the features found by the AI, which are initially only numbers, into high-resolution images that humans can understand.
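The study's model details are not reproduced here, but the minimal PyTorch sketch below illustrates the general idea of an autoencoder: a network that learns to compress unlabeled image patches into numeric feature vectors and to reconstruct the patches from them, so no diagnostic labels are needed during training. The architecture, patch size, and latent dimension are illustrative assumptions, not the authors' configuration.

```python
# A minimal sketch (not the authors' architecture): a convolutional
# autoencoder that learns compressed features from unlabeled image patches.
import torch
import torch.nn as nn

class PatchAutoencoder(nn.Module):
    def __init__(self, latent_dim=64):
        super().__init__()
        # Encoder: compress a 3x128x128 patch into a latent feature vector.
        self.encoder = nn.Sequential(
            nn.Conv2d(3, 32, kernel_size=4, stride=2, padding=1),   # 128 -> 64
            nn.ReLU(),
            nn.Conv2d(32, 64, kernel_size=4, stride=2, padding=1),  # 64 -> 32
            nn.ReLU(),
            nn.Flatten(),
            nn.Linear(64 * 32 * 32, latent_dim),
        )
        # Decoder: reconstruct the patch from the latent vector.
        self.decoder = nn.Sequential(
            nn.Linear(latent_dim, 64 * 32 * 32),
            nn.Unflatten(1, (64, 32, 32)),
            nn.ConvTranspose2d(64, 32, kernel_size=4, stride=2, padding=1),  # 32 -> 64
            nn.ReLU(),
            nn.ConvTranspose2d(32, 3, kernel_size=4, stride=2, padding=1),   # 64 -> 128
            nn.Sigmoid(),
        )

    def forward(self, x):
        z = self.encoder(x)          # unsupervised features ("only numbers")
        return self.decoder(z), z    # reconstruction drives the training signal

# Training uses reconstruction error alone; no diagnostic labels are involved.
model = PatchAutoencoder()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
patches = torch.rand(8, 3, 128, 128)           # stand-in for real image patches
recon, features = model(patches)
loss = nn.functional.mse_loss(recon, patches)  # learn to reproduce the input
loss.backward()
optimizer.step()
```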
To perform this feat, the group acquired 13,188 whole-mount pathology slide images of the prostate from Nippon Medical School Hospital (NMSH). The amount of data was enormous, equivalent to approximately 86 billion image patches (sub-images into which the slides are divided for the deep neural networks), and the computation was performed on AIP’s powerful RAIDEN supercomputer.
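As a rough illustration of what "image patches" means in practice, the sketch below tiles a single slide image array into fixed-size sub-images; the patch size and slide dimensions are placeholder assumptions rather than the study's actual values.

```python
# A rough sketch of how a whole-slide image might be tiled into fixed-size
# patches for a neural network; patch size and dimensions are assumptions.
import numpy as np

def tile_into_patches(slide: np.ndarray, patch_size: int = 128) -> np.ndarray:
    """Split an H x W x 3 slide array into non-overlapping patches."""
    h, w, _ = slide.shape
    rows, cols = h // patch_size, w // patch_size
    patches = []
    for r in range(rows):
        for c in range(cols):
            patches.append(slide[r * patch_size:(r + 1) * patch_size,
                                 c * patch_size:(c + 1) * patch_size])
    return np.stack(patches)  # shape: (rows * cols, patch_size, patch_size, 3)

# Example: even one (downsampled) slide yields thousands of patches; across
# 13,188 slides the counts multiply into the billions reported above.
slide = np.zeros((10_240, 10_240, 3), dtype=np.uint8)  # placeholder image
patches = tile_into_patches(slide)
print(patches.shape)  # (6400, 128, 128, 3)
```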
The AI learned from 11 million image patches of pathology images without diagnostic annotation. The features it found included cancer diagnostic criteria that have been used worldwide, such as the Gleason score, but also features involving the stroma (the connective tissue supporting an organ) in non-cancer areas that experts had not been aware of. To evaluate these AI-found features, the research group verified the performance of recurrence prediction using the remaining cases from NMSH (internal validation). The features discovered by the AI predicted recurrence more accurately (AUC=0.820) than predictions based on the human-established criterion developed by pathologists, the Gleason score (AUC=0.744). Furthermore, combining the AI-found features with the human-established criteria predicted recurrence more accurately than either method alone (AUC=0.842). The group confirmed the results on another dataset of 2,276 whole-mount pathology images (10 billion image patches) from St. Marianna University Hospital and Aichi Medical University Hospital (external validation).
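The following toy sketch, which uses synthetic data rather than the study's patient data, shows how such a comparison is typically scored: a classifier's risk estimates are compared with observed recurrence using the area under the ROC curve (AUC), separately for AI-derived features, a Gleason-style score, and their combination. The variable names, model choice, and numbers are all placeholders.

```python
# Illustrative evaluation sketch (synthetic data, not the study's code):
# compare AUC for AI-derived features, a Gleason-style score, and both.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 1000
recurred = rng.integers(0, 2, size=n)                             # 1 = recurrence
ai_features = rng.normal(size=(n, 64)) + recurred[:, None] * 0.3  # toy AI features
gleason = (rng.integers(6, 11, size=n) + recurred)[:, None]       # toy Gleason scores

def auc_for(features):
    """Fit a simple classifier and report AUC on a held-out split."""
    X_tr, X_te, y_tr, y_te = train_test_split(features, recurred, random_state=0)
    risk = LogisticRegression(max_iter=1000).fit(X_tr, y_tr).predict_proba(X_te)[:, 1]
    return roc_auc_score(y_te, risk)

print("AI features   AUC:", auc_for(ai_features))
print("Gleason score AUC:", auc_for(gleason))
print("Combined      AUC:", auc_for(np.column_stack([ai_features, gleason])))
```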