Adversarially Based Virtual CT Workflow for Evaluation of AI Imaging

The goal is to develop an FDA workflow for evaluating, approving, and monitoring AI imaging software.

Software

Software developed for this project

Go to Software

Data

Data generated for this project

Go to Data

Lab

Introduction to the lab

Go to Lab

Tool

The Turing test tool

Go to Tool

Abstract

Over the past several years, artificial intelligence (AI) and machine learning (ML), especially deep learning (DL), have been the most prominent direction of tomographic research, commercial development, clinical translation, and FDA evaluation. Recently, it has become widely recognized that deep neural networks often have generalizability issues and are vulnerable to adversarial attacks, whether deliberate or unintentional. This critical challenge must be addressed to optimize the performance of deep neural networks in medical applications.
In January 2021, the FDA published an action plan for furthering the oversight of AI/DL-based software as medical devices (SaMDs). One major action highlighted in the plan is developing “regulatory science methods related to algorithm bias and robustness”. The significance of ensuring the safety and effectiveness of AI/DL-based SaMDs cannot be overstated, since AI is expected to play a critical role in the future of medicine. In this context, the overall goal of this academic-FDA partnership R01 project is to generate diverse training and challenging testing datasets of low-dose CT (LDCT) scans, prototype a virtual CT workflow, and establish an evaluation methodology for AI-based imaging products to support FDA marketing authorization. The technical innovation lies in cutting-edge DL methods empowered by (a) adversarial learning to generate anatomically and pathologically representative features in the human chest; (b) adversarial attacking to probe the virtual CT workflow in individual steps and in its entirety; and (c) systematic evaluation methods to better characterize and predict the clinical performance of AI-based imaging products. In contrast to other CT simulation pipelines, our Adversarially Based CT (ABC) platform relies on adversarial learning to ensure the diversity and realism of the simulated data and images and improve the generalizability of deep networks, and utilizes adversarial samples to probe the ABC workflow to address the robustness of deep networks.
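To make the adversarial-learning principle behind the ABC platform concrete, the sketch below shows the standard non-saturating GAN losses that pit a generator against a discriminator. This is a generic illustration of the technique (function name and inputs are hypothetical), not the project's actual models or training code.

```python
import numpy as np

def gan_losses(d_real, d_fake, eps=1e-12):
    """Non-saturating GAN losses.

    d_real: discriminator probabilities on real samples, values in (0, 1).
    d_fake: discriminator probabilities on generated samples, values in (0, 1).
    The discriminator is trained to push d_real -> 1 and d_fake -> 0;
    the generator is trained to push d_fake -> 1.
    """
    d_loss = -np.mean(np.log(d_real + eps)) - np.mean(np.log(1.0 - d_fake + eps))
    g_loss = -np.mean(np.log(d_fake + eps))
    return d_loss, g_loss
```

At the theoretical equilibrium, where the discriminator cannot distinguish real from generated samples (both probabilities equal 0.5), the discriminator loss settles at 2·log 2 and the generator loss at log 2.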
The overarching hypothesis is that adversarial learning and attacking methods are powerful enough to deliver high-quality datasets for AI-based imaging research and performance evaluation. The specific aims are: (1) diverse patient modeling (SBU), (2) virtual CT scanning (UTSW), (3) deep CT imaging (RPI), (4) virtual workflow validation (FDA), and (5) ABC system dissemination (RPI-SBU-UTSW-FDA). In this project, generative adversarial learning will play an instrumental role in generating features with clinical semantics. Also, adversarial samples will be produced in both the sinogram and image domains. In these complementary ways, AI-based imaging products can be efficiently evaluated not only for accuracy but also for generalizability and robustness. Upon completion, our ABC workflow/platform will be made publicly available and readily extendable to other imaging modalities and other diseases. This ABC system will be shared through the FDA’s Catalog of Regulatory Science Tools, and is uniquely well positioned to greatly facilitate the development, assessment, and translation of emerging AI-based imaging products.
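To illustrate how adversarial samples of the kind mentioned above can be produced, the sketch below applies the classic one-step fast gradient sign method (FGSM) to a toy logistic classifier. The model and function names are purely illustrative stand-ins, not the project's codebase; real attacks in the sinogram or image domain would perturb CT data against a deep network instead.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def fgsm_attack(x, y, w, b, eps):
    """One-step FGSM on a logistic classifier: x' = x + eps * sign(dL/dx).

    x: input feature vector; y: true label in {0, 1};
    w, b: classifier weights and bias; eps: perturbation budget.
    """
    p = sigmoid(x @ w + b)        # predicted probability of class 1
    grad_x = (p - y) * w          # gradient of binary cross-entropy w.r.t. x
    return x + eps * np.sign(grad_x)
```

Because the perturbation follows the sign of the loss gradient, the attacked input is guaranteed (for small eps on this linear model) to incur a higher loss than the clean input, which is exactly the property used to stress-test a trained network's robustness.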

MeTAI ecosystem with four major healthcare applications.

The figure above shows the MeTAI ecosystem with four major healthcare applications. a, Virtual comparative scanning (to find the best imaging technology in a specific situation). b, Raw data sharing (to allow controlled open access to tomographic raw data). c, Augmented regulatory science (to extend virtual clinical trials in terms of scope and duration). d, ‘Metaversed’ medical intervention (to perform medical intervention aided by the metaverse). In an exemplary implementation of the MeTAI ecosystem, before a patient undergoes a real CT scan, his/her scans are first simulated on various virtual machines to find the best imaging result (a). On the basis of this knowledge, a real scan is performed. Then, the metaverse images are transferred to the patient’s medical care team, and upon the patient’s agreement and under secure computation protocols, the images and tomographic raw data can be made available to researchers (b). All these real and simulated images and data, as well as other medically relevant information, can be integrated in the metaverse and utilized in augmented clinical trials (c). Finally, if it is clinically indicated, the patient will undergo a remote robotic surgery aided by the metaverse and be followed up in the metaverse for rehabilitation (d). Each of the four applications is further described in the main text.

Timeline

Timeline.