Be Aware of Threats to Data Integrity in Medical Imaging

By Kerri Fitzgerald - Last Updated: August 9, 2022

Artificial intelligence (AI) holds enormous potential to improve medical imaging, with AI approaches often showing comparable or superior performance to medical expert review. However, a presentation at the Society of Nuclear Medicine and Molecular Imaging (SNMMI) Annual Meeting outlined important vulnerabilities in AI models, which developers and users must take into consideration. In particular, the presenters highlighted the threats of data attacks and data manipulation.


The authors conducted a review of threats and mitigation strategies to highlight the importance of data security in medical imaging AI efforts. Among their concerns are generative adversarial networks (GANs): “unsupervised neural networks, which compete to generate new examples from a given training sample.”

“In the case of imaging, the generated images are indistinguishable from the initial example images visually. GANs have been used to create deep-fake photos and videos. Whether inadvertently or maliciously, processed images could result in data manipulation,” wrote the authors, led by Sriram S. Paravastu, of the National Institutes of Health.
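To make the quoted description concrete, the following is a minimal sketch of the GAN idea: a generator and a discriminator trained in competition, with the generator learning to produce examples the discriminator cannot tell apart from real training images. This is an illustrative toy written in PyTorch, not the method discussed in the presentation; the network sizes, batch size, and flattened "images" are assumptions chosen for brevity.

```python
# Toy GAN sketch: generator vs. discriminator trained adversarially.
# Illustrative only; real medical-image GANs are far larger and train on scans.
import torch
import torch.nn as nn

IMG_DIM, NOISE_DIM, BATCH = 64 * 64, 100, 32  # toy flattened image / latent sizes

generator = nn.Sequential(            # maps random noise to a synthetic image
    nn.Linear(NOISE_DIM, 256), nn.ReLU(),
    nn.Linear(256, IMG_DIM), nn.Tanh(),
)
discriminator = nn.Sequential(        # scores how "real" an image looks
    nn.Linear(IMG_DIM, 256), nn.LeakyReLU(0.2),
    nn.Linear(256, 1), nn.Sigmoid(),
)

g_opt = torch.optim.Adam(generator.parameters(), lr=2e-4)
d_opt = torch.optim.Adam(discriminator.parameters(), lr=2e-4)
bce = nn.BCELoss()

real_images = torch.rand(BATCH, IMG_DIM)  # stand-in for a batch of real scans

for step in range(100):
    # Discriminator: learn to separate real images from generated ones
    fake_images = generator(torch.randn(BATCH, NOISE_DIM)).detach()
    d_loss = bce(discriminator(real_images), torch.ones(BATCH, 1)) + \
             bce(discriminator(fake_images), torch.zeros(BATCH, 1))
    d_opt.zero_grad(); d_loss.backward(); d_opt.step()

    # Generator: learn to produce images the discriminator accepts as real
    g_loss = bce(discriminator(generator(torch.randn(BATCH, NOISE_DIM))),
                 torch.ones(BATCH, 1))
    g_opt.zero_grad(); g_loss.backward(); g_opt.step()
```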

Such manipulation can have many consequences, the presenters said, including counterfeit images, delayed detection of problematic images because they appear authentic, suboptimal treatment outcomes when decisions are based on manipulated images, and financial harm.

They offered several suggestions to mitigate these risks, including human verification of images in addition to AI verification, watermarks to identify verified images, and encryption at every stage (as images move among scanners, storage, workstations, and more). They also encouraged AI developers to educate AI users regarding issues of reliability and trustworthiness.
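As one way to picture the verification step, the sketch below uses a keyed digest from Python's standard library to confirm that an image file has not changed between the scanner and the workstation. The file path, key handling, and function names are illustrative assumptions and are not drawn from the presentation, which did not specify an implementation.

```python
# Hedged sketch: keyed-digest integrity check for an image file in transit.
import hmac
import hashlib
from pathlib import Path

SECRET_KEY = b"key-managed-by-the-imaging-system"  # assumption: shared secret


def sign_image(path: Path) -> str:
    """Compute a keyed digest when the image leaves the scanner."""
    return hmac.new(SECRET_KEY, path.read_bytes(), hashlib.sha256).hexdigest()


def verify_image(path: Path, expected_digest: str) -> bool:
    """Re-compute the digest at the workstation and compare in constant time."""
    return hmac.compare_digest(sign_image(path), expected_digest)


# Usage (hypothetical file name):
#   digest = sign_image(Path("study_001.dcm"))
#   assert verify_image(Path("study_001.dcm"), digest)
```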

“Proactive data-security strategies are needed to prevent data compromise while promoting the advancement of medical imaging technologies and patient care through AI approaches,” the authors concluded. “It is important for healthcare professionals to be informed of vulnerabilities in order to balance responsibilities for patient care with the obligation for data integrity control.”
