
The Society of Nuclear Medicine and Molecular Imaging (SNMMI) convened a task force to establish standards for the ethical integration and implementation of artificial intelligence (AI) applications in nuclear medicine practice. At the SNMMI Annual Meeting, the group presented a framework for those working in AI, emphasizing the importance of establishing trust among all stakeholders.
“The AI ecosystem contains the total life-cycle of the application including data acquisition, model training and prototyping, production/testing, validation/evaluation, implementation and development, and post-deployment surveillance,” wrote the authors, led by Babak Saboury, of the National Institutes of Health Clinical Center. “Attention to all these steps through the lens of trustworthiness is essential.”
The SNMMI AI Task Force included physicists, computational imaging scientists, physicians, statisticians, and representatives from industry and regulatory agencies. The task force conducted surveys, performed literature reviews, and disseminated its findings through summits, presentations, and articles.
At the Annual Meeting, members of the task force encouraged the healthcare community to pursue opportunities to enhance the practice of nuclear medicine through AI-based innovation. They cited specific examples, such as improving the generation and analysis of all diagnostic imaging, including nuclear imaging. They also said AI has the potential to improve the discovery and labeling of radiopharmaceutical therapies and to enable more precise dosimetry. AI also stands to improve clinical workflow and efficiency, they said.
However, the task force cautioned: “Critical pitfalls that commonly afflict AI algorithm development, evaluation, and implementation have been recognized.” Specifically, those developing AI applications or medical devices that rely on AI face challenges in gathering data and establishing architecture. They may also struggle to measure validity, to measure and communicate uncertainty in their models and results, or to develop case studies that demonstrate clinical utility. Those working with AI must also continuously weigh ethical, regulatory, and legal issues, which tend to be ambiguous and ever-evolving. Throughout the life cycle of an AI technology, developers and users must monitor results and continually update the data that inform their algorithms.
Ultimately, the task force established a list of essential considerations for those seeking to use AI in healthcare:
- human agency and oversight, including the capacity for humans to make choices and to impose those choices on the world
- technical robustness
- safety and accountability
- security and data governance
- predetermined plans for change control
- awareness of the potential for bias inherent in data, along with a commitment to incorporating diversity, avoiding discrimination, and ensuring fairness
- participation of all stakeholders
- transparency, including the ability to explain AI use and algorithms
- societal well-being
- privacy for individuals and their data
They emphasized that teams that address these essential considerations will enhance the trustworthiness of their tools and applications.