CARLISLE, Pa. — Artificial intelligence (AI) is revolutionizing health care, including the field of colonoscopy. Recent trials have shown that AI can boost the detection rate of precancerous polyps by about 50%, and that removing such polyps can help prevent colorectal cancer and reduce colonoscopy-related costs.
But what ethical issues arise when physicians employ such high-tech tools? And who becomes liable if they fail?
Those are two of the questions tackled by a new European Union-funded research project that aims to develop approaches for the clinical validation of AI. The goal of the project, named OperA and starting this month, is to improve the diagnosis and treatment of colorectal cancer and polyps.
Yuichi Mori, associate professor at the University of Oslo, leads the five-year project, which involves 18 partners from 13 countries. As co-principal investigator, Penn State Dickinson Law Assistant Professor of Law Sara Gerke spearheads the study addressing ethical and legal concerns raised by AI in colonoscopy.
The 6-million-euro project (approximately $5.97 million) recently received 4.7 million euros (approximately $4.68 million) in funding from Horizon Europe, one of Europe’s most prestigious funding programs. A U.K. funding body will cover the remainder.
An expert on health law, bioethics and liability, Gerke will serve on the ethics monitoring committee, which will also include physicians from around the world. She will also explore ethical and legal questions that emerge with AI in cancer screening.
“Challenges related to unclear long-term clinical benefits, cost-effectiveness, and ethical and legal concerns have hampered the integration of AI technologies into clinical medicine,” said Gerke. By assessing the value of AI-assisted colonoscopy in colorectal cancer prevention, she said, OperA could benefit patients, society, and the economy.
“There has not been much research on the ethical and legal issues raised by AI in colonoscopy,” said Gerke. “Our study in OperA will explore the development of those AI tools from ethical and legal perspectives. We will particularly focus on assessing bias associated with AI in cancer screening, especially with respect to gender and ethnicity.”
For example, Gerke said, some communities lack access to health care systems, so they may be underrepresented in existing data, which could affect the accuracy of these AI tools when used in the care of such patients. Other issues Gerke and her co-leaders will explore include the implications of a misdiagnosis and what that could mean for legal liability.
A lack of field research has proven a barrier to widely implementing AI tools. OperA plans to conduct a pan-European, population-based, randomized trial with 222,000 participants. More widely accessible AI tools have the potential to reduce annual colorectal cancer deaths by 6,000 and save 720 million euros per year in Europe alone, the OperA team projects.
“AI is not perfect, but the hope is that it can help one day optimize colorectal cancer prevention through personalized treatment,” said Gerke. “AI screening has the potential to be better than human screening, but it can have pitfalls. We need to learn to use the advantages AI can unlock in a meaningful way to reach its potential.”
The OperA team recently held its kickoff meeting in Oslo, and Gerke and her colleagues — including Omer Ahmad of University College London in the U.K. and Tyler M. Berzin of Beth Israel Deaconess Medical Center and Harvard Medical School — will meet regularly to advance the field of AI in colorectal cancer screening.
“I am so much looking forward to the challenge and opportunity of this project. It is such a great responsibility to do this work,” said Gerke.
Funded by the European Union. Views and opinions expressed are however those of the author(s) only and do not necessarily reflect those of the European Union or the Health and Digital Executive Agency. Neither the European Union nor the granting authority can be held responsible for them.