AI has the potential to support ethical principles in hematology – but there is a dark side
During a session at the 2022 European Hematology Association (EHA) Congress, speakers discussed how artificial intelligence (AI) can help advance the principles of ethical medicine – but also how new technologies are being used to undermine the integrity of scientific research.
The session was part of the YoungEHA track at EHA2022, which is aimed at scientists and clinicians earlier in their careers and seeks to go beyond the scientific data, providing a forum for discussion of the changing field of hematology.
The first speaker, Dr. Elisabeth Bik, explained how her career has evolved from the world of microbiology to consulting on suspected scientific misconduct – in other words, she is now a detective on the hunt for possible errors or fraud in research.
Scientific papers are the building blocks of research, Bik noted, and “if these papers contain errors, it would be like building a wall where one of the bricks is not very stable… The wall of science will come down.”
Scientific fraud is defined as plagiarism, fabrication, or falsification, but does not include honest mistakes, Bik explained. Motives for misconduct can include publication pressure, the feeling of having to meet high expectations after a taste of research success, or even a “power game” in which a professor threatens a postdoc with visa suspension if an experiment does not succeed.
Inappropriate image duplication in research, which is Bik’s area of expertise, can fall into one of three categories: simple duplication, repositioning, and alteration. A simple duplication can mean an honest mistake rather than willful misconduct, but it is still inappropriate and should be corrected, she noted.
The problem, however, is that these cases are difficult to detect. On a slide containing 8 images, Bik asked the audience whether they could identify a duplication. This reporter was proud to have spotted 1 set of identical images, until Bik revealed that there were actually 2 instances of duplication.
In addition to simple duplication, Bik also showed examples of hematology-related images, such as western blots and bone marrow flow cytometry, that had been repositioned, flipped, and altered to obscure their identities, suggesting more deliberate deception on the part of the authors.
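The simplest of these categories can, at least in principle, be screened for computationally. As a purely illustrative sketch (not Bik’s actual tooling, which is far more sophisticated), image panels can be compared as pixel arrays, checking for both exact and mirrored matches:

```python
# Hypothetical sketch: screen a figure's image panels for exact or
# horizontally flipped duplicates. Panels are toy 2-D lists of grayscale
# pixel values; real forensic tools work on full-resolution images.

def flip_horizontal(panel):
    """Mirror a panel left-to-right."""
    return [list(reversed(row)) for row in panel]

def find_duplicates(panels):
    """Return (i, j, kind) tuples where panel j duplicates panel i."""
    matches = []
    for i in range(len(panels)):
        for j in range(i + 1, len(panels)):
            if panels[j] == panels[i]:
                matches.append((i, j, "exact"))
            elif panels[j] == flip_horizontal(panels[i]):
                matches.append((i, j, "flipped"))
    return matches

# Toy example: panel 2 copies panel 0; panel 3 is a mirror of panel 1.
panels = [
    [[1, 2], [3, 4]],
    [[5, 6], [7, 8]],
    [[1, 2], [3, 4]],
    [[6, 5], [8, 7]],
]
print(find_duplicates(panels))  # -> [(0, 2, 'exact'), (1, 3, 'flipped')]
```

Note that this exact-match approach already fails once an image is cropped, rotated by a few degrees, or altered – which is precisely why the repositioned and altered duplications Bik describes are so much harder to catch.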
To make matters worse, journals often took no action to retract or correct an article once Bik brought her concerns to their attention. She advised the audience to be critical of research figures, especially when acting as peer reviewers: if data “looks too nice,” one has a responsibility to raise the issue privately with the journal’s editors.
“If you see something, say something,” Bik warned.
One insidious use of AI to perpetuate scientific fraud is the appearance of artificially generated western blot images in articles produced by “paper mills.” Bik has identified over 600 papers using such fake images, produced via generative adversarial networks. Unfortunately, because each generated image is unique, they cannot be caught with duplicate-detection software.
The cost to society of scientific misconduct goes beyond the potential for readers to unknowingly base their research on papers containing errors or fraud, Bik explained. The presence of fraud undermines the integrity of science and can be abused by those with a political agenda to claim that all science is flawed.
“We have to believe in science, and we have to do better to make science better,” concluded Bik.
The next speaker, Amin Turki, MD, PhD, from Universitätsklinikum Essen in Germany, explained how AI can pose both an opportunity and a challenge for ethics in medicine, especially hematology.
The use of AI in hematology has grown exponentially in recent years, Turki explained, with prediction tasks evolving from risk prediction to bone marrow diagnostics. The latter is time-consuming and challenging because of the complexity of the bone marrow, with its abundance of cells and structures, but AI here has the potential to transform clinical practice.
Machine learning has identified new phenotypes in predicting outcomes in chronic graft-versus-host disease (GVHD), and Turki and colleagues are working on research to predict mortality after allogeneic hematopoietic cell transplantation as well as in patients with acute GVHD.
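Outcome-prediction work of this kind typically trains a classifier that maps patient features to a risk score. As a hedged illustration only – with synthetic, hypothetical features (age, comorbidity score) and hand-picked weights, not Turki’s actual models or data – a minimal logistic risk score looks like this:

```python
import math

# Illustrative sketch only: a logistic risk score over synthetic,
# hypothetical features. Real transplant/GVHD models are fitted to rich
# clinical datasets with validated methodology.

def risk(features, weights, bias):
    """Logistic risk score in (0, 1) from a linear combination of features."""
    z = bias + sum(w * x for w, x in zip(weights, features))
    return 1.0 / (1.0 + math.exp(-z))

# Hypothetical weights chosen for illustration, not fitted to real data.
weights = [0.04, 0.8]   # per year of age, per comorbidity point
bias = -4.0

low_risk = risk([40, 0], weights, bias)    # younger, no comorbidities
high_risk = risk([70, 3], weights, bias)   # older, several comorbidities
print(round(low_risk, 3), round(high_risk, 3))
```

In practice, the fitted weights (and the choice of features) are exactly where the ethical questions Turki raises come in: a model trained on unrepresentative data encodes that bias in every prediction it makes.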
Still, the use of AI in medicine is not without ethical issues, and the number of PubMed-indexed articles on AI and ethics has grown exponentially in recent years. Turki examined AI through the lens of the ethical principles of medicine, grounded in the United Nations’ Universal Declaration of Human Rights of 1948:
- Autonomy: AI can support patient autonomy through the use of digital agents such as wearables.
- Beneficence: AI can improve health by overcoming the limits of human cognition, for example through improved risk prediction or individualized treatment.
- Non-maleficence: Toxicity can be reduced by AI-defined dosing and therapy algorithms.
- Equity: Researchers hope that AI can be used to reduce health disparities, but done improperly, it can amplify or perpetuate those disparities (e.g., if AI interventions are made accessible only in wealthier countries).
Stakeholders in ethical AI in medicine include developers, practitioners, adopters, and regulators, each with unique responsibilities, Turki said. He proposed several ways to overcome ethical challenges, including embedding ethics in the AI development lifecycle, prioritizing human-centric AI, and ensuring fair representation.
The ongoing digital transformation promises to transform hematology care, Turki concluded, but it also demands that we “never forget that the human condition is the basis of our understanding.”