When we read the intriguing phrase ‘explainable artificial intelligence’ for the first time, several thoughts came to mind. It is well known that artificial intelligence (AI) models and their results are difficult to interpret, and even more difficult to explain to others; indeed, interpretability and transparency are frequently used as synonyms for explainability. Any effort to make AI results more explainable is therefore welcome. But what is explainable AI about? Is it a set of guidelines to help anyone navigate AI models and results more easily, or is it something more profound, a new concept? Let’s explore the use of explainable artificial intelligence for pharmacovigilance and patient safety.
What is explainable artificial intelligence?
According to Google Cloud, explainable artificial intelligence (XAI) is a set of tools and frameworks that help you understand and interpret the predictions made by your machine learning models.
XAI is widely acknowledged as a crucial feature for the deployment of AI models [1]. Explainability can facilitate the understanding of various aspects of a model, leading to insights that can be used by different stakeholders, such as data scientists, business owners, model risk analysts, regulators, and consumers [2].
In traditional machine learning, complex models are built from large amounts of data using mathematical algorithms, and their internal logic is often opaque, making it difficult for humans to understand how a model arrived at its conclusions. XAI aims to make these models more transparent and interpretable, allowing humans to follow the decision-making process and to have confidence in the results.
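To make this concrete, here is a minimal sketch (in Python with scikit-learn, on synthetic data) of one common post-hoc XAI technique: fitting an interpretable ‘global surrogate’ model that mimics a black-box model, so that its rules describe how the black box behaves. This is only an illustration of the idea, not a recipe.

```python
# A minimal sketch of post-hoc explainability: approximate a black-box
# model with an interpretable "global surrogate" decision tree.
# Assumes Python with scikit-learn; the data here are synthetic.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.tree import DecisionTreeClassifier, export_text

X, y = make_classification(n_samples=1000, n_features=5, random_state=0)

# The "black box": accurate, but hard for a human to inspect directly.
black_box = RandomForestClassifier(n_estimators=200, random_state=0).fit(X, y)

# The surrogate is trained to mimic the black box's predictions, not the
# raw labels, so its rules describe the model's decision-making process.
surrogate = DecisionTreeClassifier(max_depth=3, random_state=0)
surrogate.fit(X, black_box.predict(X))

# A shallow tree yields human-readable if/then rules.
print(export_text(surrogate, feature_names=[f"feature_{i}" for i in range(5)]))
```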
The importance of XAI is growing as AI is being integrated into more and more aspects of our lives, including healthcare, finance, and criminal justice. XAI can help ensure that these systems are fair, unbiased, and transparent, and can help build trust between humans and AI.
Explainable Artificial Intelligence in Healthcare and Medicine
Explainability constitutes a major medical AI challenge. Omitting explainability in clinical decision support systems poses a threat to core ethical values in medicine and may have detrimental consequences for individuals and public health.
There are several perspectives on the explainability of artificial intelligence in healthcare: the technological, legal, medical, patient, and ethical perspectives [3].
Explainable Artificial Intelligence has significant potential to improve healthcare and medicine by helping clinicians and researchers better understand how AI systems make predictions or recommendations, which is crucial for ensuring their safety and effectiveness.
One area where XAI can be particularly useful is in medical diagnosis. AI systems can be trained on large amounts of medical data to make accurate predictions, but these predictions need to be explainable so that clinicians can understand why the system is making a particular diagnosis. This can help clinicians make more informed decisions and reduce the risk of errors or misdiagnoses.
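As a toy illustration of such a local, per-patient explanation (again Python with scikit-learn; the clinical feature names and data are invented placeholders), a linear diagnostic model can be decomposed so that each feature’s contribution to one prediction becomes visible:

```python
# A minimal sketch of a local (per-patient) explanation for a linear
# diagnostic model: each feature's contribution to one prediction is its
# coefficient times its standardized value. Assumes scikit-learn;
# feature names and data are hypothetical, not real clinical variables.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.preprocessing import StandardScaler

features = ["age", "blood_pressure", "glucose", "heart_rate"]
rng = np.random.default_rng(7)
X = rng.normal(size=(500, 4))
# Synthetic outcome driven mainly by glucose and blood pressure.
y = (X[:, 2] + 0.5 * X[:, 1] + rng.normal(scale=0.5, size=500) > 0).astype(int)

scaler = StandardScaler().fit(X)
model = LogisticRegression().fit(scaler.transform(X), y)

# Explain a single patient's predicted risk: coefficient * feature value
# gives that feature's additive contribution to the log-odds.
patient = scaler.transform(X[:1])[0]
for name, contrib in zip(features, model.coef_[0] * patient):
    print(f"{name:>15}: {contrib:+.2f} to log-odds")
```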
In addition, XAI can be used to help identify biases in medical data and prevent them from influencing the predictions of AI systems. For example, if an AI system is trained on medical data that is biased against certain patient groups, it may make inaccurate or unfair predictions that could negatively impact those patients.
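A minimal sketch of such a bias audit (Python with scikit-learn; the subgroup label and data are synthetic placeholders) simply compares error rates across patient groups:

```python
# A minimal sketch of a bias audit: compare a model's error rates across
# patient subgroups. Assumes Python with scikit-learn and NumPy; the
# "group" attribute and the data are synthetic placeholders.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import recall_score
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=2000, n_features=8, random_state=1)
group = np.random.default_rng(1).integers(0, 2, size=len(y))  # hypothetical subgroup

X_tr, X_te, y_tr, y_te, g_tr, g_te = train_test_split(
    X, y, group, test_size=0.3, random_state=1
)
model = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
pred = model.predict(X_te)

# Equal-opportunity check: does the model miss true cases more often in
# one subgroup than in another?
for g in (0, 1):
    mask = g_te == g
    print(f"group {g}: recall = {recall_score(y_te[mask], pred[mask]):.2f}")
```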
XAI can also be used to improve the transparency of clinical trials by helping researchers better understand the factors that contribute to treatment outcomes. This can help identify new treatments or interventions that are more effective, as well as identify potential side effects or risks associated with these treatments.
Overall, XAI has the potential to significantly improve the accuracy and safety of medical diagnoses and treatments, as well as increase the transparency and fairness of healthcare systems.
Pharmacovigilance and Explainable Artificial Intelligence
XAI can be used in pharmacovigilance by analyzing large amounts of medical data to identify potential adverse drug reactions (ADRs) or other adverse events. This can be done using machine learning algorithms trained on large datasets of patient data, including electronic health records, social media posts, and other sources [4]. XAI can help make these algorithms more transparent and interpretable, allowing researchers and clinicians to understand how an algorithm arrives at its predictions and to identify potential biases or errors.
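As a hypothetical illustration of this idea (Python with scikit-learn; the patient features and the outcome are simulated stand-ins, not real pharmacovigilance data), permutation importance can rank which features drive a model’s adverse-event predictions:

```python
# A minimal sketch of explainable ADR-signal modelling: rank which patient
# features drive a model's adverse-event predictions via permutation
# importance. Assumes Python with scikit-learn; features and data are
# hypothetical stand-ins for real pharmacovigilance variables.
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(42)
features = ["age", "dose_mg", "renal_function", "n_comedications", "bmi"]
X = rng.normal(size=(1500, len(features)))
# Synthetic outcome: higher dose and lower renal function raise ADR risk.
y = (0.8 * X[:, 1] - 0.6 * X[:, 2] + rng.normal(scale=0.5, size=1500) > 0).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=42)
model = GradientBoostingClassifier(random_state=42).fit(X_tr, y_tr)

# Permutation importance: how much does shuffling each feature degrade
# held-out performance? Large drops flag influential features.
result = permutation_importance(model, X_te, y_te, n_repeats=20, random_state=42)
for name, imp in sorted(zip(features, result.importances_mean), key=lambda t: -t[1]):
    print(f"{name:>18}: {imp:.3f}")
```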
In addition, XAI can be used to identify patterns and trends in ADRs that may not be immediately apparent to humans. For example, XAI can be used to analyze patterns in patient data that may indicate a particular drug is causing unanticipated adverse events, or to identify patient groups that are particularly susceptible to certain ADRs. XAI can also be combined with other knowledge representations, such as knowledge graphs, to help identify biomolecular features that may distinguish or point to a causal relationship between an ADR and a particular compound [5].
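A toy sketch of the knowledge-graph idea (using Python’s networkx; every node and edge below is an invented placeholder, not a curated biomedical fact) traces candidate mechanistic paths from a compound to an ADR:

```python
# A toy sketch of knowledge-graph mining for ADR mechanisms: trace paths
# from a drug to an adverse reaction through biomolecular entities.
# Assumes Python with networkx; all nodes and edges here are invented
# placeholders, not curated biomedical facts.
import networkx as nx

kg = nx.DiGraph()
kg.add_edges_from([
    ("drug_X", "protein_CYP2D6"),   # hypothetical: drug inhibits enzyme
    ("drug_X", "protein_hERG"),     # hypothetical: drug binds channel
    ("protein_hERG", "pathway_cardiac_repolarization"),
    ("pathway_cardiac_repolarization", "ADR_QT_prolongation"),
    ("protein_CYP2D6", "pathway_drug_metabolism"),
])

# Each path is a candidate mechanistic explanation linking compound to ADR,
# which a reviewer can then assess for biological plausibility.
for path in nx.all_simple_paths(kg, "drug_X", "ADR_QT_prolongation"):
    print(" -> ".join(path))
```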
Explainable artificial intelligence may improve the accuracy and effectiveness of pharmacovigilance by helping researchers and clinicians better understand the data and algorithms used in the process. This can help identify potential safety issues more quickly and accurately, leading to improved patient outcomes and better drug safety.
Citations
1. Barredo Arrieta A, Díaz-Rodríguez N, Del Ser J, et al. Explainable Artificial Intelligence (XAI): Concepts, taxonomies, opportunities and challenges toward responsible AI. Information Fusion. 2020;58:82-115. doi:10.1016/j.inffus.2019.12.012
2. Belle V, Papantonis I. Principles and Practice of Explainable Machine Learning. Front Big Data. 2021;4:688969. doi:10.3389/fdata.2021.688969
3. Amann J, Blasimme A, Vayena E, Frey D, Madai VI. Explainability for artificial intelligence in healthcare: a multidisciplinary perspective. BMC Med Inform Decis Mak. 2020;20(1):310. doi:10.1186/s12911-020-01332-6
4. Ward I, Wang L, Lu J, Bennamoun M, Dwivedi G, Sanfilippo F. Explainable artificial intelligence for pharmacovigilance: What features are important when predicting adverse outcomes? Comput Methods Programs Biomed. 2021;212:106415. doi:10.1016/j.cmpb.2021.106415
5. Bresso E, Monnin P, Bousquet C, et al. Investigating ADR mechanisms with Explainable AI: a feasibility study with knowledge graph mining. BMC Med Inform Decis Mak. 2021;21(1):171. doi:10.1186/s12911-021-01518-6