Federated learning and explainable AI: enhancing trust in clinical practice
In this session, we bring together speakers from European healthcare industries and institutions to discuss the challenges of extracting insights from data stored across different sites and/or hospitals by leveraging emerging AI techniques such as Federated Learning. Federated Learning (FL) trains AI models across multiple decentralized devices or sites holding local data sets, without exchanging those data sets. FL thus makes it possible to overcome limitations and constraints on data availability and privacy on the way to realizing AI in the healthcare setting. FL can also be leveraged for in-product learning: improving an already deployed model by relying on clinical data without having direct access to it, and deploying AI in real-world clinical scenarios. Nevertheless, challenges related to differences in data populations must be considered, and can be mitigated by including explainability and responsible AI tools. To realize in-product learning, data quality and fairness should also be explored, so that each hospital and/or research center can understand the rationale behind the contributions of other sites. Can FL strengthen the robustness of, and trust in, AI models deployed and generated in clinical settings?
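To make the core FL idea concrete, the following is a minimal sketch of federated averaging in the spirit of the FedAvg algorithm, under simplifying assumptions: a one-parameter linear model, two hypothetical hospital datasets, and plain gradient descent. All names and data here are illustrative, not drawn from the session itself; the point is only that sites exchange model weights, never patient records.

```python
# Minimal federated-averaging sketch (illustrative, not a production FL stack).
# Each site trains locally on its private data; only the resulting model
# weights are shared with the coordinating server.

def local_update(w, data, lr=0.1):
    """One pass of gradient descent on a site's private data.
    The 'model' is a single weight fitting y ~ w * x (squared-error loss)."""
    for x, y in data:
        grad = 2 * (w * x - y) * x  # d/dw of (w*x - y)^2
        w -= lr * grad
    return w

def federated_average(site_weights, site_sizes):
    """Server step: average the local models, weighted by dataset size."""
    total = sum(site_sizes)
    return sum(w * n for w, n in zip(site_weights, site_sizes)) / total

# Two hypothetical hospitals with private (x, y) samples drawn from y = 2x.
site_a = [(1.0, 2.0), (2.0, 4.0)]
site_b = [(3.0, 6.0)]

w_global = 0.0
for _ in range(50):  # communication rounds
    w_a = local_update(w_global, site_a)
    w_b = local_update(w_global, site_b)
    w_global = federated_average([w_a, w_b], [len(site_a), len(site_b)])

print(round(w_global, 2))  # converges toward 2.0
```

Note that the raw `(x, y)` pairs never leave their site; only `w_a` and `w_b` are communicated, which is the privacy property the session description emphasizes.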
Key questions: How can we ensure access to data for research purposes, epidemiology applications and drug development within the European Data and AI ecosystem? Which legal and ethical aspects of privacy preservation must be addressed when deploying Federated Learning in healthcare? How can Federated Learning leverage explainable and responsible AI tools to ensure data quality and fairness? How can responsible AI address differences in data populations in distributed/federated learning scenarios?
Presentation of the session: Thomas Penzel
