Digital Health

Upcoming events

Training | July 3, 2024

AI in healthcare

In July, Fraunhofer IKS is organizing a one-day intensive training course with scientific input, in-depth interactive practical examples, and best practices from applied research.

The safety and trustworthiness of intelligent health applications are at the heart of our research on Digital Health at Fraunhofer IKS. AI applications must be reliable, transparent, and robust for patients, doctors, healthcare providers and other important stakeholders. Only then can the much-discussed potential of digitalization and Artificial Intelligence (AI) in the safety-critical healthcare sector be exploited to provide the best possible patient care.

Our guiding theme, Trustworthy Digital Health, brings together various key areas of research, interdisciplinary expertise, and solutions that we are constantly developing in line with the state of research, technology, and the healthcare industry. Together with clinics, medtech & health IT companies, networks, universities and many other partners from research and industry, we research and develop intelligent healthcare solutions, ranging from basic AI models (e.g., decision trees, clustering or neural networks for tasks such as medical imaging, analysis of EHR or time-series data) to cutting-edge generative AI and hybrid quantum-enhanced AI approaches. We regard complex challenges such as little, multimodal, or distributed data and the interpretability of AI as exciting scientific questions.

Our core competencies

Optimization of healthcare processes

Medical Decision Support with AI

Our formats:

  • Ideation workshops
  • Rapid Prototyping
  • R&D
  • Training courses

Data-efficient Medical Imaging

Verification and Validation of AI

Optimization of healthcare processes

The acute shortage of health specialists, demographic change, cost pressure and the simultaneous desire for continuously improved healthcare require health processes to be optimized from prevention to aftercare, and specialist staff to be relieved using technology. Digitalization in hospitals and other healthcare facilities is being driven forward intensively, particularly through regulatory initiatives such as the German Hospital Future Act (KHZG, 2020), the Digitalgesetz (DigiG, 2023) and the Health Data Usage Act (GDNG, expected in 2024).

The digital transformation of German healthcare enables and promotes the increasing use of AI along the patient journey, in particular by relieving specialists of time-consuming, manual tasks. Possible fields of application for AI include personnel planning, surgery management, material procurement and storage, and patient file management. Together with healthcare professionals and industry, we develop cognitive systems and trustworthy AI and test their potential, scalability and often challenging transferability to other wards and facilities in our research projects.

Our research approaches

[Image: doctor and patient. © iStock.com/AmnajKhetsamtip]
  • AI-based personnel, resource and surgery planning (see the sketch after this list)
  • Intelligent process management in clinics
  • Transferability of AI models (domain generalization) across areas of application and healthcare facilities
  • Generative AI for facilitation of administrative tasks (e.g., documentation, administration, patient information)
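
To illustrate the first item, a minimal Python sketch of how staffing demand per shift could be predicted from historical ward data is shown below. It is a simplified illustration on synthetic data; the feature names, the care-level proxy and the choice of scikit-learn's RandomForestRegressor are our own assumptions, not the setup of the award-winning workforce prediction project.

```python
# Minimal sketch: predicting nursing staff demand per shift from historical ward
# data. Feature names, the synthetic target, and the model choice are illustrative
# assumptions only.
import numpy as np
import pandas as pd
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 500  # synthetic history of 500 shifts

history = pd.DataFrame({
    "patients_on_ward": rng.integers(10, 40, n),   # census at shift start
    "avg_care_level":   rng.uniform(1.0, 4.0, n),  # stand-in for PPR-like care categories
    "weekday":          rng.integers(0, 7, n),
    "is_night_shift":   rng.integers(0, 2, n),
})
# Synthetic target: nurses needed per shift (a stand-in for real roster data)
history["nurses_needed"] = (
    history["patients_on_ward"] * history["avg_care_level"] / 8
    + history["is_night_shift"]
    + rng.normal(0, 0.5, n)
).round()

X = history.drop(columns="nurses_needed")
y = history["nurses_needed"]
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = RandomForestRegressor(n_estimators=200, random_state=0)
model.fit(X_train, y_train)
print("Mean absolute error (nurses per shift):", np.abs(model.predict(X_test) - y_test).mean())
```

In practice, such a model would be trained on real rosters and census data and evaluated against the planning quality achieved by experienced staff before being used to support scheduling decisions.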

References

AI Innovation Days in Berlin by Fraunhofer IKS and Flying Health (2022)

Award-winning project: AI-based workforce prediction in hospitals with integration of PPR 2.0 (2023)

Article in Krankenhaus IT-Journal (German): Personalplanung mit KI – Status Quo, Chancen und Grenzen (2024)

Training "AI in Healthcare" for the EU Commission (2024)

Medical Decision Support with AI

The digitalization of the healthcare sector and the introduction of the electronic health record (EHR) provide access to an increasing amount of data and open up possibilities for AI-based medical decision support that did not exist before. In the safety-critical sector of healthcare, where lives are at stake, ensuring the trustworthiness and robustness of AI is crucial.

Therefore, we develop explainable and reliable AI models for clinical decision support systems, intended to assist in the analysis of patient data. These systems aim to support healthcare professionals with improved decision-making, disease prediction, and individualized treatment plans for patients. Through previous research projects, we have gained extensive experience in handling the challenges of real-world healthcare problems, where data is frequently incomplete, unstructured, limited in size, or distributed among institutions (Zamanian et al. 2023). In close collaboration with medical experts and industry, we use our scientific expertise to technologically advance the healthcare sector and improve patient care.

Our research approaches

[Image: medicine in the laboratory. © iStock.com/gorodenkoff]
  • Advanced analysis of EHR data using AI methods (e.g., data-driven analysis of patient data in preterm infants)
  • AI-based prediction of health complications (e.g., prediction of target lesion failure in coronary artery stent patients (Pachl et al. 2021))
  • Time-series analysis for monitoring disease progression in sequential patient data (e.g., vital signs, lab values)
  • Optimization of treatment strategies through treatment effect analysis and causal inference (article: "Medizintechnik...")
  • Federated learning enabling collaborative model training with data distributed across multiple medical institutions without data centralization (see the sketch after this list)
  • Explainable AI (xAI) to provide transparency of AI-driven decision-making (Schröder et al. 2023a, Schröder et al. 2023b)
  • Generative AI to assist in complex information structures (e.g., analysis of causal relationships, diagnoses, EHR data)
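
To make the federated learning item above more tangible, here is a minimal sketch of federated averaging: each clinic fits a model on its own data, and only the model weights, never the patient data, are shared and averaged. The linear model, the synthetic data, and the size-weighted server step are simplifying assumptions for illustration, not the setup used in our projects.

```python
# Minimal sketch of federated averaging: each clinic trains a local linear model on
# its own data; only model weights (never patient records) leave the institution.
import numpy as np

rng = np.random.default_rng(42)
true_w = np.array([0.5, -1.2, 2.0])  # synthetic "ground truth" relationship

def local_dataset(n):
    X = rng.normal(size=(n, 3))             # e.g., three vital-sign features
    y = X @ true_w + rng.normal(0, 0.1, n)  # synthetic risk score
    return X, y

def local_fit(X, y):
    # Ordinary least squares as a stand-in for each clinic's local training step
    w, *_ = np.linalg.lstsq(X, y, rcond=None)
    return w

clinics = [local_dataset(n) for n in (80, 120, 60)]  # three hypothetical sites
local_weights = [local_fit(X, y) for X, y in clinics]
sizes = np.array([len(y) for _, y in clinics])

# Server step: average the local weights, proportional to local dataset size
global_w = np.average(local_weights, axis=0, weights=sizes)
print("Global model weights:", np.round(global_w, 3))
```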

References

Blog article: "Medical Technology – AI helps understand medical problems" (2023)

Blog article: "Improving intensive care with artificial intelligence" (2022)

Article in IHK-Magazine (German): Innovationskraft durch KI – Stents unter der Lupe (pages 8-9, 2023)

Project: Fraunhofer vs. Corona: Noncontact health monitoring of infectious patient groups (2021)

Zamanian, A., Ahmidi, N., & Drton, M. (2023). Assessable and interpretable sensitivity analysis in the pattern graph framework for nonignorable missingness mechanisms. Stat Med, 42(29), 5419-5450. doi: 10.1002/sim.9920.

von Kleist, H., Zamanian, A., Shpitser, I., & Ahmidi, N. (2023a). Evaluation of Active Feature Acquisition Methods for Static Feature Settings. arXiv preprint arXiv:2312.03619.  

von Kleist, H., Zamanian, A., Shpitser, I., & Ahmidi, N. (2023b). Evaluation of Active Feature Acquisition Methods for Time-varying Feature Settings. arXiv preprint arXiv:2312.01530.  

Schröder, M., Zamanian, A., & Ahmidi, N. (2023a). Post-hoc Saliency Methods Fail to Capture Latent Feature Importance in Time Series Data. In ICLR 2023 Workshop on Trustworthy Machine Learning for Healthcare. 

Schröder, M., Zamanian, A., & Ahmidi, N. (2023b). What about the Latent Space? The Need for Latent Feature Saliency Detection in Deep Time Series Classification. Machine Learning and Knowledge Extraction, 5(2), 539-559.

Bauer, A., Pachl, E., Hellmuth, J. C., Kneidinger, N., Heydarian, M., Frankenberger, M., ... & Hilgendorff, A. (2023). Proteomics reveals antiviral host response and NETosis during acute COVID-19 in high-risk patients. Biochimica et Biophysica Acta (BBA)-Molecular Basis of Disease, 1869(2), 166592. 

Pachl, E., Zamanian, A., Stieler, M., Bahr, C., & Ahmidi, N. (2021). Early-, Late-, and Very Late-Term Prediction of Target Lesion Failure in Coronary Artery Stent Patients: An International Multi-Site Study. Appl. Sci., 11, 6986.

Data-efficient Medical Imaging

In medical imaging and diagnostics, AI is already supporting doctors in practical applications, e.g., in the evaluation of X-ray, MRI or CT images. Despite the ongoing progress of the digital transformation of the German healthcare sector, the development of trustworthy AI, especially in medical imaging, remains challenging, as often only small sample sizes (little data) are available, for example in the case of rare diseases. Generally, large or very large sample sizes (big data) are required for training and testing AI models in order to achieve high accuracy and reliability of AI decisions.

Our research shows that, with the right AI methods, reliable and explainable AI models for image classification can be developed with just a small amount of data. How little data is enough depends highly on the use case and can range from a handful to thousands of images. Besides "classic" AI methods, we are also exploring future technologies like quantum computing, e.g., by developing hybrid quantum-enhanced AI models to classify breast cancer images.

Our research approaches

[Image: physicians examining medical images. © iStock.com/vm]
  • Explainable AI (xAI) to provide transparency of AI-based image classification and diagnostics (e.g., Prototype Learning, Concept Learning; see the sketch after this list)
  • Human-interpretable AI concepts
  • AI models based on domain expert know-how
  • Trustworthy AI (despite small sample sizes)
  • Sub-task learning
  • Quantum-enhanced AI
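
As a minimal illustration of prototype learning with little data, the sketch below represents each class by the mean of a few embedded examples and classifies new images by their distance to these prototypes; the resulting prototypes are directly inspectable, which supports explainability. The random-projection "feature extractor" and the synthetic images are placeholders, not the models developed in our projects.

```python
# Minimal sketch of a prototype-based (nearest class mean) classifier: each class is
# represented by the mean embedding of a handful of examples; new images are assigned
# to the nearest prototype. Feature extraction is mocked with a fixed random projection.
import numpy as np

rng = np.random.default_rng(1)
PROJECTION = rng.normal(size=(16 * 16, 32))  # placeholder for a pretrained CNN backbone

def embed(images):
    flat = images.reshape(len(images), -1)
    return flat @ PROJECTION

def build_prototypes(support_images, support_labels):
    feats = embed(support_images)
    return {c: feats[support_labels == c].mean(axis=0) for c in np.unique(support_labels)}

def classify(query_images, prototypes):
    feats = embed(query_images)
    classes = list(prototypes)
    dists = np.stack([np.linalg.norm(feats - prototypes[c], axis=1) for c in classes])
    return np.array(classes)[dists.argmin(axis=0)]

# Few-shot setting: only 5 labelled examples per class (e.g., "benign" vs. "lesion")
support = rng.normal(size=(10, 16, 16))  # ten tiny synthetic "images"
labels = np.array([0] * 5 + [1] * 5)
prototypes = build_prototypes(support, labels)
print(classify(rng.normal(size=(3, 16, 16)), prototypes))
```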

References

Project Fast: How less data leads to early and reliable automation through AI (2023)

Blog article: How quantum computing could be helpful for medical diagnostics (2023)

Project BayQS: Quantum computing and artificial intelligence for reliable medical diagnoses (2023)

Hagiwara, Y., Espinoza, D., Schleiß, P., Mata, N., Doan, N. A. V. (2023). Toward Safe Human Machine Interface and Computer-Aided Diagnostic Systems. 2023 IEEE International Conference on Metrology for eXtended Reality, Artificial Intelligence and Neural Engineering (MetroXRAINE), Milano, Italy, 2023, pp. 236-241, doi: 10.1109/MetroXRAINE58569.2023.10405816.

Sinhamahapatra, P., Heidemann, L., Monnet, M., & Roscher, K. (2023). Towards Human-Interpretable Prototypes for Visual Assessment of Image Classification Models. Proceedings of the 18th International Joint Conference on Computer Vision, Imaging and Computer Graphics Theory and Applications. Volume 5: VISAPP, 878-887

 

Verification and Validation of AI

Ensuring the safety and trustworthiness of AI systems is a crucial step before deployment in healthcare facilities. This concern is reflected in the recent trend of publishing standards and regulations such as the EU regulation on artificial intelligence (AI Act, proposed in 2021). The challenge for AI developers and providers is thus to operationalize these high-level standards at the technical level, to argue how an AI system complies with the stated norms, and to compile valid and transparent assurance cases that support the tedious and costly process of certifying the AI system.

With our AI verification framework, we have developed a solution to verify the trustworthiness of AI models based on scientific findings and experience from applied projects. Covering various types of use cases, AI models and data sources, it generates thorough documentation with arguments and assurances, specific to the AI project, to substantiate compliance with standards and regulations. What is then left for safety and AI experts is to revise and fine-tune the documentation with minimal time and effort. Relying on the "safeAI" expertise at Fraunhofer IKS, which is already proven in the automotive and railway industries, the framework provides roadmaps for the safe development and effective verification of AI.
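
The sketch below conveys the underlying idea in simplified form: high-level requirements are operationalized as concrete, measurable checks, and the results are compiled into a machine-readable report that can feed an assurance case. The requirement names, thresholds, placeholder metrics and report structure are illustrative assumptions and do not reflect the actual framework.

```python
# Illustrative sketch only: turning high-level trustworthiness requirements into
# concrete, documented checks on a trained model. Requirement names, thresholds and
# metrics are placeholders, not the Fraunhofer IKS AI verification framework.
from dataclasses import dataclass
from typing import Callable
import json

@dataclass
class Requirement:
    req_id: str          # e.g., a clause traced back to a standard or the AI Act
    description: str
    check: Callable[[], float]
    threshold: float

def accuracy_on_test_set() -> float:
    return 0.93          # placeholder: would evaluate the model on held-out data

def accuracy_under_noise() -> float:
    return 0.88          # placeholder: would evaluate on perturbed or noisy inputs

requirements = [
    Requirement("R1", "Minimum accuracy on representative test data", accuracy_on_test_set, 0.90),
    Requirement("R2", "Robustness: accuracy under input noise", accuracy_under_noise, 0.85),
]

def evaluate(r: Requirement) -> dict:
    value = r.check()
    return {
        "id": r.req_id,
        "description": r.description,
        "measured": value,
        "threshold": r.threshold,
        "passed": value >= r.threshold,
    }

# Compile an assurance-style report: every requirement, its evidence and a verdict
report = [evaluate(r) for r in requirements]
print(json.dumps(report, indent=2))
```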

Our research approaches

[Image: medtech robot from above. © iStock.com/gremlin2]
  • Validation & verification of AI models, considering regulations such as the EU AI Act
  • Standardised approaches and benchmarks for AI verification
  • Research on safe and trustworthy AI quality attributes

More information & news on AI in medicine

 

Training | July 3, 2024

AI in healthcare

From scientific theory to trustworthy practical applications of artificial intelligence - in July, Fraunhofer IKS is organizing a one-day intensive training course with scientific input, in-depth interactive practical examples, and best practices from applied research.

Berlin

DMEA 2024

April 9 - 11, 2024

DMEA is Europe's leading event for digital health. Meet our scientists PD Dr. habil. Jeanette Miriam Lorenz, Maureen Monnet and Johanna Schmidhuber from the Trustworthy Digital Health research department. Using four practical examples, they will provide insights into our applied research on trustworthy smart health applications.

 

Quantum Computing

Advancing Medical Diagnostics with Quantum-Powered AI

AI has paved the way for better and faster diagnostics in the realm of medical imaging. However, challenges specific to the field of medicine, notably the scarcity of data, can impede its performance. This article looks into the enhancement of AI algorithms using quantum technology for the improvement of medical diagnostics, showcasing how quantum computing can tackle existing challenges in AI and offer unique opportunities for healthcare.

 

Verification of medical AI systems | April 2, 2024

What do regulations say about your medical diagnostics algorithm?

Regulations and standards for trustworthy AI are in place, and high-risk medical AI systems will be up for audits soon. But how exactly can we translate those high-level rules into technical measures for validating actual code and algorithms? Fraunhofer IKS’s AI verification framework provides a solution.

 

Blog articles on AI in medicine

Would you like to find out more about Fraunhofer IKS research on the topic of AI in medicine? Then take a look at our Safe Intelligence Blog!