In conversation with Mari Trompke (Center for Leadership & People Management)
In this episode of "ISM Perspectives on…" we talk with researcher Mari Trompke from the Center for Leadership & People Management (LMU), who is also a research associate at ISM, about trust as a central element of collaboration between humans and AI in medicine. Alongside her own research project, we discuss concrete fields of application in physicians' everyday work as well as critical phenomena such as automation bias and algorithm aversion. What happens when physicians under time pressure have to weigh their own intuition against an AI recommendation? And why does responsibility for wrong decisions usually end up resting with the human? This episode is about all of these questions and the search for the right balance in human-AI collaboration in healthcare.
Bayor, R., Huang, Y., De, A. A., Kong, H. J., Seong, D., Sharafoddini, A. & Liu, F. (2025). Patient-clinician Trust in Artificial Intelligence-enabled Clinical Decision Support Systems: A Systematic Review. Journal of Medical Internet Research, 27, e72191.
Choudhury, A. (2022). Toward an Ecologically Valid Conceptual Framework for the Use of Artificial Intelligence in Clinical Settings: Need for Systems Thinking, Accountability, Decision-making, Trust, and Patient Safety Considerations in Safeguarding the Technology and Clinicians. JMIR Human Factors, 9, e35421. https://doi.org/10.2196/35421
Darvish, M., Holst, J.-H. & Bick, M. (2024). Explainable AI in Healthcare: Factors Influencing Medical Practitioners' Trust Calibration in Collaborative Tasks. Proceedings of the 57th Hawaii International Conference on System Sciences (HICSS-57). https://doi.org/10.24251/HICSS.2024.402
Hamric, A. B., Borchers, C. T. & Epstein, E. G. (2012). Development and Testing of an Instrument to Measure Moral Distress. AJOB Primary Research, 3(2), 1–9. https://doi.org/10.1080/21507716.2011.652337
Kahneman, D. (2011). Thinking, Fast and Slow. Farrar, Straus and Giroux.
Kempt, H. & Nagel, S. K. (2022). Relative Explainability and Double Standards in Medical Decision-making: Should Medical AI Be Subjected to Higher Standards in Medical Decision-making than Doctors? Ethics and Information Technology, 24(20). https://doi.org/10.1007/s10676-022-09646-x
Lawton, T., Morgan, P., Porter, Z., Hickey, S., Cunningham, A., Hughes, N., Iacovides, I., Jia, Y., Sharma, V. & Habli, I. (2024). Clinicians Risk Becoming 'Liability Sinks' for Artificial Intelligence. Future Healthcare Journal, 11(1), 100007. https://doi.org/10.1016/j.fhj.2024.100007
Laxar, D., Eitenberger, M., Maleczek, M., Kaider, A., Hammerle, F. P. & Kimberger, O. (2023). The Influence of Explainable vs Non-explainable Clinical Decision Support Systems on Rapid Triage Decisions: A Mixed Methods Study. BMC Medicine, 21(359). https://doi.org/10.1186/s12916-023-03068-2
Lazarus, R. S. & Folkman, S. (1984). Stress, Appraisal, and Coping. Springer.
Lee, J. D. & See, K. A. (2004). Trust in Automation: Designing for Appropriate Reliance. Human Factors, 46(1), 50–80. https://doi.org/10.1518/hfes.46.1.50_30392
Mayer, R. C., Davis, J. H. & Schoorman, F. D. (1995). An Integrative Model of Organizational Trust. Academy of Management Review, 20(3), 709–734. https://doi.org/10.2307/258792
Medical Device Coordination Group. (2019). MDCG 2019-11: Guidance on Qualification and Classification of Software in Regulation (EU) 2017/745 – MDR and Regulation (EU) 2017/746 – IVDR. European Commission.
Morley, G., Ives, J., Bradbury-Jones, C. & Irvine, F. (2017). What Is 'Moral Distress'? A Narrative Synthesis of the Literature. Nursing Ethics, 26(3). https://doi.org/10.1177/0969733017724354
Naiseh, M., Cemiloglu, D., Al-Thani, D., Jiang, N. & Ali, R. (2021). Explainable Recommendations and Calibrated Trust: Two Systematic User Errors. Computer, 54(10), 28–37. https://doi.org/10.1109/MC.2021.3076131
Panigutti, C., Beretta, A., Fadda, D., Giannotti, F., Pedreschi, D., Perotti, A. & Rinzivillo, S. (2023). Co-Design of Human-centered, Explainable AI for Clinical Decision Support Systems. ACM Transactions on Interactive Intelligent Systems, 13(4), 1–35. https://doi.org/10.1145/3587271
Pilon, M. & Brouard, F. (2023). Conceptualizing Accountability as an Integrated System of Relationships, Governance, and Information. Financial Accountability & Management, 39(2), 421–446.
Rezaeian, O., Asan, O. & Bayrak, A. E. (2025). The Impact of AI Explanations on Clinicians' Trust and Diagnostic Accuracy in Breast Cancer. Applied Ergonomics, 129, 104577. https://doi.org/10.1016/j.apergo.2025.104577
Tun, H. M., Rahman, H. A., Naing, L. & Malik, O. A. (2025). Trust in Artificial Intelligence-based Clinical Decision Support Systems Among Health Care Workers: Systematic Review. Journal of Medical Internet Research, 27, e69678. https://doi.org/10.2196/69678
Whitney, C., Preis, H., Vargas, A. R., et al. (2025). Anticipatory Moral Distress in Machine Learning-based Clinical Decision Support Tool Development: A Qualitative Analysis. SSM – Qualitative Research in Health, 7(1), 100540. https://doi.org/10.1016/j.ssmqr.2025.100540
Website of the International School of Management (ISM) /
Website for the ISM distance learning programs