ARTIFICIAL INTELLIGENCE AS A MEMBER OF CRISIS MANAGEMENT LEADERSHIP – BENEFITS, LIMITATIONS, AND THE PSYCHOLOGICAL IMPLICATIONS OF HYBRID DECISION-MAKING
DOI: https://doi.org/10.32782/2956-333X/2025-2-1

Keywords: artificial intelligence, crisis management, decision-making, cognitive load, ethics, responsibility, algorithmic trust, emergency response, security management, European legislation

Abstract
This article examines the integration of artificial intelligence as a constituent element within crisis management leadership structures, focusing on the technological, psychological, and ethical implications of hybrid decision-making systems. The study addresses a significant gap in scholarly attention regarding the psychological and ethical consequences of AI integration in emergency management, while acknowledging that the technological dimensions have been relatively well investigated. The primary objective is to present both the potential benefits and the principal risks associated with incorporating AI into crisis response teams. The research highlights several functional advantages of AI integration, including enhanced predictive capabilities through real-time data synthesis, improved logistics optimization, and a significant reduction of the cognitive burden on human decision-makers. AI-supported predictive models can reduce median response times by up to 30% in countries employing machine learning-based systems, while optimization algorithms demonstrate 10–18% improvements in emergency unit response times compared to static distribution models. However, the study identifies substantial risks, including automation bias, in which human operators accept algorithmic recommendations uncritically, and the “black box” problem, in which AI decision-making processes lack transparency. Additional concerns include dependence on data quality, the erosion of individual and collective autonomy, information overload, and the emergence of accountability gaps in decision-making processes. The integration significantly affects team dynamics, with potential consequences for trust, autonomy, psychological safety, and communication patterns within crisis management teams. The research emphasizes the “responsibility gap” phenomenon, in which the attribution of accountability becomes ambiguous when decisions are influenced by algorithmic systems. The article concludes with foundational recommendations for safe and effective AI deployment, emphasizing the “human-in-the-loop” approach, system explainability, comprehensive training programs, and adherence to European legislative frameworks, particularly the Artificial Intelligence Act. The authors stress that successful integration requires combining technical precision with human responsibility, viewing AI as a complement to, rather than a replacement for, human judgment in crisis management contexts.
License

This work is licensed under a Creative Commons Attribution-NonCommercial 4.0 International License.