Responsibility Gaps in Human-Machine Interaction (ReGInA)
We investigate the potential dangers of over-reliance on machines in medical decision-making. We assess the level of trust physicians need to benefit from AI-based recommender systems when interpreting medical images for diagnosis, and we explore user-centric AI systems that reduce bias in these decisions.
Summary
Our consortium investigates the potential risks of over-reliance on machines, focusing on the interaction between physicians and AI-based recommendation services in the interpretation of medical images for diagnosis. We evaluate the level of trust that human decision-makers need to benefit from AI systems and make well-informed professional decisions, and we investigate the causal effects of institutional, situational, individual, and technological parameters on this level of trust. Our findings will be relevant to a variety of AI applications in other domains and will add a new layer to the public debate on AI recommendation services. In line with human-centered design principles, we put the physicians themselves and the doctor-patient relationship at the center of our research and explore design paradigms for AI-based recommendation services that allow trust to be calibrated. While structures and processes in the medical field are increasingly being adapted to machines, we focus instead on the extent to which recommendation services can be integrated into the existing structure of responsibility and accountability in medical practice. We complement this approach with the perspective of organizational ethics, thereby expanding the scientific debate on the use of recommendation services in the medical field.