Why AI's lack of motivation doesn't preclude legal explainability

Acknowledgements: This op-ed arose from the workshop Automation in Legal Decision-Making: Regulation, Technology, Fairness, held on 14 November 2024 at the University of Copenhagen under the aegis of the LEGALESE project. The workshop aimed to inspire critical thinking about the growing role of automation in legal decision-making and its implications for fairness and explainability. Participants from multiple disciplines were invited to contribute and discuss blog posts reflecting on a range of pressing themes, including the role of human oversight, the challenges of explainability, and the importance of interdisciplinary dialogue for effective and responsible innovation. By engaging with both cautionary tales and constructive approaches, the event fostered discussion on how legal, technical, and societal perspectives can come together to shape ‘fair’ ADM practices.

Funding: The authors received funding from Innovation Fund Denmark, grant no. 0175-00011A.

Within the debate over AI and legal explainability, it has been argued that black-box AI should not be used for legal decision-making because such systems cannot provide adequate explanations of their decisions, as they do not anchor those explanations in motivations (Sarid and Ben-Zvi, 2023; Rudin, 2019). This conclusion partly contradicts an argument called…
