The Point of Blaming AI Systems

Leonhard Menges, Hannah Altehenger

Publication: Journal article · Peer-reviewed

Abstract

As Christian List (2021) has recently argued, the increasing arrival of powerful AI systems that operate autonomously in high-stakes contexts creates a need for "future-proofing" our regulatory frameworks, i.e., for reassessing them in the face of these developments. One core part of our regulatory frameworks that dominates our everyday moral interactions is blame. Therefore, "future-proofing" our extant regulatory frameworks in the face of the increasing arrival of powerful AI systems requires, among other things, that we ask whether it makes sense to extend our blaming practices to these systems. In the paper, we argue for the admittedly surprising thesis that this question should be answered in the affirmative: contrary to what one might initially think, it can make a lot of sense to blame AI systems, since, as we furthermore argue, many of the important functions that are fulfilled by blaming humans can also be served by blaming AI systems. The paper concludes that this result gives us a good pro tanto reason to extend our blame practices to AI systems.
Original language: English
Journal: Journal of Ethics and Social Philosophy
Volume: 27
Issue number: 2
Publication status: Published - 2024

Keywords

  • Artificial Intelligence
  • blame
  • moral responsibility

Classification of Fields of Science 2012

  • 603 Philosophy, Ethics, Religion
