Proceed with Caution

Annette Zimmermann, Chad Lee-Stronach

Research output: Contribution to journal › Article › peer-review


Increasingly, the decision-makers in private and public institutions are predictive algorithmic systems, not humans. This article argues that relying on algorithmic systems is procedurally unjust in contexts involving background conditions of structural injustice. Under such nonideal conditions, algorithmic systems, if left to their own devices, cannot meet a necessary condition of procedural justice, because they fail to provide a sufficiently nuanced model of which cases count as relevantly similar. Resolving this problem requires deliberative capacities uniquely available to human agents. After exploring the limitations of existing formal algorithmic fairness strategies, the article argues that procedural justice requires that human agents relying wholly or in part on algorithmic systems proceed with caution: by avoiding doxastic negligence about algorithmic outputs, by exercising deliberative capacities when making similarity judgments, and by suspending belief and gathering additional information in light of higher-order uncertainty.
Original language: English
Pages (from-to): 6-25
Number of pages: 20
Journal: Canadian Journal of Philosophy
Issue number: 1
Early online date: 29 Jul 2021
Publication status: Published - 1 Jan 2022

Bibliographical note

© The Author(s), 2021


  • philosophy of AI
  • political philosophy
  • moral philosophy
  • philosophy of law
  • procedural justice
  • structural injustice
  • algorithmic injustice
  • algorithmic decision-making
  • doxastic negligence
  • uncertainty
  • automation bias
  • intersectionality
  • epistemic duties
