Proving algorithmic discrimination in government decision-making

Joe Tomlinson, Jack Maxwell

Research output: Contribution to journal › Article › peer-review

Abstract

Public bodies in the United Kingdom are increasingly using algorithms and big data to make decisions. While there is growing awareness of the risks of algorithmic discrimination, it can be very difficult to establish that a specific algorithm is in fact discriminatory. This raises the question of how courts should allocate the burden of testing and proving whether a government algorithm is discriminatory, as between the government and the person affected. In R (Bridges) v South Wales Police [2020] EWCA Civ 1058, the England and Wales Court of Appeal found that public bodies must take positive steps to identify and address risks of algorithmic discrimination. This note explores the decision in Bridges and its implications for algorithmic decision-making in government. It suggests that Bridges, alongside recent decisions in Canada and the Netherlands, forms part of a broader trend: the courts are placing the burden of testing and reviewing potentially discriminatory algorithms on government, rather than the general public.
Original language: English
Pages (from-to): 352-360
Number of pages: 8
Journal: Oxford University Commonwealth Law Journal
Volume: 20
Issue number: 2
Early online date: 21 Oct 2020
Publication status: E-pub ahead of print - 21 Oct 2020

Bibliographical note

© 2020 Faculty of Law, Oxford University. This is an author-produced version of the published paper. Uploaded in accordance with the publisher’s self-archiving policy. Further copying may not be permitted; contact the publisher for details.