This book proposes a regulatory system for ensuring that AI makes fair decisions.
No one wants to be the subject of an unfair decision made by an AI, and fairness is so important to society that we are likely to demand regulation to secure it. But how? This book attempts to answer that question.
The aim of regulation must be for an AI's decisions to match the human conception of fairness. To pin down what that conception is, the book proposes a holistic understanding of fairness, which tells us what regulation must try to achieve.
However, regulation is not an abstract activity: it regulates how humans behave, and the humans in question are those who develop and use AI for decision-making. The book therefore investigates how those humans are attempting to achieve AI fairness. It finds a serious mismatch between how technologists conceptualise fairness and how other humans do. How can AI regulation bridge this gap?
Traditional models of regulation cannot solve this problem. Fairness is too nuanced, too contextual, and is ultimately a human emotional response. Instead the book proposes placing responsibility on the AI community to explain and justify their efforts to achieve fairness, basing regulatory and legal responses on how well that explanation deals with the risks the particular AI presents, and on whether the AI, in use, operates in accordance with that explanation.
The book concludes by examining how far this regulatory model might usefully extend to some of the other social problems which AI generates.
An original and significant contribution to the literature on AI regulation, this book is a must-read for those working in the areas of law, regulation, and technology.