http://ipkitten.blogspot.com/2021/12/book-review-we-robots.html
The regulation of artificial intelligence is the topic of national and international public consultations and a growing area of literature. A recent contribution to that discussion is “We, the Robots? Regulating Artificial Intelligence and the Limits of the Law” by Simon Chesterman of the National University of Singapore.
As Chesterman mentions in the introduction to the book, “the field of AI and law is fertile,” and there are already books, dedicated journals and thousands of articles that discuss recent developments in AI, its actual or potential impact on the legal profession, and the normative questions raised by AI. The majority of them concentrate on the activities of legal practitioners, their potential clients, or the machines themselves. This book, by contrast, Chesterman explains, focuses on those who seek to regulate the activities of AI, and on the difficulties that AI systems pose for government and governance. Regulation here refers to two aspects: first, the exercise of control through rules and standards, including self-regulation; and second, that such control is exercised by one or more public bodies.
This book focuses on the challenges raised by ‘narrow’ AI, meaning systems that can apply cognitive functions to specific tasks typically undertaken by a human. In doing so, it asks: how should we understand the challenges to regulation posed by AI? What regulatory tools exist to deal with those challenges, and what are their limitations? And what more is needed – rules, institutions, actors – to reap the benefits offered by AI while minimising avoidable harm? Accordingly, the book is presented in three main parts: challenges, tools and possibilities.
Part one addresses the challenges of speed, autonomy, and opacity, highlighting the gaps in existing regulatory models and asking whether the tools at our disposal can fill them.
Chapter one, Speed, examines three areas. First, it considers the globalisation of information, demonstrating the difficulty of containing problematic activity in an interconnected world where speed has conquered distance. Second, it considers high-frequency trading – where algorithms buy and sell stocks – highlighting the danger that the speed of decision making poses to human attempts to limit or regulate it. Third, the chapter considers the challenges that the accelerated flow of information and AI pose for competition law, for example tacit collusion by algorithms, which sits uneasily with the existing regulatory framework.
Chapter two turns to Autonomy, covering, naturally, autonomous vehicles. Chesterman distinguishes between automated functions of a vehicle, such as cruise control, which are supervised by the driver, and autonomous vehicles, which are capable of taking decisions without input from a driver or which have no human driver at all. The chapter seeks to expose gaps in regulatory regimes that assume the centrality of human actors, particularly with respect to civil liability, criminal law and ethics. The chapter also draws on two further case studies: autonomous weapons and algorithmic decision making.
Image: Riana Harvey
Chapter three, Opacity, raises concerns about AI systems whose decision-making processes humans are unable to know or understand. In particular, this opacity frustrates scrutiny and creates the potential for discriminatory practices and outcomes when AI is used to make decisions.
Publisher: Cambridge University Press
Published: August 2021
Formats: Hardback £29.99; also available as an ebook
ISBN: 9781316517680
Extent: 310 pages
Content reproduced from The IPKat as permitted under the Creative Commons Licence (UK).