
Published: 1 October 2019

Robot Rules – Regulating Artificial Intelligence, by Jacob Turner

It seems that lately, wherever you look, you are faced with stories about Artificial Intelligence (AI). Many such stories are dystopian, often accompanied by pictures of The Terminator.

First and foremost, Turner highlights the lack of general agreement on a definition of AI: "What is artificial intelligence?" is an easy question to ask and a hard one to answer, because there is little agreement about what intelligence is.

This book offers us less of a caricature and more of a thoughtful analysis of the complexity of AI. Turner (a lawyer and author) sets out why AI is unlike other technology: its potential for independent decision-making and unpredictability. Arguing that this in turn raises unique legal and ethical problems, the book identifies three key issues: responsibility – who is liable if AI causes harm (think autonomous cars); rights – the disputed moral and pragmatic grounds for giving AI legal personality; and the ethics of AI decision-making.

Turner is critical of the lack of concerted governmental efforts to regulate AI, pointing out that, as a result, private companies have begun to act unilaterally. He argues that AI needs principles formulated by public bodies, not private companies. We must not mythologise AI as a new species that is a moral agent beyond our control. It is humans who are the ethically responsible agents, and it is humans who decide on the rules. What regulation looks like and how it is enforced are difficult but not insurmountable challenges.

But the first step must be understanding the reality of AI and the trajectory it is on, preferably without reference to The Terminator!