The AI Dilemma

Published: 2 May 2023

Inspired by a ‘call to arms’ by the Center for Humane Technology, Bailiwick Data Protection Commissioner Emma Martins ponders the wisdom of pausing the development of powerful AI systems to allow safety protocols and legislation to catch up.

We live in small but beautiful Islands. A significant volume of data flows across our borders, fuelled by our finance sector, at a time when the world is seeing huge technological advancements.
And let's be clear: those advancements are not only huge, they are rapidly gathering momentum.

Most of us will have seen the stories whirling around advancements in AI, and the eye-watering speed at which 'large language models' like ChatGPT can answer complex questions and complete homework assignments. You may also have heard of a recent open letter from the Future of Life Institute, co-signed by many big names, including Elon Musk. It calls for a pause of at least six months in the training of AI systems above a certain capability because of the 'profound risks to humanity' they pose.

Whether you agree with that or not, all of humanity is potentially affected by these developments. It follows, therefore, that this is not the time to be a bystander, to watch passively while technology takes on a ‘mind’ of its own. 
One of the problems we all face (including those of us working in this area) is the challenge of comprehending the technology. We have become so reliant on and trusting of the smart devices in our lives that conversations about the risks, if they happen at all, often happen too late.

If we are, as some suggest, at a critical moment in our relationship with technology, the question must be – are we simply going to repeat the mistakes of the past and wait for those harms to manifest themselves before wringing our hands in despair?

A growing number of decision makers and technologists are saying no: we cannot afford to make those mistakes; there is simply too much at stake.

One such group is the Center for Humane Technology. It was founded in 2018 by Tristan Harris, Aza Raskin and Randima Fernando with a mission to 'align technology with humanity's best interests'. I have always followed their work with interest, but a recent presentation they made available on YouTube, called 'The AI Dilemma', feels seminal.

In publishing The AI Dilemma, they have thrown the gauntlet down to every single one of us, regardless of whether we live and work in large or small jurisdictions.  

They describe a world of 'double exponential' advancement: growth in which the rate of progress is itself speeding up, so that within a short period a system's capability curve looks near vertical. Harris remarks that humans have a blind spot when trying to comprehend progress at this pace.
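For anyone who finds that hard to picture, the toy sketch below compares linear, exponential and double exponential growth. It is purely illustrative: it assumes the usual mathematical reading of 'double exponential' (an exponential whose exponent is itself growing exponentially), and the numbers are arbitrary rather than taken from the presentation.

```python
# Toy comparison of growth rates. Purely illustrative: the figures are
# arbitrary and are not taken from 'The AI Dilemma' presentation.

years = range(6)

linear      = [10 * t        for t in years]  # steady, additive progress
exponential = [2 ** t        for t in years]  # doubling every year
double_exp  = [2 ** (2 ** t) for t in years]  # the exponent itself doubles

for t, lin, exp_, dexp in zip(years, linear, exponential, double_exp):
    print(f"year {t}: linear={lin:>2}  exponential={exp_:>2}  double exponential={dexp:,}")
```

Even in this toy example, the double exponential column dwarfs the others within a handful of steps, which is why, as Harris argues, intuitions tuned to gradual change struggle to keep up.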

The thing with blind spots is that they can be overcome, but first we need to know they exist so that we can reposition ourselves accordingly. Watching The AI Dilemma helps reveal a host of blind spots via real examples:

• If you need a horrific example of how Snapchat's new 'My AI' chatbot effectively encourages the grooming of a child, it's in this presentation.
• If you need to see how astonishingly good deepfakes have become, and how an AI can synthesise a person's voice from just three seconds of audio, it's in this presentation.
• If you need a list of the ways in which the misalignment between what IS happening and what SHOULD be happening is impacting us all, it’s in this presentation.
• If you want some hope that some of the best minds of our generation are working to better align technology with humanity, it’s in this presentation.

What Harris and Raskin are seeking to do is give us a visceral understanding of what is at stake, so that we can see how critical democratic dialogue is.

Can you think of any other product that is rolled out to the public, including children, without being fully tested for safety? Why are we relinquishing those assurances and protections when it comes to AI?

The presentation ends with a plea – for us to ‘give this moment in history the weight it deserves’.

It is easy to feel overwhelmed by the speed and nature of these developments. That is entirely understandable but it needs to be challenged because it often leads to inaction. We cannot afford inaction. Silence goes hand in hand with powerlessness. 

These are not matters just for technologists; they are matters for all human beings.

If you work in business, government or the third sector, this is a critical time. Equally, all of us, as citizens, owe it to ourselves and each other to engage. Living on a small, beautiful island is no shield from the negative impacts of these technological developments. We don't need to be experts; we just need to feel the weight of this moment, as Harris puts it, and not be bystanders.

Suggested reading:
• 'AI, human rights, democracy and the rule of law: A primer prepared for the Council of Europe' (2021), The Alan Turing Institute.
• 'Rise of the machines - what does AI mean for business and boards in 2023?' (May 2023), Business Brief.