This article was first published in the Guernsey Press on 31 August 2023.
In recent weeks there has been a run of unprecedented data breaches in the UK, striking at the heart of several police forces as well as the electoral system. These breaches have been staggering both in scale and impact. With the stakes higher than ever before, Bailiwick Data Protection Commissioner Emma Martins reflects that effective risk management should be a priority for anyone caring for other people’s personal information.
I saw a headline recently which read “UK needs culture shift to become AI superpower”.
The quote came from the co-founder of the AI company DeepMind, and the culture shift he is calling for would have us encourage more risk-taking and celebrate failure more readily.
That is not unusual language, especially in a business context. Certainly, many start-ups consider their endeavours to be full of potential risks – especially financial ones.
But risk does, of course, relate to things beyond the financial.
And risk surrounds us all, every minute of every day.
Some risks we are aware of, some we are not.
Some risks we care about, some we do not.
And those risks we care about may prompt us to take action, or we may decide that we either cannot take action, or we cannot afford to do so.
Across the globe, there are those moments when questions of risk are thrust into the spotlight. A train crashes, a child is harmed, a bank collapses, a fire devastates a block of flats, a virus spreads.
For a while, the human stories dominate the headlines. Then, attention invariably turns to ‘who knew what and when’.
Because time and time again we see evidence of there having been real clarity that the risks existed, as well as about what could be done to remove or at least mitigate them, well before things went wrong.
In the context of a business, we are not talking about people’s lives; we are talking about the success, or otherwise, of a financial investment… aren’t we? That is certainly what the headline I referred to above seems to suggest.
But is it that easy to differentiate?
Let’s think about the particular context in which the statement was made. Technology (including AI) is certainly at the cutting edge of advanced economies. We are living through a fourth industrial revolution marked by rapid technological advancement.
Very few such advancements don’t rely on data, in some way, shape or form, to ‘feed’ them.
ChatGPT, which arrived in our lives to great fanfare (and its fair share of controversy), scrapes existing data from available sources to produce its outputs.
Algorithms rely on big data and have quietly embedded themselves into our everyday lives.
Data matters in this particular revolution that we find ourselves in; indeed, the revolution could not happen without it.
So, it is just as well that we produce lots and lots of personal data – your location, your browsing habits, shopping preferences, social media activity – the list goes on.
Therefore, the question of risk in this context is far from being purely financial.
In our last article, I made the comment ‘data is us and what happens to it, happens to us. Data harms are people harms’.
I accept that it is difficult to get traction around this message. How can ‘data’ possibly hurt people?
It is such an ephemeral concept, and we tend to look at it as something that exists away from us as individuals - sitting on a spreadsheet, a database, or a server somewhere.
We could not be more wrong.
Firstly, data is far from ephemeral. It is materially very real indeed.
Have a look at images of the servers that house the ever-increasing quantities of data we all produce every second of the day, or of the cobalt mines in the Democratic Republic of the Congo.
The environmental implications (and risks) have only recently started to be talked about and we must absolutely welcome that.
But what about other risks that are less tangible?
The data protection and privacy community have, for a long time, worked hard to highlight the importance of data governance and the potential for harms when data governance fails.
Here at the Data Protection Authority for the Bailiwick of Guernsey, we have, since our inception, been laser focused on harm prevention. But we have to be honest and say that those conversations and that strategy have not always had an easy ride.
People find it very difficult to appreciate that data is real, and that its impact on lives, when mishandled, can be devastating.
Data protection is, sadly, often seen as little more than red tape, an administrative burden or curb on innovation.
But the recent breaches could have come from a work of fiction. The risks and harms they have inflicted are horrifyingly obvious and they cannot be undone.
Will they prompt us to look at things differently, to better understand the very real and very human nature of potential harm in the context of data?
I was reading a book recently and came across this quote in the author’s contemplation of risks -
“Humanity lacks the maturity, coordination and foresight necessary to avoid making mistakes from which we could never recover.” (The Precipice, Toby Ord)
Despite the rather gloomy prediction, Ord also describes our future as a canvas, one which is ours to paint as we want.
We need to ask ourselves how we go about improving our maturity, coordination and foresight – all of which are in our hands.
I do think recent events will change the way we engage with risks and harms, but that won’t happen just because we want it to; it will happen because we choose to pick up a brush and paint our part of the canvas.
And I hope that the better we understand that both the risks and the harms are real, the more likely we are to play our part.