

Are social media companies treating children like 'crash test dummies'?

Published: 1 November 2023

*WARNING: This blog includes references to suicide. Please contact www.samaritans.org if you are affected by this.*

This blog was first published in the Guernsey Press on 1 November 2023.

As a new report says social media is having a severely negative impact on how children and young people view themselves, Guernsey Data Protection Commissioner Emma Martins reflects on how compelling social media content – reinforced by algorithms – can cause a deadly spiral.

‘Move fast and break things’ is a motto that has come to symbolise the early days of the giant tech platforms that now dominate our lives, our economies and our politics.
It is, of course, true of many innovations that the early days are characterised by mishaps, mistakes, even disasters, often involving loss of life.

Driving cars in the early days of their invention was a very risky business, not only for drivers but also for pedestrians.

In the early 20th century, public alarm at the rising deaths and injuries in car accidents prompted dramatic interventions from lawmakers, manufacturers, civil society and others to make driving safer for everyone. Driving has never been safer, and we no longer give a second thought to these safety requirements, which fundamentally changed the nature of the risk.

Another example is the construction industry. One of the most dangerous jobs in the 19th century was building railways. Construction workers today are better protected than ever before. We sometimes roll our eyes at ‘health and safety’ requirements, but we would certainly be worse off without them.

Are we destined to follow the same path with technological innovations? Must public alarm at the harms and risks reach a crescendo before action is taken?

The jury is probably still out on that question, but I think one of the most significant challenges we face is the difficulty of appreciating the nature of the harms in the context of technology and data.

The risk of a car crashing is very easy to comprehend. The impact of social media algorithms on the mental health of young people is less so.

But the case in the UK of Molly Russell shows us that the impacts can be no less tragic. Molly took her own life aged just 14 in 2017. Her father has campaigned tirelessly since, claiming that the way social media platforms deliberately manipulated the information she saw in the last few days and hours of her life contributed to her death. At the inquest into her death, the coroner wrote to social media firms and the UK government calling for action. In a very unusual step, he also published what is called ‘a prevention of future deaths report’.

We need to be clear what that report is referring to. It is aimed at protecting people, especially children, from death. It could not be clearer.

A survey carried out by youth mental health charity Stem4 found that a staggering 97% of children as young as 12 are now on social media, with 70% saying it makes them stressed, anxious and depressed.

Recently we heard the Principal of the Ladies’ College here in Guernsey explain how strongly she feels about the real and present dangers for children and young people posed by developing tech and social media.

The question must follow then: are we at a moment of change?

Are we starting to understand the realities of the harms, and is that prompting us to push back against the ‘move fast and break things’ mentality because it is becoming clear that the things being broken are us?

The world's first AI Safety Summit is being held in the historically significant Bletchley Park this month. Metaverse Safety Week, which is an international community effort, is taking place in December for the fourth year. The new Online Safety Act has become law and the UK Government has recently published a discussion paper entitled ‘Capabilities and risks from frontier AI’. Anyone noticing a theme here? We are starting to see these conversations framed around the concept of ‘safety, harms and risks’.

I am both heartened and saddened by that.

Heartened because that is exactly how they need to be framed. Technology can be used to do tremendous good in the world, but it also has the power to cause unimaginable harms.

Saddened because of the harms that have already been done. And saddened because, up to now, technological innovations have largely escaped the safety net that is the public conscience. We have allowed ourselves to look at technology differently.

Moving fast and breaking things has its place, especially in innovation. But we need to be thinking much more carefully about what is moving and what is breaking.

If it is a car, on a test track, with a crash test dummy, I have no objections.

If it is us, our children, the next generation, I think we need to look at it very differently. Most of all, we need to think.

By definition, if we are moving fast, we are not giving ourselves much, if any, time to think. And in a world so reliant upon technology, we have become very used to others thinking for us. But, more than ever, we need to think for ourselves. And we need leaders who do not just ‘do their job’; we need them to take time to think about how to get their job done well. They do that by thinking, talking and listening.

I am often struck by how, in such a complicated world, the things we are most in need of are so simple. Thinking, talking, listening. I would suggest that we do not need or want AI to do those things for us.

A very powerful tech giant CEO was recently reported to have said: ‘I expect AI to be capable of superhuman persuasion well before it is superhuman at general intelligence’.

It is not difficult to see why public interest is less piqued by the idea of ‘persuasion’ harms when contrasted with images of killer robots. But I think this quote gets to the heart of one of the biggest challenges we face, from a technological perspective.

We have enough evidence now to understand that these harms are real.

Persuading someone to buy a particular brand of fizzy drink may not be an existential threat to anyone, but persuading a child that their life is worthless certainly is.