
The AI Who Loved Me - what data protection means for ‘self-aware’ AI

Published: 1 October 2025

In this edition of Business Brief, Deputy Data Protection Commissioner Rachel Masterton ponders what data protection means for 'self-aware' AI.

Perhaps as a natural technological progression, AI now forms the basis of a number of companion apps, enabling the creation of numerous digital friends and partners. One such app, Replika, came under fire a couple of years ago after it emerged that it had encouraged an individual to take a crossbow to Windsor Castle with the intention of killing HM Queen Elizabeth II.

Replika’s founder originally created the product in an attempt to ‘resurrect’ a close friend killed in a car crash. The company responded to criticism by amending the algorithm to stop it encouraging illegal or violent actions. However, news stories continue to emerge of tragic outcomes in which a digital companion played a part.

But why are we insistent on providing technology with a face and an identity, and what does adopting such an approach mean for businesses?

Those of us of a certain age will remember, fondly or with ire, Clippy, Microsoft’s office assistant. This ‘intelligent’ animated paperclip provided users with tips and could be used to access help files. Seemingly, some user experience expert somewhere felt that we would be more likely to appreciate the tips and help if they were delivered by a friendly little graphic rather than through a nondescript button or link.

And that user experience expert was not wrong. Clippy was one of the first technological assistants with a personality, albeit one whose limited repertoire meant he tended to fixate on certain things, but he (see, I’m doing it too) was by no means the last.

In the intervening years, we’ve had Siri on our phones and Alexa in our homes, helping us with relatively simple tasks and providing a ‘human’ gateway to the online world. Each was designed with a personality better rounded than Clippy’s and the reassuring tone of a woman in control, ready and willing to help; comfort and familiarity seemingly at the forefront of the experience we are seeking.

And now we have generative and agentic AI that, we are advised, works better if we type or say ‘please’ and ‘thank you’ when seeking its assistance.

I will confess, I tend not to do that, having seen reports of the additional power and other resources consumed when users are polite to their AI. And so I will likely be one of the first casualties when the AI overlords gain full sentience and seek revenge.

However, it is important for us, personally and professionally, to remember that such products are not ‘alive’ and are not beings in their own right. Something Air Canada could have done with remembering before making a remarkable statement at a tribunal.

As background, Air Canada operated an AI chatbot on its website, and Jake Moffatt used this feature to seek advice about how to avail himself of the airline’s cheaper bereavement fare. He did as he was instructed, but Air Canada refused to honour the discount. Not content with this, Mr Moffatt took the matter to the British Columbia Civil Resolution Tribunal to obtain redress.

At the hearing, counsel for the airline argued that Air Canada should not be bound by the advice given by the chatbot as it was a “separate legal entity that is responsible for its own actions”. It may not come as a surprise to you that the Tribunal rejected that argument, ruling that the airline was responsible for what its chatbot said and awarding Mr Moffatt damages and tribunal fees.

The takeaway from that? An organisation is responsible for how it uses AI, and for any decisions that AI makes or advice it provides.

And where AI is used to process personal data, whether that be for training purposes, automated decision-making or any of a myriad of other possibilities, that responsibility includes compliance with data protection legislation.

Organisations will be familiar with the seven principles on which data protection is built, and with some of the more frequently used data subject rights. However, there is one right that may have been overlooked or seen as unnecessary, and that the use of AI brings squarely to the forefront: the right in relation to automated decision-making.

Individuals have the right not to be subject to automated decision-making (“computer says no”) without appropriate safeguards being in place.

Automated decision-making is not a purely AI phenomenon, but it is true that advances in AI technology and its self-learning capabilities mean organisations are embracing the streamlining and efficiencies it can bring. Recognising this, we will soon be releasing guidance on automated decision-making.

This will help organisations to understand the rights individuals have, and what they need to do to honour those rights, whether they are using AI or more traditional systems. The aim is to help organisations avoid an Air Canada situation and to respect their clients and staff.

And so, to come full circle: AI may appear to love you, but please don’t lose sight of the human at the heart!