
Artificial Intelligence: with great power comes great responsibility

Published: 1 March 2024

In this article, first published in the March 2024 edition of Business Brief, the Bailiwick’s Deputy Data Protection Commissioner Rachel Masterton reflects on the challenges and opportunities of the AI revolution.

AI has been a buzzword in recent years, and for good reason. With advancements in technology, AI is becoming more ubiquitous and powerful, enabling machines to perform tasks that were once thought to be exclusive to humans. From self-driving cars to virtual assistants, AI has the potential to transform the way we live and work. 

However, with great power comes great responsibility, and we must carefully consider the ethical implications of AI. 

Like many, I let ChatGPT write the introduction above. And it is pretty good. It covers the range of AI available and uses my favourite Spider-Man quote. My only real complaint is ‘more ubiquitous’ – something can’t really be ‘more’ everywhere!

Tautology aside, generative AI has experienced a genuinely meteoric rise – the telephone took 75 years to reach 1 million users, mobile phones 16 years, and Twitter 5 years. ChatGPT took just 2 months…! Why? Because having a tool that can understand prompts and generate reasoned (for the most part) responses has clear benefits.

But educators have expressed significant concern that generative AI would take plagiarism and cheating to another level. And so it might, if only students took the time to read what it produced before handing in their assignment. An opening paragraph of ‘I am an AI chatbot and shouldn’t really be used for your homework, but here are some ideas you could expand on in relation to your topic…’ does give the game away rather, and proves, as Sir Terry Pratchett once wrote, that ‘real stupidity beats artificial intelligence every time’!

One way to tackle such behaviour, at least in part, is to insist on footnotes and bibliographies referencing the sources used. But what happens when the AI 'generates' those sources too?

In one particularly disturbing case, ChatGPT falsely accused an American professor by including him on a list of legal scholars who had sexually harassed someone. The professor said he had been accused of “assaulting a student on a trip he never took while working at a school he never taught at” and, to compound this, a source was cited – a non-existent Washington Post article.

This points to a problem. How accurate is the content AI generates, and how much will users fact-check what is returned? And once information is out there, how hard will it be to prove a false allegation is just that? Would you take the time to verify information before refusing someone a job, or before denying them something else they may be entitled to?

It should be borne in mind that once information has been generated, it then forms part of the material used by that system to answer other questions. Goodbye to GIGO – ‘garbage in, garbage out’ – and hello to GGIEWGO – ‘generated garbage in, even worse garbage out’! (I made that one up, but the point stands.) If we as users do not understand how a system works, we cannot be sure it is working correctly. Is it good enough to simply hope its developers know how it works and rely on them to do the ‘right’ thing?

In April 2023, the front cover of the German weekly magazine Die Aktuelle claimed to have the first interview with Michael Schumacher since his skiing accident in December 2013, which resulted in a near-fatal brain injury. The seven-time F1 champion’s family has kept details of his condition fiercely private, so such a headline was a surprise and a really big deal. However, it transpired that it was an AI-generated ‘interview’ – which the magazine later owned up to.

Now, in that case, no blame can be placed on the AI. It simply did what was asked. People were behind the article, and for them there were serious consequences. The magazine’s editor was sacked, and Schumacher’s family threatened legal action.

With AI set to play an increasingly important role in many business sectors, including financial services, we must all remember that the usual rules still apply, and that includes data protection. If an organisation is using information about people in an AI system, it is processing personal data, and this must be done in compliance with the Law.

Contrary to the false narrative that AI is unregulated, its use is covered by existing data protection laws. And regulators around the globe are all over this issue. Here at the Bailiwick’s Office of the Data Protection Authority, we have joined forces with other supervisory authorities around the world in an AI Working Group. Organisations are accountable for how they use AI. Data protection tenets of lawfulness, fairness, transparency and accuracy apply. Using AI is not an excuse to mislead, and innovation is certainly not a reason to circumvent the usual checks and balances. The race for technological superiority must never come at the expense of proper consideration of, and response to, risks and harms. The big tech companies being investigated by data protection authorities across the globe might have benefited from remembering this.

None of this is to detract from the huge power of AI, and innovation more broadly, to do good in the world. AI has positive impacts on health, education, the environment and beyond, and will be critical in tackling many of the world’s problems, some of which are as yet unknown. But where personal data is involved, the only route to long-lasting advancement is to embrace data protection and ensure people’s data and their rights are respected. Because, to loop back to the beginning, ‘with great power comes great responsibility’, and Spider-Man’s Uncle Ben can’t be wrong.