Technology vs. Freedom to Think

Published: 28 February 2023

The Bailiwick’s Data Protection Commissioner, Emma Martins, considers whose interests are being served by the recent expansion of AI technologies, and whether it risks putting one of our most innate freedoms in danger.

“It has become appallingly obvious that our technology has exceeded our humanity.”

These words have been attributed to Albert Einstein in the context of nuclear weapons, but they resonate as we enter an era of artificial intelligence and machine learning. You have probably heard of ChatGPT; indeed, you may well have used it. It is an artificial intelligence model built by the AI research and deployment company OpenAI. It launched in November 2022 to much fanfare, not least from those celebrating the fact that it may well mean the end of homework.

Elon Musk jubilantly proclaimed: “It’s a new world. Goodbye homework.”

GPT stands for ‘Generative Pre-Trained Transformer’. Let’s pause to think about one of the words in its name, pre-trained. We need to be clear that this is not some sci-fi autonomous robot that magically knows the answers to various questions (and how to do your homework).

It has been pre-trained. So the next questions need to be – trained on what, by whom, how, and for what purpose?

Here are some extracts from what it told me when I asked how it generates its responses:

“My responses are solely based on the algorithms and data that were used to train me.”
“I rely on a large database of information that has been preprocessed and organized.”
“I identify the most relevant information from the database, and then use that information to generate a response.”

It isn’t sci-fi and it isn’t magic. Thousands of humans were paid to sift through huge quantities of online content to feed into its training. Their task? To identify what was acceptable to use and what was not.

So when you ask ChatGPT a question, the answer it provides is generated from its training data – vast quantities of internet content that one of those humans judged appropriate and relevant.

We all know that the internet has some wonderful content, but it also has biased, prejudiced, hateful, illegal, exploitative, fake and disturbing content.

Who on Earth would want the job of looking through all that, all day, every day? The answer is: people who have little choice. For ChatGPT, the filtering work was largely outsourced to workers in places like Kenya, many of whom earned less than $2 per hour. Often young, poor and disadvantaged, many of these workers have since spoken out about the psychological impact of spending hours, days and weeks reviewing appalling content.

In addition to questions of human exploitation, we also need to consider the dangers of assuming that information returned by software such as ChatGPT is high quality, safe and accurate. If the source data is biased to begin with, the outputs will be biased too.

Ensuring quality outputs involves much more than removing illegal or hateful content. We have seen how online propaganda can polarise communities and societies, how fake news can lead to violence and extremism, and how information can be used to influence and manipulate individuals and groups.

Education (and yes, that includes homework!) should not simply be about regurgitating information. It is about: helping us understand the world as it is; allowing us to place our own lives in a wider context; giving us a sense of history; providing opportunities to build critical thinking skills; and so much more.

It is also about giving us the tools to question, assess and discuss things intelligently. Our world is diverse, complex and evolving; surely our young people need those skills more than ever. Does it not seem rather sad, then, that we are celebrating the fact that artificial intelligence can do our children’s homework for them?

The risk is not that AI somehow morphs into The Terminator, destroying everything in its path; it is that we become so influenced and so controlled by the information we are fed that we lose autonomy and freedom of thought. And the extraordinary level of detail about each one of us left online by our daily activities allows this manipulation to happen in a highly targeted manner.

If we end up with brains that are simply fed knowledge, we cannot really know anything. We become vulnerable to misinformation and persuasion, our brains moulded to someone else’s agenda.

The clue is in the title. It is artificial. We do not gain knowledge or intelligence, or anything else of any worth by blindly repeating other people’s views. We gain these things by being open to learning and engaging with the world around us.

The creation and application of technologies seeking to replicate human communication and interaction rarely come without human cost.

There is a human cost at the start – we have not properly engaged with the ethics of human-based content moderation, and the exploitation that is so often widespread. (Read the novel ‘We Had to Remove This Post’ by Hanna Bervoets for a fictional exploration of this.)

There is a human cost at the end – we risk losing our freedom to think (read the book ‘Freedom to Think’ by Susie Alegre for more on this). We cannot rely on technology to give us the things that are arguably the most important – curiosity, creativity, imagination, humility. For me, that is a much more terrifying prospect than The Terminator!

There is no easy answer to any of this, but if we do not take the time to understand how these technologies work and the impact they are likely to have, especially on the young or vulnerable, then we will not even begin to find good solutions. And there are good solutions to be found.

We have seen how the public narrative has shifted in debates about important areas such as climate change. That shift needs to happen around technology and data. Humanity needs to catch up. We surely don’t need an algorithm to tell us that.