The Data Protection (Bailiwick of Guernsey) Law, 2017
provides a robust framework for the protection of personal data across all processing activities, including those within AI systems. Adherence to this law is crucial to ensure the privacy, security, and fair processing of personal data in the context of AI. Anyone using AI systems within the Bailiwick of Guernsey should understand and comply with the law's provisions to promote responsible and ethical use of personal data within their AI applications.
There are specific considerations that must be taken into account when conducting automated processing of personal data under the Law Enforcement Ordinance. When processing personal data for a law enforcement purpose under that Ordinance, please consult sections 17 and 33 of the Ordinance.
To set the scene before we outline the practical, specific steps you can take to ensure your use of AI is compliant with data protection law, we’d like you to consider the wider context:
What is AI (and what it isn’t)
‘Artificial intelligence’ is a branch of computer science. It is used as a catch-all term for a huge range of computer-based systems that can make predictions, recommendations or decisions in a way that previously could only be done by human beings.
Machine learning is a subfield of AI that focuses on systems that can find similarities and differences in data through repeated tuning of their parameters (often called ‘training’), which allows them to improve or ‘learn’.
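The ‘training’ process described above can be sketched in a few lines of code. The example below is a hypothetical toy model with a single parameter, not a description of any real AI system: it repeatedly nudges that parameter to reduce the error between the model’s predictions and example data.

```python
# A minimal sketch of 'training': repeatedly tuning a parameter
# so a model's predictions better fit example data.

def train(data, steps=1000, lr=0.01):
    """Fit the toy model y = w * x by nudging w to reduce prediction error."""
    w = 0.0  # the model's single parameter, initially untrained
    for _ in range(steps):
        # Average gradient of the squared error between predictions and data.
        grad = sum(2 * (w * x - y) * x for x, y in data) / len(data)
        w -= lr * grad  # tune the parameter to reduce the error
    return w

# Example data that follows y = 3x; training should recover w close to 3.
examples = [(1, 3), (2, 6), (3, 9)]
learned_w = train(examples)
```

Real AI systems work on the same principle, but tune millions or billions of parameters rather than one.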
Existing AI applications
AI systems are used in a variety of settings including:
- Search engines (e.g. Google / Bing);
- Recommendation systems (e.g. the suggestions served to you by social media platforms or other online service providers such as Amazon, Netflix etc.);
- Speech-recognition (e.g. Siri / Alexa);
- Medical diagnostics (e.g. skin cancer detection);
- Pharmaceutical advances (e.g. developing new drugs faster and cheaper);
- Self-driving vehicles;
- Generative/creative systems (e.g. ChatGPT, Midjourney);
- amongst others.
The definition of the term ‘artificial general intelligence’ varies depending on what source you use but broadly it refers to a (currently hypothetical) system that could autonomously out-perform most human beings in work it had not been trained or programmed for.
Anyone who cares about the treatment of human beings should involve themselves in conversations around AI, how it is used, and its future development. These issues are too important to be decided by technologists alone, many of whom have never received any training in ethics (as confirmed in a BBC article from June 2023 quoting Prof Yoshua Bengio).
It can be helpful to draw a parallel between safe use of AI and safe driving – you don’t have to be an automotive engineer to know how to drive a car safely, just like you don’t need to be able to build an AI system in order to understand that you have to take care around the impact it may have on people. We’ve outlined some specific pointers below to help you with this in relation to data protection law.
It is important to keep in mind that an AI system is a software tool, not a sentient being. Human beings have a strong tendency to anthropomorphise things, and this tendency is especially strong when we deal with generative AI systems that produce natural-language responses to our prompts. They ‘sound’ human, but only because they have been trained on human language, and because they can mimic human language we start to think of these systems as sentient beings.
Creators of these systems are aware of this fantasy and appear to be playing up to it through features like, for example, ‘MyAI’ – a chatbot Snapchat inserted in amongst your chats with friends and family. Increasingly we are allowing AI systems to take up space in our most intimate places: our messaging apps; our homes (think of all the smart speakers in kids’ bedrooms); and even our minds – as explored in Dr Susie Alegre’s book ‘Freedom to Think’.
It is important to consider the power dynamics at play in the development and use of AI systems. Large tech companies who position themselves in the ‘AI arms race’ are driven by commercial interests to consolidate their power, exert influence over public policy, and crowd out competitors. This power play is made possible by the immense cache of knowledge and personal data these tech companies hold – most of which has been handed to them by their customers. Whenever a locus of power is created we have a moral and ethical imperative to find those who are powerless and ensure that their interests are being safeguarded.
Below are signposts to key facets of data protection law that you need to understand and implement if you are using personal data within an AI system, with links to more detailed information:
Scope and definitions
Our local data protection law applies to any processing of personal data within the jurisdiction of the Bailiwick of Guernsey, or where goods or services are offered to Bailiwick residents. Personal data refers to any information about or related to an identified or identifiable living individual. The law has a wide reach and captures any AI system that uses personal data.
Lawful processing conditions
The law requires that personal data be processed on lawful grounds. The lawful processing conditions for using someone’s data include: having the data subject’s consent, fulfilling contractual obligations, complying with legal obligations, protecting vital interests, performing tasks in the public interest, and pursuing the legitimate interests of the organisation. If you are using an AI system to process personal data you must identify an appropriate lawful basis upfront.
The Seven Data Protection Principles
The data protection law outlines seven key principles that AI systems must follow when personal data is involved. These principles are: Lawfulness, Fairness & Transparency / Purpose Limitation / Minimisation / Accuracy / Storage Limitation / Integrity & Confidentiality / Accountability. If you are using an AI system you must incorporate these principles into its design and operation. You may need to pay particular attention to ‘lawfulness, fairness, and transparency’, as this principle is arguably the most challenging to apply to AI systems: they can arrive at decisions in ways that are unclear even to their own designers (this is often referred to as the ‘black box’ problem).
Data subjects (a.k.a. living people) have ten rights under the local data protection law. These rights include the right to be informed, right of access, right to rectification, right to erasure, right to restrict processing, right to data portability, right to object, and rights related to automated decision-making and profiling. If you are using an AI system you must provide mechanisms to facilitate the exercise of these rights by data subjects. In particular, you must be ready and able to allow an individual to have any automated decision-making or profiling reviewed by a human being, if they ask.
Data protection by design and by default
If you work with personal data you must establish and carry out proportionate technical and organisational measures to effectively comply with the law. These measures must ensure that, by default, only personal data that is necessary for the purpose is processed, and you must integrate necessary safeguards into your processing to ensure compliance with the law and safeguard the rights of individuals.
If you are using an AI system for high-risk processing of personal data, such as automated decision-making or profiling, you must consider whether a Data Protection Impact Assessment (DPIA) is required. DPIAs are an important compliance tool when you are embarking on new processing or making changes to existing processes; in some cases one will be a legal requirement. A DPIA involves systematically evaluating the potential impact of data processing activities on individuals’ rights and significant interests. Organisations deploying AI systems should conduct DPIAs when necessary and mitigate identified risks.
To ensure people’s information is safeguarded in a practical way, and so that ‘data protection’ is not perceived as a blocker, the ODPA is open to exploring innovative practices (including the use of AI systems). Please contact us to find out more.
Security and data breach notification
If you are using an AI system that involves people’s data you must implement appropriate technical and organisational measures to ensure the security of that data. If you experience a personal data breach, you must notify the Office of the Data Protection Authority (ODPA) within 72 hours of becoming aware of it and, where appropriate, tell affected data subjects. If the breach is unlikely to result in any risk to the ‘significant interests’ of the affected data subject(s) there is no legal requirement to report it to the ODPA or to notify the people affected, but you should keep an internal record. If you are using AI systems you must have mechanisms in place to detect, respond to, and report data breaches effectively.
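As an illustrative sketch only (the field names below are assumptions for the example, not terms taken from the Law), the breach-response steps described above can be expressed as a simple decision function:

```python
# Illustrative sketch of the breach-response steps described above.
# Field names are assumed for the example, not drawn from the Law itself.

def breach_response(risk_to_significant_interests_likely: bool) -> dict:
    """Return the actions to take after discovering a personal data breach."""
    return {
        # Keep an internal record of every breach, reportable or not.
        "keep_internal_record": True,
        # Notify the ODPA within 72 hours of becoming aware, where the
        # breach is likely to risk data subjects' significant interests.
        "notify_odpa_within_72h": risk_to_significant_interests_likely,
        # Where appropriate, also tell the affected data subjects.
        "consider_notifying_data_subjects": risk_to_significant_interests_likely,
    }
```

In practice the risk assessment itself requires human judgement; automation can only route the outcome, not make the call.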
International data transfers
The data protection law imposes restrictions on the transfer of personal data to countries or territories outside the Bailiwick of Guernsey, unless adequate safeguards are in place to protect the personal data being transferred. If you are using AI systems that involve cross-border personal data transfers you must ensure compliance with these requirements, for example by using standard contractual clauses or other approved mechanisms. This is a highly technical and evolving legal area, so you may benefit from seeking legal advice.
- PRIMER: ‘AI, human rights, democracy and the rule of law: A primer prepared for the Council of Europe’ (The Alan Turing Institute)
- GLOSSARY: Key Terms for AI Governance (IAPP – International Association of Privacy Professionals)
- REPORT: ‘AI and Human Rights: Aligning Our Tech Future With Our Human Experience’ (All Tech is Human)
- GUIDANCE: Artificial Intelligence - this detailed guidance was issued by The UK’s Information Commissioner and is aimed at UK-based organisations. It relates to the UK GDPR which is equivalent (though not identical) to our local law.
- GUIDANCE: Explaining decisions made with AI - This co-badged guidance by the ICO and The Alan Turing Institute aims to give organisations practical advice to help explain the processes, services and decisions delivered or assisted by AI, to the individuals affected by them.
- GLOBAL STANDARD: Ethics of Artificial Intelligence - UNESCO produced the first-ever global standard on AI ethics, developing two practical methodologies – a Readiness Assessment Methodology and an Ethical Impact Assessment – for UNESCO's 193 Member States to use in implementing its recommendations. Whilst these tools are aimed at the public sector, developers and others in the private sector and elsewhere can use them to facilitate the ethical design, development and deployment of an AI system.
- ISO/IEC 42001 STANDARD: The International Organization for Standardization (ISO) is an independent, non-governmental international organisation. In December 2023 it published ISO/IEC 42001, an international standard that specifies requirements for establishing, implementing, maintaining, and continually improving an Artificial Intelligence Management System.
- UK AI SAFETY INSTITUTE: The UK Government launched the AI Safety Institute (AISI) in November 2023 and describes it as "the world’s first state-backed organisation focused on advanced AI safety for the public benefit and is working towards this by bringing together world-class experts to understand the risks of advanced AI and enable its governance."
- ACADEMIA: University of Oxford Institute for Ethics in AI
- RESEARCH: The Ada Lovelace Institute is an independent research institute working to ensure that data and AI work for people and society.
- LEGISLATION: The Artificial Intelligence Act – the first proposed legal framework aiming to significantly bolster regulation of the development and use of artificial intelligence within the European Union.