Ten-step practical AI guidance

Artificial Intelligence is developing quickly and has huge potential to improve how we live and work. Because it relies so heavily on data, it also raises important data protection considerations that need to be kept in mind when using it. 

AI is becoming an inseparable part of everyday life, from smart assistants and online recommendations to healthcare and public services. As this technology evolves, it is important that it is used in ways that are fair, safe, and respectful of people’s rights.

The challenge for all organisations is to harness the power of AI in their work while managing its associated risks.

To support this, the Office of the Data Protection Authority (ODPA) has created guidance to help people and organisations understand how AI should handle personal data responsibly. This includes making sure: 

  • AI systems are transparent and accountable
  • People's privacy and rights are respected
  • Decisions made by AI are fair and understandable

It explains in plain English how to use AI in a way that respects individuals’ rights, complies with privacy laws, and maintains trust.   

But first things first. What do we mean when we talk about AI?  

In its most general form: 

‘Artificial Intelligence’ is technology that allows computers to learn from data and perform tasks that would normally require human thinking. 

The type of AI used by organisations is most likely to be:  

Generative AI, which creates new content such as text, images or code based on the data it has learned from; or

Agentic AI, which not only generates content but also takes actions or makes decisions to complete tasks.

THE 10 STEPS

1. Check if you are processing personal data

Data protection obligations apply if you are processing personal data, whether this is with AI or more traditional systems.

Personal data is any information relating to an identified or identifiable living individual. As data is the fuel that drives AI, more often than not your AI deployment will involve personal data. This applies both when training generative or agentic AI and when running a live system.
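
To make this check concrete, here is a minimal sketch in Python that scans records for two common direct identifiers. The patterns and field names are illustrative assumptions only; personal data takes many subtler forms, so a clean result is a prompt for human review, not proof that no personal data is present.

    import re

    # Two crude patterns for common direct identifiers. These are illustrative
    # only: real personal data can be far subtler (names, IDs, combinations of
    # fields), so treat a clean result as a prompt for review, not as proof.
    EMAIL = re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+")
    PHONE = re.compile(r"\+?\d[\d\s-]{7,}\d")

    def flag_possible_personal_data(records):
        """Return indexes of records whose values match an identifier pattern."""
        flagged = []
        for i, record in enumerate(records):
            for value in record.values():
                if EMAIL.search(str(value)) or PHONE.search(str(value)):
                    flagged.append(i)
                    break
        return flagged

    sample = [
        {"comment": "Great service", "contact": "jo@example.com"},
        {"comment": "Delivery was late", "contact": "none given"},
    ]
    print(flag_possible_personal_data(sample))  # -> [0]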

2. Understand your role and lawful processing condition 

Before you build or deploy AI: 

  • Clarify your role. Are you a controller (you decide what data is used and why) or a processor (you process data only under someone else's instructions)?
  • Identify your lawful processing condition. This is the legal reason that allows you to collect and use personal data: under data protection law you must have a valid basis, such as consent, a contract, or a legal obligation, before you process someone's information. Our guidance on lawful processing conditions can help you decide which you are relying on.
  • Remember, if you are processing special category data, the appropriate lawful processing conditions are stricter. Special category data is personal information that is more sensitive and needs extra protection: it includes details about someone's health, racial or ethnic origin, political opinions, religion, genetics, criminal information and sexual life or orientation.

Tip: Always record the lawful condition for your AI processing.  This will often be the first question we, as the regulator, will ask if we have cause for concern. 
 

3. Carry out a data protection impact assessment (DPIA) 

A DPIA is an essential step when AI could significantly affect people’s rights, such as: 

  • Automated decisions that impact individuals 
  • Processing special category or large-scale personal data 
  • Profiling, scoring, or behaviour prediction 


Your DPIA should: 

  1. Describe how the AI system works and what data it uses
  2. Identify the risks to privacy, fairness, and accuracy
  3. Evaluate how serious those risks are
  4. Decide how to mitigate or manage them (see the sketch after this list)
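
To make those four steps concrete, here is a minimal sketch of one risk-register entry in Python. The structure and values are illustrative assumptions, not an official DPIA template.

    from dataclasses import dataclass

    @dataclass
    class DPIARisk:
        description: str  # what could go wrong, and to whom (step 2)
        likelihood: str   # how likely the harm is, e.g. "low" / "medium" / "high"
        severity: str     # how serious the impact would be (step 3)
        mitigation: str   # how you will reduce or manage it (step 4)

    # One hypothetical entry for an AI chatbot deployment.
    register = [
        DPIARisk(
            description="Chatbot could reveal one customer's details to another",
            likelihood="medium",
            severity="high",
            mitigation="Strip personal data from prompts; log and review outputs",
        ),
    ]

    for risk in register:
        print(f"{risk.severity.upper()}: {risk.description} -> {risk.mitigation}")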

The Law sees DPIAs as a key tool for accountability. Guidance on how to develop your DPIA can be found on our website.
 

4. Be transparent, fair and accountable

  • Tell people how AI is used. Explain in simple terms what data you collect, why you use AI, and how it affects them. Our guidance on data processing notices (also known as privacy notices) can assist you in getting this bit right.
  • Avoid hidden bias. Test your AI regularly for discrimination or unfair outcomes (see the sketch after this list).
  • Be able to explain decisions. If someone challenges an AI-driven decision, you must provide a clear explanation.
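
One simple way to start testing for unfair outcomes is to compare decision rates across groups. The sketch below, using hypothetical groups and decisions, illustrates a basic check of this kind; it is one of many possible fairness measures, not a complete bias audit.

    from collections import defaultdict

    def approval_rate_by_group(decisions):
        """decisions: (group, approved) pairs sampled from the AI system's output."""
        totals, approved = defaultdict(int), defaultdict(int)
        for group, ok in decisions:
            totals[group] += 1
            approved[group] += ok
        return {g: approved[g] / totals[g] for g in totals}

    # Hypothetical audit sample of automated decisions.
    decisions = [("A", True), ("A", True), ("A", False),
                 ("B", True), ("B", False), ("B", False)]
    rates = approval_rate_by_group(decisions)
    print(rates)  # {'A': 0.666..., 'B': 0.333...}

    # A large gap does not prove discrimination, but it is a clear signal
    # to investigate before the system produces unfair outcomes.
    if max(rates.values()) - min(rates.values()) > 0.2:
        print("Approval gap between groups exceeds threshold - review for bias")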

Remember: As well as being legal requirements, transparency and fairness build trust and confidence in your organisation. 
 

5. Handle training data responsibly 

  • Check where data comes from. Data should come from reliable sources, whether that is the individual themselves or trusted third parties. Do not use unlawfully obtained or scraped personal data unless you have a valid legal basis. Scraped data is information collected automatically from websites or online platforms using software tools; these tools scan pages and copy content such as text, images or contact details, often without the knowledge of the people the data relates to.
  • In 2024, we signed a joint statement with other data protection authorities that outlined our expectations with respect to guarding against data scraping. The Joint Statement calls on organisations that use scraped data, including data from their own platforms, to train large language model (LLM) AI to comply with data protection and privacy laws, as well as any AI-specific laws.
  • Anonymise where possible. Properly anonymised data is not personal data. Data is considered 'anonymised' if an organisation has irreversibly removed all information from a set of data that could have identified individual people.
  • Use data minimisation. Only include the data genuinely needed to train or run the AI model (see the sketch after this list). If you would be concerned about your own data being used in this way, that is a strong indicator the system is using more than is necessary.
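
As a minimal illustration of data minimisation, the sketch below keeps only the fields a hypothetical model needs and discards direct identifiers. The field names are assumptions, and note the caveat in the comments: stripping obvious identifiers is only a first step towards proper anonymisation.

    # Hypothetical field names. Dropping direct identifiers is a starting
    # point, but beware: true anonymisation must be irreversible, and quasi-
    # identifiers (say postcode + age + job title) can still single people out.
    NEEDED_FOR_MODEL = {"age_band", "region", "outcome"}

    def minimise(record):
        """Keep only the fields the model genuinely needs."""
        return {k: v for k, v in record.items() if k in NEEDED_FOR_MODEL}

    raw = {"name": "A. Example", "email": "a@example.com",
           "age_band": "30-39", "region": "North", "outcome": "approved"}
    print(minimise(raw))
    # -> {'age_band': '30-39', 'region': 'North', 'outcome': 'approved'}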
     

6. Respect individuals’ rights 

The Law gives people certain rights over their data. AI systems must respect these, including: 

  • Access – people can ask to see what data you hold about them (see our DSAR guidance).
  • Correction – they can request inaccurate data be fixed and incomplete data be made complete.
  • Erasure – they can ask, in some circumstances, for their data to be deleted (see the sketch after this list).
  • Objection – they can object to certain types of AI processing (e.g. profiling).
  • Human review – in the EU/UK, significant automated decisions must have a route for human involvement.
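
As an illustration of building these rights into an AI pipeline, the sketch below handles a hypothetical erasure request against a simple data store. The structure is an assumption, and, as the comments note, erasing source data does not by itself undo what a trained model has already learned.

    # Assumes a simple list-of-dicts data store keyed by "subject_id".
    # Note: deleting source records does not undo what a model has already
    # learned from them; in practice you may also need to retrain or rebuild.

    def erase_subject(dataset, subject_id):
        """Remove every record for one individual; report how many were removed."""
        kept = [r for r in dataset if r["subject_id"] != subject_id]
        return kept, len(dataset) - len(kept)

    dataset = [{"subject_id": "u1", "text": "example record"},
               {"subject_id": "u2", "text": "another record"}]
    dataset, removed = erase_subject(dataset, "u1")
    if removed:
        print(f"Removed {removed} record(s); flag the model for retraining review")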
     

7. Manage risks - do not aim for zero-risk 

The Law asks that you take a risk-based approach. You are not expected to eliminate every risk, but as part of your response to your DPIA you must:

  • Mitigate risks with sensible safeguards
  • Balance trade-offs openly and responsibly

For example, you may accept a small residual risk if the AI provides significant public or business benefits, provided you have minimised the risk as much as reasonably possible. 

However, if a high risk remains following efforts to mitigate it, you must consult with us. 
 

8. Secure the data and the AI system

  • Apply strong security measures such as encryption, access controls, and regular audits. 
  • Set retention limits so data is not kept longer than necessary (see the sketch after this list).
  • Have a data breach management policy and follow it if a breach occurs.
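
As a minimal illustration of enforcing a retention limit, the sketch below purges records older than a hypothetical one-year policy. The record structure and retention period are assumptions, not recommended settings.

    from datetime import datetime, timedelta, timezone

    RETENTION = timedelta(days=365)  # hypothetical policy: keep for one year

    def purge_expired(records, now=None):
        """Drop records older than the retention period."""
        now = now or datetime.now(timezone.utc)
        return [r for r in records if now - r["collected"] <= RETENTION]

    records = [
        {"id": 1, "collected": datetime(2023, 1, 10, tzinfo=timezone.utc)},
        {"id": 2, "collected": datetime.now(timezone.utc)},
    ]
    print([r["id"] for r in purge_expired(records)])  # -> [2]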
     

9. Keep records and be ready to explain

You should always be able to justify your decisions.  We can ask for this justification and so you should keep a clear record of: 

  • Your role (controller or processor) 
  • Your lawful processing condition 
  • DPIA results 
  • Risk mitigations 
  • How you tested and validated the AI system 
  • What you told individuals about AI use 

Remember to add your use of AI to your Record of Processing Activity, alongside your other processing activities. 
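
As an illustration, the sketch below captures those points in a simple structure. The field names and values are hypothetical; your own record may look different, but it should cover the same ground.

    # Hypothetical field names and values; your Record of Processing Activity
    # may use a different format, but the substance should cover these points.
    ai_processing_record = {
        "activity": "Customer-service chatbot",
        "role": "controller",
        "lawful_condition": "legitimate interests",
        "dpia_completed": "2025-03-01",
        "risk_mitigations": ["prompt filtering", "human review of escalations"],
        "testing": "monthly accuracy and bias checks, results archived",
        "transparency": "AI use explained in section 4 of the privacy notice",
    }

    for field, value in ai_processing_record.items():
        print(f"{field}: {value}")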

Documentation is your best defence if challenged, whether that be by an individual or us as the regulator. 
 

10. Maintain ongoing oversight 

AI systems evolve over time, so: 

  • Review risks regularly as data, models and purposes change. 
  • Monitor AI models for drift or unexpected changes that could cause harm (see the sketch after this list).
  • Update DPIAs when making significant changes. 
  • Train staff to handle AI responsibly. 
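
As a minimal illustration of drift monitoring, the sketch below compares recent model scores against a baseline. The logging assumption and threshold are hypothetical, and a real deployment would use more robust statistical tests.

    # Assumes you log a numeric score per decision (e.g. an approval
    # probability). Comparing a recent average with a baseline is a crude
    # check; production systems use richer statistical tests, but even this
    # much can catch a model whose behaviour has quietly shifted.
    def mean(xs):
        return sum(xs) / len(xs)

    def drifted(baseline_scores, recent_scores, tolerance=0.1):
        """Flag when the recent average moves beyond tolerance of the baseline."""
        return abs(mean(recent_scores) - mean(baseline_scores)) > tolerance

    baseline = [0.62, 0.58, 0.61, 0.60]
    recent = [0.45, 0.48, 0.44, 0.47]
    if drifted(baseline, recent):
        print("Model behaviour has shifted - investigate and update the DPIA")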
     

Conclusion 

Data protection compliance and accountability for AI are about balance. You are not expected to eliminate all risks, but you must know what they are, reduce them wherever possible, and be able to explain your approach.

By following this guidance, you can capture all the promise that AI offers, while reducing the risk of fines, reputational damage, and loss of trust. 

 

Quick Checklist for AI Privacy Compliance 

☐ Map out how your AI works and what data it uses
☐ Decide your role (controller or processor)
☐ Pick a lawful basis for processing
☐ Carry out a DPIA and document the results
☐ Be transparent with users
☐ Mitigate bias and explain AI decisions
☐ Minimise data and anonymise where possible
☐ Secure the data and the AI model
☐ Keep detailed records and review regularly
☐ Stay up to date with evolving guidance