Global AI regulation update

Published: 3 November 2023

Governments in the UK and US are keen to show leadership in the race to regulate the future development of artificial intelligence. It is equally important to understand how AI is already covered by existing data protection regulation, and the rights people already have.

On 1-2 November 2023 the UK Government hosted its first ‘AI Safety Summit’. This major event, held at Bletchley Park (home of WWII’s codebreakers), aimed to: “consider the risks of AI, especially at the frontier of development; discuss how they can be mitigated through internationally coordinated action.”

The UK's summit had five objectives:
1. a shared understanding of the risks posed by frontier AI and the need for action
2. a forward process for international collaboration on frontier AI safety, including how best to support national and international frameworks
3. appropriate measures which individual organisations should take to increase frontier AI safety
4. areas for potential collaboration on AI safety research, including evaluating model capabilities and the development of new standards to support governance
5. a showcase of how ensuring the safe development of AI will enable AI to be used for good globally

Some key outputs reported from the AI Safety Summit were: 
1. Publication of a policy paper called ‘The Bletchley Declaration’, which the UK Government says "announces a new global effort to unlock the enormous benefits offered by AI - by ensuring it remains safe".
2. Professor Yoshua Bengio has agreed to chair a new body that will seek to establish scientific consensus on the risks and capabilities of frontier AI systems via a ‘State of the Science’ Report.
3. The UK’s current ‘Frontier AI Taskforce’ will become the AI Safety Institute, a new body focused on advanced AI safety for the public interest, understanding its risks and enabling its governance.

Meanwhile, on 30 October 2023, US President Joe Biden issued a ‘landmark’ executive order that The White House says aims “to ensure that America leads the way in seizing the promise and managing the risks of artificial intelligence (AI). The Executive Order establishes new standards for AI safety and security, protects Americans’ privacy, advances equity and civil rights, stands up for consumers and workers, promotes innovation and competition, advances American leadership around the world, and more.” 

It is important to know that despite the above developments, AI systems that deal with information about individuals (‘personal data’) are already captured by existing data protection laws locally and internationally. These laws give individuals a raft of important protections and rights over how their personal data is used, and they place a number of legal obligations on any entities using AI systems to process personal data. 

One of the key areas where AI and data protection intersect is employment. The recently adopted ‘Resolution on Artificial Intelligence and Employment’ seeks to ensure that artificial intelligence is used appropriately in employment settings and with due regard to existing data protection laws. The resolution was adopted at the 45th Global Privacy Assembly (GPA), an international grouping of data protection authorities, held in Bermuda in October 2023. It sets out thirteen points whose importance the GPA wishes to underline. Highlights include:
• a human-centric approach to AI use in employment; 
• recognition of the existing right people have not to be subject to decisions made about them solely by AI (you can ask for a human to review an AI decision); 
• the existing right of all people to be given information about how their personal data is used by AI systems. 

Read more about how to ensure your AI systems comply with local data protection law.