Video recording | Transcript

AI can bring enormous advantages, but it also presents challenges, in particular in the field of human rights. Since AI is often used to process personal data, its compliance with existing data protection legal frameworks remains an open question. GDPR was not designed with AI in mind. Whoever wants to train an AI system in the EU faces tough challenges when using training data that is not fully anonymized; this is particularly true when medical AI applications are developed. At the same time, GDPR is still poorly enforced against big tech companies. GDPR also provides little regulatory guidance on the use of AI techniques that can derive very sensitive information, such as sexual preferences, from innocent-looking data. Although GDPR imposed a heavy compliance burden on companies, meaningful control over personal data by data subjects is still missing. It might seem that GDPR mostly gifted us with annoying cookie walls.

Can one regulation in one region of the world ensure appropriate protection for all? Aren't there already existing or emerging international laws or solutions? Can't we regulate AI to empower data subjects? For example, some automated ruleset could free us from clicking through the same cookie consent settings again and again. The EU and the Council of Europe have announced plans to regulate AI. However, will that fix the issues with GDPR that have been amplified by AI technology? What is the way forward to turn users from data objects into data subjects with meaningful control over their personal data and privacy?