Council of Europe Handbook on Human Rights and Artificial Intelligence
The Council of Europe’s Handbook on Human Rights and Artificial Intelligence has been published. It sets out how existing European human rights standards apply to the use of AI systems, particularly in public governance. It is intended as a practical reference tool for policymakers, regulators and public authorities, rather than as a statement of new legal obligations.
The Handbook is grounded in established instruments, including the European Convention on Human Rights (ECHR) and the European Social Charter, and complements the Framework Convention on Artificial Intelligence and Human Rights, Democracy and the Rule of Law. It proceeds on the basis that these frameworks continue to apply where AI systems are used, including where public functions are delivered by private actors.
A key feature of the Handbook is its focus on the AI system lifecycle. It highlights that potential human rights impacts may arise at multiple stages, including system design, data collection, training, deployment, operation and monitoring. The Handbook also identifies a number of recurring, cross‑cutting risks that arise across sectors.
The Handbook also includes a sector‑specific analysis covering the administration of justice, law enforcement and public security, immigration and border control, democratic processes, healthcare, social services and welfare, education, and labour and employment. For each sector, it outlines common AI use cases, identifies the human rights most likely to be engaged, and highlights safeguards and good practices relevant to those contexts.
Throughout, the Handbook reiterates that states retain positive obligations under human rights law. These include obligations to regulate and supervise AI use, ensure appropriate oversight and accountability mechanisms, and provide accessible and effective remedies, regardless of whether AI systems are developed or operated by public authorities or private providers.
If you would like to discuss how current or future regulations impact what you do with AI, please contact Tom Whittaker, Brian Wong, Lucy Pegler, Martin Cook, Liz Griffiths or any other member in our Technology team. For the latest on AI law and regulation, see our blog and newsletter.