Responsible AI and Global AI Governance: G7 Digital and Tech ministers’ statement
The Digital and Technology ministers of the G7 countries published a declaration addressing, amongst other things, responsible AI and global AI governance. It came ahead of the G7 leaders' summit on 19 to 21 May 2023, and shortly before the vote of the EU's leading parliamentary committees on the EU AI Act (see Euractiv's article here).
In this article we highlight the key points from the Digital and Technology ministers' statement and summarise their relevance to the UK's White Paper on AI regulation (see our article on the White Paper).
The G7 Summit is an international forum held annually for the leaders of the G7 member states of France, the United States, the United Kingdom, Germany, Japan, Italy, and Canada (in order of rotating presidency), and the European Union (EU).
The Digital and Technology ministers of the G7 countries set out a series of commitments in their declaration.
It is notable that the G7 declaration recognises that G7 members may take different approaches to achieving trustworthy AI. The UK's White Paper on AI regulation shows that the UK will take a different approach from the EU with its proposed AI Act (click here for a flowchart on navigating the EU AI Act). Given that AI systems, and those involved in the AI lifecycle, often operate internationally, the G7 Digital and Technology ministers recognise the need to seek interoperability between regulatory frameworks that may otherwise diverge.
The UK is also clear about its desire to ensure global interoperability and international engagement; the White Paper contains a section on the subject. The UK wants to continue to work closely with international partners both to learn from, and to influence, regulatory and non-regulatory developments. It cites numerous examples of where it is already doing this, including: active membership of the Organisation for Economic Co-operation and Development's governance working party; being a contributor to, and founding member of, the Global Partnership on AI; and seeking bilateral AI engagement with other nations and jurisdictions, such as the EU (and its member states), the US, Canada, Singapore and Australia.
There remains a risk, therefore, that different approaches to regulating AI could result in multiple, complex and diverging regulatory frameworks to navigate. However, the positive message from both the UK (in the White Paper) and the G7 Digital and Technology ministers (in the declaration) is that this risk is recognised and that there is political will to tackle it, in large part by encouraging active discussion across countries and organisations, and by developing international standards.
The declaration states, for example:

"We reaffirm our commitment to promote human-centric and trustworthy AI based on the OECD AI Principles and to foster collaboration to maximise the benefits for all brought by AI technologies. We oppose the misuse and abuse of AI to undermine democratic values, suppress freedom of expression, and threaten the enjoyment of human rights."

If you would like to discuss how current or future regulations impact what you do with AI, please contact Tom Whittaker or Brian Wong.