Algorithmic bias: what are the risks and what can be done?
The European Union Agency for Fundamental Rights (FRA) has published a report on bias in algorithms: how it arises and what its impacts are. The report is centred on case studies of algorithms used in predictive policing and offensive speech detection, but the issues and potential actions it identifies are relevant to other use cases for algorithms and AI systems. Here we highlight some of the key issues with algorithmic bias in those case studies and what the FRA says EU institutions need to do.
The report discusses various risks that can lead to algorithmic bias; here we pick out two:
Feedback loops in predictive policing
There is a risk that feedback loops in algorithms and AI systems reinforce and exacerbate bias.
For example:
A feedback loop occurs when predictions made by a system influence the data that are used to update the same system. It means that algorithms influence algorithms, because their recommendations and predictions influence the reality on the ground. This reality then becomes the basis for data collection to update algorithms. Therefore, the output of the system becomes the future input into the very same system.
This is a particular issue for high-risk AI systems, such as predictive policing (using algorithms to predict victims or suspects of crime, or crime hotspots, to inform policing policy).
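To make the mechanism concrete, here is a minimal simulation of our own (the districts, numbers and allocation rule are hypothetical assumptions for illustration, not taken from the report): two districts have identical true crime rates, but historical records are skewed, and because patrols are allocated in proportion to recorded crime, the records keep mirroring the patrol allocation rather than reality.

```python
# Toy simulation of a predictive-policing feedback loop (illustrative only;
# the numbers and districts are hypothetical, not drawn from the FRA report).
import random

random.seed(0)

# Two districts with the SAME true underlying crime rate.
TRUE_RATE = {"district_a": 0.10, "district_b": 0.10}

# Historical records are skewed: district_a was patrolled more in the past,
# so more of its crime made it into the data the algorithm learns from.
recorded = {"district_a": 60, "district_b": 40}

for year in range(10):
    total = sum(recorded.values())
    # The "algorithm": allocate 100 patrol shifts in proportion to recorded crime.
    patrols = {d: round(100 * n / total) for d, n in recorded.items()}
    # More patrols mean more of the (identical) true crime gets observed and
    # recorded, which in turn drives the next round of allocation.
    for d in recorded:
        recorded[d] += sum(
            random.random() < TRUE_RATE[d] for _ in range(patrols[d] * 10)
        )
    print(year, patrols)

# The initial 60/40 skew is locked in: the recorded data keeps reflecting
# where patrols were sent, not the identical true rates, so it never
# self-corrects.
```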
Several factors can contribute to the formation of feedback loops:
Data and transparency issues in offensive speech detection
The report also looked at online hate speech detection systems, which use machine learning and natural language processing (NLP) to identify potentially offensive speech. It identifies several reasons why such detection tools can produce biased results:
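One frequently reported failure mode is that terms referring to particular groups appear disproportionately in offensive training examples, so the terms themselves become learned signals of offensiveness. The sketch below is a toy illustration of ours using scikit-learn; the corpus and the identity term chosen are assumptions for demonstration, not examples taken from the report.

```python
# Minimal sketch of how skewed training data can bias a speech classifier.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# In this toy corpus the identity term "muslims" appears mostly in posts
# labelled offensive, so the word itself becomes a learned signal of offence.
train_texts = [
    "muslims are awful",              # 1 = offensive
    "muslims should be banned",       # 1
    "muslims ruin everything",        # 1
    "what a lovely sunny day",        # 0 = not offensive
    "great football match tonight",   # 0
    "muslims celebrate eid today",    # 0
]
train_labels = [1, 1, 1, 0, 0, 0]

model = make_pipeline(CountVectorizer(), LogisticRegression())
model.fit(train_texts, train_labels)

# A neutral sentence mentioning the group is pushed towards the offensive
# class purely because of the learned word association.
neutral = "muslims across the city organised a charity dinner"
print(model.predict_proba([neutral])[0, 1])  # P(offensive) for the neutral text
```

Real systems are far larger, but the mechanism is the same: where mentions of a group co-occur with abuse in the training data, neutral mentions of that group inherit the penalty.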
The FRA called for EU institutions and countries to:
Algorithmic bias is a real and significant concern. The FRA point to how the Dutch tax authorities used algorithms that mistakenly labelled around 26,000 parents as having committed fraud in their childcare benefit applications, causing financial and psychological harm; the algorithms were found to be discriminatory. As the use and capabilities of algorithmic and AI systems grow, particularly where they impact human rights, so do concerns about how to address the risk of algorithmic bias. The FRA recognise that the EU's proposed AI Act is an opportunity to address those concerns.
If you would like to discuss how current or future regulations impact what you do with AI, please contact Tom Whittaker or Brian Wong.
Well-developed and tested algorithms can bring a lot of improvements. But without appropriate checks, developers and users run a high risk of negatively impacting people's lives.
https://fra.europa.eu/en/news/2022/test-algorithms-bias-avoid-discrimination