New guidelines aim to correct discriminatory algorithms

VUB researchers help to create AI rules for government organisations and companies

A team of researchers, including scientists from VUB, has compiled a step-by-step guide to how organisations can prevent discrimination when using algorithms.

The research team – with specialists from VUB, Tilburg University, Eindhoven University of Technology and the National Human Rights Institute of the Netherlands – was commissioned by the Dutch Ministry of the Interior to study the technical, legal and organisational conditions that need to be taken into account when organisations use artificial intelligence in their operations. The guidelines apply to both the public and private sectors and are available in Dutch and English.

Algorithms are increasingly being used to work in a risk-driven way and to make automated decisions. The flaws of this approach were clearly demonstrated by the recent Dutch child allowance affair, in which minorities were systematically discriminated against by the tax authorities, partly on the basis of algorithms. The new guideline provides rules and preconditions for government organisations and companies that want to use algorithms and AI.

The guideline establishes six steps for the development and application of algorithms: 

  • determining the problem;
  • collecting data;
  • selecting data;
  • establishing a policy for automated decision-making;
  • implementing the policy;
  • testing the policy.

For each step, the guideline sets out the legal rules that parties need to take into account. These are drawn from sources including the General Data Protection Regulation, the Equal Treatment Act and the European Convention on Human Rights, and are supplemented with real-world best practices and examples from the literature.

Involving parties  

The guideline requires, among other things, that organisations involve stakeholders and relevant groups in society in the development of the algorithm from the beginning and that external, independent experts critically monitor the entire process. It also requires that people affected by automated decisions should be informed and be able to object to them, that the system should be stopped when errors and shortcomings are detected, and that the entire process should be permanently monitored.  

Human rights   

The guideline is a result of the Dutch government’s commitment to human rights as a starting point for the use of AI and is in line with the recently adopted motion by politicians Jesse Klaver and Lilianne Ploumen on the new governance culture, in which the country’s parliament stated that “racism must be ended as soon as possible, not least by stopping the use of discriminatory algorithms”. 

The document can be found here.

10 lessons for algorithms 

Involve stakeholders: Stakeholders and groups affected by automated decision-making should be involved in the development of the algorithm, not at the end of the process but from the very beginning. There should be regular reviews with stakeholders during the development process.

Think twice: Implementing authorities and companies now often opt by default for automatic decision-making and risk-driven work, unless there are substantial objections. Because algorithms by definition make decisions based on group characteristics and do not take into account the specific circumstances of individual cases, the risk of discrimination is intrinsic. Organisations should always first check whether the objectives can be achieved without an algorithm.

Consider the context: By using algorithms, a process becomes model-driven; this is efficient and can promote consistency. However, the rules an algorithm learns are derived from the data it is given and can become detached from the real world and the human scale. Therefore, decision-making processes need a person to check: does what the algorithm is doing now make sense? The team working on an AI project should be as diverse as possible, taking into account both professional and personal backgrounds, such as the ethnicity, gender, orientation, cultural and religious background and age of the team members.

Check for bias in the data: Algorithms work with data, but the data that organisations hold is often incomplete and biased. An algorithm that learns from today’s world will learn that men are more likely to be invited for an interview for a managerial position than women; an algorithm trained on the police database, which has an overrepresentation of data from neighbourhoods with many residents with an immigrant background, will conclude that crime is concentrated in those neighbourhoods. Organisations should therefore always check the data is balanced, complete and up to date.
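
As an illustration of the kind of check this lesson calls for, the sketch below uses Python and pandas to report group balance, missing values and record age for a training set. The column names and data are invented for the example; the guideline itself does not prescribe any particular tooling.

```python
# Minimal sketch of a data audit, assuming a tabular training set.
# Column names ("gender", "invited", "last_updated") are hypothetical.
import pandas as pd

def audit_training_data(df: pd.DataFrame, group_col: str, date_col: str) -> dict:
    """Report group balance, missing values and data recency for a training set."""
    return {
        # Share of each group in the data: a strong skew may signal bias.
        "group_shares": df[group_col].value_counts(normalize=True).to_dict(),
        # Fraction of missing values per column: gaps often hit some groups harder.
        "missing_per_column": df.isna().mean().round(3).to_dict(),
        # Oldest and newest records: stale data encodes an outdated reality.
        "date_range": (df[date_col].min(), df[date_col].max()),
    }

if __name__ == "__main__":
    data = pd.DataFrame({
        "gender": ["m", "m", "m", "m", "f", "f"],
        "invited": [1, 1, 0, 1, 0, 0],
        "last_updated": pd.to_datetime(["2019-01-01"] * 3 + ["2023-06-01"] * 3),
    })
    print(audit_training_data(data, group_col="gender", date_col="last_updated"))
```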

Set clear objectives: Before the algorithm is developed, success criteria must be established. What is the allowed margin of error and how does it differ from existing processes? What advantages should the artificial intelligence system have in order to be worth the investment? One year after the system is put into practice, evaluate whether these benchmarks have been achieved; if not, the system should in principle be stopped.  
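
A minimal sketch of how such a benchmark check might look in code, assuming the organisation has agreed a baseline error rate and a maximum allowed error in advance; the figures below are placeholders, not values from the guideline.

```python
# Compare the observed error rate against benchmarks agreed before deployment.
def benchmarks_met(observed_error: float,
                   baseline_error: float,
                   max_allowed_error: float) -> bool:
    """True if the system improves on the existing process and stays within the agreed margin."""
    return observed_error <= baseline_error and observed_error <= max_allowed_error

# Hypothetical one-year evaluation: the manual process had a 12% error rate and
# the organisation agreed up front to tolerate at most 8% from the automated system.
if not benchmarks_met(observed_error=0.10, baseline_error=0.12, max_allowed_error=0.08):
    print("Benchmarks not met after one year: stop or rework the system.")
```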

Monitor continuously: Algorithms are often self-learning. This means they adapt themselves to the context in which they are deployed. How such an algorithm develops, however, cannot be predicted, particularly because the context may change. That is why the system must be permanently monitored. Even if the automatic decision-making is not discriminatory at the moment it is deployed, it may be different after a year or even a month. 
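
One possible shape for such monitoring is sketched below: a routine that recomputes a simple selection-rate ratio between two groups for each new batch of decisions and raises a flag when the gap grows too large. The groups, the 0.8 threshold and the data are illustrative assumptions, not values from the guideline.

```python
from typing import Iterable

def selection_rate(decisions: Iterable[tuple[str, bool]], group: str) -> float:
    """Share of positive decisions for one group; decisions are (group, accepted) pairs."""
    outcomes = [accepted for g, accepted in decisions if g == group]
    return sum(outcomes) / len(outcomes) if outcomes else 0.0

def within_disparity_limit(decisions, group_a: str, group_b: str,
                           threshold: float = 0.8) -> bool:
    """True if the ratio between the two groups' selection rates stays above the threshold."""
    rate_a, rate_b = selection_rate(decisions, group_a), selection_rate(decisions, group_b)
    if max(rate_a, rate_b) == 0:
        return True  # no positive decisions at all, nothing to compare
    return min(rate_a, rate_b) / max(rate_a, rate_b) >= threshold

# Hypothetical batch of one month's automated decisions.
batch = [("a", True), ("a", True), ("a", False), ("b", True), ("b", False), ("b", False)]
if not within_disparity_limit(batch, "a", "b"):
    print("Selection-rate gap exceeds the agreed limit: escalate for human review.")
```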

Involve external experts: Companies and government organisations are often not neutral about their own systems. Costs have been incurred, time invested, prestige involved. Therefore, the evaluation of the system should not be entirely in the hands of the organisation itself, but external experts should be involved. This group of experts must have knowledge of the domain in which the algorithm is deployed, the prevailing legal rules and the prevailing technical standards. 

Check for indirect discrimination: Checking algorithms should not only be a matter of whether the system explicitly makes decisions based on discriminatory grounds such as race, orientation, beliefs or age. Even if automatic decision-making systems do not take these factors into account directly, they may do so indirectly. If a predictive policing system uses postcode areas, for example, it can lead to indirect discrimination, because it can advise the police to concentrate surveillance in particular areas. The risk of self-fulfilling prophecies is high.
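
The sketch below illustrates one way to audit for such proxy effects: a protected attribute that is deliberately kept out of the model is still used afterwards to compare outcomes across groups. The column names and figures are invented for the example.

```python
import pandas as pd

# Hypothetical decision log: the model sees only the postcode, while the
# protected attribute is held back and used solely for this audit.
decisions = pd.DataFrame({
    "postcode": ["1000", "1000", "1050", "1050", "1080", "1080"],
    "flagged": [1, 1, 0, 0, 1, 1],                # model output
    "migration_background": [1, 1, 0, 0, 1, 0],   # audit-only attribute
})

# Flag rate per group: a large gap suggests the postcode acts as a proxy.
rates = decisions.groupby("migration_background")["flagged"].mean()
print(rates)
print("Ratio between group flag rates:", round(rates.min() / rates.max(), 2))
```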

Check legitimacy: Not all discrimination is prohibited; indirect or even direct discrimination may be permissible in some cases. An automatic decision-making system that helps a casting agency to select letters of application may, for instance, discriminate on the basis of gender if the agency is looking for a female actor. The use of postcode areas or other factors that may lead to indirect discrimination is not prohibited in all cases; whether it is permissible depends on the context. Organisations should always check not only whether the system makes a distinction, but also how, why and whether there are good reasons for it.

Document everything: Document all decisions relating to the collection of data, the selection of data, the development of the algorithm and the adjustments that are made to it. Documentation must be understandable for citizens who invoke their rights, for the external experts who carry out an independent audit and for supervisors such as the Dutch Data Protection Authority who want to check the system.   
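
As a rough illustration, documentation of this kind can also be kept in a machine-readable log; the fields and file name below are assumptions inspired by the lesson, not a format the guideline prescribes.

```python
import json
from datetime import date

# One hypothetical entry documenting a data-selection decision.
record = {
    "step": "data selection",
    "decision": "Excluded records older than five years from the training set",
    "rationale": "Outdated records reflect an earlier, more biased intake process",
    "decided_by": "project team, with an external expert present",
    "date": date.today().isoformat(),
}

# Append to a running log that citizens, auditors and supervisors can consult.
with open("algorithm_decision_log.jsonl", "a", encoding="utf-8") as log:
    log.write(json.dumps(record, ensure_ascii=False) + "\n")
```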

More information

Bram Visser

[email protected]

The full guidelines

The condensed guidelines

The one-pager

 

Contact us
Sicco Wittermans, Spokesperson and Press Relations, Vrije Universiteit Brussel
About Press - Vrije Universiteit Brussel

Vrije Universiteit Brussel is an internationally oriented university in Brussels, the heart of Europe. By providing excellent research and education on a human scale, VUB wants to make an active and committed contribution to a better society.

The World Needs You

The Vrije Universiteit Brussel assumes its scientific and social responsibility with love and decisiveness. That’s why VUB launched the platform De Wereld Heeft Je Nodig – The World Needs You, which brings together ideas, actions and projects based on six Ps. The first P stands for People, because that’s what it’s all about: giving people equal opportunities, prosperity, welfare, respect. Peace is about fighting injustice, big and small, in the world. Prosperity combats poverty and inequality. Planet stands for actions on biodiversity, climate, air quality, animal rights... With Partnership, VUB is looking for joint actions to make the world a better place. The sixth and last P is for Poincaré, the French philosopher Henri Poincaré, from whom VUB derives its motto that thinking should submit to nothing except the facts themselves. VUB is an ‘urban engaged university’, strongly anchored in Brussels and Europe and working according to the principles of free research.

www.vub.be/dewereldheeftjenodig

 


Press - Vrije Universiteit Brussel
Pleinlaan 2
1050 Brussel