For several years, artificial intelligence (AI) systems have been widely used in various aspects of society. The biggest challenge this development presents is that we often don’t know how the algorithms behind the systems work and whether the choices they make are fair. To address that problem, Vrije Universiteit Brussel AI specialist Dr Carmen Mazijn investigated how we can better understand AI systems so they don’t have a negative impact on our social institutions through bad decision-making and discrimination.
For her PhD, Mazijn carried out four years of interdisciplinary research in collaboration with the Data and Analytics Laboratory at the Faculty of Social Sciences & Solvay Business School and the Applied Physics research group at the Faculty of Science and Bioengineering. She focused on decision algorithms: AI systems that support human decisions or even replace humans in the decision-making process. In a hiring selection process, for example, an AI model may appear to make gender-equal choices, but scrutiny of the algorithm may reveal that those choices are motivated quite differently.
“The algorithm may sometimes seem to decide fairly, but not always for the right reasons,” says Mazijn. “To know for sure if an AI system has bias or is really making socially acceptable decisions, you need to crack the system and its algorithms.”
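One way to make that idea concrete is a counterfactual probe: flip one attribute of an input and see whether the decision changes. The sketch below is illustrative only; the `model` rule and feature names are hypothetical assumptions, not Mazijn's actual detection technique.

```python
# Minimal sketch: probing a hiring model by flipping attributes.
# The decision rule below is hypothetical, chosen to show how a model
# can look gender-neutral while relying on a correlated proxy feature.

def model(candidate):
    # Scores on "experience", but quietly penalises a career gap,
    # a feature that may correlate with a protected attribute.
    score = candidate["experience"] * 2
    if candidate["career_gap_years"] > 1:   # proxy feature
        score -= 3
    return score >= 10

def counterfactual_flip(candidate, attr, value):
    """Return a copy of the candidate with one attribute changed."""
    copy = dict(candidate)
    copy[attr] = value
    return copy

candidate = {"experience": 6, "career_gap_years": 2, "gender": "F"}
baseline = model(candidate)

# Flipping only the protected attribute leaves the decision unchanged,
# so the model *looks* gender-equal...
flipped = counterfactual_flip(candidate, "gender", "M")
print(model(flipped) == baseline)  # True

# ...but flipping the proxy feature changes the outcome, revealing
# that the decision actually rests on the correlated proxy.
no_gap = counterfactual_flip(candidate, "career_gap_years", 0)
print(no_gap != candidate and model(no_gap) != baseline)  # True
```

The probe passes the naive gender test yet fails the proxy test, which is exactly the gap between "seems to decide fairly" and "for the right reasons".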
Mazijn developed a detection technique that can crack open AI algorithms and examine whether a system's logic is acceptable. This allows her to assess whether the system could be applied in the real world. During her research, she noticed that many AI systems interact with one another, creating potential feedback loops when one or more of the systems is biased.
“A police department can use AI to determine which streets to deploy more patrols on,” she gives as an example. “By deploying more patrols, more breaches are identified. When that data is fed back into the AI system, the bias is amplified, whether or not it reflects reality, and you get a self-fulfilling prophecy.”
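The feedback loop she describes can be sketched in a few lines. The numbers and the allocation rule below are illustrative assumptions (not from the thesis): two streets have the same true breach rate, but a small bias in the historical data steers patrols, and the extra patrols generate the data that justifies them.

```python
# Minimal sketch of a predictive-policing feedback loop.
# Both streets have the SAME true breach rate per patrol; the only
# asymmetry is a slight bias in the recorded history.

TRUE_RATE = 0.1                               # identical for both streets
recorded = {"street_a": 12, "street_b": 10}   # slightly biased history

for year in range(5):
    # Allocation rule (assumed): the street with the most recorded
    # breaches gets the bulk of the patrols.
    top = max(recorded, key=recorded.get)
    patrols = {s: (80 if s == top else 20) for s in recorded}
    for street in recorded:
        # More patrols mechanically find more breaches, and those
        # breaches feed next year's allocation.
        recorded[street] += patrols[street] * TRUE_RATE

print(recorded)  # {'street_a': 52.0, 'street_b': 20.0}
```

Although the underlying rates are identical, the recorded gap grows from 12 vs 10 to 52 vs 20: the system's own output has manufactured the evidence for its bias.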
The key message is that policymakers and business leaders should approach the use of AI systems intelligently and consider the long-term effects of choosing one system over another. To make the results of the study more accessible to policymakers and their staff, Mazijn also drew up policy guidelines indicating how her technical and social insights could be applied.
Her thesis was published on 5 September under the title “Black Box Revelation: Interdisciplinary Perspectives on Bias in AI” and was supervised by Prof Dr Vincent Ginis and Prof Dr Jan Danckaert.
Dr. Carmen Mazijn: +32 475 56 70 09