The AutoFair Horizon Europe project on fair AI algorithms is supported by the EU with €3.8 million. Imperial College London, the Technion (Israel Institute of Technology), and the National and Kapodistrian University of Athens, together with industry partners, collaborate on developing explainable and transparent algorithms.

In this Horizon Europe project, we address the transparency and explainability of AI using approaches inspired by control theory. Notably, we consider a comprehensive and flexible certification of properties of AI pipelines, certain closed loops, and more complicated interconnections. At one extreme, one could consider risk-averse a priori guarantees via hard constraints on certain bias measures in the training process. At the other extreme, one could consider nuanced post hoc communication of the exact tradeoffs involved in AI pipeline choices and their effect on industrial and bias outcomes. Both extremes offer little room for optimizing the pipeline and little flexibility in explaining its fairness-related qualities. Seeking the middle ground, we suggest a priori certification of fairness-related qualities in AI pipelines via modular compositions of pre-processing, training, inference, and post-processing steps with certain properties. Furthermore, we present an extensive programme on the explainability of fairness-related qualities. We seek to inform both the developer and the user thoroughly about the possible algorithmic choices and their expected effects. Overall, this will effectively support the development of AI pipelines with guaranteed levels of performance, explained clearly. Three use cases (in human-resources automation, financial technology, and advertising) will be used to assess the effectiveness of our approaches.

Start date: 1 October 2022
End date: 30 September 2025
Call: HORIZON-CL4-2021-HUMAN-01
Project ID: 101070568

8 research organizations from 5 different countries

This three-year AutoFair project was the only project coordinated by an institution from the Czech Republic selected for funding within the HUMAN-01 call. Its principal investigator is Jakub Mareček from the Optimization cluster at our center. The Czech Technical University will receive almost €600,000. The rest of the funding will be shared by the seven other members of the consortium, including prestigious universities such as Imperial College London, the Technion (Israel Institute of Technology), and the National and Kapodistrian University of Athens.

Industry partners (IBM Research, Workable, Dateio) will provide the necessary data for modeling and will verify the applicability of the results in practice. They include big tech companies as well as local AI startups.

Project AutoFair is coordinated by Jakub Mareček, who leads the optimization research group at the AI Center FEE CTU. He received his PhD at the University of Nottingham in the UK and worked at IBM Research in Dublin, Ireland. (Photo: Petr Neugebauer, FEE CTU)

Human-compatible AI with guarantees

The goal of the AutoFair project is to guarantee that AI algorithms will not favor anyone. Decision-making poses a considerable risk when the AI acts as a black box: an algorithm can serve many people well, yet work very poorly for some. One strategy for dealing with this risk is to work carefully with data: the data chosen for training the system must be representative and must not transfer inequalities in society into the algorithms. The opposite strategy is to consistently explain the operation of AI systems and their limitations to the public. The latter strategy addresses only the communication aspects after the implementation itself. The AutoFair project combines both of these extreme approaches: it aims to improve the algorithms themselves while educating end users. It therefore draws on knowledge from computer and data science, control theory, optimization, and other scientific disciplines, including ethics and law.
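One standard way to make training data representative, in the sense described above, is pre-processing by reweighing: each example is weighted so that the protected attribute becomes statistically independent of the label in the weighted data. The sketch below is an illustrative, Kamiran-Calders-style implementation, not a description of the project's specific tooling.

```python
from collections import Counter

def reweighing(groups, labels):
    """Instance weights that decorrelate a protected attribute from the label.

    Each example with group g and label y receives the weight
        w(g, y) = P(group = g) * P(label = y) / P(group = g, label = y),
    so that, under the weighted empirical distribution, the label rate is
    identical across groups.
    """
    n = len(labels)
    p_group = Counter(groups)          # counts per group
    p_label = Counter(labels)          # counts per label
    p_joint = Counter(zip(groups, labels))  # counts per (group, label) cell
    return [(p_group[g] / n) * (p_label[y] / n) / (p_joint[(g, y)] / n)
            for g, y in zip(groups, labels)]
```

A downstream learner that accepts sample weights can then train on the reweighted data without any change to its objective.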

Issues related to the ethics of artificial intelligence are commonly explored in computer vision. "Many people use a face recognition system to unlock their mobile phones. However, for a relatively long time, it only worked reliably for white men, and until recently the success rate for ethnic minorities was significantly lower. This problem, caused by unrepresentative data, has already been eliminated, but AI has a number of other uses where similar ethical problems persist to this day," explains project coordinator Jakub Mareček.

Jakub Mareček designs and analyses algorithms for optimisation and control problems across a range of application domains, including power systems, quantum computing, and robust statistics. (Photo: Petr Neugebauer, FEE CTU)


The AutoFair project seeks to address needs for trusted AI and user-in-the-loop tools and systems in a range of industry applications.

  • Comprehensive and flexible certification of fairness. At one end, we consider risk-averse a priori guarantees on certain bias measures as hard constraints in the training process. At the other end, we consider a post hoc, comprehensible but thorough presentation of all the tradeoffs involved in AI pipeline design and their effect on industrial and bias outcomes.
  • User-in-the-loop in continuous iterative engagement among AI systems, their developers, and users. We seek both to inform the users thoroughly about the possible algorithmic choices and their expected effects, and to learn their preferences regarding different fairness measures. We subsequently aim to guide decision making, bringing together the benefits of automation in a human-compatible manner.
  • Toolkits for the automatic identification of various types of bias, and their joint compensation by automatically optimizing various and potentially conflicting objectives (fairness/accuracy/runtime/resources), visualising the tradeoffs, and making it possible to communicate them to industrial users, government agencies, NGOs, or members of the public.
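Automatic identification of bias, as in the last point, starts from per-group metrics computed on a model's predictions. The sketch below is a minimal, hypothetical helper (names are illustrative, not the project's toolkit API) that reports two widely used gaps: the demographic-parity difference in selection rates and the equal-opportunity difference in true-positive rates.

```python
def group_metrics(y_true, y_pred, groups):
    """Per-group selection rate and true-positive rate, plus two common gaps.

    Returns the demographic-parity difference (selection-rate gap) and the
    equal-opportunity difference (true-positive-rate gap) between the
    highest- and lowest-indexed groups; a gap of 0 means parity.
    """
    per_group = {}
    for g in set(groups):
        idx = [i for i, gi in enumerate(groups) if gi == g]
        selection_rate = sum(y_pred[i] for i in idx) / len(idx)
        positives = [i for i in idx if y_true[i] == 1]
        tpr = (sum(y_pred[i] for i in positives) / len(positives)
               if positives else float("nan"))
        per_group[g] = {"selection_rate": selection_rate, "tpr": tpr}
    gs = sorted(per_group)
    return {
        "per_group": per_group,
        "demographic_parity_diff": (per_group[gs[-1]]["selection_rate"]
                                    - per_group[gs[0]]["selection_rate"]),
        "equal_opportunity_diff": (per_group[gs[-1]]["tpr"]
                                   - per_group[gs[0]]["tpr"]),
    }
```

In a multi-objective setting, such metrics become coordinates of each candidate pipeline, and the fairness/accuracy tradeoff can be visualised as a Pareto front over those coordinates.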

Expected impact

IMPACT 1: The availability of human-compatible tools for detecting bias will encourage organizations to detect biases in-house and third-party researchers to detect bias in the data of said organizations.

IMPACT 2: Removing some of the risks of violating the European regulation of artificial intelligence while using AI will make European businesses more likely to use AI and become more efficient and competitive.

IMPACT 3: Advances in fairness in recruitment automation and explaining fairness-related aspects thereof will improve fairness in hiring.

IMPACT 4: More efficient policy recommendations for the regulation of AI will lead to more efficient uses of AI.

IMPACT 5: Better public understanding of AI and the tradeoffs involved will reduce the risk of backlash against AI.

IMPACT 6: A cohort of early-career researchers will be trained to focus on fairness aspects of AI.

Three case studies in preparation

The project findings will be tested in three case studies of industrial use across three sectors. The first is the automation of fair evaluation in recruitment, the second is the elimination of gender inequality in advertising, and the third is fintech, specifically the elimination of discrimination against bank clients. The development of these case studies will be accompanied by expert groups consisting of representatives of business, public authorities, non-governmental organizations, and politicians.

  • Use Case 1: workable.com is the world's leading hiring platform, helping organizations find, evaluate, and hire better candidates, faster. Individual and group fairness among the candidates is crucial for its continued custom.
  • Use Case 2: IBM Watson Advertising helps scale advertising campaigns with AI and machine learning while addressing unwanted bias in advertising.
  • Use Case 3: dateio.eu is a fintech company running a card-linked marketing platform that delivers targeted cashback offers to bank clients.

The views of all stakeholders will be central to the research process and will increase the potential for the practical application of project results. The real-world implementation of scientific knowledge and the ethical use of artificial intelligence are the main features of the AutoFair project. "I believe that the outputs of the project will also be reflected in the planned regulation of artificial intelligence, which is being prepared by the European Commission," adds Jakub with determination and hope in his voice.

Kick-Off meeting (October 2022)

Members of the AutoFair project met at CTU on October 25, 2022 to discuss the expected legislative measures for AI algorithms and the upcoming activities of the AutoFair project. The official launch of the €3.8 million Horizon Europe project coordinated by our AI Center brought a number of interesting insights from research and practice. Read more about what technological changes are in store for AI algorithms in the face of planned EU regulations in the kick-off report.

AutoFair project team, October 2022