Members of the Horizon Europe project met at CTU to discuss the expected legislative measures for AI algorithms and the upcoming activities of the AutoFair project.
In the face of planned EU regulations, what technological changes are in store for AI algorithms? This question was addressed by the researchers of the international project AutoFair, who met at FEE CTU on 25 October 2022. The official launch of the Horizon Europe project, funded with CZK 95 million and coordinated by our AI Center, brought a number of interesting insights from research and practice.
They were shared by Marcel Kolaja, Member of the European Parliament, Irina Orssich, Head of Sector for AI Policy at the European Commission, Martin Hodula, Head of Research at the Czech National Bank, and representatives of our AI Center at FEE CTU, Israel's Technion, IBM Research, the software companies Workable and Avast, the startup Dateio and the French bank BNP Paribas. What conclusions did they reach?
Human-compatible AI
As Jakub Mareček, the coordinator of the AutoFair project from our AI Center, pointed out in his introductory contribution, artificial intelligence is becoming more and more powerful. The challenge for legislators, innovators and researchers is therefore to make AI algorithms compatible with humans.
The idea of "Human-Compatible AI" is the driving force behind the entire research project. At the end of the project, AI models should emerge that meet two key criteria: transparency and fairness. The development will then include the deployment of such algorithms on the platforms of industry partners - IBM, Dateio and Workable. Specifically, the outputs will find applications in marketing, banking and HR. During the meeting, these plans were presented by employees of the mentioned companies.
Eliminating the harmful bias in recruitment
One example of ethical AI in practice is recruitment, where software should eliminate bias and discrimination in hiring and also offer an explanation for each step of the automated recruitment process. What might this look like in concrete terms? Data on the actual representation of women in leadership positions clearly shows gender inequality, but an improved algorithm will take this bias into account and will not disadvantage female candidates on the basis of this indicator. In addition, the programme will document why it has made certain decisions or recommendations so that recruiters can assess the data for themselves - this explainability is crucial. The example is greatly simplified, but it shows that some social problems can be addressed by introducing well-designed algorithms.
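To make this concrete, here is a minimal, hypothetical sketch in Python (it is not the AutoFair or Workable implementation, and the field names are invented) of one check such a system could run: it compares shortlisting rates across groups and reports the gap, a quantity a recruiter could then ask the system to explain.

```python
# Hypothetical illustration only: compare how often an automated shortlisting
# step selects candidates from different groups (demographic parity gap).
from collections import defaultdict

def selection_rates(candidates):
    """candidates: dicts with assumed keys 'group' and 'shortlisted'."""
    counts = defaultdict(lambda: [0, 0])  # group -> [shortlisted, total]
    for c in candidates:
        counts[c["group"]][0] += int(c["shortlisted"])
        counts[c["group"]][1] += 1
    return {group: hits / total for group, (hits, total) in counts.items()}

def demographic_parity_gap(candidates):
    """Largest difference in selection rates between any two groups."""
    rates = selection_rates(candidates)
    return max(rates.values()) - min(rates.values())

sample = [
    {"group": "women", "shortlisted": True},
    {"group": "women", "shortlisted": False},
    {"group": "men", "shortlisted": True},
    {"group": "men", "shortlisted": True},
]
print(selection_rates(sample))           # {'women': 0.5, 'men': 1.0}
print(demographic_parity_gap(sample))    # 0.5 -> flag the step for review
```

A large gap does not prove discrimination on its own, but it is exactly the kind of signal a transparent system should surface together with an explanation.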
Potential misuse of seemingly non-sensitive data
Sometimes, however, such discrimination or the potential misuse of data is less apparent. Elizabeth Daly, a researcher at IBM Research whose team works on validating artificial intelligence models, discussed this in her presentation.
She pointed out that some personal data may not sound sensitive at first glance but can be a source of critical information. For example, from data about where we shop, banks can tell whether we have children, whether we are ill, or whether we have had a successful quarter at work. All of this can then lead to discrimination, even though the bank does not directly collect data on parenthood, health or salary bonuses for marketing purposes.
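Daly's point can be illustrated with a small hypothetical sketch (the data and feature are invented, not taken from IBM's work): a seemingly neutral feature such as spending at baby-goods merchants can correlate with parenthood strongly enough to act as a proxy for a sensitive attribute, which is precisely what model validation tries to detect.

```python
# Hypothetical proxy-variable audit: does a "harmless" feature (spend at
# baby-goods merchants) reveal a sensitive attribute (parenthood)?
# Assumes the auditor holds the sensitive label only for a consented audit sample.
import numpy as np

rng = np.random.default_rng(0)
is_parent = rng.integers(0, 2, size=1_000)            # sensitive attribute (0/1)
baby_spend = is_parent * rng.gamma(2.0, 40.0, 1_000) \
             + rng.gamma(1.5, 5.0, 1_000)             # "non-sensitive" feature

# Correlation between the feature and the attribute: a high value means the
# feature should be treated as sensitive even if it does not sound that way.
corr = np.corrcoef(baby_spend, is_parent)[0, 1]
print(f"correlation with parenthood: {corr:.2f}")
```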
A frequently raised issue in the context of AI and its regulation is liability. Who would be liable for damages in the event of an algorithm failure? During the meeting, the experts most often leaned towards the argument that the responsibility for AI lies with the person who approved its deployment. Thus, it is not the fault of the developer or the vendor, but of the manager or other employee who gives the green light to the implementation of the algorithm in the company's systems.
"We need to hold people accountable, not the AI," said Lea Deleris, head of AI risk at BNP Paribas. She also added that the composition of the teams developing or deploying AI also plays a big role. Diversity, whether cultural or gender, is therefore key in ensuring a fair AI.
Testing the accuracy and fairness of AI-based decisions
A topic that resonated strongly among the participants was the scoring of people, so-called social scoring. MEP Marcel Kolaja, along with other speakers, strongly criticised the categorisation of citizens on the basis of data, which would put them at a disadvantage in access to equal opportunities.
"We certainly don't want to open this Pandora's box," he made clear. But discrimination can happen elsewhere, too. An effective mechanism for back-testing the accuracy and fairness of AI-based decisions, he said, is to retain data for several months. The biggest challenge, then, is to ensure that the system set up is fair but also practical and sustainable. At the same time, in many cases, the question of privacy arises, as in the collection of data from self-driving car cameras, which can also capture passers-by and in the wrong hands could be misused for identification.
All the discussions show that the project investigators are aware of these and other risks associated with artificial intelligence. At the same time, they did not hide their enthusiasm for the field and the search for solutions that would contribute to the protection of society. They believe in the potential of artificial intelligence and welcome legislative safeguards.
With a diverse team that draws on expertise in different fields, the project clearly has a chance of achieving its goal: to fulfil the potential of artificial intelligence to serve people. Transparently and fairly. Let's keep our fingers crossed!