Interdisciplinary research project analyzing risks, opportunities and regulation of AI in the context of human rights.

The aim of the project is to identify and assess the risks and opportunities in the relationship between AI and human rights and to propose solutions for how AI technologies should be developed, used and regulated in order to prevent human rights violations and to support their further progress and protection. The project is based on an interdisciplinary analysis of AI technologies that aims to identify the sources of human rights violations in all phases of the AI life cycle, and on the subsequent formulation of a set of recommendations for technical and regulatory remedies.

The research focuses on the full AI life cycle in order to identify the root causes of human rights violations.

An interdisciplinary team of international law and human rights law experts, AI and IT experts, and Czech and EU law specialists carries out the research project within a consortium of Ambis University, the AI Center at FEE CTU, the Institute of Law and Technology of Masaryk University, and prg.ai as the application guarantor. The principal investigator of the project is JUDr. Bc. Martina Šmuclerová, Ph.D., from Ambis, who is also affiliated with the prestigious Paris Institute of Political Studies (SciencesPo).

Current shortcomings in human rights protection may be diminished by the automation of the respective human activities. We focus on the following areas: justice, detention, discrimination, freedom of expression, disappearances, refugees, asylum seekers and migrants, police violence, child rights, disability rights, rights of older people, climate change, indigenous peoples, the right to dignity, and sexual and reproductive rights.

Prevention of human rights violation

To ensure the effective implementation of human rights norms in the AI domain, it is first necessary to identify the root causes of human rights violations within the AI life cycle in order to formulate a remedy.

  • The root causes can reside in all phases of the AI life cycle, from incomplete input data, through biased transfer learning, to malicious application.
  • The corresponding remedies are equally diverse: a problem might be addressed by, for example, a technical adjustment of the machine learning procedure, rules on the processing of data, or a more robust legal intervention restricting or banning the development and use of certain AI technologies (a minimal illustrative sketch follows this list).
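
As a purely illustrative example of what a technical check during development might look like, the following minimal Python sketch computes a disparate-impact ratio of a model's favourable-decision rates across a protected attribute. The dataset, the column names and the 0.8 threshold (the common "four-fifths rule") are assumptions made for the sake of the example; they are not part of the project's methodology.

    import pandas as pd

    def disparate_impact_ratio(df: pd.DataFrame, group_col: str, outcome_col: str) -> float:
        """Ratio of favourable-outcome rates between the least and most favoured groups.

        A value close to 1.0 indicates similar treatment across groups; values below
        roughly 0.8 (the "four-fifths rule") are a common red flag for disparate impact.
        """
        rates = df.groupby(group_col)[outcome_col].mean()
        return rates.min() / rates.max()

    # Hypothetical model decisions (1 = favourable outcome), purely for illustration.
    decisions = pd.DataFrame({
        "gender":   ["F", "F", "F", "M", "M", "M", "M", "F"],
        "approved": [1, 0, 0, 1, 1, 1, 0, 1],
    })

    ratio = disparate_impact_ratio(decisions, "gender", "approved")
    print(f"Disparate impact ratio: {ratio:.2f}")
    if ratio < 0.8:
        print("Potential disparate impact - review the training data and model before deployment.")

A check of this kind addresses only one narrow technical facet of bias; as the list above indicates, other root causes call for data governance rules or legal intervention rather than a code-level fix.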

A comprehensive approach linking international human rights and AI expertise is thus indispensable in order to provide a holistic and solution-oriented viewpoint.

Interdisciplinary workshop with governmental and AI private sector representatives (29 September 2022 at CTU).

Support for human rights protection

As noted above, current shortcomings in human rights protection may be diminished by the automation of the respective human activities; the research focuses on justice, detention, discrimination, freedom of expression and the other areas listed above. The relationship between AI and human rights is, for now, double-edged. AI brings great benefits to all sectors of society and strengthens progress, social well-being and economic competitiveness. At the same time, however, it poses risks to a variety of human rights and fundamental freedoms, whether due to intrinsic technological processes, human input or abusive or malicious use in practice.

Current Research Results

Identification of the root causes of human rights violations throughout the whole AI life cycle and formulation of remedies

ŠMUCLEROVÁ, M., KRÁL, L., DRCHAL, J.: "AI Life Cycle and Human Rights: Risks and Remedies", in TEMPERMAN, J. and QUINTAVALLA, A. (eds.): Artificial Intelligence and Human Rights, Oxford University Press, Oxford, 2023.

Non-biased AI as utopia?: The challenge to produce a non-discriminatory AI technology

Presentation by Martina ŠMUCLEROVÁ of research results on biased AI (prohibition of discrimination) at the International Law and Technological Progress Conference 2022 on 23 June 2022 in Aberdeen, UK.

Report on the risks of human rights violations by AI technologies and remedies

Research report (September 2022, in Czech) submitted to the Government of the Czech Republic and relevant actors of the AI life cycle (public and private sphere).

Interdisciplinary workshop “AI and Human Rights”

Workshop held on 29 September 2022 at the CIIRC of the Czech Technical University in Prague with governmental and AI private sector representatives.

Meaningful Regulatory Choices? Analysing the Proposed AI Regulation

Presentation by the MUNI team of research results at the international conference Law and Artificial Intelligence – Challenges and Opportunities on 24 March 2023 at the Faculty of Law, Charles University in Prague – MÍŠEK, Jakub, Monika HANYCH, Veronika PŘÍBAŇ ŽOLNERČÍKOVÁ and Jakub HARAŠTA: "Meaningful Regulatory Choices? Analysing the Proposed AI Regulation".

Final workshop “AI and Human Rights”

The final workshop presenting the project research results, including the proposed Human Rights Risk Assessment Mechanism, an evaluation of the EU AI Act in light of human rights law, and opportunities for AI applications in the human rights sphere, was held on 21 September 2023 at the CIIRC of the Czech Technical University in Prague with representatives of the Government, the AI private sector and human rights NGOs.

Panel “AI and Human Rights” – Days of AI in Prague

Panel on “Artificial Intelligence and Human Rights” held at the festival “Days of AI” in Prague on 30 October 2023 at Kampus Hybernská.

Expert seminar – Office of the Government

Expert seminar for the Office of the Government of the Czech Republic “Artificial Intelligence and Human Rights: Risks, Regulation and AI Act” on 31 October 2023 in Prague.

Human Rights Risk Assessment in AI Life Cycle – table    

Proposal for the Human Rights Risk Assessment Mechanism in the AI Life Cycle containing the identification and explanation of particular human rights risks, the impacted phases of the AI life cycle, and the recommended means of risk elimination at both the development and deployment levels of the AI system (table CZ, EN – arriving soon).

Opportunities for AI in the Human Rights domain – table (EN)

The table identifies the potential of automation in selected areas of human rights violations in society and proposes new opportunities for the development and deployment of AI technologies with the aim of protecting and reinforcing human rights (table EN).

Core documents:

Project information

Start date: 1 May 2021
End date: 31 October 2023
Call: TA ČR ÉTA 5 (Technology Agency of the Czech Republic)
Project ID: TL05000484

Further information and documents can be found on the project website.