The project builds on the Central European Digital Media Observatory (CEDMO) to develop AI tools that combat disinformation, strengthen resilience, and promote digital literacy and regulation.

Project Objectives

Information disorders of all kinds are not merely a technological problem: they affect social relationships, public debate, trust in the media, and the functioning of democratic institutions. That is why we approach them comprehensively and across disciplines, combining advanced AI technologies with in-depth research on media trends and social impacts. We bring together experts in computer science, journalism, law, and education so that the project's outcomes not only detect information manipulation but also strengthen society's overall resilience against it.

TECHNOLOGICAL SOLUTIONS

  • We are developing advanced AI tools to detect and analyze various types of information disorders, from clickbait headlines to deepfake content.
  • We focus on automated fact-checking systems, monitoring information manipulation, and identifying patterns in disinformation campaigns.
  • We are creating practical tools for journalists and fact-checkers to help them efficiently verify information and protect media content from unauthorized use.
  • Special emphasis is placed on generative AI threats—for example, developing systems to detect manipulated videos through microexpression and lip movement analysis, and models to identify synthetically generated texts.
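
To illustrate one technique in this area, the sketch below shows a common, simplified signal used in synthetic-text detection: scoring how predictable a text is to a reference language model (its perplexity), since machine-generated text often reads as unusually predictable. This is a minimal illustrative example using the public GPT-2 model, not the detection system developed in the project; the model choice and the threshold are assumptions for demonstration only.

    # Minimal illustrative sketch: perplexity-based screening of possibly
    # machine-generated text. Not the project's detector; the model and the
    # threshold below are illustrative assumptions.
    import torch
    from transformers import GPT2LMHeadModel, GPT2TokenizerFast

    tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
    model = GPT2LMHeadModel.from_pretrained("gpt2")
    model.eval()

    def perplexity(text: str) -> float:
        """Perplexity of `text` under GPT-2; lower means more predictable."""
        enc = tokenizer(text, return_tensors="pt", truncation=True, max_length=512)
        with torch.no_grad():
            loss = model(enc.input_ids, labels=enc.input_ids).loss
        return torch.exp(loss).item()

    # Texts scoring below an (illustrative) threshold are flagged for human review.
    THRESHOLD = 25.0
    text = "Large language models can produce fluent but fabricated claims."
    score = perplexity(text)
    print(f"perplexity={score:.1f}, flag_for_review={score < THRESHOLD}")

In practice, perplexity is only one weak signal; production detectors combine several such features with supervised classifiers and human review.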

EMPIRICAL RESEARCH

  • We monitor not only the spread of different types of information disorders but also how the public perceives their impact and how effectively regulatory measures work.
  • Our longitudinal studies map changes in the media ecosystem and provide insights for regulators and media organizations.
  • We study algorithmic recommendation systems and the phenomenon of echo chambers to understand how different audience types behave on social networks. Additionally, we analyze the persuasiveness of disinformation narratives and how they are crafted to target specific audiences.

SUPPORT FOR POLICY AND INSTITUTIONS

  • We provide expert analyses and legal support for implementing European regulations on AI and digital media, such as the Digital Services Act (DSA), the AI Act, and TERREG (the EU regulation on addressing the dissemination of terrorist content online).
  • We monitor how large platforms fulfill their content moderation obligations, where failures occur, and what mechanisms can improve transparency.
  • We propose recommendations for effective AI regulation in media and help journalists use these technologies safely and responsibly.
  • In addition to expert support for regulators, we focus on public education—developing materials for vulnerable groups (e.g., seniors), organizing hands-on workshops, and creating interactive tools to help people critically evaluate information and recognize manipulative content.

Key Activities

For clarity, CEDMO 2.0 NPO is structured into five key activities based on our goals and the expertise of the participating institutions:

  • KA1: Expanding CEDMO Trends and CEDMO Index tools
    Strengthening sociological research tracking information manipulation trends in Central Europe and beyond, and creating an index assessing different types of literacy.
    Lead: Charles University (CUNI)
    Partners: Demagog.cz, Masaryk University (MUNI)
  • KA2: Using AI tools to combat disinformation
    Developing AI solutions to enhance fact-checking, detect deepfake content, and analyze disinformation campaigns.
    Lead: Czech Technical University (CTU)
    Partners: Charles University (CUNI), Demagog.cz
  • KA3: Improving digital literacy through AI tools
    Supporting public education in media and digital literacy using modern AI tools.
    Lead: Palacký University Olomouc (UPOL)
    Partner: Czech News Agency (ČTK)
  • KA4: Supporting media transformation through AI
    Helping media organizations leverage AI to improve editorial workflows and strengthen resilience against information manipulation.
    Lead: Czech News Agency (ČTK)
    Partners: Czech Technical University (CTU), Charles University (CUNI)
  • KA5: Regulating AI use in media
    Supporting the development of legal frameworks and rules for responsible AI use in digital media and journalism.
    Lead: Masaryk University (MUNI)

Info

The project is being implemented from September 1, 2024, to April 30, 2026, and is supported by the National Recovery Plan (NPO) under the designation MPO 60273/24/21300/21000 CEDMO 2.0 NPO. The main project coordinator is the Czech Technical University in Prague (CTU). More information can be found on the CEDMO website.