Maria Rigaki is a PhD student working at the intersection of machine learning and security. Before joining AIC, she spent many years in industry as a software developer and systems architect, with extensive experience designing and integrating complex systems in telecommunications and emergency response. Her interests range from the security and privacy issues of machine learning to how AI can serve as a tool for offensive and defensive applications in security. In her spare time, she enjoys playing music and participating in security CTFs.
Propaganda, fake news, and misinformation are not just social science topics. Our security research deals with so-called computational propaganda: beyond politics, algorithms and automation are increasingly shaping communication. Learn how they are being exploited.
Our Ph.D. student and two-time poster session winner Maria Rigaki shares her secret tips.
We are coordinating a major Horizon Europe project on fair artificial intelligence. Ethical aspects of AI systems lie at the center of the project, which aims to develop explainable and transparent AI algorithms.
Apple AirTags and other location-tracking devices are useful tools for locating misplaced items or even missing people. But in the hands of stalkers or criminals, they can be weaponized. How can we maintain privacy standards and guarantee fairness? Our AI algorithms offer solutions.