Apple AirTags and other location-tracking devices are a useful tool for locating misplaced items or even missing people. But in the hands of stalkers or criminals, these devices can be weaponized. How can we maintain privacy standards and guarantee fairness? Our AI algorithms offer solutions.
Social-computing services such as social networks or crowdsourcing platforms are often governed by distributed algorithms. Through AI research, Jakub Mareček wants to ensure fairness and predictability in such environments, which have a direct impact on our society. The results of joint work with colleagues at Imperial College London, University College Dublin, Monash University, and the University of Campinas have been published in a recent issue of the IEEE IoT Journal (Predictability and Fairness in Social Sensing).
The work can be applied wherever fair decisions are desirable or required within social sensing. Legal requirements of fairness apply to most public organizations and authorities. Implementing fairness is challenging, though: we want an algorithm that is fair and predictable, but at the same time effective.
Cars united for search missions
The online world is one place where algorithms control outcomes, but things can get much more physical! Imagine a city area with many vehicles parked on the street and many iPhones in the pockets of people walking past. Together, they form a high-density network that can assist in finding a missing person with Alzheimer's. How can these vehicles be of service? When awoken by an administrative center, the network of cars searches for the person on the move using RFID-based techniques (identification via radio waves). However, not all cars have to participate in this search-and-rescue mission, and not all of them should.
Can we ensure that only the cars with strategic positions, good privacy-protection mechanisms, and sufficient battery take part? That is where Jakub and his team work their magic, also known as stochastic control. By implementing their algorithm on the platform, we can regulate which vehicles are actively searching for the moving entity of interest and thus equalize energy consumption across the network of vehicles, or, seen another way, equalize the intrusion into privacy. These results apply even when both the vehicles and the people are in motion.
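To give a flavor of what such a selection policy might look like, here is a minimal illustrative sketch, not the authors' actual algorithm: each round, eligible vehicles (enough battery, privacy safeguards in place) are sampled with probability weighted toward those that have contributed the least energy so far, which equalizes consumption across the fleet over time. The field names and the 20% battery threshold are assumptions made for this example.

```python
import random

def select_searchers(vehicles, k, rng=random):
    """Pick up to k vehicles to search this round.

    Each vehicle is a dict with 'battery' (0-1), 'privacy_ok' (bool),
    and 'energy_spent' (cumulative units). Eligible vehicles are
    sampled with weights favoring those that have spent the least
    energy, so consumption evens out across the fleet over many rounds.
    """
    # Only vehicles with privacy safeguards and >20% battery may join.
    eligible = [v for v in vehicles if v['privacy_ok'] and v['battery'] > 0.2]
    if not eligible:
        return []
    max_spent = max(v['energy_spent'] for v in eligible)
    # Small epsilon keeps weights positive when all are tied.
    pool = [(v, max_spent - v['energy_spent'] + 1e-9) for v in eligible]
    chosen = []
    for _ in range(min(k, len(pool))):
        # Weighted sampling without replacement.
        total = sum(w for _, w in pool)
        r = rng.random() * total
        acc = 0.0
        for i, (v, w) in enumerate(pool):
            acc += w
            if acc >= r:
                chosen.append(v)
                pool.pop(i)
                break
    for v in chosen:
        v['energy_spent'] += 1.0  # one round of active searching
    return chosen
```

Over repeated rounds, vehicles that have searched less are almost always picked first, so the cumulative energy spent (and, by the same token, the privacy burden) stays nearly uniform across the eligible fleet, while low-battery vehicles are never drafted.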
Stalkerware: Good, Bad, and Ugly
With the increasing popularity of decision systems based on AI algorithms, it is important to ensure that privacy measures and fairness are in place. Thanks to the GDPR, which guarantees data protection and privacy for EU citizens, we have the right and the means to know how our personal and often sensitive data are used. However, the legitimacy of how the data are actually used is often not closely observed.
We see many cases of misuse of technology, and of AI algorithms in particular. One example that resonates with Jakub's research topic is the recent abuse of Apple's AirTags, which have become effective stalking devices in the hands of criminals. On the other hand, the same devices can be of immense value when deployed in cases like that of the Alzheimer's patient described above, or in other practical scenarios discussed on Reddit forums.
This shows that the vast network of devices that forms a location-tracking platform can be exploited by criminals and harnessed by well-intentioned individuals alike. However, with proper safety measures and algorithms that guarantee fairness and predictability, our society can only benefit from the very technology that was once feared. Research like Jakub's shows that scientists can supply the missing pieces for business solutions.