The manner in which agents contribute to a social-computing platform (e.g., social sensing, social media) is often governed by distributed algorithms. We explore the guarantees available in situations where fairness among the agents contributing to the platform is needed.

In such situations, it may be desirable that the algorithms that govern the sensing process have certain properties, such as a fair and predictable distribution of the work among agents. These requirements are becoming increasingly important and arise in many situations:

-- For example, in applications where the platform is operated by a branch of government, there is often a legal requirement to ensure fair and equitable treatment of citizens. In many such situations, the rights of sub-groups and their representation in the sensing platform must be considered in the mechanism-design process. One may think of this as akin to ensuring the ability of citizens to vote in a voting system.


-- Another example arises in applications where there are certain types of incentives on offer. In such situations, one is interested in ensuring that agents have an equal chance to avail of these incentives. Often, this is a legal requirement (e.g., stemming from lottery regulations).

-- A further example occurs in applications where written contracts between the participants and the platform are issued, which should include guarantees of fairness and predictability as quality-of-service measures.

-- Finally, the same requirements arise in situations where fair and predictable access is mandated for legal or other reasons. Among others, the European Commission aims to regulate certain "high-risk" AI applications. For example, when participants report pollution levels in a neighbourhood and this information is then used to route vehicular traffic, a sensing platform may be legally required to provide fair and predictable access to participants from all neighbourhoods, to make sure that certain neighbourhoods do not see excessive traffic due to their under-representation in the pollution-sensing scheme.

Generally speaking, fairness issues have not yet been widely considered in the design of social sensing systems. Typically, in social sensing systems, information and actuation capabilities are crowd-sourced to generate functionality to control and influence ensemble behaviour, with the primary objective often being the efficiency of the platform, frequently with some privacy guarantees. Examples of such situations in smart-city applications include sensing to detect and allocate parking spaces, electric charge points, or, as we have mentioned, ambient pollution in cities. While prior papers deal with many aspects of crowd-sensing problems, most have focused on the design of efficient crowd-sensing systems, where efficiency could mean, for example, minimizing energy consumption or the pollution footprint.


Our interest in this paper is somewhat different and stems from a desire to develop systems that are not only efficient, but in which agents' rights to contribute to the platform are characterised by certain fairness and predictability constraints (perhaps out of economic or legal considerations). As we have mentioned, in many such situations, the rights of sub-groups and their representation in a sensing platform must be coded as part of the algorithmic design process.

Often, in such systems, a feedback signal is also used to regulate the number of participants contributing to the social-sensing platform at any given time. For example, in many situations, a generalisation of a price signal could be used to encourage or discourage agents to contribute to a crowd-sensing platform. In other situations, we might wish to regulate the number of participants contributing to a task to minimize energy consumption.
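To make the idea of a price-like feedback signal concrete, the following is a minimal sketch, not the paper's model: we assume agents contribute with a probability that decreases in a scalar "price", and an integral controller adjusts that price so the number of contributors tracks a target. All parameter names and the sigmoid response are illustrative assumptions.

```python
import math
import random

def simulate(target=50, n_agents=200, steps=300, gain=0.01, seed=0):
    """Toy integral controller: a scalar 'price' signal is adjusted so that
    the number of contributing agents tracks a target. Each agent contributes
    independently with probability 1/(1+e^price), so a higher price
    discourages contribution. Purely illustrative; not the paper's scheme."""
    rng = random.Random(seed)
    price = 0.0
    counts = []
    for _ in range(steps):
        p = 1.0 / (1.0 + math.exp(price))  # contribution probability
        k = sum(1 for _ in range(n_agents) if rng.random() < p)
        counts.append(k)
        price += gain * (k - target)  # raise price when over-subscribed
    return counts

counts = simulate()
avg_tail = sum(counts[-100:]) / 100  # settles near the target of 50
```

In this toy model the controller drives the price towards the level at which the expected number of contributors equals the target (here, price ≈ ln(3), giving a per-agent probability of 0.25).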

Designing sensing systems of this form is challenging. Clearly, we wish to allocate access to the sensing platform in a manner that is not wasteful, which gives an optimal return on the use of the resource for society, and which, in addition, gives a guaranteed level of service to each of the agents competing for that resource. Roughly speaking, when we design such systems, we seek to meet the following objectives.

-- Our first objective is to solve the regulation problem. For example, we may wish to ensure that a certain number of agents contribute to the sensing platform at any time (for example, to minimize the monetary burden on the sensing system provider, or to minimize the utilization of some shared communication links).

-- Second, we would then like to develop sensing systems with optimal behaviour. In the above example, we might select agents that have a lower energy footprint in measuring sensing data.

-- The third objective focuses on the effects of the control on the microscopic properties of the agent population. In particular, we may wish that each agent, on average, receives a fair opportunity to contribute data to the platform, or, at a much more fundamental level, we may wish that the average access to the platform allocated to each agent over time be a stable, entirely predictable quantity that does not depend on the initial conditions. The need for fair access to the resource may arise for a number of reasons. For example, agents may have paid to write to the platform, or such access may even be mandated as part of some legal requirement.

The first two of the above objectives are classical control-theoretic objectives. The third is somewhat new, even in the context of control engineering. In this paper, we show that all three objectives can be met in the design of our crowd-sourcing algorithms. To do this, our principal tool will be to develop techniques whereby we establish conditions that guarantee ergodicity. Specifically, by ergodicity we mean the existence of a unique invariant measure, to which the system is attracted in a statistical sense, irrespective of the initial conditions. Thus, the design of systems for deployment in multi-agent applications must consider not only the traditional notions of regulation and optimisation, but also guarantees concerning the existence of a unique invariant measure. This is not a trivial task, and many familiar strategies fail. Specifically, our principal contribution in this paper is to develop a framework for reasoning about fairness in social sensing, in the sense of guaranteeing that the number of queries per participant will be equalised among comparable participants, in expectation, even when the population of participants varies over time. A prerequisite for fairness is predictability, in the sense of guaranteeing that the expected number of queries per participant is independent of the initial state.
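The role of ergodicity can be illustrated with a deliberately simple example, which is our own and makes no claim about the paper's system: a two-state Markov chain modelling whether an agent is currently selected to contribute. Under the stated transition probabilities (assumed for illustration), the long-run fraction of time spent selected converges to the same value from every initial state, which is exactly the initial-condition independence that a unique invariant measure provides.

```python
import random

def long_run_access_fraction(p_on, p_off, start, steps=20000, seed=1):
    """Two-state Markov chain: from OFF an agent becomes selected with
    probability p_on; from ON it is deselected with probability p_off.
    Ergodicity implies the long-run fraction of time selected approaches
    p_on / (p_on + p_off) regardless of the starting state. Illustrative."""
    rng = random.Random(seed)
    state = start
    on_time = 0
    for _ in range(steps):
        if state == 1:
            on_time += 1
            state = 0 if rng.random() < p_off else 1
        else:
            state = 1 if rng.random() < p_on else 0
    return on_time / steps

# Same stationary access fraction from either initial condition:
f_from_off = long_run_access_fraction(0.3, 0.1, start=0)
f_from_on = long_run_access_fraction(0.3, 0.1, start=1, seed=2)
pi_on = 0.3 / (0.3 + 0.1)  # stationary probability of being selected, 0.75
```

Both empirical fractions approach the stationary value 0.75, so an agent's expected access is predictable without knowing how the system started.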

Various notions of fairness could then be devised and enforced by shaping the unique invariant measure of a related stochastic system, although we demonstrate the use of only one particular notion of fairness in this paper.

In particular, we develop a meta-algorithm for social sensing in a time-varying setting, for which we prove guarantees of predictability and fairness by reasoning about the existence of a unique invariant measure for a related stochastic system. We believe that our work is one of the first to deal with this problem in a social-sensing context.

Cite as

@ARTICLE{9445023,
  author={Ghosh, Ramen and Mareček, Jakub and Griggs, Wynita M. and Souza, Matheus and Shorten, Robert N.},
  journal={IEEE Internet of Things Journal},
  title={Predictability and Fairness in Social Sensing},
  year={2021},
  volume={},
  number={},
  pages={1-1},
  doi={10.1109/JIOT.2021.3085368}}