We introduce two natural notions of fairness, subgroup fairness and instantaneous fairness, which can serve as a basis for the study of fairness in forecasting and in learning linear dynamical systems.

Proceedings of the AAAI Conference on Artificial Intelligence, 2021

In machine learning, training data often capture the behaviour of multiple subgroups of some underlying human population. When the amounts of training data for the subgroups are not controlled carefully, under-representation bias arises. We introduce two natural notions, subgroup fairness and instantaneous fairness, to address such under-representation bias in time-series forecasting problems. In particular, we consider the subgroup-fair and instant-fair learning of a linear dynamical system (LDS) from multiple trajectories of varying lengths, and the associated forecasting problems.
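The paper's exact formulations use hierarchies of convexifications, but the core idea behind subgroup fairness can be illustrated with a toy sketch: rather than minimising the pooled prediction error, which an over-represented subgroup dominates, one minimises the worst per-subgroup error. The simulation below is a hypothetical illustration (the 1-D system, subgroup sizes, and minimax objective are assumptions for exposition, not the authors' method):

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)

def simulate(a, T):
    """Toy 1-D LDS: x_{t+1} = a * x_t + noise."""
    x = np.empty(T)
    x[0] = 1.0
    for t in range(T - 1):
        x[t + 1] = a * x[t] + 0.05 * rng.standard_normal()
    return x

# Under-representation bias: subgroup 1 contributes many trajectories,
# subgroup 2 only a few, and the two subgroups have different dynamics.
group1 = [simulate(0.9, 30) for _ in range(20)]
group2 = [simulate(0.5, 30) for _ in range(2)]

def group_mse(a, trajs):
    """Mean one-step-ahead prediction error of parameter a on a subgroup."""
    return np.mean([np.mean((x[1:] - a * x[:-1]) ** 2) for x in trajs])

# Pooled least squares: dominated by the over-represented subgroup 1.
pooled = minimize(lambda a: group_mse(a[0], group1 + group2),
                  x0=[0.0], method='Nelder-Mead').x[0]

# Subgroup-fair-style minimax objective: minimise the worst subgroup error.
fair = minimize(lambda a: max(group_mse(a[0], group1),
                              group_mse(a[0], group2)),
                x0=[0.0], method='Nelder-Mead').x[0]
```

In this sketch the minimax estimate trades a little accuracy on the majority subgroup for a much lower worst-case error, which is the qualitative effect the paper's "Subgroup-Fair" model targets.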

Figure: COMPAS recidivism scores of black and white defendants plotted against the actual number of days before their re-offending. The sample of defendants' scores is divided into 4 subsamples based on race and type of re-offending, distinguished by colours. Dots and curves of the same colour denote the scores of one subsample and the trajectory extracted from those scores, respectively. The cyan curve displays the result of the "Subgroup-Fair" model fitted to the 4 trajectories.

We provide globally convergent methods for the learning problems using hierarchies of convexifications of non-commutative polynomial optimisation problems. Our empirical results on a biased data set motivated by insurance applications and the well-known COMPAS data set demonstrate both the beneficial impact of fairness considerations on statistical performance and encouraging effects of exploiting sparsity on run time.

Cite as

Zhou, Q., Marecek, J. and Shorten, R.N., 2020. Fairness in forecasting and learning linear dynamical systems. arXiv preprint arXiv:2006.07315.