Mathematical optimization is a field of study at the intersection of mathematics, computer science, and electrical engineering that deals with selecting a best element from a set with respect to some criterion. The elements of the set are known as feasible solutions and the criterion is known as the objective function. Over the past couple of centuries, much of the work in mathematical optimization has focused on the case of a convex, time-invariant set of feasible solutions and a convex, time-invariant objective function. This special case has become the workhorse of machine learning, artificial intelligence, and most fields of engineering.
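In symbols, the generic problem reads as follows (a standard textbook formulation, not notation specific to any of the papers below):

```latex
\min_{x \in S} \; f(x),
```

where S is the set of feasible solutions and f is the objective function; the convex, time-invariant special case assumes S is a convex set and f a convex function, both fixed over time.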
In our basic research, we study extensions towards (1) certain smooth, non-convex feasible sets and objective functions and (2) time-varying feasible sets and objective functions. The smooth non-convex problems, known as commutative and non-commutative polynomial optimization, have extensive applications in power systems, control theory, and machine learning, among other fields. The same applications can often benefit from the time-varying extensions.
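As a toy illustration of why smooth non-convexity is hard, consider minimizing a univariate quartic (an example invented for exposition, not a problem from the papers below). A local method such as gradient descent converges to different minima from different starting points, whereas global polynomial optimization methods, such as moment/sum-of-squares hierarchies, certify the global optimum:

```python
# Toy non-convex polynomial objective:
#   f(x) = x**4 - 3*x**2,
# with two global minima at x = ±sqrt(3/2) and optimal value f* = -2.25.
def f(x):
    return x**4 - 3 * x**2

def grad_f(x):
    # Derivative of f: 4*x**3 - 6*x.
    return 4 * x**3 - 6 * x

def gradient_descent(x0, step=0.01, iters=5000):
    # Plain gradient descent: only finds a local minimum near x0.
    x = x0
    for _ in range(iters):
        x -= step * grad_f(x)
    return x

# Different starts land in different basins of attraction.
for x0 in (-2.0, 0.1, 2.0):
    x_star = gradient_descent(x0)
    print(round(x_star, 4), round(f(x_star), 4))
```

Here both minima happen to share the same objective value, but in general a local method offers no guarantee of global optimality, which is exactly what the semidefinite relaxations used in polynomial optimization provide.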
Particular examples of the former (1) include our paper at AAAI 2024 (https://arxiv.org/abs/2310.04469), which deals with smoothing a non-smooth, non-convex optimization problem, and our papers at AAAI 2021 (https://arxiv.org/abs/2006.07315), in the Journal of AI Research (https://arxiv.org/abs/2209.05274), and in IEEE Transactions on Automatic Control (https://arxiv.org/abs/2002.01444), which deal with non-commutative polynomial optimization. Notably, these methods obtain the current best results on the COMPAS dataset.
Independently of this, in a series of papers in Automatica (e.g., https://arxiv.org/abs/1807.03256, https://arxiv.org/abs/2110.03001, https://arxiv.org/abs/2112.06767) and IJC (https://arxiv.org/abs/2209.13273), we work on the control of non-linear systems under uncertainty.