Research at the intersection of mathematics, computer science, and electrical engineering with a clear mission: to find optimal solutions using advanced computational methods.
Mathematical optimization is a field of study at the intersection of mathematics, computer science, and electrical engineering that deals with the selection of a best element from a set with respect to some criterion. The elements of the set are known as feasible solutions and the criterion is known as the objective function. Over the past couple of centuries, much of the work in mathematical optimization has focused on the case of a convex, time-invariant set of feasible solutions and a convex, time-invariant objective function. This special case has become the workhorse of machine learning, artificial intelligence, and most fields of engineering.
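Abstractly, such a problem can be written as

\[
\min_{x \in \mathcal{X}} \; f(x),
\]

where \(\mathcal{X}\) is the set of feasible solutions and \(f\) is the objective function. The classical setting described above is the special case in which \(\mathcal{X}\) is a convex set, \(f\) is a convex function, and neither varies over time.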
Research Focus
In our basic research, we study extensions of this setting in two directions: (1) certain smooth, non-convex feasible sets and objective functions and (2) time-varying feasible sets and objective functions. The smooth non-convex problems, known as commutative and non-commutative polynomial optimization, have extensive applications in power systems, control theory, and machine learning, among others. The same applications often benefit from the time-varying extensions as well.
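As a concrete illustration (the papers below treat more general formulations), a commutative polynomial optimization problem has the form

\[
\min_{x \in \mathbb{R}^n} \; p(x) \quad \text{subject to} \quad g_1(x) \ge 0, \; \dots, \; g_m(x) \ge 0,
\]

where \(p, g_1, \dots, g_m\) are polynomials. In the non-commutative variant, the scalar variables are replaced by operators (for instance, matrices of arbitrary dimension) and the inequalities by positive-semidefiniteness constraints. In the time-varying extension, the objective and feasible set become \(f(x, t)\) and \(\mathcal{X}(t)\), and the goal is to track the trajectory of optimal solutions as \(t\) evolves.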
Particular examples of the time-varying extension include our papers at AAAI 2019 (https://arxiv.org/abs/1809.05870) and AAAI 2020 (https://arxiv.org/abs/1809.03550), which deal with time-varying optimization.
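To make the time-varying setting concrete, here is a minimal sketch in Python; it is not the algorithm from the papers above. At each time step it observes a drifting quadratic objective over a box and takes a single projected gradient step to track the moving minimizer. The target trajectory, feasible set, step size, and horizon are all illustrative assumptions.

import numpy as np

# Toy illustration of time-varying optimization (not the method from the cited papers):
# at each step t we see f_t(x) = 0.5 * ||x - c(t)||^2, whose minimizer c(t) drifts over time,
# and we take one projected gradient step to track it online.

def moving_target(t):
    # Hypothetical drifting minimizer c(t), moving along the unit circle.
    return np.array([np.cos(0.05 * t), np.sin(0.05 * t)])

def project_to_box(x, lo=-1.0, hi=1.0):
    # Projection onto the feasible set X = [lo, hi]^2 (an illustrative choice).
    return np.clip(x, lo, hi)

def online_gradient_tracking(num_steps=200, step_size=0.3):
    x = np.zeros(2)                      # initial iterate
    tracking_errors = []
    for t in range(num_steps):
        c = moving_target(t)             # current minimizer of f_t
        grad = x - c                     # gradient of f_t at x
        x = project_to_box(x - step_size * grad)
        tracking_errors.append(np.linalg.norm(x - c))
    return x, tracking_errors

if __name__ == "__main__":
    x_final, errors = online_gradient_tracking()
    print("final iterate:", x_final)
    print("final tracking error:", errors[-1])

The point of the sketch is only that, in the time-varying regime, one tracks a moving optimizer rather than converging to a single fixed point.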
Our papers at AAAI 2021 (https://arxiv.org/abs/2006.07315) and in the Journal of AI Research (https://arxiv.org/abs/2209.05274) deal with non-commutative polynomial optimization. Notably, this allows us to obtain the best results to date on the COMPAS dataset.
In a recent paper in Automatica (https://arxiv.org/abs/2110.03001), we consider the control of non-linear systems under uncertainty.