Constrained composite optimization and augmented Lagrangian methods

We investigate finite-dimensional constrained structured optimization problems, featuring composite objective functions and set-membership constraints. Offering an expressive yet simple language, this problem class provides a modeling framework for a variety of applications. We study stationarity and regularity concepts, and propose a flexible augmented Lagrangian scheme. We provide a theoretical characterization of the algorithm and its asymptotic properties, deriving convergence results for fully nonconvex problems. We demonstrate how the inner subproblems can be solved by off-the-shelf proximal methods, although any solver that returns approximate stationary points can be adopted. Finally, we describe our matrix-free implementation of the proposed algorithm and test it numerically. Illustrative examples show the versatility of constrained composite programs as a modeling tool and expose difficulties arising in this vast problem class.
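As a toy illustration of such a scheme (a minimal sketch, not the paper's matrix-free implementation), consider the special case min f(x) + g(x) subject to A x = d, with f smooth and g = lam * ||.||_1, where the augmented Lagrangian subproblems are solved by a proximal gradient inner loop and the multipliers are updated at first order. All problem data and parameter values below are made up for illustration.

```python
import numpy as np

def soft_threshold(v, t):
    # proximal operator of t * ||.||_1
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def prox_gradient(grad_smooth, lam, x, step, iters=500):
    # inner solver: proximal gradient on (smooth part) + lam * ||x||_1
    for _ in range(iters):
        x = soft_threshold(x - step * grad_smooth(x), step * lam)
    return x

def augmented_lagrangian(A, d, b, lam, mu=10.0, outer=20):
    """min 0.5||x - b||^2 + lam||x||_1  s.t.  A x = d  (i.e. c(x) in {0})."""
    m, n = A.shape
    x, y = np.zeros(n), np.zeros(m)
    for _ in range(outer):
        # smooth part of the AL: f(x) + y^T c(x) + (mu/2)||c(x)||^2
        def grad(x):
            c = A @ x - d
            return (x - b) + A.T @ (y + mu * c)
        L = 1.0 + mu * np.linalg.norm(A, 2) ** 2  # Lipschitz constant of grad
        x = prox_gradient(grad, lam, x, 1.0 / L)
        y = y + mu * (A @ x - d)                  # first-order multiplier update
    return x, y
```

With a fixed penalty parameter this multiplier iteration is the proximal point method applied to the dual, so it converges for any mu > 0 on convex instances; the full scheme in the paper additionally handles nonconvexity and general set-membership constraints.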

A Function Approximation Approach for Parametric Optimization

We present a novel approach for approximating the primal and dual parameter-dependent solution functions of parametric optimization problems. We start with an equation reformulation of the first-order necessary optimality conditions. Then, we replace the primal and dual solutions with approximating functions and determine their optimal coefficients, for a set of test parameters, as the solution of a single nonlinear least-squares problem. Under mild assumptions it can be shown that stationary points are global minima and that the function approximations interpolate the solution functions at all test parameters. Further, we obtain a cheap criterion, based on function evaluations, to estimate the approximation error. Finally, we present some preliminary numerical results showing the viability of our approach.
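The idea can be sketched on a hypothetical toy instance (not taken from the paper): a parametric equality-constrained QP whose KKT residuals, evaluated at a handful of test parameters, are stacked into one least-squares problem over the coefficients of polynomial models of the primal and dual solution functions. For this linear-quadratic instance the residual happens to be linear in the coefficients, so a single lstsq call solves what is in general a nonlinear least-squares problem.

```python
import numpy as np

# Toy parametric QP:  min_x 0.5 ||x||^2   s.t.   x1 + x2 = p.
# KKT conditions: x + A^T y = 0,  A x = p,  with A = [1, 1].
# Exact solution functions: x(p) = (p/2, p/2), y(p) = -p/2.

deg = 2                          # polynomial degree of the approximations
C = np.array([[1.0, 0.0, 1.0],   # residual of dL/dx1 = x1 + y
              [0.0, 1.0, 1.0],   # residual of dL/dx2 = x2 + y
              [1.0, 1.0, 0.0]])  # residual of the constraint x1 + x2 (= p)

def basis(p):
    # monomial basis used to approximate the solution functions
    return np.array([p ** k for k in range(deg + 1)])

def fit(test_params):
    # stack the KKT residuals at all test parameters into one
    # least-squares problem for the coefficients; theta row-major:
    # rows model x1(p), x2(p), y(p)
    rows, rhs = [], []
    for p in test_params:
        rows.append(np.kron(C, basis(p)))     # C @ (theta @ basis(p))
        rhs.append(np.array([0.0, 0.0, p]))
    theta, *_ = np.linalg.lstsq(np.vstack(rows), np.concatenate(rhs),
                                rcond=None)
    return theta.reshape(3, deg + 1)

theta = fit([0.0, 0.5, 1.0])
x1, x2, y = theta @ basis(0.3)   # evaluate the fitted solution functions
```

Because the KKT system of this toy instance is nonsingular, a zero residual at the test parameters forces the models to interpolate the exact solution functions there, mirroring the interpolation property stated in the abstract.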

On a primal-dual Newton proximal method for convex quadratic programs

This paper introduces QPDO, a primal-dual method for convex quadratic programs that builds upon and weaves together the proximal point algorithm and a damped semismooth Newton method. The outer proximal regularization yields a numerically stable method, and we interpret the proximal operator as the unconstrained minimization of the primal-dual proximal augmented Lagrangian function. This allows the inner Newton scheme to exploit sparse symmetric linear solvers and multi-rank factorization updates. Moreover, the linear systems are always solvable, independently of the problem data, and an exact line search can be performed. The proposed method can handle degenerate problems, provides a mechanism for infeasibility detection, and can exploit warm starting, while requiring only convexity. We present details of our open-source C implementation and report on numerical results against state-of-the-art solvers. QPDO proves to be a simple, robust, and efficient numerical method for convex quadratic programming.
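The role of the outer proximal regularization can be sketched on an equality-constrained convex QP (a simplified toy setting, not the QPDO solver itself; inequality constraints and the semismooth Newton inner loop are omitted). Each proximal-point step solves a regularized KKT system whose matrix is symmetric quasi-definite, hence nonsingular for any sigma > 0, even when Q is singular or A is rank deficient, which illustrates the "always solvable" property claimed in the abstract.

```python
import numpy as np

def prox_point_qp(Q, q, A, b, sigma=1e-1, iters=100):
    # Proximal-point outer loop for  min 0.5 x'Qx + q'x  s.t.  A x = b.
    # The regularized KKT matrix [[Q + sigma I, A'], [A, -sigma I]] is
    # quasi-definite, so it is nonsingular for every sigma > 0 regardless
    # of the problem data, and it can be factored once and reused.
    n, m = Q.shape[0], A.shape[0]
    x, y = np.zeros(n), np.zeros(m)
    K = np.block([[Q + sigma * np.eye(n), A.T],
                  [A, -sigma * np.eye(m)]])
    for _ in range(iters):
        rhs = np.concatenate([sigma * x - q, b - sigma * y])
        z = np.linalg.solve(K, rhs)
        x, y = z[:n], z[n:]
    return x, y
```

The fixed points of this iteration are exactly the KKT pairs of the QP; in QPDO itself the analogous regularized subproblems are attacked with a damped semismooth Newton method and sparse symmetric factorizations rather than a dense solve.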