Seminar | Mathematics and Computer Science

An Inexact Trust-Region Algorithm for Nonsmooth Nonconvex Optimization

LANS Seminar

Abstract: Nonsmooth optimization problems are prevalent in many areas of applied science. A common formulation minimizes the sum of a smooth nonconvex function and a nonsmooth convex function. For example, in imaging, data science, and inverse problems, one often minimizes a data misfit plus a sparsifying regularizer such as the L1 norm or the total variation.
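As a concrete illustration of this problem class (not drawn from the talk itself), consider a least-squares data misfit plus an L1 regularizer. The names `A`, `b`, and `lam` below are illustrative; the key point is that the nonsmooth term has a cheap, closed-form proximal mapping (soft-thresholding), which is what later makes proximal methods applicable:

```python
import numpy as np

# Illustrative composite problem: minimize f(x) + phi(x) with
#   f(x)   = 0.5 * ||A x - b||^2   (smooth data misfit)
#   phi(x) = lam * ||x||_1         (nonsmooth sparsifying regularizer)

def soft_threshold(v, t):
    """Proximal mapping of t*||.||_1, i.e. argmin_u t*||u||_1 + 0.5*||u - v||^2."""
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

rng = np.random.default_rng(0)
A = rng.standard_normal((20, 50))
x_true = np.zeros(50)
x_true[:3] = [1.0, -2.0, 0.5]          # sparse ground truth
b = A @ x_true
lam = 0.1

# One proximal-gradient step from x = 0 as a sanity check:
x = np.zeros(50)
step = 1.0 / np.linalg.norm(A, 2) ** 2  # 1/L, L = Lipschitz constant of grad f
grad = A.T @ (A @ x - b)
x_new = soft_threshold(x - step * grad, step * lam)
```

With a step size of 1/L, a single proximal-gradient step is guaranteed not to increase the composite objective.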

In this talk, we develop a new trust-region method to efficiently solve this class of problems. Our method is unique in that it permits and systematically controls the use of inexact objective function value and derivative evaluations, while maintaining global convergence guarantees. Provided one can compute the proximal mapping of the nonsmooth term in the objective, our method is a simple modification of the traditional trust-region algorithm for smooth unconstrained optimization. Moreover, when using a quadratic Taylor model, our algorithm represents a matrix-free proximal Newton-type method that permits indefinite Hessians. Consequently, our method is well suited for solving large-scale problems. We demonstrate the efficacy of our algorithm on various examples from PDE-constrained optimization.
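To fix ideas, the following is a heavily simplified sketch of a generic proximal trust-region loop, not the speaker's algorithm: a quadratic model of the smooth part is approximately minimized with a few proximal-gradient steps, the candidate step is scaled back into the trust region, and the radius is updated from the ratio of actual to predicted reduction. All thresholds and problem data are illustrative:

```python
import numpy as np

def soft_threshold(v, t):
    """Proximal mapping of t*||.||_1 (soft-thresholding)."""
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def prox_trust_region(f, grad, hess, x, lam, delta=1.0, iters=50):
    """Sketch: minimize f(x) + lam*||x||_1 with a proximal trust-region loop."""
    for _ in range(iters):
        g, H = grad(x), hess(x)
        step = 1.0 / max(np.linalg.norm(H, 2), 1e-8)
        # Approximately minimize the model
        #   m(s) = g^T s + 0.5 s^T H s + lam*||x + s||_1
        # with a few proximal-gradient iterations.
        s = np.zeros_like(x)
        for _ in range(10):
            model_grad = g + H @ s
            s = soft_threshold(x + s - step * model_grad, step * lam) - x
        # Enforce the trust-region constraint ||s|| <= delta by scaling.
        ns = np.linalg.norm(s)
        if ns > delta:
            s *= delta / ns
        pred = -(g @ s + 0.5 * s @ (H @ s)
                 + lam * (np.linalg.norm(x + s, 1) - np.linalg.norm(x, 1)))
        actual = (f(x) + lam * np.linalg.norm(x, 1)) \
               - (f(x + s) + lam * np.linalg.norm(x + s, 1))
        rho = actual / pred if pred > 1e-14 else -1.0
        if rho > 0.1:            # successful step: accept
            x = x + s
            if rho > 0.75:
                delta *= 2.0     # very successful: expand the radius
        else:
            delta *= 0.5         # unsuccessful: shrink the radius
        if np.linalg.norm(s) < 1e-10:
            break
    return x
```

The inexactness control described in the abstract would replace the exact `f`, `grad`, and `hess` evaluations here with approximations whose accuracy is tied to the trust-region radius and the model reduction; that machinery is precisely what this sketch omits.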

Bio: Drew Kouri is a staff member in the Optimization and Uncertainty Quantification Department at Sandia National Laboratories. He received his BS and MS (2008) in mathematics from Case Western Reserve University, and his MA (2010) and PhD (2012) in computational and applied mathematics from Rice University. Before joining Sandia, he was the J. H. Wilkinson Fellow at Argonne National Laboratory. His research focuses on the analysis and numerical solution of PDE-constrained optimization and stochastic programming problems. In addition, he is the lead developer of the Rapid Optimization Library, a C++ library for large-scale, matrix-free nonlinear optimization.