Press Release | Mathematics and Computer Science

Argonne researchers publish review article of derivative-free optimization methods

Three researchers from Argonne’s Mathematics and Computer Science division — Jeffrey Larson, Matt Menickelly, and Stefan Wild — have published a paper that reviews recent developments in optimization methods that do not require derivatives to find optimal solutions. Also referred to as black-box or zeroth-order optimization, such methods are widely used in scientific, engineering, and artificial intelligence applications. The paper, titled “Derivative-free optimization methods,” was published in the June edition of Acta Numerica and was highlighted on the cover. Acta Numerica is the top-cited journal in the mathematical sciences.

In the review article, the researchers emphasize the distinctions among methods based on particular problem classes. They categorize the methods into six types, based on assumptions about the properties of the black-box functions. They begin with an overview of deterministic methods applied to unconstrained, nonconvex optimization problems where the objective function is defined by a “deterministic black-box oracle.” They then discuss in turn randomized methods, methods for structured objectives, methods for stochastic optimization, and methods for handling constraints.
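For readers unfamiliar with the setting, the following minimal Python sketch (not taken from the paper) illustrates what a deterministic black-box oracle means in practice: the optimizer may query only function values, never derivatives. The oracle here is a hypothetical stand-in for an expensive simulation, and Nelder-Mead, as implemented in SciPy, serves as one classical derivative-free method.

    import numpy as np
    from scipy.optimize import minimize

    def black_box_oracle(x):
        # Hypothetical stand-in for an expensive simulation: the optimizer
        # receives only the function value, never gradient information.
        return (x[0] - 1.0) ** 2 + 100.0 * (x[1] - x[0] ** 2) ** 2

    x0 = np.array([-1.2, 1.0])
    # Nelder-Mead is a classical derivative-free (direct-search) method
    # that proceeds by comparing oracle values at simplex vertices.
    result = minimize(black_box_oracle, x0, method="Nelder-Mead",
                      options={"xatol": 1e-8, "fatol": 1e-8})
    print(result.x, result.fun)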

While acknowledging the foundational work on derivative-free methods, the researchers focus particularly on developments in the past ten years, a period that has seen a dramatic increase in the number of peer-reviewed papers on the subject. A specific goal of the survey was to unite methods that have been developed in distinct communities. From Wild’s perspective, “The rapid acceleration in machine learning research makes this an especially exciting time to consider the (often independent) developments in the learning, mathematical optimization, and statistics communities.” Wild notes that the methods surveyed are increasingly being used in artificial intelligence settings, including the training of optimal policies through techniques such as reinforcement learning.
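One family of randomized methods used in such settings estimates a gradient from function values alone. The sketch below is purely illustrative (all names are hypothetical, and it is not code from the paper); it shows a Gaussian-smoothing estimator of the kind popularized in reinforcement learning work, where the policy objective is treated as a black box.

    import numpy as np

    def zo_gradient(f, x, sigma=1e-2, num_samples=32, rng=None):
        # Randomized finite-difference (Gaussian smoothing) estimate of the
        # gradient of f at x, using only black-box function evaluations.
        rng = np.random.default_rng() if rng is None else rng
        fx = f(x)
        g = np.zeros_like(x)
        for _ in range(num_samples):
            u = rng.standard_normal(x.shape)
            g += (f(x + sigma * u) - fx) / sigma * u
        return g / num_samples

    # Example: one step of zeroth-order gradient descent on a smooth function.
    f = lambda x: np.sum(x ** 2)
    x = np.ones(3)
    x -= 0.1 * zo_gradient(f, x)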

The paper is available on the web. A bibliography and additional information are also available.