A new package: FletcherPenaltySolver.jl


FletcherPenaltySolver.jl is now a registered Julia package! I am very happy about this one because it has been a long project, and the solver works great for large problems. Besides, we always teach penalty methods at university, but efficient implementations are scarce.

Algorithm

FletcherPenaltySolver.jl is a new Julia implementation of Fletcher’s penalty method, introduced in the ’70s, for equality-constrained nonlinear optimization models

$$\min_{x \in \mathbb{R}^n} \ f(x) \quad \text{s.t.} \quad c(x) = 0,$$

where $f:\mathbb{R}^n \rightarrow \mathbb{R}$ and $c:\mathbb{R}^n \rightarrow \mathbb{R}^m$ are twice continuously differentiable. Fletcher’s penalty method is an iterative method that aims to compute a local minimum using first- and second-order derivatives.

Fletcher’s penalty method replaces the problem with a parametric bound-constrained optimization problem via Fletcher’s penalty function, whose local minima can be found by any solver designed for such problems. The penalty function is smooth under classical assumptions, and it is exact: local minimizers of the original problem are minimizers of the penalty function for all sufficiently large values of the penalty parameter. The main computational kernel for evaluating the penalty function and its derivatives is the solution of a certain saddle-point system. If the system matrix is available explicitly, it can be factorized once and the factors reused to evaluate the penalty function and its derivatives. The method can also be made factorization-free by solving the linear system iteratively.
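
Concretely, following Estrin, Friedlander, Orban, and Saunders (2020), and up to sign and scaling conventions, the penalty function takes the form

$$\phi_\sigma(x) = f(x) - c(x)^T y_\sigma(x),$$

where the multiplier estimate $y_\sigma(x)$ is computed, together with an auxiliary vector $p$, from the saddle-point system

$$\begin{bmatrix} I & A(x) \\ A(x)^T & 0 \end{bmatrix} \begin{bmatrix} p \\ y_\sigma(x) \end{bmatrix} = \begin{bmatrix} g(x) \\ \sigma c(x) \end{bmatrix},$$

with $g(x) = \nabla f(x)$, $A(x) = \nabla c(x)$ the matrix of constraint gradients, and $\sigma > 0$ the penalty parameter. This is the system whose matrix can be factorized once and reused, or applied implicitly in the factorization-free variant.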

FletcherPenaltySolver.jl builds on other JuliaSmoothOptimizers packages. In particular, NLPModels.jl is used to define the problem, and SolverCore.jl for the solver output.

The algorithm is a regularized variant of Fletcher’s penalty method. For equality-constrained problems, it iteratively solves an unconstrained subproblem; a sketch of this outer loop is given below. Any solver compatible with Stopping.jl can be used. By default, we use ipopt from NLPModelsIpopt.jl to solve the subproblem, but other solvers work as well, such as knitro from NLPModelsKnitro.jl or any of the solvers from JSOSolvers.jl. Stopping-compatible versions of these solvers are available in StoppingInterface.jl. If the subproblem solver itself handles equality constraints, linear equality constraints can also be passed to it directly and treated separately rather than penalized.
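
To give an idea of the overall structure, here is a minimal sketch of a penalty-method outer loop. It is purely illustrative and not the package’s actual implementation: penalty_loop, subsolve, and violation are hypothetical names, and the parameter update rule is a common default, not necessarily the one used by fps_solve.

using LinearAlgebra  # for norm

# Illustrative outer loop of a penalty method (not FletcherPenaltySolver.jl
# source code). subsolve(x, σ) minimizes the penalized subproblem starting
# from x; violation(x) returns the constraint values c(x).
function penalty_loop(subsolve, violation, x0; σ = 1.0, ctol = 1e-6, max_iter = 20)
  x = copy(x0)
  for _ = 1:max_iter
    x = subsolve(x, σ)                     # solve the subproblem for the current σ
    norm(violation(x)) ≤ ctol && return x  # stop once c(x) ≈ 0
    σ *= 10                                # otherwise increase the penalty parameter
  end
  return x
end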

By default, the saddle-point systems needed to evaluate the penalty function and its derivatives are solved via a factorization from LDLFactorizations.jl, but a matrix-free variant using Krylov.jl is also available.
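
As a small illustration of these two options, the toy snippet below solves one saddle-point system of the form described above, once with a direct LDLᵀ factorization and once with MINRES from Krylov.jl. The matrix and data are invented for the example; this is not code from the package.

using SparseArrays, LinearAlgebra
using LDLFactorizations, Krylov

# Toy saddle-point system K [p; y] = [g; σc] with n = 2 variables and
# m = 1 equality constraint; all data below is made up for illustration.
n, m, σ = 2, 1, 1.0
g = [1.0, 2.0]                 # gradient of the objective
A = reshape([1.0, 1.0], n, m)  # constraint gradients (one column per constraint)
c = [0.5]                      # constraint values
K = [sparse(1.0I, n, n) sparse(A); sparse(A') spzeros(m, m)]
rhs = [g; σ .* c]

# Direct approach: factorize K once and reuse the factors.
F = ldl(K)
sol_direct = F \ rhs

# Matrix-free approach: MINRES only needs products with K, so K could be
# replaced by a linear operator that never forms the matrix explicitly.
sol_iter, stats = minres(K, rhs)

@assert isapprox(sol_direct, sol_iter; atol = 1e-6)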

Installation

pkg> add FletcherPenaltySolver

Example

using FletcherPenaltySolver, ADNLPModels

# Rosenbrock
nlp = ADNLPModel(x -> 100 * (x[2] - x[1]^2)^2 + (x[1] - 1)^2, [-1.2; 1.0])
stats = fps_solve(nlp)

# Constrained
nlp = ADNLPModel(
  x -> 100 * (x[2] - x[1]^2)^2 + (x[1] - 1)^2,
  [-1.2; 1.0],
  x -> [x[1] * x[2] - 1],
  [0.0],  # lower bound on the constraint
  [0.0],  # upper bound: equal bounds give the equality constraint x[1] * x[2] = 1
)
stats = fps_solve(nlp)
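
The returned stats is a GenericExecutionStats from SolverCore.jl, so, assuming the standard SolverCore fields, the result can be inspected as follows:

# Inspect the solver output (fields of SolverCore's GenericExecutionStats).
println(stats.status)     # termination status, e.g. :first_order
println(stats.objective)  # objective value at the final iterate
println(stats.solution)   # final iterate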

References

Estrin, R., Friedlander, M. P., Orban, D., & Saunders, M. A. (2020). Implementing a smooth exact penalty function for equality-constrained nonlinear optimization. SIAM Journal on Scientific Computing, 42(3), A1809–A1835. https://doi.org/10.1137/19M1238265

How to Cite

If you use FletcherPenaltySolver.jl in your work, please cite using the format given in CITATION.cff.