Table of Contents
- Available linear solvers
- The linear algebra library
- Usage
- Output
- List of IPOPT Options
- Termination
- Output
- NLP
- NLP Scaling
- Initialization
- Warm Start
- Miscellaneous
- Barrier Parameter Update
- Line Search
- Linear Solver
- Step Calculation
- Restoration Phase
- Hessian Approximation
- MA27 Linear Solver
- MA57 Linear Solver
- MA77 Linear Solver
- MA86 Linear Solver
- MA97 Linear Solver
- Pardiso (pardiso-project.org) Linear Solver
- Pardiso (MKL) Linear Solver
- Mumps Linear Solver
- Detailed Options Description
COIN-OR IPOPT (Interior Point Optimizer) is an open-source solver for large-scale nonlinear programming (NLP). The code has been written primarily by Andreas Wächter.
IPOPT implements an interior point line search filter method for nonlinear programming models whose functions can be nonconvex but should be twice continuously differentiable. For more information on the algorithm we refer to [143, 192, 193, 194, 195] and the IPOPT documentation. Most of the IPOPT documentation in this section was taken from the IPOPT manual [103].
Available linear solvers
The performance and robustness of IPOPT on larger models relies heavily on the solver used for the sparse symmetric indefinite linear systems.
GAMS/IPOPT includes the sparse solvers MUMPS [7, 8] (currently the default) and MKL PARDISO [165, 166]. The latter is not available for systems on ARM64 CPUs. The commercially licensed GAMS/IPOPTH version additionally provides the Harwell Subroutine Library (HSL) solvers MA27, MA57, HSL_MA86, and HSL_MA97, with MA27 being the default.
MUMPS, MA57, HSL_MA86, and HSL_MA97 use METIS for matrix ordering [102], see also the METIS manual. METIS is copyrighted by the Regents of the University of Minnesota.
IPOPT and IPOPTH can exploit parallelization of the linear solvers MKL Pardiso, HSL MA86, and HSL MA97 and the linear algebra routines (see next section).
The linear solver is chosen by the linear_solver option. Benchmarks have shown that MA57 and HSL_MA97 are often able to outperform MA27 on larger instances. Further, PARDISO often allows for performance that is better than MUMPS and similar to the HSL solvers. If IPOPT fails to solve an instance with PARDISO, it is worth trying different values for the options pardisomkl_order and pardisomkl_max_iterative_refinement_steps.
It is also possible to use the linear solver PARDISO from the PARDISO Solver Project or the HSL routines with GAMS/IPOPT if a user provides libraries that can be loaded at runtime. PARDISO from the PARDISO Solver Project can provide performance that exceeds that of PARDISO from Intel MKL. To build the HSL routines, the COIN-OR project ThirdParty-HSL may be useful. See also options linear_solver, linear_system_scaling, nlp_scaling_method, pardisolib, and hsllib. Note that it is your responsibility to ensure that you are entitled to download and use these routines!
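As an illustration, an ipopt.opt file that loads a user-provided HSL library and selects MA57 might look as follows; the library path is only a placeholder and needs to be adjusted to wherever the library was installed:
# hypothetical path to a user-provided HSL library
hsllib /path/to/libhsl.so
# use MA57 from that library for the linear systems
linear_solver ma57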
The linear algebra library
On systems for AMD and Intel CPUs, the IPOPT library distributed by GAMS and most of the linear solvers used by IPOPT use the Intel oneAPI Math Kernel Library (MKL), which provides a fast and parallel implementation of linear algebra routines (BLAS/LAPACK) and the linear solver PARDISO. MKL chooses an internal code path that provides the best possible performance for the used CPU type. As a consequence, results can be different when changing from one CPU to another. By setting an environment variable, the code path to use can be set by the user. See the Intel MKL documentation regarding Conditional Numerical Reproducibility for more details.
Intel MKL has been optimized for use with Intel CPUs. On CPUs from other vendors, MKL may not use an internal code path that could provide better performance on that CPU. For example, it may not use AVX2 instructions on an AMD CPU that provides AVX2. However, Intel recently started to add optimized code for AMD's Zen CPUs.
To gain more insight into the use of MKL in GAMS/IPOPT, one may set the environment variable MKL_VERBOSE to 1. This will print out information about the MKL library used, the functions being called, the time spent there, etc.
On the GAMS system for macOS on ARM64 CPUs, the Apple Accelerate framework is used as linear algebra library.
Usage
The following statement can be used inside your GAMS program to specify that IPOPT should be used:
Option NLP = IPOPT; { or LP, RMIP, DNLP, RMINLP, QCP, RMIQCP, CNS }
The above statement should appear before the Solve statement. If IPOPT was specified as the default solver during GAMS installation, the above statement is not necessary.
To use IPOPTH, the statement should be
Option NLP = IPOPTH; { or LP, RMIP, DNLP, RMINLP, QCP, RMIQCP, CNS }
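For illustration, a minimal GAMS model that selects IPOPT and then calls the solver could look as follows; the variable and equation names are made up for this sketch:
Variable x, y, z;
Equation obj;
* simple smooth objective definition
obj.. z =e= sqr(x - 1) + sqr(y - 2);
Model m / all /;
Option NLP = IPOPT;
Solve m using NLP minimizing z;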
Specification of Options
IPOPT has many options that can be adjusted for the algorithm (see Section List of IPOPT Options). Options are all identified by a string name, and their values can be of one of three types: Number (real), Integer, or String. Number options are used for things like tolerances, integer options are used for things like maximum number of iterations, and string options are used for setting algorithm details, like the NLP scaling method. Options can be set by creating an ipopt.opt file in the directory in which you are executing IPOPT.
The ipopt.opt file is read line by line and each line should contain the option name, followed by whitespace, and then the value. Comments can be included with the # symbol. For example, the following is a valid ipopt.opt file:
# This is a comment
# Turn off the NLP scaling
nlp_scaling_method none
# Change the initial barrier parameter
mu_init 1e-2
# Set the max number of iterations
max_iter 500
GAMS/IPOPT currently understands the following GAMS parameters: reslim (time limit), iterlim (iteration limit), and domlim (domain violation limit). Further, the option threads can be used to control the number of threads used in the linear algebra routines and the linear solver. Setting threads=0 currently does not enable multithreaded linear algebra.
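For example, these limits can be set in the GAMS model before the Solve statement; the values below are chosen only for illustration:
Option reslim = 60, iterlim = 5000, domlim = 10, threads = 4;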
Warmstarting IPOPT
As an interior point solver, IPOPT is difficult to warm start. By default, only the level values of the variables are passed as starting point to IPOPT. Setting the IPOPT option warm_start_init_point to yes causes dual values for variables and constraints to be passed to IPOPT as well.
However, the expected behavior that IPOPT finishes within one iteration if optimal primal and dual values are passed is not yet reached this way. This is because IPOPT by default moves any initial value that is close to a bound into the interior. The amount by which the initial point is moved can be controlled by various bound_push and bound_frac options. To make IPOPT accept an optimal primal/dual solution within one iteration, it should be sufficient to set the following options:
warm_start_init_point yes
warm_start_bound_push 1e-9
warm_start_bound_frac 1e-9
warm_start_slack_bound_frac 1e-9
warm_start_slack_bound_push 1e-9
warm_start_mult_bound_push 1e-9
Further, it can be useful to specify that IPOPT can use a less central path in its first iterations by reducing the value of option mu_init. This option is only used if option mu_strategy is set to "monotone", so the option file entries would be
mu_strategy monotone
mu_init 0.0001
Finally, IPOPT by default checks whether it should scale the problem. The computed scaling depends on the starting point, which can be undesired when warmstarting. Thus, it may be useful to turn off scaling via option nlp_scaling_method:
nlp_scaling_method none
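Taken together, an ipopt.opt file for warmstarting from a known primal/dual solution could thus look like this; it simply combines the settings discussed above:
warm_start_init_point yes
warm_start_bound_push 1e-9
warm_start_bound_frac 1e-9
warm_start_slack_bound_frac 1e-9
warm_start_slack_bound_push 1e-9
warm_start_mult_bound_push 1e-9
mu_strategy monotone
mu_init 0.0001
nlp_scaling_method none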
If a modified but structurally equivalent problem instance is solved, e.g., via GUSS, option warm_start_init_point is automatically set to "yes" for every solve following the first one. If this is not desired, warm_start_init_point should explicitly be set to "no" in an IPOPT options file.
Output
This section describes the standard IPOPT console output. The output is designed to provide a quick summary of each iteration as IPOPT solves the problem.
Before IPOPT starts to solve the problem, it displays the problem statistics (number of nonzero elements in the matrices, number of variables, etc.). Note that if you have fixed variables (i.e., upper and lower bounds are equal), IPOPT may remove these variables from the problem internally and not include them in the problem statistics.
Following the problem statistics, IPOPT will begin to solve the problem and you will see output resembling the following,
iter    objective    inf_pr   inf_du lg(mu)  ||d||  lg(rg) alpha_du alpha_pr  ls
   0  1.6109693e+01 1.12e+01 5.28e-01   0.0 0.00e+00    -  0.00e+00 0.00e+00   0
   1  1.8029749e+01 9.90e-01 6.62e+01   0.1 2.05e+00    -  2.14e-01 1.00e+00f  1
   2  1.8719906e+01 1.25e-02 9.04e+00  -2.2 5.94e-02  2.0  8.04e-01 1.00e+00h  1
and the columns of output are defined as
iter
The current iteration count. This includes regular iterations and iterations while in restoration phase. If the algorithm is in the restoration phase, the letter r will be appended to the iteration number.
objective
The unscaled objective value at the current point. During the restoration phase, this value remains the unscaled objective value for the original problem.
inf_pr
The unscaled constraint violation at the current point. This quantity is the infinity-norm (max) of the (unscaled) constraint violation. During the restoration phase, this value remains the constraint violation of the original problem at the current point. The option inf_pr_output can be used to switch to printing a different quantity.
inf_du
The scaled dual infeasibility at the current point. This quantity measures the infinity-norm (max) of the internal dual infeasibility (Eq. (4a) in [194]), including inequality constraints reformulated using slack variables and problem scaling. During the restoration phase, this is the value of the dual infeasibility for the restoration phase problem.
lg(mu)
log10 of the value of the barrier parameter μ.
||d||
The infinity norm (max) of the primal step (for the original variables x and the internal slack variables s). During the restoration phase, this value includes the values of the additional variables p and n in Eq. (10) of [194].
lg(rg)
log10 of the value of the regularization term for the Hessian of the Lagrangian in the augmented system ( \(\delta_w\) in Eq. (26) of [194]). A dash (-) indicates that no regularization was done.
alpha_du
The stepsize for the dual variables ( \(\alpha^z_k\) in Eq. (14c) of [194]).
alpha_pr
The stepsize for the primal variables ( \(\alpha_k\) in Eq. (14a) of [194]). The number is usually followed by a character for additional diagnostic information regarding the step acceptance criterion:
- f: f-type iteration in the filter method w/o second order correction
- F: f-type iteration in the filter method w/ second order correction
- h: h-type iteration in the filter method w/o second order correction
- H: h-type iteration in the filter method w/ second order correction
- k: penalty value unchanged in merit function method w/o second order correction
- K: penalty value unchanged in merit function method w/ second order correction
- n: penalty value updated in merit function method w/o second order correction
- N: penalty value updated in merit function method w/ second order correction
- R: Restoration phase just started
- w: in watchdog procedure
- s: step accepted in soft restoration phase
- t/T: tiny step accepted without line search
- r: some previous iterate restored
ls
The number of backtracking line search steps (does not include second-order correction steps).
Note that the step acceptance mechanisms in IPOPT consider the barrier objective function (Eq. (3a) in [194]), which is usually different from the value reported in the objective column. Similarly, for the purposes of the step acceptance, the constraint violation is measured for the internal problem formulation, which includes slack variables for inequality constraints and potentially scaling of the constraint functions. This value, too, is usually different from the value reported in inf_pr. As a consequence, a new iterate might have worse values both for the objective function and the constraint violation as reported in the iteration output, seemingly contradicting the globalization procedure.
When the algorithm terminates, IPOPT will output a message to the screen. The following is a list of the possible output messages and a brief description.
Optimal Solution Found.
This message indicates that IPOPT found a (locally) optimal point within the desired tolerances.
Solved To Acceptable Level.
This indicates that the algorithm did not converge to the "desired" tolerances, but that it was able to obtain a point satisfying the "acceptable" tolerance level as specified by the acceptable_* options. This may happen if the desired tolerances are too small for the current problem.
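If this termination occurs too early or not at all for a particular model, the acceptable thresholds can be adjusted in ipopt.opt; the following values are chosen only for illustration:
acceptable_tol 1e-7
acceptable_iter 15
acceptable_constr_viol_tol 1e-4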
Feasible point for square problem found.
This message is printed if the problem is "square" (i.e., it has as many equality constraints as free variables) and IPOPT found a point that is feasible w.r.t. constr_viol_tol. It may, however, not be feasible w.r.t. tol.
Converged to a point of local infeasibility. Problem may be infeasible.
The restoration phase converged to a point that is a minimizer for the constraint violation (in the \(\ell_1\)-norm), but is not feasible for the original problem. This indicates that the problem may be infeasible (or at least that the algorithm is stuck at a locally infeasible point). The returned point (the minimizer of the constraint violation) might help you to find which constraint is causing the problem. If you believe that the NLP is feasible, it might help to start the optimization from a different point.
Search Direction is becoming Too Small.
This indicates that IPOPT is calculating very small step sizes and making very little progress. This could happen if the problem has been solved to the best numerical accuracy possible given the current scaling.
Iterates diverging; problem might be unbounded.
This message is printed if the max-norm of the iterates becomes larger than the value of the option diverging_iterates_tol. This can happen if the problem is unbounded below and the iterates are diverging.
Stopping optimization at current point as requested by user.
This message is printed if either an interrupt signal was received (e.g., Ctrl+C was pressed) or the domain violation limit is reached.
Maximum Number of Iterations Exceeded.
This indicates that IPOPT has exceeded the maximum number of iterations as specified by the IPOPT option max_iter or the GAMS option iterlim.
Maximum wallclock time exceeded.
This indicates that IPOPT has exceeded the maximum number of wallclock seconds as specified by the IPOPT option max_wall_time or the GAMS option reslim.
Maximum CPU time exceeded.
This indicates that IPOPT has exceeded the maximum number of CPU seconds as specified by the IPOPT option max_cpu_time.
Restoration Failed!
This indicates that the restoration phase failed to find a feasible point that was acceptable to the filter line search for the original problem. This could happen if the problem is highly degenerate, does not satisfy the constraint qualification, or if an external or extrinsic function in GAMS provides incorrect derivative information.
Error in step computation!
This message is printed if IPOPT is unable to compute a step towards a new iterate and the current iterate is not acceptable for the specified tolerances.
A possible reason is that a search direction could not be computed despite several attempts to modify the iteration matrix. Usually, the value of the regularization parameter then becomes too large.
Another reason is that the feasibility restoration phase could not be activated because the current iterate is not infeasible. Reasons for this again include that the problem is highly degenerate, badly scaled, or does not satisfy the constraint qualification. Before IPOPT 3.14, this resulted in a Restoration_Failed status code with message "Restoration phase is called at almost feasible point...".
Problem has too few degrees of freedom.
This indicates that your problem, as specified, has too few degrees of freedom. This can happen if you have too many equality constraints, or if you fix too many variables (IPOPT removes fixed variables by default, see also the option fixed_variable_treatment).
Not enough memory.
An error occurred while trying to allocate memory. The problem may be too large for your current memory and swap configuration. Sometimes it can help to choose a different linear solver.
INTERNAL ERROR: Unknown SolverReturn value - Notify IPOPT Authors.
An unknown internal error has occurred. Please notify the authors of the GAMS/IPOPT link or IPOPT (refer to support@gams.com).
Diagnostic Tags for IPOPT
To print additional diagnostic tags for each iteration of IPOPT, set the option print_info_string to yes. With this, a tag will appear at the end of an iteration line; these tags are useful to flag difficulties for a particular IPOPT run. The following is a list of possible strings and their meaning:
- !: Tighten resto tolerance if only slightly infeasible, see Sec. 3.3 in [194]
- A: Current iteration is acceptable (alternate termination)
- a: Perturbation for PD Singularity can't be done, assume singular, see Sec. 3.1 in [194]
- C: Second Order Correction taken, see Sec. 2.4 in [194]
- Dh: Hessian degenerate based on multiple iterations, see Sec. 3.1 in [194]
- Dhj: Hessian/Jacobian degenerate based on multiple iterations, see Sec. 3.1 in [194]
- Dj: Jacobian degenerate based on multiple iterations, see Sec. 3.1 in [194]
- dx: \(\delta_x\) perturbation too large, see Sec. 3.1 in [194]
- e: Cutting back α due to evaluation error (in backtracking line search)
- F-: Filter should be reset, but maximal resets exceeded, see Sec. 2.3 in [194]
- F+: Resetting filter due to last few rejections of filter, see Sec. 2.3 in [194]
- L: Degenerate Jacobian, \(\delta_c\) already perturbed, see Sec. 3.1 in [194]
- l: Degenerate Jacobian, \(\delta_c\) perturbed, see Sec. 3.1 in [194]
- M: Magic step taken for slack variables (in backtracking line search)
- Nh: Hessian not yet degenerate, see Sec. 3.1 in [194]
- Nhj: Hessian/Jacobian not yet degenerate, see Sec. 3.1 in [194]
- Nj: Jacobian not yet degenerate, see Sec. 3.1 in [194]
- NW: Warm start initialization failed (in Warm Start Initialization)
- q: PD system possibly singular, attempt to improve solution quality, see Sec. 3.1 in [194]
- R: Solution of restoration phase, see Sec. 3.3 in [194]
- S: PD system possibly singular, accept current solution, see Sec. 3.1 in [194]
- s: PD system singular, see Sec. 3.1 in [194]
- s: Square Problem. Set multipliers to zero (default initialization routine)
- Tmax: Trial θ is larger than θmax (filter parameter, Eq. (21) in [194])
- W: Watchdog line search procedure successful, see Sec. 3.2 in [194]
- w: Watchdog line search procedure unsuccessful, stopped, see Sec. 3.2 in [194]
- Wb: Undoing most recent SR1 update, see Sec. 5.4.1 in [21]
- We: Skip Limited-Memory Update in restoration phase, see Sec. 5.4.1 in [21]
- Wp: Safeguard \(B^0 = \sigma I\) for Limited-Memory Update, see Sec. 5.4.1 in [21]
- Wr: Resetting Limited-Memory Update, see Sec. 5.4.1 in [21]
- Ws: Skip Limited-Memory Update since \(s^Ty\) is not positive, see Sec. 5.4.1 in [21]
- WS: Skip Limited-Memory Update since \(\Delta x\) is too small, see Sec. 5.4.1 in [21]
- y: Dual infeasibility, use least square multiplier update (during IPOPT algorithm)
- z: Apply correction to bound multiplier if too large (during IPOPT algorithm)
List of IPOPT Options
Termination
Option | Description | Default |
---|---|---|
acceptable_compl_inf_tol | "Acceptance" threshold for the complementarity conditions. | 0.01 |
acceptable_constr_viol_tol | "Acceptance" threshold for the constraint violation. | 0.01 |
acceptable_dual_inf_tol | "Acceptance" threshold for the dual infeasibility. | 1e+10 |
acceptable_iter | Number of "acceptable" iterates before triggering termination. | 0 |
acceptable_obj_change_tol | "Acceptance" stopping criterion based on objective function change. | 1e+20 |
acceptable_tol | "Acceptable" convergence tolerance (relative). | 1e-06 |
compl_inf_tol | Desired threshold for the complementarity conditions. | 0.0001 |
constr_viol_tol | Desired threshold for the constraint and variable bound violation. | 1e-06 |
diverging_iterates_tol | Threshold for maximal value of primal iterates. | 1e+20 |
dual_inf_tol | Desired threshold for the dual infeasibility. | 1 |
max_cpu_time | Maximum number of CPU seconds. | 1e+20 |
max_iter | Maximum number of iterations. | GAMS iterlim |
max_wall_time | Maximum number of walltime clock seconds. | GAMS reslim |
mu_target | Desired value of complementarity. | 0 |
tol | Desired convergence tolerance (relative). | 1e-08 |
Options for expert users | ||
s_max | Scaling threshold for the NLP error. | 100 |
Output
Option | Description | Default |
---|---|---|
inf_pr_output | Determines what value is printed in the "inf_pr" output column. | original |
print_eval_error | Switch to enable printing information about function evaluation errors into the GAMS listing file. | yes |
print_frequency_iter | Determines at which iteration frequency the summarizing iteration output line should be printed. | 1 |
print_frequency_time | Determines at which time frequency the summarizing iteration output line should be printed. | 0 |
print_info_string | Enables printing of additional info string at end of iteration output. | no |
print_level | Output verbosity level. | 5 |
print_options_mode | format in which to print options documentation | text |
print_timing_statistics | Switch to print timing statistics. | no |
report_mininfeas_solution | Switch to report intermediate solution with minimal constraint violation to GAMS if the final solution is not feasible. | no |
Options for expert users | ||
print_advanced_options | whether to print also advanced options | no |
NLP
Option | Description | Default |
---|---|---|
bound_relax_factor | Factor for initial relaxation of the bounds. | 1e-10 |
check_derivatives_for_naninf | Indicates whether it is desired to check for Nan/Inf in derivative matrices | no |
fixed_variable_treatment | Determines how fixed variables should be handled. | make_parameter |
honor_original_bounds | Indicates whether final points should be projected into original bounds. | no |
Options for expert users | ||
dependency_detection_with_rhs | Indicates if the right hand sides of the constraints should be considered in addition to gradients during dependency detection | no |
dependency_detector | Indicates which linear solver should be used to detect linearly dependent equality constraints. | none |
kappa_d | Weight for linear damping term (to handle one-sided bounds). | 1e-05 |
NLP Scaling
Option | Description | Default |
---|---|---|
nlp_scaling_max_gradient | Maximum gradient after NLP scaling. | 100 |
nlp_scaling_method | Select the technique used for scaling the NLP. | gradient-based if GAMS scaleopt is not set, otherwise none |
nlp_scaling_min_value | Minimum value of gradient-based scaling values. | 1e-08 |
Options for expert users | ||
nlp_scaling_constr_target_gradient | Target value for constraint function gradient size. | 0 |
nlp_scaling_obj_target_gradient | Target value for objective function gradient size. | 0 |
Initialization
Option | Description | Default |
---|---|---|
bound_frac | Desired minimum relative distance from the initial point to bound. | 0.01 |
bound_mult_init_method | Initialization method for bound multipliers | constant |
bound_mult_init_val | Initial value for the bound multipliers. | 1 |
bound_push | Desired minimum absolute distance from the initial point to bound. | 0.01 |
constr_mult_init_max | Maximum allowed least-square guess of constraint multipliers. | 1000 |
least_square_init_duals | Least square initialization of all dual variables | no |
least_square_init_primal | Least square initialization of the primal variables | no |
slack_bound_frac | Desired minimum relative distance from the initial slack to bound. | 0.01 |
slack_bound_push | Desired minimum absolute distance from the initial slack to bound. | 0.01 |
Warm Start
Option | Description | Default |
---|---|---|
warm_start_bound_frac | same as bound_frac for the regular initializer | 0.001 |
warm_start_bound_push | same as bound_push for the regular initializer | 0.001 |
warm_start_init_point | Warm-start for initial point | yes, if run on modified model instance (e.g., from GUSS), otherwise no |
warm_start_mult_bound_push | same as mult_bound_push for the regular initializer | 0.001 |
warm_start_mult_init_max | Maximum initial value for the equality multipliers. | 1e+06 |
warm_start_slack_bound_frac | same as slack_bound_frac for the regular initializer | 0.001 |
warm_start_slack_bound_push | same as slack_bound_push for the regular initializer | 0.001 |
Options for expert users | ||
warm_start_target_mu | | 0 |
Miscellaneous
Option | Description | Default |
---|---|---|
timing_statistics | Indicates whether to measure time spent in components of Ipopt and NLP evaluation | no |
Options for expert users | ||
replace_bounds | Whether all variable bounds should be replaced by inequality constraints | no |
Barrier Parameter Update
Option | Description | Default |
---|---|---|
adaptive_mu_globalization | Globalization strategy for the adaptive mu selection mode. | obj-constr-filter |
barrier_tol_factor | Factor for mu in barrier stop test. | 10 |
fixed_mu_oracle | Oracle for the barrier parameter when switching to fixed mode. | average_compl |
mu_init | Initial value for the barrier parameter. | 0.1 |
mu_linear_decrease_factor | Determines linear decrease rate of barrier parameter. | 0.2 |
mu_max | Maximum value for barrier parameter. | 100000 |
mu_max_fact | Factor for initialization of maximum value for barrier parameter. | 1000 |
mu_min | Minimum value for barrier parameter. | 1e-11 |
mu_oracle | Oracle for a new barrier parameter in the adaptive strategy. | quality-function |
mu_strategy | Update strategy for barrier parameter. | adaptive |
mu_superlinear_decrease_power | Determines superlinear decrease rate of barrier parameter. | 1.5 |
quality_function_max_section_steps | Maximum number of search steps during direct search procedure determining the optimal centering parameter. | 8 |
Options for expert users | ||
adaptive_mu_kkt_norm_type | Norm used for the KKT error in the adaptive mu globalization strategies. | 2-norm-squared |
adaptive_mu_kkterror_red_fact | Sufficient decrease factor for "kkt-error" globalization strategy. | 0.9999 |
adaptive_mu_kkterror_red_iters | Maximum number of iterations requiring sufficient progress. | 4 |
adaptive_mu_monotone_init_factor | Determines the initial value of the barrier parameter when switching to the monotone mode. | 0.8 |
adaptive_mu_restore_previous_iterate | Indicates if the previous accepted iterate should be restored if the monotone mode is entered. | no |
filter_margin_fact | Factor determining width of margin for obj-constr-filter adaptive globalization strategy. | 1e-05 |
filter_max_margin | Maximum width of margin in obj-constr-filter adaptive globalization strategy. | 1 |
mu_allow_fast_monotone_decrease | Allow skipping of barrier problem if barrier test is already met. | yes |
quality_function_balancing_term | The balancing term included in the quality function for centrality. | none |
quality_function_centrality | The penalty term for centrality that is included in quality function. | none |
quality_function_norm_type | Norm used for components of the quality function. | 2-norm-squared |
quality_function_section_qf_tol | Tolerance for the golden section search procedure determining the optimal centering parameter (in the function value space). | 0 |
quality_function_section_sigma_tol | Tolerance for the section search procedure determining the optimal centering parameter (in sigma space). | 0.01 |
sigma_max | Maximum value of the centering parameter. | 100 |
sigma_min | Minimum value of the centering parameter. | 1e-06 |
tau_min | Lower bound on fraction-to-the-boundary parameter tau. | 0.99 |
Line Search
Option | Description | Default |
---|---|---|
accept_every_trial_step | Always accept the first trial step. | no |
alpha_for_y | Method to determine the step size for constraint multipliers (alpha_y). | primal |
alpha_for_y_tol | Tolerance for switching to full equality multiplier steps. | 10 |
max_soc | Maximum number of second order correction trial steps at each iteration. | 4 |
recalc_y | Tells the algorithm to recalculate the equality and inequality multipliers as least square estimates. | no |
recalc_y_feas_tol | Feasibility threshold for recomputation of multipliers. | 1e-06 |
soc_method | Ways to apply second order correction | 0 |
watchdog_shortened_iter_trigger | Number of shortened iterations that trigger the watchdog. | 10 |
watchdog_trial_iter_max | Maximum number of watchdog iterations. | 3 |
Options for expert users | ||
accept_after_max_steps | Accept a trial point after maximal this number of steps even if it does not satisfy line search conditions. | -1 |
alpha_min_frac | Safety factor for the minimal step size (before switching to restoration phase). | 0.05 |
alpha_red_factor | Fractional reduction of the trial step size in the backtracking line search. | 0.5 |
constraint_violation_norm_type | Norm to be used for the constraint violation in the line search. | 1-norm |
corrector_compl_avrg_red_fact | Complementarity tolerance factor for accepting corrector step. | 1 |
corrector_type | The type of corrector steps that should be taken. | none |
delta | Multiplier for constraint violation in the switching rule. | 1 |
eta_phi | Relaxation factor in the Armijo condition. | 1e-08 |
filter_reset_trigger | Number of iterations that trigger the filter reset. | 5 |
gamma_phi | Relaxation factor in the filter margin for the barrier function. | 1e-08 |
gamma_theta | Relaxation factor in the filter margin for the constraint violation. | 1e-05 |
kappa_sigma | Factor limiting the deviation of dual variables from primal estimates. | 1e+10 |
kappa_soc | Factor in the sufficient reduction rule for second order correction. | 0.99 |
line_search_method | Globalization method used in backtracking line search | filter |
max_filter_resets | Maximal allowed number of filter resets | 5 |
nu_inc | Increment of the penalty parameter. | 0.0001 |
nu_init | Initial value of the penalty parameter. | 1e-06 |
obj_max_inc | Determines the upper bound on the acceptable increase of barrier objective function. | 5 |
rho | Value in penalty parameter update formula. | 0.1 |
s_phi | Exponent for linear barrier function model in the switching rule. | 2.3 |
s_theta | Exponent for current constraint violation in the switching rule. | 1.1 |
skip_corr_if_neg_curv | Whether to skip the corrector step in negative curvature iteration. | yes |
skip_corr_in_monotone_mode | Whether to skip the corrector step during monotone barrier parameter mode. | yes |
slack_move | Correction size for very small slacks. | 1.81899e-12 |
theta_max_fact | Determines upper bound for constraint violation in the filter. | 10000 |
theta_min_fact | Determines constraint violation threshold in the switching rule. | 0.0001 |
tiny_step_tol | Tolerance for detecting numerically insignificant steps. | 2.22045e-15 |
tiny_step_y_tol | Tolerance for quitting because of numerically insignificant steps. | 0.01 |
Linear Solver
Option | Description | Default |
---|---|---|
hsllib | Name of library containing HSL routines for load at runtime | libhsl.so (Linux), libhsl.dylib (macOS), libhsl.dll (Windows) |
linear_scaling_on_demand | Flag indicating that linear scaling is only done if it seems required. | yes |
linear_solver | Linear solver used for step computations. | ma27, if IpoptH, otherwise mumps |
linear_system_scaling | Method for scaling the linear system. | mc19, if IpoptH, otherwise none |
pardisolib | Name of library containing Pardiso routines (from pardiso-project.org) for load at runtime | libpardiso.so (Linux), libpardiso.dylib (macOS), libpardiso.dll (Windows) |
Step Calculation
Option | Description | Default |
---|---|---|
fast_step_computation | Indicates if the linear system should be solved quickly. | no |
first_hessian_perturbation | Size of first x-s perturbation tried. | 0.0001 |
jacobian_regularization_value | Size of the regularization for rank-deficient constraint Jacobians. | 1e-08 |
max_hessian_perturbation | Maximum value of regularization parameter for handling negative curvature. | 1e+20 |
max_refinement_steps | Maximum number of iterative refinement steps per linear system solve. | 10 |
mehrotra_algorithm | Indicates whether to do Mehrotra's predictor-corrector algorithm. | no |
min_hessian_perturbation | Smallest perturbation of the Hessian block. | 1e-20 |
min_refinement_steps | Minimum number of iterative refinement steps per linear system solve. | 1 |
neg_curv_test_reg | Whether to do the curvature test with the primal regularization (see Zavala and Chiang, 2014). | yes |
neg_curv_test_tol | Tolerance for heuristic to ignore wrong inertia. | 0 |
perturb_dec_fact | Decrease factor for x-s perturbation. | 0.333333 |
perturb_inc_fact | Increase factor for x-s perturbation. | 8 |
perturb_inc_fact_first | Increase factor for x-s perturbation for very first perturbation. | 100 |
Options for expert users | ||
jacobian_regularization_exponent | Exponent for mu in the regularization for rank-deficient constraint Jacobians. | 0.25 |
perturb_always_cd | Activate permanent perturbation of constraint linearization. | no |
residual_improvement_factor | Minimal required reduction of residual test ratio in iterative refinement. | 1 |
residual_ratio_max | Iterative refinement tolerance | 1e-10 |
residual_ratio_singular | Threshold for declaring linear system singular after failed iterative refinement. | 1e-05 |
Restoration Phase
Option | Description | Default |
---|---|---|
bound_mult_reset_threshold | Threshold for resetting bound multipliers after the restoration phase. | 1000 |
constr_mult_reset_threshold | Threshold for resetting equality and inequality multipliers after restoration phase. | 0 |
evaluate_orig_obj_at_resto_trial | Determines if the original objective function should be evaluated at restoration phase trial points. | yes |
expect_infeasible_problem | Enable heuristics to quickly detect an infeasible problem. | no |
expect_infeasible_problem_ctol | Threshold for disabling "expect_infeasible_problem" option. | 0.001 |
expect_infeasible_problem_ytol | Multiplier threshold for activating "expect_infeasible_problem" option. | 1e+08 |
required_infeasibility_reduction | Required reduction of infeasibility before leaving restoration phase. | 0.9 |
soft_resto_pderror_reduction_factor | Required reduction in primal-dual error in the soft restoration phase. | 0.9999 |
start_with_resto | Whether to switch to restoration phase in first iteration. | no |
Options for expert users | ||
max_resto_iter | Maximum number of successive iterations in restoration phase. | 3000000 |
max_soft_resto_iters | Maximum number of iterations performed successively in soft restoration phase. | 10 |
resto_failure_feasibility_threshold | Threshold for primal infeasibility to declare failure of restoration phase. | 0 |
resto_penalty_parameter | Penalty parameter in the restoration phase objective function. | 1000 |
resto_proximity_weight | Weighting factor for the proximity term in restoration phase objective. | 1 |
Hessian Approximation
Option | Description | Default |
---|---|---|
hessian_approximation | Indicates what Hessian information is to be used. | exact |
limited_memory_init_val | Value for B0 in low-rank update. | 1 |
limited_memory_init_val_max | Upper bound on value for B0 in low-rank update. | 1e+08 |
limited_memory_init_val_min | Lower bound on value for B0 in low-rank update. | 1e-08 |
limited_memory_initialization | Initialization strategy for the limited memory quasi-Newton approximation. | scalar1 |
limited_memory_max_history | Maximum size of the history for the limited quasi-Newton Hessian approximation. | 6 |
limited_memory_max_skipping | Threshold for successive iterations where update is skipped. | 2 |
limited_memory_special_for_resto | Determines if the quasi-Newton updates should be special during the restoration phase. | no |
limited_memory_update_type | Quasi-Newton update formula for the limited memory quasi-Newton approximation. | bfgs |
Options for expert users | ||
hessian_approximation_space | Indicates in which subspace the Hessian information is to be approximated. | nonlinear-variables |
limited_memory_aug_solver | Strategy for solving the augmented system for low-rank Hessian. | sherman-morrison |
MA27 Linear Solver
Option | Description | Default |
---|---|---|
ma27_la_init_factor | Real workspace memory for MA27. | 5 |
ma27_liw_init_factor | Integer workspace memory for MA27. | 5 |
ma27_meminc_factor | Increment factor for workspace size for MA27. | 2 |
ma27_pivtol | Pivot tolerance for the linear solver MA27. | 1e-08 |
ma27_pivtolmax | Maximum pivot tolerance for the linear solver MA27. | 0.0001 |
ma27_print_level | Debug printing level for the linear solver MA27 | 0 |
Options for expert users | ||
ma27_ignore_singularity | Whether to use MA27's ability to solve a linear system even if the matrix is singular. | no |
ma27_skip_inertia_check | Whether to always pretend that inertia is correct. | no |
MA57 Linear Solver
Option | Description | Default |
---|---|---|
ma57_automatic_scaling | Controls whether to enable automatic scaling in MA57 | no |
ma57_block_size | Controls block size used by Level 3 BLAS in MA57BD | 16 |
ma57_node_amalgamation | Node amalgamation parameter | 16 |
ma57_pivot_order | Controls pivot order in MA57 | 5 |
ma57_pivtol | Pivot tolerance for the linear solver MA57. | 1e-08 |
ma57_pivtolmax | Maximum pivot tolerance for the linear solver MA57. | 0.0001 |
ma57_pre_alloc | Safety factor for work space memory allocation for the linear solver MA57. | 1.05 |
ma57_print_level | Debug printing level for the linear solver MA57 | 0 |
ma57_small_pivot_flag | Handling of small pivots | 0 |
MA77 Linear Solver
Option | Description | Default |
---|---|---|
ma77_buffer_lpage | Number of scalars per MA77 in-core buffer page in the out-of-core solver MA77 | 4096 |
ma77_buffer_npage | Number of pages that make up MA77 buffer | 1600 |
ma77_file_size | Target size of each temporary file for MA77, scalars per type | 2097152 |
ma77_maxstore | Maximum storage size for MA77 in-core mode | 0 |
ma77_nemin | Node Amalgamation parameter | 8 |
ma77_order | Controls type of ordering used by MA77 | metis |
ma77_print_level | Debug printing level for the linear solver MA77 | -1 |
ma77_small | Zero Pivot Threshold | 1e-20 |
ma77_static | Static Pivoting Threshold | 0 |
ma77_u | Pivoting Threshold | 1e-08 |
ma77_umax | Maximum Pivoting Threshold | 0.0001 |
MA86 Linear Solver
Option | Description | Default |
---|---|---|
ma86_nemin | Node Amalgamation parameter | 32 |
ma86_order | Controls type of ordering | auto |
ma86_print_level | Debug printing level | -1 |
ma86_scaling | Controls scaling of matrix | mc64 |
ma86_small | Zero Pivot Threshold | 1e-20 |
ma86_static | Static Pivoting Threshold | 0 |
ma86_u | Pivoting Threshold | 1e-08 |
ma86_umax | Maximum Pivoting Threshold | 0.0001 |
MA97 Linear Solver
Option | Description | Default |
---|---|---|
ma97_nemin | Node Amalgamation parameter | 8 |
ma97_order | Controls type of ordering | auto |
ma97_print_level | Debug printing level | -1 |
ma97_scaling | Specifies strategy for scaling | dynamic |
ma97_small | Zero Pivot Threshold | 1e-20 |
ma97_u | Pivoting Threshold | 1e-08 |
ma97_umax | Maximum Pivoting Threshold | 0.0001 |
Options for expert users | ||
ma97_scaling1 | First scaling. | mc64 |
ma97_scaling2 | Second scaling. | mc64 |
ma97_scaling3 | Third scaling. | mc64 |
ma97_solve_blas3 | Controls if blas2 or blas3 routines are used for solve | no |
ma97_switch1 | First switch, determine when ma97_scaling1 is enabled. | od_hd_reuse |
ma97_switch2 | Second switch, determine when ma97_scaling2 is enabled. | never |
ma97_switch3 | Third switch, determine when ma97_scaling3 is enabled. | never |
Pardiso (pardiso-project.org) Linear Solver
Option | Description | Default |
---|---|---|
pardiso_matching_strategy | Matching strategy to be used by Pardiso | complete+2x2 |
pardiso_max_iterative_refinement_steps | Limit on number of iterative refinement steps. | 0 |
pardiso_msglvl | Pardiso message level | 0 |
pardiso_order | Controls the fill-in reduction ordering algorithm for the input matrix. | metis |
Options for expert users | ||
pardiso_iter_coarse_size | Maximum Size of Coarse Grid Matrix | 5000 |
pardiso_iter_dropping_factor | dropping value for incomplete factor | 0.5 |
pardiso_iter_dropping_schur | dropping value for sparsify schur complement factor | 0.1 |
pardiso_iter_inverse_norm_factor | | 5e+06 |
pardiso_iter_max_levels | Maximum Size of Grid Levels | 10 |
pardiso_iter_max_row_fill | max fill for each row | 10000000 |
pardiso_iter_relative_tol | Relative Residual Convergence | 1e-06 |
pardiso_iterative | Switch for iterative solver in Pardiso library | no |
pardiso_max_droptol_corrections | Maximal number of decreases of drop tolerance during one solve. | 4 |
pardiso_max_iter | Maximum number of Krylov-Subspace Iteration | 500 |
pardiso_redo_symbolic_fact_only_if_inertia_wrong | Toggle for handling case when elements were perturbed by Pardiso. | no |
pardiso_repeated_perturbation_means_singular | Whether to assume that matrix is singular if elements were perturbed after recent symbolic factorization. | no |
pardiso_skip_inertia_check | Whether to pretend that inertia is correct. | no |
Pardiso (MKL) Linear Solver
Option | Description | Default |
---|---|---|
pardisomkl_matching_strategy | Matching strategy to be used by Pardiso | complete+2x2 |
pardisomkl_max_iterative_refinement_steps | Limit on number of iterative refinement steps. | 1 |
pardisomkl_msglvl | Pardiso message level | 0 |
pardisomkl_order | Controls the fill-in reduction ordering algorithm for the input matrix. | metis |
Options for expert users | ||
pardisomkl_redo_symbolic_fact_only_if_inertia_wrong | Toggle for handling case when elements were perturbed by Pardiso. | no |
pardisomkl_repeated_perturbation_means_singular | Whether to assume that matrix is singular if elements were perturbed after recent symbolic factorization. | no |
pardisomkl_skip_inertia_check | Whether to pretend that inertia is correct. | no |
Mumps Linear Solver
Option | Description | Default |
---|---|---|
mumps_mem_percent | Percentage increase in the estimated working space for MUMPS. | 1000 |
mumps_permuting_scaling | Controls permuting and scaling in MUMPS | 7 |
mumps_pivot_order | Controls pivot order in MUMPS | 7 |
mumps_pivtol | Pivot tolerance for the linear solver MUMPS. | 1e-06 |
mumps_pivtolmax | Maximum pivot tolerance for the linear solver MUMPS. | 0.1 |
mumps_print_level | Debug printing level for the linear solver MUMPS | 0 |
mumps_scaling | Controls scaling in MUMPS | 77 |
Options for expert users | ||
mumps_dep_tol | Threshold to consider a pivot at zero in detection of linearly dependent constraints with MUMPS. | 0 |
Detailed Options Description
accept_after_max_steps (advanced): Accept a trial point after maximal this number of steps even if it does not satisfy line search conditions. ↵
Setting this to -1 disables this option.
Range: {-1, ..., ∞}
Default: -1
accept_every_trial_step: Always accept the first trial step. ↵
Setting this option to "yes" essentially disables the line search and makes the algorithm take aggressive steps, without global convergence guarantees.
Range: yes, no
Default: no
acceptable_compl_inf_tol: "Acceptance" threshold for the complementarity conditions. ↵
Absolute tolerance on the complementarity. "Acceptable" termination requires that the max-norm of the (unscaled) complementarity is less than this threshold; see also acceptable_tol.
Range: (0, ∞]
Default: 0.01
acceptable_constr_viol_tol: "Acceptance" threshold for the constraint violation. ↵
Absolute tolerance on the constraint violation. "Acceptable" termination requires that the max-norm of the (unscaled) constraint violation is less than this threshold; see also acceptable_tol.
Range: (0, ∞]
Default: 0.01
acceptable_dual_inf_tol: "Acceptance" threshold for the dual infeasibility. ↵
Absolute tolerance on the dual infeasibility. "Acceptable" termination requires that the (max-norm of the unscaled) dual infeasibility is less than this threshold; see also acceptable_tol.
Range: (0, ∞]
Default: 1e+10
acceptable_iter: Number of "acceptable" iterates before triggering termination. ↵
If the algorithm encounters this many successive "acceptable" iterates (see "acceptable_tol"), it terminates, assuming that the problem has been solved to best possible accuracy given round-off. If it is set to zero, this heuristic is disabled.
Range: {0, ..., ∞}
Default: 0
acceptable_obj_change_tol: "Acceptance" stopping criterion based on objective function change. ↵
If the relative change of the objective function (scaled by Max(1,|f(x)|)) is less than this value, this part of the acceptable tolerance termination is satisfied; see also acceptable_tol. This is useful for the quasi-Newton option, which has trouble bringing down the dual infeasibility.
Range: [0, ∞]
Default: 1e+20
acceptable_tol: "Acceptable" convergence tolerance (relative). ↵
Determines which (scaled) overall optimality error is considered to be "acceptable". There are two levels of termination criteria. If the usual "desired" tolerances (see tol, dual_inf_tol etc) are satisfied at an iteration, the algorithm immediately terminates with a success message. On the other hand, if the algorithm encounters "acceptable_iter" many iterations in a row that are considered "acceptable", it will terminate before the desired convergence tolerance is met. This is useful in cases where the algorithm might not be able to achieve the "desired" level of accuracy.
Range: (0, ∞]
Default: 1e-06
adaptive_mu_globalization: Globalization strategy for the adaptive mu selection mode. ↵
To achieve global convergence of the adaptive version, the algorithm has to switch to the monotone mode (Fiacco-McCormick approach) when convergence does not seem to appear. This option sets the criterion used to decide when to do this switch. (Only used if option "mu_strategy" is chosen as "adaptive".)
value | meaning |
---|---|
kkt-error | nonmonotone decrease of kkt-error |
obj-constr-filter | 2-dim filter for objective and constraint violation |
never-monotone-mode | disables globalization |
Default: obj-constr-filter
adaptive_mu_kkt_norm_type (advanced): Norm used for the KKT error in the adaptive mu globalization strategies. ↵
When computing the KKT error for the globalization strategies, the norm to be used is specified with this option. Note, this option is also used in the QualityFunctionMuOracle.
value | meaning |
---|---|
1-norm | use the 1-norm (abs sum) |
2-norm-squared | use the 2-norm squared (sum of squares) |
max-norm | use the infinity norm (max) |
2-norm | use 2-norm |
Default: 2-norm-squared
adaptive_mu_kkterror_red_fact (advanced): Sufficient decrease factor for "kkt-error" globalization strategy. ↵
For the "kkt-error" based globalization strategy, the error must decrease by this factor to be deemed sufficient decrease.
Range: (0, 1)
Default: 0.9999
adaptive_mu_kkterror_red_iters (advanced): Maximum number of iterations requiring sufficient progress. ↵
For the "kkt-error" based globalization strategy, sufficient progress must be made for "adaptive_mu_kkterror_red_iters" iterations. If this number of iterations is exceeded, the globalization strategy switches to the monotone mode.
Range: {0, ..., ∞}
Default: 4
adaptive_mu_monotone_init_factor (advanced): Determines the initial value of the barrier parameter when switching to the monotone mode. ↵
When the globalization strategy for the adaptive barrier algorithm switches to the monotone mode and fixed_mu_oracle is chosen as "average_compl", the barrier parameter is set to the current average complementarity times the value of "adaptive_mu_monotone_init_factor".
Range: (0, ∞]
Default: 0.8
adaptive_mu_restore_previous_iterate (advanced): Indicates if the previous accepted iterate should be restored if the monotone mode is entered. ↵
When the globalization strategy for the adaptive barrier algorithm switches to the monotone mode, it can either start from the most recent iterate (no), or from the last iterate that was accepted (yes).
Range: yes, no
Default: no
alpha_for_y: Method to determine the step size for constraint multipliers (alpha_y). ↵
value | meaning |
---|---|
primal | use primal step size |
bound-mult | use step size for the bound multipliers (good for LPs) |
min | use the min of primal and bound multipliers |
max | use the max of primal and bound multipliers |
full | take a full step of size one |
min-dual-infeas | choose step size minimizing new dual infeasibility |
safer-min-dual-infeas | like "min_dual_infeas", but safeguarded by "min" and "max" |
primal-and-full | use the primal step size, and full step if delta_x ≤ alpha_for_y_tol |
dual-and-full | use the dual step size, and full step if delta_x ≤ alpha_for_y_tol |
acceptor | Call LSAcceptor to get step size for y |
Default: primal
alpha_for_y_tol: Tolerance for switching to full equality multiplier steps. ↵
This is only relevant if "alpha_for_y" is chosen "primal-and-full" or "dual-and-full". The step size for the equality constraint multipliers is taken to be one if the max-norm of the primal step is less than this tolerance.
Range: [0, ∞]
Default: 10
alpha_min_frac (advanced): Safety factor for the minimal step size (before switching to restoration phase). ↵
This is gamma_alpha in Eqn. (23) in the implementation paper.
Range: (0, 1)
Default: 0.05
alpha_red_factor (advanced): Fractional reduction of the trial step size in the backtracking line search. ↵
At every step of the backtracking line search, the trial step size is reduced by this factor.
Range: (0, 1)
Default: 0.5
barrier_tol_factor: Factor for mu in barrier stop test. ↵
The convergence tolerance for each barrier problem in the monotone mode is the value of the barrier parameter times "barrier_tol_factor". This option is also used in the adaptive mu strategy during the monotone mode. This is kappa_epsilon in implementation paper.
Range: (0, ∞]
Default: 10
bound_frac: Desired minimum relative distance from the initial point to bound. ↵
Determines how much the initial point might have to be modified in order to be sufficiently inside the bounds (together with "bound_push"). (This is kappa_2 in Section 3.6 of implementation paper.)
Range: (0, 0.5]
Default: 0.01
bound_mult_init_method: Initialization method for bound multipliers ↵
This option defines how the iterates for the bound multipliers are initialized. If "constant" is chosen, then all bound multipliers are initialized to the value of "bound_mult_init_val". If "mu-based" is chosen, then each value is initialized to the value of "mu_init" divided by the corresponding slack variable. This latter option might be useful if the starting point is close to the optimal solution.
value | meaning |
---|---|
constant | set all bound multipliers to the value of bound_mult_init_val |
mu-based | initialize to mu_init/x_slack |
Default: constant
bound_mult_init_val: Initial value for the bound multipliers. ↵
All dual variables corresponding to bound constraints are initialized to this value.
Range: (0, ∞]
Default: 1
bound_mult_reset_threshold: Threshold for resetting bound multipliers after the restoration phase. ↵
After returning from the restoration phase, the bound multipliers are updated with a Newton step for complementarity. Here, the change in the primal variables during the entire restoration phase is taken to be the corresponding primal Newton step. However, if after the update the largest bound multiplier exceeds the threshold specified by this option, the multipliers are all reset to 1.
Range: [0, ∞]
Default: 1000
bound_push: Desired minimum absolute distance from the initial point to bound. ↵
Determines how much the initial point might have to be modified in order to be sufficiently inside the bounds (together with "bound_frac"). (This is kappa_1 in Section 3.6 of implementation paper.)
Range: (0, ∞]
Default: 0.01
bound_relax_factor: Factor for initial relaxation of the bounds. ↵
Before start of the optimization, the bounds given by the user are relaxed. This option sets the factor for this relaxation. Additionally, the constraint violation tolerance constr_viol_tol is used to bound the relaxation by an absolute value. If it is set to zero, then the bound relaxation is disabled. See Eqn. (35) in the implementation paper. Note that the constraint violation reported by Ipopt at the end of the solution process does not include violations of the original (non-relaxed) variable bounds. See also option honor_original_bounds.
Range: [0, ∞]
Default: 1e-10
check_derivatives_for_naninf: Indicates whether it is desired to check for Nan/Inf in derivative matrices ↵
Activating this option will cause an error if an invalid number is detected in the constraint Jacobians or the Lagrangian Hessian. If this is not activated, the test is skipped, and the algorithm might proceed with invalid numbers and fail. If the test is activated and an invalid number is detected, the matrix is written to output with print_level corresponding to J_MOREDETAILED (7); so beware of large output!
Range: yes, no
Default: no
compl_inf_tol: Desired threshold for the complementarity conditions. ↵
Absolute tolerance on the complementarity. Successful termination requires that the max-norm of the (unscaled) complementarity is less than this threshold.
Range: (0, ∞]
Default: 0.0001
constr_mult_init_max: Maximum allowed least-square guess of constraint multipliers. ↵
Determines how large the initial least-square guesses of the constraint multipliers are allowed to be (in max-norm). If the guess is larger than this value, it is discarded and all constraint multipliers are set to zero. This option is also used when initializing the restoration phase. By default, "resto.constr_mult_init_max" (the one used in RestoIterateInitializer) is set to zero.
Range: [0, ∞]
Default: 1000
constr_mult_reset_threshold: Threshold for resetting equality and inequality multipliers after restoration phase. ↵
After returning from the restoration phase, the constraint multipliers are recomputed by a least square estimate. This option triggers when those least-square estimates should be ignored.
Range: [0, ∞]
Default: 0
constr_viol_tol: Desired threshold for the constraint and variable bound violation. ↵
Absolute tolerance on the constraint and variable bound violation. Successful termination requires that the max-norm of the (unscaled) constraint violation is less than this threshold. If option bound_relax_factor is not 0, then Ipopt relaxes given variable bounds. The value of constr_viol_tol is used to restrict the absolute amount of this bound relaxation.
Range: (0, ∞]
Default: 1e-06
constraint_violation_norm_type (advanced): Norm to be used for the constraint violation in the line search. ↵
Determines which norm should be used when the algorithm computes the constraint violation in the line search.
value | meaning |
---|---|
1-norm | use the 1-norm |
2-norm | use the 2-norm |
max-norm | use the infinity norm |
Default: 1-norm
corrector_compl_avrg_red_fact (advanced): Complementarity tolerance factor for accepting corrector step. ↵
This option determines the factor by which complementarity is allowed to increase for a corrector step to be accepted. Changing this option is experimental.
Range: (0, ∞]
Default: 1
corrector_type (advanced): The type of corrector steps that should be taken. ↵
If "mu_strategy" is "adaptive", this option determines what kind of corrector steps should be tried. Changing this option is experimental.
value | meaning |
---|---|
none | no corrector |
affine | corrector step towards mu=0 |
primal-dual | corrector step towards current mu |
Default: none
delta (advanced): Multiplier for constraint violation in the switching rule. ↵
See Eqn. (19) in the implementation paper.
Range: (0, ∞]
Default: 1
dependency_detection_with_rhs (advanced): Indicates if the right hand sides of the constraints should be considered in addition to gradients during dependency detection ↵
Range: yes, no
Default: no
dependency_detector (advanced): Indicates which linear solver should be used to detect linearly dependent equality constraints. ↵
This is experimental and does not work well.
value | meaning |
---|---|
none | don't check; no extra work at beginning |
mumps | use MUMPS |
Default: none
diverging_iterates_tol: Threshold for maximal value of primal iterates. ↵
If any component of the primal iterates exceeded this value (in absolute terms), the optimization is aborted with the exit message that the iterates seem to be diverging.
Range: (0, ∞]
Default: 1e+20
dual_inf_tol: Desired threshold for the dual infeasibility. ↵
Absolute tolerance on the dual infeasibility. Successful termination requires that the max-norm of the (unscaled) dual infeasibility is less than this threshold.
Range: (0, ∞]
Default: 1
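For illustration, termination thresholds such as the ones above are set in a GAMS/Ipopt option file (ipopt.opt) with one "keyword value" pair per line; the tightened values in this sketch are arbitrary example choices, not recommendations:
    constr_viol_tol 1e-8
    dual_inf_tol    1e-6
    compl_inf_tol   1e-6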
eta_phi (advanced): Relaxation factor in the Armijo condition. ↵
See Eqn. (20) in the implementation paper.
Range: (0, 0.5)
Default: 1e-08
evaluate_orig_obj_at_resto_trial: Determines if the original objective function should be evaluated at restoration phase trial points. ↵
Enabling this option makes the restoration phase algorithm evaluate the objective function of the original problem at every trial point encountered during the restoration phase, even if this value is not required. In this way, it is guaranteed that the original objective function can be evaluated without error at all accepted iterates; otherwise the algorithm might fail at a point where the restoration phase accepts an iterate that is good for the restoration phase problem, but not the original problem. On the other hand, if the evaluation of the original objective is expensive, this might be costly.
Range: yes, no
Default: yes
expect_infeasible_problem: Enable heuristics to quickly detect an infeasible problem. ↵
This option is meant to activate heuristics that may speed up the infeasibility determination if you expect that there is a good chance for the problem to be infeasible. In the filter line search procedure, the restoration phase is called more quickly than usual, and more reduction in the constraint violation is enforced before the restoration phase is left. If the problem is square, this option is enabled automatically.
Range: yes, no
Default: no
expect_infeasible_problem_ctol: Threshold for disabling "expect_infeasible_problem" option. ↵
If the constraint violation becomes smaller than this threshold, the "expect_infeasible_problem" heuristics in the filter line search are disabled. If the problem is square, this option is set to 0.
Range: [0, ∞]
Default: 0.001
expect_infeasible_problem_ytol: Multiplier threshold for activating "expect_infeasible_problem" option. ↵
If the max norm of the constraint multipliers becomes larger than this value and "expect_infeasible_problem" is chosen, then the restoration phase is entered.
Range: (0, ∞]
Default: 1e+08
fast_step_computation: Indicates if the linear system should be solved quickly. ↵
If enabled, the algorithm assumes that the linear system that is solved to obtain the search direction is solved sufficiently well. In that case, no residuals are computed to verify the solution and the computation of the search direction is a little faster.
Range: yes, no
Default: no
filter_margin_fact (advanced): Factor determining width of margin for obj-constr-filter adaptive globalization strategy. ↵
When using the adaptive globalization strategy, "obj-constr-filter", sufficient progress for a filter entry is defined as follows: (new obj) < (filter obj) - filter_margin_fact*(new constr-viol) OR (new constr-viol) < (filter constr-viol) - filter_margin_fact*(new constr-viol). For the description of the "kkt-error-filter" option see "filter_max_margin".
Range: (0, 1)
Default: 1e-05
filter_max_margin (advanced): Maximum width of margin in obj-constr-filter adaptive globalization strategy. ↵
Range: (0, ∞]
Default: 1
filter_reset_trigger (advanced): Number of iterations that trigger the filter reset. ↵
If the filter reset heuristic is active and the last rejected trial step size was rejected because of the filter for this many successive iterations, the filter is reset.
Range: {1, ..., ∞}
Default: 5
first_hessian_perturbation: Size of first x-s perturbation tried. ↵
The first value tried for the x-s perturbation in the inertia correction scheme. This is delta_0 in the implementation paper.
Range: (0, ∞]
Default: 0.0001
fixed_mu_oracle: Oracle for the barrier parameter when switching to fixed mode. ↵
Determines how the first value of the barrier parameter should be computed when switching to the "monotone mode" in the adaptive strategy. (Only considered if "adaptive" is selected for option "mu_strategy".)
value meaning probing Mehrotra's probing heuristic loqo LOQO's centrality rule quality-function minimize a quality function average_compl base on current average complementarity Default: average_compl
fixed_variable_treatment: Determines how fixed variables should be handled. ↵
The main difference between those options is that the starting point in the "make_constraint" case still has the fixed variables at their given values, whereas in the case "make_parameter(_nodual)" the functions are always evaluated with the fixed values for those variables. Also, for "relax_bounds", the fixing bound constraints are relaxed (according to "bound_relax_factor"). For all but "make_parameter_nodual", bound multipliers are computed for the fixed variables.
Values:
- make_parameter: Remove fixed variable from optimization variables
- make_parameter_nodual: Remove fixed variable from optimization variables and do not compute bound multipliers for fixed variables
- make_constraint: Add equality constraints fixing variables
- relax_bounds: Relax fixing bound constraints
Default: make_parameter
gamma_phi (advanced): Relaxation factor in the filter margin for the barrier function. ↵
See Eqn. (18a) in the implementation paper.
Range: (0, 1)
Default: 1e-08
gamma_theta (advanced): Relaxation factor in the filter margin for the constraint violation. ↵
See Eqn. (18b) in the implementation paper.
Range: (0, 1)
Default: 1e-05
hessian_approximation: Indicates what Hessian information is to be used. ↵
This determines which kind of information for the Hessian of the Lagrangian function is used by the algorithm.
value meaning exact Use second derivatives provided by the NLP. limited-memory Perform a limited-memory quasi-Newton approximation Default: exact
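Where exact second derivatives are expensive or not available, the limited-memory approximation can be combined with the limited_memory_* options described below; a minimal ipopt.opt sketch with illustrative values:
    hessian_approximation      limited-memory
    limited_memory_max_history 10
    limited_memory_update_type bfgs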
hessian_approximation_space (advanced): Indicates in which subspace the Hessian information is to be approximated. ↵
value meaning nonlinear-variables only in space of nonlinear variables. all-variables in space of all variables (without slacks) Default: nonlinear-variables
honor_original_bounds: Indicates whether final points should be projected into original bounds. ↵
Ipopt might relax the bounds during the optimization (see, e.g., option "bound_relax_factor"). This option determines whether the final point should be projected back into the user-provided original bounds after the optimization. Note that violations of constraints and complementarity reported by Ipopt at the end of the solution process are for the non-projected point.
Range: yes, no
Default: no
hsllib: Name of library containing HSL routines for load at runtime ↵
Range: string
Default: libhsl.so (Linux), libhsl.dylib (macOS), libhsl.dll (Windows)
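As a sketch of how a self-built HSL library is made available at runtime, hsllib is pointed to the library and an HSL-based solver is selected via linear_solver; the path below is a placeholder for a user-provided file (e.g., one built with the COIN-OR ThirdParty-HSL project):
    hsllib        /path/to/libcoinhsl.so
    linear_solver ma57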
inf_pr_output: Determines what value is printed in the "inf_pr" output column. ↵
Ipopt works with a reformulation of the original problem, where slacks are introduced and the problem might have been scaled. The choice "internal" prints out the constraint violation of this formulation. With "original" the true constraint violation in the original NLP is printed.
value meaning internal max-norm of violation of internal equality constraints original maximal constraint violation in original NLP Default: original
jacobian_regularization_exponent (advanced): Exponent for mu in the regularization for rank-deficient constraint Jacobians. ↵
This is kappa_c in the implementation paper.
Range: [0, ∞]
Default: 0.25
jacobian_regularization_value: Size of the regularization for rank-deficient constraint Jacobians. ↵
This is bar delta_c in the implementation paper.
Range: [0, ∞]
Default: 1e-08
kappa_d (advanced): Weight for linear damping term (to handle one-sided bounds). ↵
See Section 3.7 in implementation paper.
Range: [0, ∞]
Default: 1e-05
kappa_sigma (advanced): Factor limiting the deviation of dual variables from primal estimates. ↵
If the dual variables deviate from their primal estimates, a correction is performed. See Eqn. (16) in the implementation paper. Setting the value to less than 1 disables the correction.
Range: (0, ∞]
Default: 1e+10
kappa_soc (advanced): Factor in the sufficient reduction rule for second order correction. ↵
This option determines how much a second order correction step must reduce the constraint violation so that further correction steps are attempted. See Step A-5.9 of Algorithm A in the implementation paper.
Range: (0, ∞]
Default: 0.99
least_square_init_duals: Least square initialization of all dual variables ↵
If set to yes, Ipopt tries to compute least-square multipliers (considering ALL dual variables). If successful, the bound multipliers are possibly corrected to be at least bound_mult_init_val. This might be useful if the user doesn't know anything about the starting point, or for solving an LP or QP. This overwrites option "bound_mult_init_method".
value meaning no use bound_mult_init_val and least-square equality constraint multipliers yes overwrite user-provided point with least-square estimates Default: no
least_square_init_primal: Least square initialization of the primal variables ↵
If set to yes, Ipopt ignores the user provided point and solves a least square problem for the primal variables (x and s) to fit the linearized equality and inequality constraints. This might be useful if the user doesn't know anything about the starting point, or for solving an LP or QP.
value meaning no take user-provided point yes overwrite user-provided point with least-square estimates Default: no
limited_memory_aug_solver (advanced): Strategy for solving the augmented system for low-rank Hessian. ↵
value meaning sherman-morrison use Sherman-Morrison formula extended use an extended augmented system Default: sherman-morrison
limited_memory_init_val: Value for B0 in low-rank update. ↵
The starting matrix in the low rank update, B0, is chosen to be this multiple of the identity in the first iteration (when no updates have been performed yet), and is constantly chosen as this value, if "limited_memory_initialization" is "constant".
Range: (0, ∞]
Default: 1
limited_memory_init_val_max: Upper bound on value for B0 in low-rank update. ↵
The starting matrix in the low rank update, B0, is chosen to be this multiple of the identity in the first iteration (when no updates have been performed yet), and is constantly chosen as this value, if "limited_memory_initialization" is "constant".
Range: (0, ∞]
Default: 1e+08
limited_memory_init_val_min: Lower bound on value for B0 in low-rank update. ↵
The starting matrix in the low rank update, B0, is chosen to be this multiple of the identity in the first iteration (when no updates have been performed yet), and is constantly chosen as this value, if "limited_memory_initialization" is "constant".
Range: (0, ∞]
Default: 1e-08
limited_memory_initialization: Initialization strategy for the limited memory quasi-Newton approximation. ↵
Determines how the diagonal Matrix B_0 as the first term in the limited memory approximation should be computed.
value meaning scalar1 sigma = s^Ty/s^Ts scalar2 sigma = y^Ty/s^Ty scalar3 arithmetic average of scalar1 and scalar2 scalar4 geometric average of scalar1 and scalar2 constant sigma = limited_memory_init_val Default: scalar1
limited_memory_max_history: Maximum size of the history for the limited quasi-Newton Hessian approximation. ↵
This option determines the number of most recent iterations that are taken into account for the limited-memory quasi-Newton approximation.
Range: {0, ..., ∞}
Default: 6
limited_memory_max_skipping: Threshold for successive iterations where update is skipped. ↵
If the update is skipped more than this number of successive iterations, the quasi-Newton approximation is reset.
Range: {1, ..., ∞}
Default: 2
limited_memory_special_for_resto: Determines if the quasi-Newton updates should be special during the restoration phase. ↵
Until Nov 2010, Ipopt used a special update during the restoration phase, but it turned out that this does not work well. The new default uses the regular update procedure and it improves results. If for some reason you want to get back to the original update, set this option to "yes".
Range: yes, no
Default: no
limited_memory_update_type: Quasi-Newton update formula for the limited memory quasi-Newton approximation. ↵
value meaning bfgs BFGS update (with skipping) sr1 SR1 (not working well) Default: bfgs
line_search_method (advanced): Globalization method used in backtracking line search ↵
Only the "filter" choice is officially supported. But sometimes, good results might be obtained with the other choices.
value meaning filter Filter method cg-penalty Chen-Goldfarb penalty function penalty Standard penalty function Default: filter
linear_scaling_on_demand: Flag indicating that linear scaling is only done if it seems required. ↵
This option is only important if a linear scaling method (e.g., mc19) is used. If you choose "no", then the scaling factors are computed for every linear system from the start. This can be quite expensive. Choosing "yes" means that the algorithm will start the scaling method only when the solutions to the linear system do not seem good, and then use it until the end.
Range: yes, no
Default: yes
linear_solver: Linear solver used for step computations. ↵
Determines which linear algebra package is to be used for the solution of the augmented linear system (for obtaining the search directions). Note that MA27, MA57, MA86, and MA97 are included with a commercially supported GAMS/IpoptH license only. To use MA27, MA57, MA86, or MA97 with GAMS/Ipopt, or to use HSL_MA77, an HSL library needs to be provided by the user. To use Pardiso from pardiso-project.org, a Pardiso library needs to be provided by the user. ATTENTION: Before Ipopt 3.14 (GAMS 36), the value pardiso specified to use Pardiso from Intel MKL. With GAMS 36, this value has been renamed to pardisomkl. On GAMS systems for ARM64 CPUs, the option value pardisomkl is not available.
Values:
- ma27: IpoptH: use the Harwell routine MA27; Ipopt: load the Harwell routine MA27 from user-provided library
- ma57: IpoptH: use the Harwell routine MA57; Ipopt: load the Harwell routine MA57 from user-provided library
- ma77: load the Harwell routine HSL_MA77 from user-provided library
- ma86: IpoptH: use the Harwell routine HSL_MA86; Ipopt: load the Harwell routine HSL_MA86 from user-provided library
- ma97: IpoptH: use the Harwell routine HSL_MA97; Ipopt: load the Harwell routine HSL_MA97 from user-provided library
- pardiso: load the Pardiso package from pardiso-project.org from a user-provided library at runtime
- pardisomkl: use the Pardiso package from Intel MKL
- mumps: use the Mumps package
Default: ma27, if IpoptH, otherwise mumps
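A minimal sketch of how this choice reaches Ipopt from GAMS: the option goes into an ipopt.opt file in the working directory and the model is told to read it. The model name mymodel, the objective variable obj, and the choice ma97 (which needs GAMS/IpoptH or a user-provided HSL library) are placeholders.
    --- ipopt.opt ---
    linear_solver ma97

    --- GAMS model file ---
    option nlp = ipopt;
    mymodel.optfile = 1;
    solve mymodel using nlp minimizing obj;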
linear_system_scaling: Method for scaling the linear system. ↵
Determines the method used to compute symmetric scaling factors for the augmented system (see also the "linear_scaling_on_demand" option). This scaling is independent of the NLP problem scaling. Note that MC19 is included with a commercially supported GAMS/IpoptH license only. To use MC19 with GAMS/Ipopt, an HSL library needs to be provided by the user.
Values:
- none: no scaling will be performed
- mc19: IpoptH: use the Harwell routine MC19; Ipopt: load the Harwell routine MC19 from user-provided library
- slack-based: use the slack values
Default: mc19, if IpoptH, otherwise none
ma27_ignore_singularity (advanced): Whether to use MA27's ability to solve a linear system even if the matrix is singular. ↵
Setting this option to "yes" means that Ipopt will call MA27 to compute solutions for right hand sides, even if MA27 has detected that the matrix is singular (but is still able to solve the linear system). In some cases this might be better than using Ipopt's heuristic of small perturbation of the lower diagonal of the KKT matrix.
Range: yes, no
Default: no
ma27_la_init_factor: Real workspace memory for MA27. ↵
The initial real workspace memory = la_init_factor * memory required by unfactored system. Ipopt will increase the workspace size by ma27_meminc_factor if required.
Range: [1, ∞]
Default: 5
ma27_liw_init_factor: Integer workspace memory for MA27. ↵
The initial integer workspace memory = liw_init_factor * memory required by unfactored system. Ipopt will increase the workspace size by ma27_meminc_factor if required.
Range: [1, ∞]
Default: 5
ma27_meminc_factor: Increment factor for workspace size for MA27. ↵
If the integer or real workspace is not large enough, Ipopt will increase its size by this factor.
Range: [1, ∞]
Default: 2
ma27_pivtol: Pivot tolerance for the linear solver MA27. ↵
A smaller number pivots for sparsity, a larger number pivots for stability.
Range: (0, 1)
Default: 1e-08
ma27_pivtolmax: Maximum pivot tolerance for the linear solver MA27. ↵
Ipopt may increase pivtol as high as ma27_pivtolmax to get a more accurate solution to the linear system.
Range: (0, 1)
Default: 0.0001
ma27_print_level: Debug printing level for the linear solver MA27 ↵
0: no printing; 1: Error messages only; 2: Error and warning messages; 3: Error and warning messages and terse monitoring; 4: All information.
Range: {0, ..., 4}
Default: 0
ma27_skip_inertia_check (advanced): Whether to always pretend that inertia is correct. ↵
Setting this option to "yes" essentially disables inertia check. This option makes the algorithm non-robust and easily fail, but it might give some insight into the necessity of inertia control.
Range: yes, no
Default: no
ma57_automatic_scaling: Controls whether to enable automatic scaling in MA57 ↵
For higher reliability of the MA57 solver, you may want to set this option to yes. This is ICNTL(15) in MA57.
Range: yes, no
Default: no
ma57_block_size: Controls block size used by Level 3 BLAS in MA57BD ↵
This is ICNTL(11) in MA57.
Range: {1, ..., ∞}
Default: 16
ma57_node_amalgamation: Node amalgamation parameter ↵
This is ICNTL(12) in MA57.
Range: {1, ..., ∞}
Default: 16
ma57_pivot_order: Controls pivot order in MA57 ↵
This is ICNTL(6) in MA57.
Range: {0, ..., 5}
Default: 5
ma57_pivtol: Pivot tolerance for the linear solver MA57. ↵
A smaller number pivots for sparsity, a larger number pivots for stability.
Range: (0, 1)
Default: 1e-08
ma57_pivtolmax: Maximum pivot tolerance for the linear solver MA57. ↵
Ipopt may increase pivtol as high as ma57_pivtolmax to get a more accurate solution to the linear system.
Range: (0, 1)
Default: 0.0001
ma57_pre_alloc: Safety factor for work space memory allocation for the linear solver MA57. ↵
If 1 is chosen, the suggested amount of work space is used. However, choosing a larger number might avoid reallocation if the suggested values do not suffice.
Range: [1, ∞]
Default: 1.05
ma57_print_level: Debug printing level for the linear solver MA57 ↵
0: no printing; 1: Error messages only; 2: Error and warning messages; 3: Error and warning messages and terse monitoring; ≥4: All information.
Range: {0, ..., ∞}
Default: 0
ma57_small_pivot_flag: Handling of small pivots ↵
If set to 1, then when small entries defined by CNTL(2) are detected they are removed and the corresponding pivots placed at the end of the factorization. This can be particularly efficient if the matrix is highly rank deficient. This is ICNTL(16) in MA57.
Range: {0, ..., 1}
Default: 0
ma77_buffer_lpage: Number of scalars per MA77 in-core buffer page in the out-of-core solver MA77 ↵
Must be at most ma77_file_size.
Range: {1, ..., ∞}
Default: 4096
ma77_buffer_npage: Number of pages that make up MA77 buffer ↵
Number of pages of size buffer_lpage that exist in-core for the out-of-core solver MA77.
Range: {1, ..., ∞}
Default: 1600
ma77_file_size: Target size of each temporary file for MA77, scalars per type ↵
MA77 uses many temporary files; this option controls the size of each one. It is measured in the number of entries (int or double), NOT bytes.
Range: {1, ..., ∞}
Default: 2097152
ma77_maxstore: Maximum storage size for MA77 in-core mode ↵
If greater than zero, the maximum size of factors stored in core before out-of-core mode is invoked.
Range: {0, ..., ∞}
Default: 0
ma77_nemin: Node Amalgamation parameter ↵
Two nodes in elimination tree are merged if result has fewer than ma77_nemin variables.
Range: {1, ..., ∞}
Default: 8
ma77_order: Controls type of ordering used by MA77 ↵
value meaning amd Use the HSL_MC68 approximate minimum degree algorithm metis Use the MeTiS nested dissection algorithm (if available) Default: metis
ma77_print_level: Debug printing level for the linear solver MA77 ↵
<0: no printing; 0: Error and warning messages only; 1: Limited diagnostic printing; >1 Additional diagnostic printing.
Range: {-∞, ..., ∞}
Default: -1
ma77_small: Zero Pivot Threshold ↵
Any pivot less than ma77_small is treated as zero.
Range: [0, ∞]
Default: 1e-20
ma77_static: Static Pivoting Threshold ↵
See MA77 documentation. Either ma77_static=0.0 or ma77_static>ma77_small. ma77_static=0.0 disables static pivoting.
Range: [0, ∞]
Default: 0
ma77_u: Pivoting Threshold ↵
See MA77 documentation.
Range: [0, 0.5]
Default: 1e-08
ma77_umax: Maximum Pivoting Threshold ↵
Maximum value to which u will be increased to improve quality.
Range: [0, 0.5]
Default: 0.0001
ma86_nemin: Node Amalgamation parameter ↵
Two nodes in elimination tree are merged if result has fewer than ma86_nemin variables.
Range: {1, ..., ∞}
Default: 32
ma86_order: Controls type of ordering ↵
value meaning auto Try both AMD and MeTiS, pick best amd Use the HSL_MC68 approximate minimum degree algorithm metis Use the MeTiS nested dissection algorithm (if available) Default: auto
ma86_print_level: Debug printing level ↵
<0: no printing; 0: Error and warning messages only; 1: Limited diagnostic printing; >1 Additional diagnostic printing.
Range: {-∞, ..., ∞}
Default: -1
ma86_scaling: Controls scaling of matrix ↵
value meaning none Do not scale the linear system matrix mc64 Scale linear system matrix using MC64 mc77 Scale linear system matrix using MC77 [1,3,0] Default: mc64
ma86_small: Zero Pivot Threshold ↵
Any pivot less than ma86_small is treated as zero.
Range: [0, ∞]
Default: 1e-20
ma86_static: Static Pivoting Threshold ↵
See MA86 documentation. Either ma86_static=0.0 or ma86_static>ma86_small. ma86_static=0.0 disables static pivoting.
Range: [0, ∞]
Default: 0
ma86_u: Pivoting Threshold ↵
See MA86 documentation.
Range: [0, 0.5]
Default: 1e-08
ma86_umax: Maximum Pivoting Threshold ↵
Maximum value to which u will be increased to improve quality.
Range: [0, 0.5]
Default: 0.0001
ma97_nemin: Node Amalgamation parameter ↵
Two nodes in elimination tree are merged if result has fewer than ma97_nemin variables.
Range: {1, ..., ∞}
Default: 8
ma97_order: Controls type of ordering ↵
Values:
- auto: Use HSL_MA97 heuristic to guess best of AMD and METIS
- best: Try both AMD and MeTiS, pick best
- amd: Use the HSL_MC68 approximate minimum degree algorithm
- metis: Use the MeTiS nested dissection algorithm
- matched-auto: Use the HSL_MC80 matching with heuristic choice of AMD or METIS
- matched-metis: Use the HSL_MC80 matching based ordering with METIS
- matched-amd: Use the HSL_MC80 matching based ordering with AMD
Default: auto
ma97_print_level: Debug printing level ↵
<0: no printing; 0: Error and warning messages only; 1: Limited diagnostic printing; >1 Additional diagnostic printing.
Range: {-∞, ..., ∞}
Default: -1
ma97_scaling: Specifies strategy for scaling ↵
Values:
- none: Do not scale the linear system matrix
- mc30: Scale all linear system matrices using MC30
- mc64: Scale all linear system matrices using MC64
- mc77: Scale all linear system matrices using MC77 [1,3,0]
- dynamic: Dynamically select scaling according to the rules specified by the ma97_scalingX and ma97_switchX options.
Default: dynamic
ma97_scaling1 (advanced): First scaling. ↵
If ma97_scaling=dynamic, this scaling is used according to the trigger ma97_switch1. If ma97_switch2 is triggered it is disabled.
value meaning none No scaling mc30 Scale linear system matrix using MC30 mc64 Scale linear system matrix using MC64 mc77 Scale linear system matrix using MC77 [1,3,0] Default: mc64
ma97_scaling2 (advanced): Second scaling. ↵
If ma97_scaling=dynamic, this scaling is used according to the trigger ma97_switch2. If ma97_switch3 is triggered it is disabled.
value meaning none No scaling mc30 Scale linear system matrix using MC30 mc64 Scale linear system matrix using MC64 mc77 Scale linear system matrix using MC77 [1,3,0] Default: mc64
ma97_scaling3 (advanced): Third scaling. ↵
If ma97_scaling=dynamic, this scaling is used according to the trigger ma97_switch3.
value meaning none No scaling mc30 Scale linear system matrix using MC30 mc64 Scale linear system matrix using MC64 mc77 Scale linear system matrix using MC77 [1,3,0] Default: mc64
ma97_small: Zero Pivot Threshold ↵
Any pivot less than ma97_small is treated as zero.
Range: [0, ∞]
Default: 1e-20
ma97_solve_blas3 (advanced): Controls if blas2 or blas3 routines are used for solve ↵
value meaning no Use BLAS2 (faster, some implementations bit incompatible) yes Use BLAS3 (slower) Default: no
ma97_switch1 (advanced): First switch, determine when ma97_scaling1 is enabled. ↵
If ma97_scaling=dynamic, ma97_scaling1 is enabled according to this condition. If ma97_switch2 occurs this option is henceforth ignored.
Values:
- never: Scaling is never enabled.
- at_start: Scaling to be used from the very start.
- at_start_reuse: Scaling to be used on first iteration, then reused thereafter.
- on_demand: Scaling to be used after Ipopt requests an improved solution (i.e., iterative refinement has failed).
- on_demand_reuse: As on_demand, but reuse scaling from previous iteration.
- high_delay: Scaling to be used after more than 0.05*n delays are present.
- high_delay_reuse: Scaling to be used only when the previous iteration created more than 0.05*n additional delays, otherwise reuse scaling from previous iteration.
- od_hd: Combination of on_demand and high_delay.
- od_hd_reuse: Combination of on_demand_reuse and high_delay_reuse.
Default: od_hd_reuse
ma97_switch2 (advanced): Second switch, determine when ma97_scaling2 is enabled. ↵
If ma97_scaling=dynamic, ma97_scaling2 is enabled according to this condition. If ma97_switch3 occurs this option is henceforth ignored.
Values:
- never: Scaling is never enabled.
- at_start: Scaling to be used from the very start.
- at_start_reuse: Scaling to be used on first iteration, then reused thereafter.
- on_demand: Scaling to be used after Ipopt requests an improved solution (i.e., iterative refinement has failed).
- on_demand_reuse: As on_demand, but reuse scaling from previous iteration.
- high_delay: Scaling to be used after more than 0.05*n delays are present.
- high_delay_reuse: Scaling to be used only when the previous iteration created more than 0.05*n additional delays, otherwise reuse scaling from previous iteration.
- od_hd: Combination of on_demand and high_delay.
- od_hd_reuse: Combination of on_demand_reuse and high_delay_reuse.
Default: never
ma97_switch3 (advanced): Third switch, determine when ma97_scaling3 is enabled. ↵
If ma97_scaling=dynamic, ma97_scaling3 is enabled according to this condition.
Values:
- never: Scaling is never enabled.
- at_start: Scaling to be used from the very start.
- at_start_reuse: Scaling to be used on first iteration, then reused thereafter.
- on_demand: Scaling to be used after Ipopt requests an improved solution (i.e., iterative refinement has failed).
- on_demand_reuse: As on_demand, but reuse scaling from previous iteration.
- high_delay: Scaling to be used after more than 0.05*n delays are present.
- high_delay_reuse: Scaling to be used only when the previous iteration created more than 0.05*n additional delays, otherwise reuse scaling from previous iteration.
- od_hd: Combination of on_demand and high_delay.
- od_hd_reuse: Combination of on_demand_reuse and high_delay_reuse.
Default: never
ma97_u: Pivoting Threshold ↵
See MA97 documentation.
Range: [0, 0.5]
Default: 1e-08
ma97_umax: Maximum Pivoting Threshold ↵
See MA97 documentation.
Range: [0, 0.5]
Default: 0.0001
max_cpu_time: Maximum number of CPU seconds. ↵
A limit on CPU seconds that Ipopt can use to solve one problem. If during the convergence check this limit is exceeded, Ipopt will terminate with a corresponding message.
Range: (0, ∞]
Default: 1e+20
max_filter_resets (advanced): Maximal allowed number of filter resets ↵
A positive number enables a heuristic that resets the filter whenever in more than "filter_reset_trigger" successive iterations the last rejected trial step size was rejected because of the filter. This option determines the maximal number of resets that are allowed to take place.
Range: {0, ..., ∞}
Default: 5
max_hessian_perturbation: Maximum value of regularization parameter for handling negative curvature. ↵
In order to guarantee that the search directions are indeed proper descent directions, Ipopt requires that the inertia of the (augmented) linear system for the step computation has the correct number of negative and positive eigenvalues. The idea is that this guides the algorithm away from maximizers and makes Ipopt more likely to converge to first order optimal points that are minimizers. If the inertia is not correct, a multiple of the identity matrix is added to the Hessian of the Lagrangian in the augmented system. This parameter gives the maximum value of the regularization parameter. If a regularization of that size is not enough, the algorithm skips this iteration and goes to the restoration phase. This is delta_w^max in the implementation paper.
Range: (0, ∞]
Default: 1e+20
max_iter: Maximum number of iterations. ↵
The algorithm terminates with a message if the number of iterations exceeded this number.
Range: {0, ..., ∞}
Default: GAMS iterlim
max_refinement_steps: Maximum number of iterative refinement steps per linear system solve. ↵
Iterative refinement (on the full unsymmetric system) is performed for each right hand side. This option determines the maximum number of iterative refinement steps.
Range: {0, ..., ∞}
Default: 10
max_resto_iter (advanced): Maximum number of successive iterations in restoration phase. ↵
The algorithm terminates with an error message if the number of iterations successively taken in the restoration phase exceeds this number.
Range: {0, ..., ∞}
Default: 3000000
max_soc: Maximum number of second order correction trial steps at each iteration. ↵
Choosing 0 disables the second order corrections. This is p^{max} of Step A-5.9 of Algorithm A in the implementation paper.
Range: {0, ..., ∞}
Default: 4
max_soft_resto_iters (advanced): Maximum number of iterations performed successively in soft restoration phase. ↵
If the soft restoration phase is performed for more than so many iterations in a row, the regular restoration phase is called.
Range: {0, ..., ∞}
Default: 10
max_wall_time: Maximum number of walltime clock seconds. ↵
A limit on walltime clock seconds that Ipopt can use to solve one problem. If during the convergence check this limit is exceeded, Ipopt will terminate with a corresponding message.
Range: (0, ∞]
Default: GAMS reslim
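The iteration and time limits are usually taken from the GAMS options iterlim and reslim, but they can also be fixed directly in the option file; a sketch with arbitrary example values:
    max_iter      5000
    max_wall_time 600
    max_cpu_time  600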
mehrotra_algorithm: Indicates whether to do Mehrotra's predictor-corrector algorithm. ↵
If enabled, line search is disabled and the (unglobalized) adaptive mu strategy is chosen with the "probing" oracle, and "corrector_type=affine" is used without any safeguards; you should not set any of those options explicitly in addition. Also, unless otherwise specified, the values of "bound_push", "bound_frac", and "bound_mult_init_val" are set more aggressively, and "alpha_for_y=bound_mult" is set. Mehrotra's predictor-corrector algorithm usually works very well for LPs and convex QPs.
Range: yes, no
Default: no
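For an LP or convex QP, a single option-file line is typically enough to try this variant, since it adjusts the related barrier and initialization settings itself (shown only as an example):
    mehrotra_algorithm yes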
min_hessian_perturbation: Smallest perturbation of the Hessian block. ↵
The size of the perturbation of the Hessian block is never selected smaller than this value, unless no perturbation is necessary. This is delta_w^min in implementation paper.
Range: [0, ∞]
Default: 1e-20
min_refinement_steps: Minimum number of iterative refinement steps per linear system solve. ↵
Iterative refinement (on the full unsymmetric system) is performed for each right hand side. This option determines the minimum number of iterative refinements (i.e. at least "min_refinement_steps" iterative refinement steps are enforced per right hand side.)
Range: {0, ..., ∞}
Default: 1
mu_allow_fast_monotone_decrease (advanced): Allow skipping of barrier problem if barrier test is already met. ↵
Values:
- no: Take at least one iteration per barrier problem even if the barrier test is already met for the updated barrier parameter
- yes: Allow fast decrease of mu if the barrier test is met
Default: yes
mu_init: Initial value for the barrier parameter. ↵
This option determines the initial value for the barrier parameter (mu). It is only relevant in the monotone, Fiacco-McCormick version of the algorithm. (i.e., if "mu_strategy" is chosen as "monotone")
Range: (0, ∞]
Default: 0.1
mu_linear_decrease_factor: Determines linear decrease rate of barrier parameter. ↵
For the Fiacco-McCormick update procedure the new barrier parameter mu is obtained by taking the minimum of mu*"mu_linear_decrease_factor" and mu^"superlinear_decrease_power". This is kappa_mu in implementation paper. This option is also used in the adaptive mu strategy during the monotone mode.
Range: (0, 1)
Default: 0.2
mu_max: Maximum value for barrier parameter. ↵
This option specifies an upper bound on the barrier parameter in the adaptive mu selection mode. If this option is set, it overwrites the effect of mu_max_fact. (Only used if option "mu_strategy" is chosen as "adaptive".)
Range: (0, ∞]
Default: 100000
mu_max_fact: Factor for initialization of maximum value for barrier parameter. ↵
This option determines the upper bound on the barrier parameter. This upper bound is computed as the average complementarity at the initial point times the value of this option. (Only used if option "mu_strategy" is chosen as "adaptive".)
Range: (0, ∞]
Default: 1000
mu_min: Minimum value for barrier parameter. ↵
This option specifies the lower bound on the barrier parameter in the adaptive mu selection mode. By default, it is set to the minimum of 1e-11 and min("tol","compl_inf_tol")/("barrier_tol_factor"+1), which should be a reasonable value. (Only used if option "mu_strategy" is chosen as "adaptive".)
Range: (0, ∞]
Default: 1e-11
mu_oracle: Oracle for a new barrier parameter in the adaptive strategy. ↵
Determines how a new barrier parameter is computed in each "free-mode" iteration of the adaptive barrier parameter strategy. (Only considered if "adaptive" is selected for option "mu_strategy").
value meaning probing Mehrotra's probing heuristic loqo LOQO's centrality rule quality-function minimize a quality function Default: quality-function
mu_strategy: Update strategy for barrier parameter. ↵
Determines which barrier parameter update strategy is to be used.
value meaning monotone use the monotone (Fiacco-McCormick) strategy adaptive use the adaptive update strategy Default: adaptive
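A sketch of changing the barrier parameter handling: switching to the monotone Fiacco-McCormick update and starting from a smaller barrier parameter (the value 0.01 is purely illustrative):
    mu_strategy monotone
    mu_init     0.01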
mu_superlinear_decrease_power: Determines superlinear decrease rate of barrier parameter. ↵
For the Fiacco-McCormick update procedure the new barrier parameter mu is obtained by taking the minimum of mu*"mu_linear_decrease_factor" and mu^"superlinear_decrease_power". This is theta_mu in implementation paper. This option is also used in the adaptive mu strategy during the monotone mode.
Range: (1, 2)
Default: 1.5
mu_target: Desired value of complementarity. ↵
Usually, the barrier parameter is driven to zero and the termination test for complementarity is measured with respect to zero complementarity. However, in some cases it might be desired to have Ipopt solve the barrier problem for a strictly positive value of the barrier parameter. In this case, the value of "mu_target" specifies the final value of the barrier parameter, and the termination tests are then defined with respect to the barrier problem for this value of the barrier parameter.
Range: [0, ∞]
Default: 0
mumps_dep_tol (advanced): Threshold to consider a pivot at zero in detection of linearly dependent constraints with MUMPS. ↵
This is CNTL(3) in MUMPS.
Range: real
Default: 0
mumps_mem_percent: Percentage increase in the estimated working space for MUMPS. ↵
When significant extra fill-in is caused by numerical pivoting, larger values of mumps_mem_percent may help use the workspace more efficiently. On the other hand, if memory requirements are too large at the very beginning of the optimization, choosing a much smaller value for this option, such as 5, might reduce memory requirements.
Range: {0, ..., ∞}
Default: 1000
mumps_permuting_scaling: Controls permuting and scaling in MUMPS ↵
This is ICNTL(6) in MUMPS.
Range: {0, ..., 7}
Default: 7
mumps_pivot_order: Controls pivot order in MUMPS ↵
This is ICNTL(7) in MUMPS.
Range: {0, ..., 7}
Default: 7
mumps_pivtol: Pivot tolerance for the linear solver MUMPS. ↵
A smaller number pivots for sparsity, a larger number pivots for stability.
Range: [0, 1]
Default: 1e-06
mumps_pivtolmax: Maximum pivot tolerance for the linear solver MUMPS. ↵
Ipopt may increase pivtol as high as pivtolmax to get a more accurate solution to the linear system.
Range: [0, 1]
Default: 0.1
mumps_print_level: Debug printing level for the linear solver MUMPS ↵
0: no printing; 1: Error messages only; 2: Error, warning, and main statistic messages; 3: Error and warning messages and terse diagnostics; ≥4: All information.
Range: {0, ..., ∞}
Default: 0
mumps_scaling: Controls scaling in MUMPS ↵
This is ICNTL(8) in MUMPS.
Range: {-2, ..., 77}
Default: 77
neg_curv_test_reg: Whether to do the curvature test with the primal regularization (see Zavala and Chiang, 2014). ↵
value meaning yes use primal regularization with the inertia-free curvature test no use original IPOPT approach, in which the primal regularization is ignored Default: yes
neg_curv_test_tol: Tolerance for heuristic to ignore wrong inertia. ↵
If nonzero, incorrect inertia in the augmented system is ignored, and Ipopt tests if the direction is a direction of positive curvature. This tolerance is alpha_n in the paper by Zavala and Chiang (2014) and it determines when the direction is considered to be sufficiently positive. A value in the range of [1e-12, 1e-11] is recommended.
Range: [0, ∞]
Default: 0
nlp_scaling_constr_target_gradient (advanced): Target value for constraint function gradient size. ↵
If a positive number is chosen, the scaling factors for the constraint functions are computed so that the gradient has the max norm of the given size at the starting point. This overrides nlp_scaling_max_gradient for the constraint functions.
Range: [0, ∞]
Default: 0
nlp_scaling_max_gradient: Maximum gradient after NLP scaling. ↵
This is the gradient scaling cut-off. If the maximum gradient is above this value, then gradient based scaling will be performed. Scaling parameters are calculated to scale the maximum gradient back to this value. (This is g_max in Section 3.8 of the implementation paper.) Note: This option is only used if "nlp_scaling_method" is chosen as "gradient-based".
Range: (0, ∞]
Default: 100
nlp_scaling_method: Select the technique used for scaling the NLP. ↵
Selects the technique used for scaling the problem internally before it is solved. For user-scaling, the parameters come from the NLP.
Values:
- none: no problem scaling will be performed
- gradient-based: scale the problem so the maximum gradient at the starting point is nlp_scaling_max_gradient
- equilibration-based: scale the problem so that first derivatives are of order 1 at random points (GAMS/Ipopt: requires user-provided library with HSL routine MC19)
Default: gradient-based if GAMS scaleopt is not set, otherwise none
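An illustrative ipopt.opt fragment that keeps gradient-based scaling but lowers the target for the maximum gradient; both values are example choices, not recommendations:
    nlp_scaling_method       gradient-based
    nlp_scaling_max_gradient 10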
nlp_scaling_min_value: Minimum value of gradient-based scaling values. ↵
This is the lower bound for the scaling factors computed by gradient-based scaling method. If some derivatives of some functions are huge, the scaling factors will otherwise become very small, and the (unscaled) final constraint violation, for example, might then be significant. Note: This option is only used if "nlp_scaling_method" is chosen as "gradient-based".
Range: [0, ∞]
Default: 1e-08
nlp_scaling_obj_target_gradient (advanced): Target value for objective function gradient size. ↵
If a positive number is chosen, the scaling factor for the objective function is computed so that the gradient has the max norm of the given size at the starting point. This overrides nlp_scaling_max_gradient for the objective function.
Range: [0, ∞]
Default: 0
nu_inc (advanced): Increment of the penalty parameter. ↵
Range: (0, ∞]
Default: 0.0001
nu_init (advanced): Initial value of the penalty parameter. ↵
Range: (0, ∞]
Default: 1e-06
obj_max_inc (advanced): Determines the upper bound on the acceptable increase of barrier objective function. ↵
Trial points are rejected if they lead to an increase in the barrier objective function by more than obj_max_inc orders of magnitude.
Range: (1, ∞]
Default: 5
pardiso_iter_coarse_size (advanced): Maximum Size of Coarse Grid Matrix ↵
DPARM(3)
Range: {1, ..., ∞}
Default: 5000
pardiso_iter_dropping_factor (advanced): dropping value for incomplete factor ↵
DPARM(5)
Range: (0, 1)
Default: 0.5
pardiso_iter_dropping_schur (advanced): dropping value for sparsify schur complement factor ↵
DPARM(6)
Range: (0, 1)
Default: 0.1
pardiso_iter_inverse_norm_factor (advanced): ↵
DPARM(8)
Range: (1, ∞]
Default: 5e+06
pardiso_iter_max_levels (advanced): Maximum Size of Grid Levels ↵
DPARM(4)
Range: {1, ..., ∞}
Default: 10
pardiso_iter_max_row_fill (advanced): max fill for each row ↵
DPARM(7)
Range: {1, ..., ∞}
Default: 10000000
pardiso_iter_relative_tol (advanced): Relative Residual Convergence ↵
DPARM(2)
Range: (0, 1)
Default: 1e-06
pardiso_iterative (advanced): Switch for iterative solver in Pardiso library ↵
Range: yes, no
Default: no
pardiso_matching_strategy: Matching strategy to be used by Pardiso ↵
This is IPAR(13) in Pardiso manual.
value meaning complete Match complete (IPAR(13)=1) complete+2x2 Match complete+2x2 (IPAR(13)=2) constraints Match constraints (IPAR(13)=3) Default: complete+2x2
pardiso_max_droptol_corrections (advanced): Maximal number of decreases of drop tolerance during one solve. ↵
This is relevant only for iterative Pardiso options.
Range: {1, ..., ∞}
Default: 4
pardiso_max_iter (advanced): Maximum number of Krylov-Subspace Iteration ↵
DPARM(1)
Range: {1, ..., ∞}
Default: 500
pardiso_max_iterative_refinement_steps: Limit on number of iterative refinement steps. ↵
The solver performs at most as many iterative refinement steps as the absolute value of this option and stops the process once a satisfactory level of accuracy of the solution in terms of backward error is achieved. If negative, the accumulation of the residue uses extended precision real and complex data types. Perturbed pivots result in iterative refinement. The solver automatically performs two steps of iterative refinement when perturbed pivots are obtained during the numerical factorization and this option is set to 0.
Range: {-∞, ..., ∞}
Default: 0
pardiso_msglvl: Pardiso message level ↵
This is MSGLVL in the Pardiso manual.
Range: {0, ..., ∞}
Default: 0
pardiso_order: Controls the fill-in reduction ordering algorithm for the input matrix. ↵
Values:
- amd: minimum degree algorithm
- one
- metis: MeTiS nested dissection algorithm
- pmetis: parallel (OpenMP) version of MeTiS nested dissection algorithm
- four
- five
Default: metis
pardiso_redo_symbolic_fact_only_if_inertia_wrong (advanced): Toggle for handling case when elements were perturbed by Pardiso. ↵
value meaning no Always redo symbolic factorization when elements were perturbed yes Only redo symbolic factorization when elements were perturbed if also the inertia was wrong Default: no
pardiso_repeated_perturbation_means_singular (advanced): Whether to assume that matrix is singular if elements were perturbed after recent symbolic factorization. ↵
Range: yes, no
Default: no
pardiso_skip_inertia_check (advanced): Whether to pretend that inertia is correct. ↵
Setting this option to "yes" essentially disables inertia check. This option makes the algorithm non-robust and easily fail, but it might give some insight into the necessity of inertia control.
Range: yes, no
Default: no
pardisolib: Name of library containing Pardiso routines (from pardiso-project.org) for load at runtime ↵
Range: string
Default: libpardiso.so (Linux), libpardiso.dylib (macOS), libpardiso.dll (Windows)
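Analogously to hsllib, a user-provided Pardiso library from pardiso-project.org is loaded at runtime by naming it in pardisolib and selecting pardiso as linear_solver; the path is again a placeholder:
    pardisolib    /path/to/libpardiso.so
    linear_solver pardiso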
pardisomkl_matching_strategy: Matching strategy to be used by Pardiso ↵
This is IPAR(13) in Pardiso manual.
value meaning complete Match complete (IPAR(13)=1) complete+2x2 Match complete+2x2 (IPAR(13)=2) constraints Match constraints (IPAR(13)=3) Default: complete+2x2
pardisomkl_max_iterative_refinement_steps: Limit on number of iterative refinement steps. ↵
The solver performs at most as many iterative refinement steps as the absolute value of this option and stops the process once a satisfactory level of accuracy of the solution in terms of backward error is achieved. If negative, the accumulation of the residue uses extended precision real and complex data types. Perturbed pivots result in iterative refinement. The solver automatically performs two steps of iterative refinement when perturbed pivots are obtained during the numerical factorization and this option is set to 0.
Range: {-∞, ..., ∞}
Default: 1
pardisomkl_msglvl: Pardiso message level ↵
This is MSGLVL in the Pardiso manual.
Range: {0, ..., ∞}
Default: 0
pardisomkl_order: Controls the fill-in reduction ordering algorithm for the input matrix. ↵
Values:
- amd: minimum degree algorithm
- one: undocumented
- metis: MeTiS nested dissection algorithm
- pmetis: parallel (OpenMP) version of MeTiS nested dissection algorithm
Default: metis
pardisomkl_redo_symbolic_fact_only_if_inertia_wrong (advanced): Toggle for handling case when elements were perturbed by Pardiso. ↵
value meaning no Always redo symbolic factorization when elements were perturbed yes Only redo symbolic factorization when elements were perturbed if also the inertia was wrong Default: no
pardisomkl_repeated_perturbation_means_singular (advanced): Whether to assume that matrix is singular if elements were perturbed after recent symbolic factorization. ↵
Range: yes, no
Default: no
pardisomkl_skip_inertia_check (advanced): Whether to pretend that inertia is correct. ↵
Setting this option to "yes" essentially disables inertia check. This option makes the algorithm non-robust and easily fail, but it might give some insight into the necessity of inertia control.
Range: yes, no
Default: no
perturb_always_cd (advanced): Activate permanent perturbation of constraint linearization. ↵
Enabling this option leads to using the delta_c and delta_d perturbation for the computation of every search direction. Usually, it is only used when the iteration matrix is singular.
Range: yes, no
Default: no
perturb_dec_fact: Decrease factor for x-s perturbation. ↵
The factor by which the perturbation is decreased when a trial value is deduced from the size of the most recent successful perturbation. This is kappa_w^- in the implementation paper.
Range: (0, 1)
Default: 0.333333
perturb_inc_fact: Increase factor for x-s perturbation. ↵
The factor by which the perturbation is increased when a trial value was not sufficient - this value is used for the computation of all perturbations except for the first. This is kappa_w^+ in the implementation paper.
Range: (1, ∞]
Default: 8
perturb_inc_fact_first: Increase factor for x-s perturbation for very first perturbation. ↵
The factor by which the perturbation is increased when a trial value was not sufficient - this value is used for the computation of the very first perturbation and allows a different value for the first perturbation than that used for the remaining perturbations. This is bar_kappa_w^+ in the implementation paper.
Range: (1, ∞]
Default: 100
print_advanced_options (advanced): whether to print also advanced options ↵
Range: yes, no
Default: no
print_eval_error: Switch to enable printing information about function evaluation errors into the GAMS listing file. ↵
Range: no, yes
Default: yes
print_frequency_iter: Determines at which iteration frequency the summarizing iteration output line should be printed. ↵
Summarizing iteration output is printed every print_frequency_iter iterations, if at least print_frequency_time seconds have passed since last output.
Range: {1, ..., ∞}
Default: 1
print_frequency_time: Determines at which time frequency the summarizing iteration output line should be printed. ↵
Summarizing iteration output is printed if at least print_frequency_time seconds have passed since last output and the iteration number is a multiple of print_frequency_iter.
Range: [0, ∞]
Default: 0
print_info_string: Enables printing of additional info string at end of iteration output. ↵
This string contains some insider information about the current iteration. For details, look for "Diagnostic Tags" in the Ipopt documentation.
Range: yes, no
Default: no
print_level: Output verbosity level. ↵
Sets the default verbosity level for console output. The larger this value the more detailed is the output.
Range: {0, ..., 12}
Default: 5
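To get more detailed but less frequent iteration output, the verbosity and log-frequency options can be combined; the values below are arbitrary examples:
    print_level          6
    print_frequency_iter 10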
print_options_mode: format in which to print options documentation ↵
value meaning text Ordinary text latex LaTeX formatted doxygen Doxygen (markdown) formatted Default: text
print_timing_statistics: Switch to print timing statistics. ↵
If selected, the program will print the time spent on selected tasks. This implies timing_statistics=yes.
Range: yes, no
Default: no
quality_function_balancing_term (advanced): The balancing term included in the quality function for centrality. ↵
This determines whether a term is added to the quality function that penalizes situations where the complementarity is much smaller than dual and primal infeasibilities. Only used if option "mu_oracle" is set to "quality-function".
value meaning none no balancing term is added cubic Max(0,Max(dual_inf,primal_inf)-compl)^3 Default: none
quality_function_centrality (advanced): The penalty term for centrality that is included in quality function. ↵
This determines whether a term is added to the quality function to penalize deviation from centrality with respect to complementarity. The complementarity measure here is the xi in the Loqo update rule. Only used if option "mu_oracle" is set to "quality-function".
value meaning none no penalty term is added log complementarity * the log of the centrality measure reciprocal complementarity * the reciprocal of the centrality measure cubed-reciprocal complementarity * the reciprocal of the centrality measure cubed Default: none
quality_function_max_section_steps: Maximum number of search steps during direct search procedure determining the optimal centering parameter. ↵
The golden section search is performed for the quality function based mu oracle. Only used if option "mu_oracle" is set to "quality-function".
Range: {0, ..., ∞}
Default: 8
quality_function_norm_type (advanced): Norm used for components of the quality function. ↵
Only used if option "mu_oracle" is set to "quality-function".
value meaning 1-norm use the 1-norm (abs sum) 2-norm-squared use the 2-norm squared (sum of squares) max-norm use the infinity norm (max) 2-norm use 2-norm Default: 2-norm-squared
quality_function_section_qf_tol (advanced): Tolerance for the golden section search procedure determining the optimal centering parameter (in the function value space). ↵
The golden section search is performed for the quality function based mu oracle. Only used if option "mu_oracle" is set to "quality-function".
Range: [0, 1)
Default: 0
quality_function_section_sigma_tol (advanced): Tolerance for the section search procedure determining the optimal centering parameter (in sigma space). ↵
The golden section search is performed for the quality function based mu oracle. Only used if option "mu_oracle" is set to "quality-function".
Range: [0, 1)
Default: 0.01
recalc_y: Tells the algorithm to recalculate the equality and inequality multipliers as least square estimates. ↵
This asks the algorithm to recompute the multipliers whenever the current infeasibility is less than recalc_y_feas_tol. Choosing yes might be helpful when a quasi-Newton Hessian approximation is used. However, each recalculation requires an extra factorization of the linear system. If a limited memory quasi-Newton option is chosen, this is used by default.
value meaning no use the Newton step to update the multipliers yes use least-square multiplier estimates Default: no
recalc_y_feas_tol: Feasibility threshold for recomputation of multipliers. ↵
If recalc_y is chosen and the current infeasibility is less than this value, then the multipliers are recomputed.
Range: (0, ∞]
Default: 1e-06
replace_bounds (advanced): Whether all variable bounds should be replaced by inequality constraints ↵
This option must be set for the inexact algorithm.
Range: yes, no
Default: no
report_mininfeas_solution: Switch to report intermediate solution with minimal constraint violation to GAMS if the final solution is not feasible. ↵
This option makes it possible to obtain the most feasible solution found by Ipopt during the iteration process if it stops at a (locally) infeasible solution due to a limit (time, iterations, ...) or with a failure in the restoration phase.
Range: no, yes
Default: no
required_infeasibility_reduction: Required reduction of infeasibility before leaving restoration phase. ↵
The restoration phase algorithm is performed, until a point is found that is acceptable to the filter and the infeasibility has been reduced by at least the fraction given by this option.
Range: [0, 1)
Default: 0.9
residual_improvement_factor (advanced): Minimal required reduction of residual test ratio in iterative refinement. ↵
If the improvement of the residual test ratio made by one iterative refinement step is not better than this factor, iterative refinement is aborted.
Range: (0, ∞]
Default: 1
residual_ratio_max (advanced): Iterative refinement tolerance ↵
Iterative refinement is performed until the residual test ratio is less than this tolerance (or until "max_refinement_steps" refinement steps are performed).
Range: (0, ∞]
Default: 1e-10
residual_ratio_singular (advanced): Threshold for declaring linear system singular after failed iterative refinement. ↵
If the residual test ratio is larger than this value after failed iterative refinement, the algorithm pretends that the linear system is singular.
Range: (0, ∞]
Default: 1e-05
resto_failure_feasibility_threshold (advanced): Threshold for primal infeasibility to declare failure of restoration phase. ↵
If the restoration phase is terminated because of the "acceptable" termination criteria and the primal infeasibility is smaller than this value, the restoration phase is declared to have failed. The default value is actually 1e2*tol, where tol is the general termination tolerance.
Range: [0, ∞]
Default: 0
resto_penalty_parameter (advanced): Penalty parameter in the restoration phase objective function. ↵
This is the parameter rho in equation (31a) in the Ipopt implementation paper.
Range: (0, ∞]
Default: 1000
resto_proximity_weight (advanced): Weighting factor for the proximity term in restoration phase objective. ↵
This determines how the parameter zeta in equation (29a) in the implementation paper is computed. zeta here is resto_proximity_weight*sqrt(mu), where mu is the current barrier parameter.
Range: [0, ∞]
Default: 1
rho (advanced): Value in penalty parameter update formula. ↵
Range: (0, 1)
Default: 0.1
s_max (advanced): Scaling threshold for the NLP error. ↵
See paragraph after Eqn. (6) in the implementation paper.
Range: (0, ∞]
Default: 100
s_phi (advanced): Exponent for linear barrier function model in the switching rule. ↵
See Eqn. (19) in the implementation paper.
Range: (1, ∞]
Default: 2.3
s_theta (advanced): Exponent for current constraint violation in the switching rule. ↵
See Eqn. (19) in the implementation paper.
Range: (1, ∞]
Default: 1.1
sigma_max (advanced): Maximum value of the centering parameter. ↵
This is the upper bound for the centering parameter chosen by the quality function based barrier parameter update. Only used if option "mu_oracle" is set to "quality-function".
Range: (0, ∞]
Default: 100
sigma_min (advanced): Minimum value of the centering parameter. ↵
This is the lower bound for the centering parameter chosen by the quality function based barrier parameter update. Only used if option "mu_oracle" is set to "quality-function".
Range: [0, ∞]
Default: 1e-06
skip_corr_if_neg_curv (advanced): Whether to skip the corrector step in negative curvature iteration. ↵
The corrector step is not tried if negative curvature has been encountered during the computation of the search direction in the current iteration. This option is only used if "mu_strategy" is "adaptive". Changing this option is experimental.
Range: yes, no
Default: yes
skip_corr_in_monotone_mode (advanced): Whether to skip the corrector step during monotone barrier parameter mode. ↵
The corrector step is not tried if the algorithm is currently in the monotone mode (see also option "mu_strategy"). This option is only used if "mu_strategy" is "adaptive". Changing this option is experimental.
Range: yes, no
Default: yes
slack_bound_frac: Desired minimum relative distance from the initial slack to bound. ↵
Determines how much the initial slack variables might have to be modified in order to be sufficiently inside the inequality bounds (together with "slack_bound_push"). (This is kappa_2 in Section 3.6 of the implementation paper.)
Range: (0, 0.5]
Default: 0.01
slack_bound_push: Desired minimum absolute distance from the initial slack to bound. ↵
Determines how much the initial slack variables might have to be modified in order to be sufficiently inside the inequality bounds (together with "slack_bound_frac"). (This is kappa_1 in Section 3.6 of the implementation paper.)
Range: (0, ∞]
Default: 0.01
slack_move (advanced): Correction size for very small slacks. ↵
Due to numerical issues or the lack of an interior, the slack variables might become very small. If a slack becomes very small compared to machine precision, the corresponding bound is moved slightly. This parameter determines how large the move should be. Its default value is mach_eps^{3/4}, i.e., (2.22045e-16)^{3/4} ≈ 1.82e-12 for IEEE double precision. See also the end of Section 3.5 in the implementation paper; the actual implementation might differ somewhat.
Range: [0, ∞]
Default: 1.81899e-12
soc_method: Ways to apply second order correction ↵
This option determines the way to apply the second order correction: 0 is the method described in the implementation paper; 1 is a modified variant that adds alpha on the right-hand side of the x and s rows.
Range: {0, ..., 1}
Default: 0
soft_resto_pderror_reduction_factor: Required reduction in primal-dual error in the soft restoration phase. ↵
The soft restoration phase attempts to reduce the primal-dual error with regular steps. If the damped primal-dual step (damped only to satisfy the fraction-to-the-boundary rule) is not decreasing the primal-dual error by at least this factor, then the regular restoration phase is called. Choosing "0" here disables the soft restoration phase.
Range: [0, ∞]
Default: 0.9999
start_with_resto: Whether to switch to restoration phase in first iteration. ↵
Setting this option to "yes" forces the algorithm to switch to the feasibility restoration phase in the first iteration. If the initial point is feasible, the algorithm will abort with a failure.
Range: yes, no
Default: no
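To illustrate how the restoration phase options interact, the following option file fragment is a sketch (values chosen purely for illustration): it disables the soft restoration phase, so the regular restoration phase is entered directly, and demands a stronger infeasibility reduction before the restoration phase is left.

    * illustrative values only: tune the restoration phase
    soft_resto_pderror_reduction_factor 0
    required_infeasibility_reduction 0.99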
tau_min (advanced): Lower bound on fraction-to-the-boundary parameter tau. ↵
This is tau_min in the implementation paper. This option is also used in the adaptive mu strategy during the monotone mode.
Range: (0, 1)
Default: 0.99
theta_max_fact (advanced): Determines upper bound for constraint violation in the filter. ↵
The algorithmic parameter theta_max is determined as theta_max_fact times the maximum of 1 and the constraint violation at the initial point. Any point with a constraint violation larger than theta_max is unacceptable to the filter (see Eqn. (21) in the implementation paper).
Range: (0, ∞]
Default: 10000
theta_min_fact (advanced): Determines constraint violation threshold in the switching rule. ↵
The algorithmic parameter theta_min is determined as theta_min_fact times the maximum of 1 and the constraint violation at the initial point. The switching rule treats an iteration as an h-type iteration whenever the current constraint violation is larger than theta_min (see the paragraph before Eqn. (19) in the implementation paper).
Range: (0, ∞]
Default: 0.0001
timing_statistics: Indicates whether to measure time spent in components of Ipopt and NLP evaluation ↵
The overall algorithm time is unaffected by this option.
Range: yes, no
Default: no
tiny_step_tol (advanced): Tolerance for detecting numerically insignificant steps. ↵
If the search direction in the primal variables (x and s) is, in relative terms for each component, less than this value, the algorithm accepts the full step without line search. If this happens repeatedly, the algorithm will terminate with a corresponding exit message. The default value is 10 times machine precision.
Range: [0, ∞]
Default: 2.22045e-15
tiny_step_y_tol (advanced): Tolerance for quitting because of numerically insignificant steps. ↵
If the search direction in the primal variables (x and s) is, in relative terms for each component, repeatedly less than tiny_step_tol, and the step in the y variables is smaller than this threshold, the algorithm will terminate.
Range: [0, ∞]
Default: 0.01
tol: Desired convergence tolerance (relative). ↵
Determines the convergence tolerance for the algorithm. The algorithm terminates successfully if the (scaled) NLP error becomes smaller than this value and the (absolute) criteria according to "dual_inf_tol", "constr_viol_tol", and "compl_inf_tol" are met. This is epsilon_tol in Eqn. (6) in the implementation paper. See also "acceptable_tol" as a second termination criterion. Note that some other algorithmic features also use this quantity to determine thresholds, etc.
Range: (0, ∞]
Default: 1e-08
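Because "tol" only triggers successful termination together with the absolute criteria it references, the tolerances are usually adjusted as a group. The following ipopt.opt fragment is a minimal sketch with illustrative values that loosens the convergence test, for example for models where high accuracy is not needed:

    * illustrative values only: loosen the convergence test
    tol 1e-6
    constr_viol_tol 1e-6
    dual_inf_tol 1e-4
    compl_inf_tol 1e-4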
warm_start_bound_frac: same as bound_frac for the regular initializer ↵
Range: (0, 0.5]
Default: 0.001
warm_start_bound_push: same as bound_push for the regular initializer ↵
Range: (0, ∞]
Default: 0.001
warm_start_init_point: Warm-start for initial point ↵
Indicates whether this optimization should use a warm start initialization, where values of primal and dual variables are given (e.g., from a previous optimization of a related problem).
Range: no (do not use the warm start initialization), yes (use the warm start initialization)
Default: yes, if run on a modified model instance (e.g., from GUSS), otherwise no
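When a warm start is requested explicitly instead of relying on the GUSS-based default, the warm_start_* push and frac options listed around this entry control how far the supplied primal and dual values are perturbed into the interior. The fragment below is a sketch with illustrative values only, not a recommended setting:

    * illustrative values only: warm start from supplied primal/dual values
    warm_start_init_point yes
    warm_start_bound_push 1e-6
    warm_start_mult_bound_push 1e-6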
warm_start_mult_bound_push: same as mult_bound_push for the regular initializer ↵
Range: (0, ∞]
Default: 0.001
warm_start_mult_init_max: Maximum initial value for the equality multipliers. ↵
Range: real
Default: 1e+06
warm_start_slack_bound_frac: same as slack_bound_frac for the regular initializer ↵
Range: (0, 0.5]
Default: 0.001
warm_start_slack_bound_push: same as slack_bound_push for the regular initializer ↵
Range: (0, ∞]
Default: 0.001
warm_start_target_mu (advanced): ↵
Experimental!
Range: real
Default: 0
watchdog_shortened_iter_trigger: Number of shortened iterations that trigger the watchdog. ↵
If the number of successive iterations in which the backtracking line search did not accept the first trial point exceeds this number, the watchdog procedure is activated. Choosing "0" here disables the watchdog procedure.
Range: {0, ..., ∞}
Default: 10
watchdog_trial_iter_max: Maximum number of watchdog iterations. ↵
This option determines the number of trial iterations allowed before the watchdog procedure is aborted and the algorithm returns to the stored point.
Range: {1, ..., ∞}
Default: 3
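As a final illustration, the watchdog procedure can be switched off entirely from an option file by setting the trigger to 0, as noted in the description of watchdog_shortened_iter_trigger above:

    * disable the watchdog procedure
    watchdog_shortened_iter_trigger 0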