An Overview of Math Programming Solvers

Posted on: 19 Sep 2022 · General Tips

A clear advantage of using a system like GAMS is the large and diverse set of solvers included with the system: a real solver zoo! But this raises the question: which solver(s) should I use? This overview is aimed at people just starting out with modeling in GAMS and should provide a basic understanding of how math programming solvers work, their similarities and differences, and how to choose one or more solvers for a particular project. N.B.: this overview is intended to be USEFUL, not exhaustive, definitive, or even precisely correct in every detail.

Solvers expect input in a format that is typically not easily readable by humans. Algebraic modeling languages such as GAMS facilitate the process of translating human-readable models into a format the solvers can understand. GAMS ships with a wide range of solvers, both free and commercial, each typically focusing on one or more problem types. These can broadly be categorized as linear (LP) or non-linear (NLP), plus the mixed integer extensions of the two (MILP and MINLP).

In the time since solvers first appeared on PC platforms, there have been tremendous advances in the underlying algorithms, which now enable routine solving of problems that were intractable not that long ago. For example, in the early 1990s, a state-of-the-art LP solver was able to solve models with several tens of thousands of variables on the hardware available at the time [1]. Now, models with millions of variables can be solved quickly. While some of this is certainly attributable to faster CPUs and larger memory, most of the speedup is due to improvements in the algorithms. Even though the underlying ideas are often the same, each solver vendor has their own additional bag of tricks and trade secrets to improve performance, such as efficient presolve routines that can substantially decrease the amount of computation left for the main solver algorithm.

Each solver therefore has its particular strengths and weaknesses when solving specific problem instances. One of the first takeaways here is that it is often not possible to predict the performance of a particular solver, given a particular problem instance. Therefore, the best approach is often to just try a few solvers on a representative sample of models, and then pick the best performing one. Luckily, the separation of model formulation and model solving provided by algebraic modeling languages such as GAMS makes this particularly easy. Switching a solver is as easy as changing a single line of model code.
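As a concrete illustration, here is a minimal sketch of what that single line looks like in GAMS. The model name "transport" and the objective variable "cost" are placeholders for an existing LP model:

    * "transport" and "cost" are illustrative names for an existing LP model
    option lp = cplex;
    solve transport using lp minimizing cost;

    * trying a different solver only requires changing the option statement
    option lp = highs;
    solve transport using lp minimizing cost;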

Fig. 1: Overview of the solvers included with GAMS. LP: Linear Program, QCP: Quadratically Constrained Program, NLP: Non-linear Program, MCP: Mixed Complementarity Program. Solvers that lie on the boundary between two problem types are well suited to solve both. Please note that this figure does not accurately reflect every solver's capabilities. Instead, it is meant to give a quick answer to the question: which solver should I try first, given a problem class?
This figure was updated on Feb 14, 2024: SCIP and SoPLEX were placed under an open source license in November 2023, COPT has been able to handle MIQCPs since GAMS 42, and Octeract was removed with GAMS 46.

Solver Overview

Linear Programming

In linear programming (LP), the objective function and all model constraints are linear functions of the decision variables. Many practical problems can be solved using LP techniques, for example in planning, production, or transportation.
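To make this concrete, here is a minimal, purely illustrative production planning LP written in GAMS; all names and coefficients are made up for the example:

    * hypothetical two-product planning problem: maximize profit subject
    * to a single machine-hour limit (all numbers are illustrative)
    positive variables x1 "units of product 1", x2 "units of product 2";
    variable profit;
    equations defobj "profit definition", hours "machine-hour limit";
    defobj.. profit =e= 5*x1 + 4*x2;
    hours..  2*x1 + 3*x2 =l= 12;

    model prodplan / all /;
    solve prodplan using lp maximizing profit;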

It was George Dantzig who, in the 1950s, pushed forward the development of LP codes based on the simplex algorithm. These early codes were used in the following decades, e.g. in the oil industry and the military, on mainframe hardware. The advent of the IBM PC in 1981 eventually democratized the use of these algorithms and enabled more users to take advantage of mathematical optimization; linear programming became widely used.

Early commercially successful LP solvers available for PCs in the 1980s and 1990s were XPRESS and CPLEX. XPRESS was originally developed in 1983 by DASH Optimization, run by Bob Daniel and Robert Ashford (who is now president of Optimization Direct, the developer of ODH-CPLEX). The software was sold to FICO in 2008 and is still one of the most successful solvers. CPLEX was started by Robert Bixby (now at Gurobi) in 1987 and is now owned by IBM. Throughout the years, both XPRESS and CPLEX have been improved continuously and are among the most performant LP solvers to this day.

Especially in the period between the late 1980s and the early 2000s, tremendous improvements in LP performance were made, despite the fact that many believed LP algorithms had matured to a point where they could not be improved much further. In an interesting study using CPLEX, Bixby found that the establishment of the dual simplex algorithm, improved linear algebra, and other algorithmic factors led to a roughly 3300-fold speedup in LP solver performance between 1988 and 2004 from better algorithms alone; factoring in the concurrent PC hardware improvements, the total speedup was an astonishing 5-million-fold [2]. In the following two decades, progress on LP algorithms has slowed down, but the commercial solvers have still improved continuously: a recent study by Koch et al. [3] found that between the early 2000s and 2020, the speedup due to better LP algorithms was on average around 9-fold, and around 180-fold when the hardware improvements during that time frame are factored in.

As of this writing, in addition to CPLEX and XPRESS, several high quality commercial LP solvers are available: (1) MOSEK, developed by Erling D. Andersen in Denmark, was released in 1999. (2) GUROBI was released in 2009 and developed by a team (Gu, Rothberg, Bixby) that had previously been instrumental in the development of CPLEX. (3) COPT by Cardinal Optimization, released in 2019, is the first commercially available LP solver from China and shows impressive benchmark results on par with or better than the long established products. The algorithms used in these modern LP solvers are the primal and dual simplex algorithms and interior point methods such as the barrier method. All commercial solvers offer several algorithms and can often decide by themselves which algorithm gives the best performance for a given problem. The commercial codes listed above can also all handle mixed integer linear programming, as described in the next section.

In addition to commercial solvers, some open source alternatives are available. The first, Cbc (“COIN-OR Branch and Cut”), has been around for many years and is part of each GAMS distribution. Cbc itself is a mixed integer solver (see the next section), but it ships with the Clp (“COIN-OR linear programming”) solver and uses that to solve LPs. Another open source LP solver is SoPLEX, developed by the Zuse Institute Berlin; it is also included in the GAMS distribution. The relatively new solver HiGHS, developed at the University of Edinburgh, shows great performance in benchmarks and appears to be a good alternative to Cbc and SoPLEX.

Mixed Integer Linear Programming

Many practical optimization problems require solving mixed integer models, where one or more of the variables are restricted to discrete values. Often these integer variables model “yes or no” decisions (“should we build a warehouse here, or not?”) or discrete choices in an industrial process (“how should we cut stock material to minimize waste?”). The addition of discrete variables typically makes models much harder to solve. All commercially available solvers mentioned in the previous section are able to solve mixed integer linear programs (MILPs) and take advantage of their strong LP performance to do so.
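As a minimal sketch of such a yes/no decision in GAMS, consider a hypothetical warehouse-location model in which a binary variable build(i) decides whether site i is opened; all sets, parameters, and numbers below are purely illustrative:

    set i "candidate sites" / site1*site3 /;
    parameter fcost(i) "fixed opening cost" / site1 100, site2 120, site3 90 /;
    parameter cap(i)   "capacity if opened" / site1  40, site2  60, site3 30 /;
    scalar demand "total demand to cover" / 70 /;

    binary variable build(i) "1 if a warehouse is built at site i";
    variable totalcost;
    equations defcost, cover;
    defcost.. totalcost =e= sum(i, fcost(i)*build(i));
    cover..   sum(i, cap(i)*build(i)) =g= demand;

    model warehouse / all /;
    solve warehouse using mip minimizing totalcost;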

The reason for this is that MILPs are most often solved using branching methods such as “branch and cut”, which create a search tree of subproblems: in a first step, at the root node, the MILP is simplified to an LP by allowing the integer variables to take on continuous values (the “LP relaxation”). The relaxed model is then solved using a fast LP algorithm. In some lucky cases, the relaxed integer variables take on integer values in the optimal solution and the problem is solved right at the root, but usually they do not. The model is then branched on one of the integer variables that takes on a fractional value in the relaxed solution. For the two child nodes at each branching point, a lower or upper bound on the branching variable, equal to the next higher or lower integer value, is enforced, creating another pair of LPs. These LP subproblems can be solved independently and in parallel, taking advantage of multicore CPUs.

The process of branching on potentially many variables creates a search tree that can become quite large. To narrow down on an optimal solution of the original problem, a pruning process is performed. During this step, leaves of the tree are investigated and pruned if the LP solution of the leaf is worse than the best solution currently known from some other part of the tree (“pruned by bound”), if the leaf LP is infeasible (“pruned by infeasibility”), or if the leaf produces an integer solution (“pruned by integrality”). If a leaf cannot be pruned, another branching point is added. Once there is no way to branch any further, the best integer solution found constitutes the optimal solution of the original problem. A nice explanation of the process can be found in this video on the Gurobi YouTube channel.
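If you want to look at the root relaxation yourself, GAMS lets you solve a MIP as its LP relaxation by using the rmip model type. A minimal sketch, reusing the hypothetical warehouse model from above:

    * solve the LP relaxation only: binary variables may become fractional
    solve warehouse using rmip minimizing totalcost;
    display build.l;

    * solve the actual MIP: the solver branches on fractional variables itself
    solve warehouse using mip minimizing totalcost;
    display build.l;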

An interesting commercial solver not mentioned in the previous section is ODH-CPLEX by Optimization Direct [4]. This solver is dedicated to solving MIPs and is unique in that it makes use of the symbolic names in the model to decompose it into submodels, which can be solved efficiently in parallel.

In addition to commercial solvers, some open source alternatives are available, such as the previously mentioned Cbc and HiGHS. Another open source MILP solver with good performance is SCIP (“Solving Constraint Integer Programs”). Just as with LP solvers, the solver companies have added many tricks to improve MILP performance. Koch et al. estimate that algorithmic improvements alone are responsible for a roughly 50-fold speedup of commercial MIP solvers over the last two decades, and that factoring in hardware improvements the overall speedup has been around 1000-fold during the same time frame.

Nonlinear Problems

Many physical or engineering problems cannot be modeled adequately using linear programming, since many of the equations describing physical phenomena are nonlinear in nature. The same is true for problems in finance and economics. These nonlinear programs (NLPs) can be much more challenging to solve than LP or MILP problems. The level of difficulty depends on some key properties of the problem at hand.

The most important property determining the solvability of an NLP is convexity. For a convex model, the line segment joining any two feasible points of the problem is fully contained in the feasible region, and (for a minimization problem) the objective function is convex. This implies that a local solution must be a global solution. (If we know something about curvature, we can talk about solution uniqueness as well.) Convexity also suggests that “sliding downhill” (i.e. moving in a direction that stays feasible and reduces the linearized objective) will eventually reach the optimal solution. Gradient information can conveniently be provided to the solver by GAMS.
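Here is a minimal sketch of a convex NLP in GAMS, with purely illustrative names and functions. Because the model is written algebraically, GAMS derives exact gradient information for the NLP solver automatically, without any derivative code from the modeler:

    variables x, y, z "objective value";
    equations defobj, lincon;
    defobj.. z =e= sqr(x - 1) + exp(y);
    lincon.. x + y =g= 2;

    model smallnlp / all /;
    option nlp = conopt;
    solve smallnlp using nlp minimizing z;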

There are several good solvers that are able to solve nonlinear problems. First, some of the LP/MIP solvers like CPLEX, GUROBI, XPRESS, or COPT can solve a (convex) subclass of NLPs in which only quadratic nonlinearities occur, the so-called “quadratically constrained programs” (QCPs). MOSEK can also solve these problems, but is particularly good at solving conic problems. Many quadratically constrained problems can be reformulated as conic problems and solved elegantly this way.

Going beyond QCPs to more general NLPs, other solvers are required. The oldest commercially available one is MINOS [5], which solves successive subproblems of the original NLP in which the nonlinear constraints are replaced by linearized versions. MINOS was developed by Bruce Murtagh and Michael Saunders in the 1980s and is still relevant and competitive today. Another NLP solver, CONOPT, released in 1985 by Arne Drud at ARKI Consulting, uses the generalized reduced gradient (GRG) algorithm, which is particularly useful for solving very nonlinear problems where other methods have difficulty finding feasible solutions. Yet another NLP solver, SNOPT, was developed by Philip Gill, Walter Murray and Michael Saunders at Stanford and UCSD and uses a sequential quadratic programming method to solve large scale NLPs. It is useful for problems where the evaluation of gradients is computationally expensive, since it uses fewer function evaluations than CONOPT or MINOS. In 2001, KNITRO was developed by Richard Waltz, Jorge Nocedal, Todd Plantenga and Richard Byrd; it is a very powerful solver for large scale NLPs. An open source alternative is the free solver IPOPT, also available from GAMS as “IPOPTH”, a version that includes commercial high performance routines for the linear systems that have to be solved internally.

Mixed Integer Nonlinear Problems

Just as an LP is turned into a MILP by adding one or more discrete variables to the model, the same can be done with nonlinear programs.

There are several levels of difficulty here, depending on the order of non-linearity. If you turn a quadratically constrained problem into a mixed integer quadratically constrained problem (MIQCP), often the same solvers that can handle MILPs and QCPs can also handle these problems: CPLEX, GUROBI, XPRESS, MOSEK and KNITRO are good solvers to try first for MIQCPs.

When you add integer variables to a general nonlinear problem, you create a mixed integer nonlinear program (MINLP), and the mechanisms provided by the solvers from the MILP camp no longer suffice. Just as with a non-convex NLP, the system can have several local optima, and in larger models it can be very difficult or even impossible to know whether an identified solution is in fact a global solution. DICOPT, developed by Duran & Grossmann in the 1980s [6], was the first commercial solver for MINLP problems. It was followed by alphaECP (Westerlund et al.) in the 1990s. Good open source alternatives for MINLPs are SCIP and the relatively new SHOT, developed by Andreas Lundell and Jan Kronqvist and released in 2020. A webinar recording explaining the underlying principles of this solver and how to use it with GAMS is available on YouTube.

Similarly to the progress with MILP algorithms, in the last two decades there have been steady improvements in MINLP solver algorithms, which include convexification of non-convex problems, decomposition techniques and interval methods.

There are specialized solvers for finding global solutions of nonconvex problems. Similarly to the MILP solvers explained above, these solvers subdivide the original model into smaller submodels that can be solved with existing algorithms. BARON, developed by Nick Sahinidis at The Optimization Firm, implements a “branch and bound” type algorithm, explained in Ryoo & Sahinidis (1995) [7]. BARON has been part of the GAMS distribution since 2001 and is one of the best performing of these global solvers. The second established and high quality global optimization solver is LINDO, which uses “branch and cut” mechanisms and can handle non-smooth functions such as abs(), which are often problematic since they cannot be differentiated at all points. Another branch-and-bound global optimization solver is ANTIGONE, developed by Floudas & Misener [8]. ANTIGONE exploits special structures to solve convex relaxations of the non-convex original problem and hands off solving the relaxed subproblems to CPLEX and CONOPT. Another solid choice with good performance for global optimization is the aforementioned SCIP. In addition to the established global optimization solvers listed above, there is a new entrant to the market called Octeract, which is particularly good at making use of parallel processing on modern CPU architectures.

Mixed Complementarity Problems

The MCP problem type (prevalent in the field of economics) is structured differently from the problem types in the previous sections. An MCP has no objective and no constraints in the usual sense. Instead, an MCP is a set of complementarity conditions: each variable is matched with a function that complements it, i.e. is “perpendicular” to it with respect to the variable's bounds. GAMS ships with the dedicated MCP solvers PATH, MILES, and NLPEC. In addition, KNITRO has gained the ability to solve MCP problems in recent releases.
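In GAMS, this matching is expressed by pairing each equation with its complementary variable in the model statement. A minimal, purely illustrative market-equilibrium sketch (all names and numbers are made up):

    * q: production level, p: market price (both non-negative)
    positive variables q "quantity", p "price";
    equations zeroprofit "marginal cost vs. price", clearing "supply meets demand";

    * if q > 0, price equals marginal cost 1 + q
    zeroprofit.. 1 + q =g= p;
    * if p > 0, supply q equals demand (10 - p)/2
    clearing..   2*q =g= 10 - p;

    * the equation.variable pairs define the complementarity conditions
    model market / zeroprofit.q, clearing.p /;
    option mcp = path;
    solve market using mcp;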

How to Choose a Solver

While digesting the above paragraphs, you might have asked yourself how on earth you can possibly decide which of the many solvers is the right one to try. Fortunately, with an algebraic modeling system such as GAMS this process is not as hard as it seems. Here is a suggested workflow for those who are just starting out with a new project and do not yet have a GAMS license:

  1. Start with a GAMS demo license (https://gams.com/try_gams/).

  2. Once your prototype exceeds the size limitations of the demo license, you can upgrade to a GAMS/Base license, which includes a range of open source solvers. These might be sufficient for your problem.

  3. Once you hit the limits on the free solvers (e.g. in terms of speed, robustness, or solution quality), it might be time to upgrade to a high performance commercial solver. We can provide you with time limited evaluation licenses for our commercial solvers, so you can try out different ones and then purchase the one that performs best for your problem.

The ability to quickly change solvers with a single line of code is a huge benefit of GAMS, and you should make use of it! There are publicly available benchmarks that favor some solvers over others, but it is really important to keep in mind that your own model is not part of the benchmark and might behave differently.

Refer to figure 1 to narrow down to the solvers that are recommended for your problem type.


  1. Robert E. Bixby, “Solving Real-World Linear Programs: A Decade and More of Progress,” Operations Research 50, no. 1 (February 2002): 3–15, https://doi.org/10.1287/opre.50.1.3.17780

  2. Robert E. Bixby, “A Brief History of Linear and Mixed-Integer Programming Computation,” Documenta Mathematica, 2012, 16, https://www.math.uni-bielefeld.de/documenta/vol-ismp/25_bixby-robert.pdf

  3. Thorsten Koch et al., “Progress in Mathematical Programming Solvers from 2001 to 2020,” EURO Journal on Computational Optimization 10 (January 1, 2022): 100031, https://doi.org/10.1016/j.ejco.2022.100031

  4. http://www.optimizationdirect.com/

  5. https://web.stanford.edu/group/SOL/algo.html

  6. Marco A. Duran and Ignacio E. Grossmann, “An Outer-Approximation Algorithm for a Class of Mixed-Integer Nonlinear Programs,” Mathematical Programming 36, no. 3 (October 1, 1986): 307–39, https://doi.org/10.1007/BF02592064

  7. H. S. Ryoo and N. V. Sahinidis, “Global Optimization of Nonconvex NLPs and MINLPs with Applications in Process Design,” Computers & Chemical Engineering 19, no. 5 (May 1, 1995): 551–66, https://doi.org/10.1016/0098-1354(94)00097-2

  8. Ruth Misener and Christodoulos A. Floudas, “Global Optimization of Mixed-Integer Quadratically-Constrained Quadratic Programs (MIQCQP) through Piecewise-Linear and Edge-Concave Relaxations,” Mathematical Programming 136, no. 1 (December 2012): 155–82, https://doi.org/10.1007/s10107-012-0555-6