Recently, I’ve been learning about how SMT solvers are implemented. One of the most surprising things I’ve learned is that under the hood, many state-of-the-art SMT solvers use the simplex algorithm to solve satisfiability problems.

Until recently, all I knew about the simplex algorithm was what I vaguely remembered from reading about it in CLRS a long time ago^{1}: that it was the standard method for solving linear programming problems. Wikipedia has a nice example of such a problem:

Suppose that a farmer has a piece of farm land, say $L$ km², to be planted with either wheat or barley or some combination of the two. The farmer has a limited amount of fertilizer, $F$ kilograms, and insecticide, $P$ kilograms. Every square kilometer of wheat requires $F_1$ kilograms of fertilizer and $P_1$ kilograms of insecticide, while every square kilometer of barley requires $F_2$ kilograms of fertilizer and $P_2$ kilograms of insecticide. Let $S_1$ be the selling price of wheat per square kilometer, and $S_2$ be the selling price of barley. If we denote the area of land planted with wheat and barley by $x_1$ and $x_2$ respectively, then profit can be maximized by choosing optimal values for $x_1$ and $x_2$.

This optimization problem can be solved by maximizing a function of $x_1$ and $x_2$ — in particular, $S_1 \cdot x_1 + S_2 \cdot x_2$ — subject to constraints that capture what we know about the relationships between the other variables involved. Linear programming solvers, both open-source ones like GLPK and proprietary ones like Gurobi, use some version of the simplex algorithm to solve such problems. That’s what the simplex algorithm is for, or so I thought.
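To make the formulation concrete, here’s a sketch in Python with made-up numbers (none of these values come from the problem statement above), using a coarse grid search rather than simplex, just to show the objective and constraints in code:

```python
# Hypothetical numbers for the farmer problem (chosen purely for illustration):
L = 4           # total land available, km²
F, P = 10, 8    # fertilizer and insecticide available, kg
F1, P1 = 3, 1   # kg of fertilizer/insecticide per km² of wheat
F2, P2 = 2, 2   # kg of fertilizer/insecticide per km² of barley
S1, S2 = 5, 4   # selling price per km² of wheat and barley

# Brute-force grid search over candidate plantings (x1, x2).
best = None
steps = 100
for i in range(steps + 1):
    for j in range(steps + 1):
        x1, x2 = L * i / steps, L * j / steps
        if (x1 + x2 <= L                      # can't plant more land than we have
                and F1 * x1 + F2 * x2 <= F    # fertilizer budget
                and P1 * x1 + P2 * x2 <= P):  # insecticide budget
            profit = S1 * x1 + S2 * x2
            if best is None or profit > best[0]:
                best = (profit, x1, x2)

print(best)  # with these numbers: profit 18.0 at x1 = x2 = 2.0
```

A real LP solver finds the exact optimum directly; with these particular numbers it sits at the corner where the land and fertilizer constraints intersect.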

SAT and SMT solvers, on the other hand, solve satisfiability problems: problems where, given a formula with Boolean variables in it, we need to figure out whether or not there is some assignment of values to variables that will cause the expression to evaluate to “true”. (In the case of SMT solving, the formula is something fancier than just a Boolean formula, but the basic idea is the same.) Satisfiability is a decision problem, not an optimization problem. I would have thought that since SAT and SMT problems are decision problems and linear programming problems are optimization problems, SAT and SMT solvers and linear programming solvers wouldn’t have much in common. But that’s not the case!
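For a plain Boolean formula, the brute-force approach to the decision problem is just to try every assignment — exponential in the number of variables, which is exactly what real SAT solvers work hard to avoid. A toy sketch (the names here are my own):

```python
from itertools import product

def brute_force_sat(formula, variables):
    """Try every True/False assignment to the variables and return
    the first one that makes the formula true, or None if none does."""
    for values in product([False, True], repeat=len(variables)):
        assignment = dict(zip(variables, values))
        if formula(assignment):
            return assignment
    return None

# (a or b) and (not a or c) is satisfiable:
f = lambda v: (v["a"] or v["b"]) and (not v["a"] or v["c"])
model = brute_force_sat(f, ["a", "b", "c"])
print(model)  # → {'a': False, 'b': True, 'c': False}
```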

The simplex algorithm works in phases: it first tries to find what’s called a feasible solution to the problem, in which all the constraints are satisfied, and then it iterates toward an optimal solution. If you want to use it for solving a decision problem rather than an optimization problem, you can skip the second phase and stop as soon as you’ve either satisfied the constraints or determined that they are unsatisfiable.
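As a toy sketch of that distinction (exhaustive search, not simplex, with helper names of my own invention): the decision version of a problem can stop at the first point that satisfies every constraint, while the optimization version has to keep going through all of them.

```python
def first_feasible(constraints, candidates):
    # Decision version: stop as soon as any candidate satisfies
    # every constraint — the analogue of skipping the second phase.
    for point in candidates:
        if all(check(point) for check in constraints):
            return point
    return None  # no candidate works: report "infeasible"

def best_feasible(constraints, candidates, objective):
    # Optimization version: examine every feasible candidate and
    # keep the one with the highest objective value.
    feasible = (p for p in candidates if all(check(p) for check in constraints))
    return max(feasible, key=objective, default=None)

# Tiny usage example with made-up constraints over integers:
constraints = [lambda p: p >= 0, lambda p: 2 * p <= 10]
print(first_feasible(constraints, range(-3, 8)))              # → 0
print(best_feasible(constraints, range(-3, 8), lambda p: p))  # → 5
```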

What’s more, you might not even need to use an SMT solver to solve your SMT problem; a linear programming solver might be enough. Suppose that the particular theory you’re working with is linear real arithmetic. That means that in addition to just Boolean variables and Boolean operators in the formulas you’re trying to determine the satisfiability of, you can also have equations and inequalities containing real-valued variables, and operations on them such as addition, subtraction, and multiplication by constants. If your formula so happens to be a conjunction of these equations and inequalities, you’re in luck: the problem you’re trying to solve exactly coincides with the problem that LP solvers solve!
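In that conjunctive case you don’t even need much machinery to see the reduction. For two-variable constraints of the form $a x + b y \le c$, a very inefficient but exact satisfiability check can enumerate intersections of constraint boundaries, since a nonempty feasible region with at least one corner must contain such a vertex. A sketch (assuming the region, if nonempty, does have a vertex — not true of, say, a single constraint on its own):

```python
from fractions import Fraction
from itertools import combinations

def feasible(constraints):
    """Each constraint is a triple (a, b, c) meaning a*x + b*y <= c.
    Returns a satisfying point (x, y), or None. Assumes a nonempty
    feasible region contains a vertex where two boundaries cross."""
    for (a1, b1, c1), (a2, b2, c2) in combinations(constraints, 2):
        det = a1 * b2 - a2 * b1
        if det == 0:
            continue  # parallel boundary lines: no unique intersection
        # Cramer's rule for the intersection of the two boundary lines.
        x = Fraction(c1 * b2 - c2 * b1, det)
        y = Fraction(a1 * c2 - a2 * c1, det)
        if all(a * x + b * y <= c for a, b, c in constraints):
            return (x, y)
    return None

# x >= 0, y >= 0, x + y <= 4: satisfiable (the vertex (0, 0) works).
print(feasible([(-1, 0, 0), (0, -1, 0), (1, 1, 4)]))
# 0 <= y <= 1 together with x <= 1 and x >= 2: unsatisfiable.
print(feasible([(0, -1, 0), (0, 1, 1), (1, 0, 1), (-1, 0, -2)]))  # → None
```

An LP solver’s feasibility phase answers the same yes/no question, just vastly more efficiently and in any number of dimensions.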

There are various introductions to the simplex algorithm online, but the best treatment of it that I know of is in Chapter 29 of CLRS. If you don’t have a copy of CLRS handy, or if your copy is currently in use as a doorstop, you may or may not be able to find a PDF online by searching for terms such as, uh, “clrs” and “pdf”. ↩