Formulating good constraints
Good formulations for Branch-and-Bound.
Let \(\mathcal{F}=\left\{\mathbf{x}_{1}, \ldots, \mathbf{x}_{k}\right\}\) be the set of feasible integer solutions to a particular integer optimization problem. Assume that the feasible set is bounded and, therefore, that \(\mathcal F\) is finite.
Consider the convex hull of \(\mathcal F\), that is:
\[conv(\mathcal F)=\left\{\mathbf{x}: \mathbf{x}=\sum_{i=1}^{k} \lambda_{i} \mathbf{x}_{i},\ \sum_{i=1}^{k} \lambda_{i}=1,\ \lambda_{i} \geq 0,\ i=1, \ldots, k\right\} \]
\(conv(\mathcal F)\) is a polyhedron with integer extreme points. If we knew \(conv(\mathcal F)\) explicitly we could solve the integer optimization problem as an LP.
Our goal in formulating "good constraints" is to come up with an LP relaxation whose feasible region \(P\) is as close as possible to \(conv(\mathcal F)\), so that the LP relaxation bound is tight.
The quality of a formulation of an integer optimization problem with feasible solution set \(\mathcal F\) can be judged by the closeness of the feasible set of its linear relaxation to the convex hull of \(\mathcal F\).
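To make this concrete, let \(P_{1}\) and \(P_{2}\) be the feasible regions of two LP relaxations of the same problem with \(conv(\mathcal F) \subseteq P_{2} \subseteq P_{1}\). For a maximization problem with objective \(\mathbf{c}^{T} \mathbf{x}\),
\[\max _{\mathbf{x} \in \mathcal{F}} \mathbf{c}^{T} \mathbf{x} \leq \max _{\mathbf{x} \in P_{2}} \mathbf{c}^{T} \mathbf{x} \leq \max _{\mathbf{x} \in P_{1}} \mathbf{c}^{T} \mathbf{x}, \]so the tighter relaxation \(P_{2}\) never gives a worse bound.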
Acting on the decision variables
- Introducing extra \(0-1\) binary variables can be useful in the branching process.
Assume \(l \in\{North, South\}\) and \(p \in\{laser, xray\}\); we have:
\[\sum_{l} \sum_{p} \delta_{l p}=1 \]Introduce a new binary variable \(\delta\) such that:
\[\begin{aligned} &\sum_{p} \delta_{\text {North}, p}=\delta \\ &\sum_{p} \delta_{\text {South}, p}=1-\delta \end{aligned} \]When branching, fixing \(\delta=0\) forces both \(\delta_{North,laser}\) and \(\delta_{North,xray}\) to 0 at once, so there is no need to branch on them individually and no additional nodes are created (see the sketch after this list).
- Additional integer variables may be useful if they act as slack variables.
For a constraint \(\mathbf{a}^{T} \mathbf{x} \leq b\) with integer data, the slack \(u = b - \mathbf{a}^{T} \mathbf{x}\) is automatically integer; it might nevertheless be advantageous to declare \(u\) explicitly as integer and to give it priority in the branching process. Once its value has been fixed in the branching process, the constraint restricts the feasible region of the LP relaxation by changing the right-hand side to \(b-u\), thus shrinking the feasible region.
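As an illustration of the first bullet, here is a minimal modeling sketch assuming the PuLP library; the problem name, variable names, and the omitted objective are hypothetical, and only the linking constraints mirror the \(\delta_{lp}\) example above.

```python
# Sketch only: the delta_{l,p} choice variables plus the aggregate branching
# variable delta; assumes the PuLP library, objective omitted.
from pulp import LpProblem, LpVariable, LpMinimize, lpSum, LpBinary

locations = ["North", "South"]
machines = ["laser", "xray"]

prob = LpProblem("site_and_machine_choice", LpMinimize)
d = {(l, p): LpVariable(f"delta_{l}_{p}", cat=LpBinary)
     for l in locations for p in machines}
delta = LpVariable("delta_north", cat=LpBinary)  # aggregate: 1 iff North is chosen

prob += lpSum(d.values()) == 1                               # pick exactly one (l, p) pair
prob += lpSum(d["North", p] for p in machines) == delta      # North  <-> delta = 1
prob += lpSum(d["South", p] for p in machines) == 1 - delta  # South  <-> delta = 0

# Branching on `delta` first (via solver branching priorities, where supported)
# fixes both North variables or both South variables in a single node.
```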
Acting on the constraints
Cuts.
- Expansion.
Expand the following compact constraint
\[\delta_{1}+\delta_{2}+\cdots+\delta_{n} \leq n \delta \]into
\[\begin{aligned} &\delta_{1} \leq \delta \\ &\delta_{2} \leq \delta \\ &\vdots \\ &\delta_{n} \leq \delta \end{aligned} \]The feasible region of the LP relaxation of the expanded form is smaller than that of the compact constraint: with \(n=3\), \(\delta_{1}=1\) and \(\delta_{2}=\delta_{3}=0\), for example, the compact constraint allows \(\delta=1/3\), while the expanded form forces \(\delta \geq 1\).
- Cover.
Consider the following pure \(0-1\) constraint, where all coefficients \(a_{i}>0\):
\[a_{1} \delta_{1}+a_{2} \delta_{2}+\cdots+a_{n} \delta_{n} \leq a_{0} \]A subset \(\{i_1,...,i_r\}\) of \(\{1,...,n\}\) is said to be a cover if:
\[a_{i_{1}}+a_{i_{2}}+\cdots+a_{i_{r}}>a_{0} \]A cover is said to be a minimal cover if no proper subset of it is itself a cover.
To extend a minimal cover \(MC=\{i_{1},\ldots,i_{r}\}\), we could:
- Choose the largest \(a_{i_j}\) where \(i_j\in MC\)
- Take the set of indices \(\{i_k|i_k\notin MC,a_{i_k}\geq a_{i_j}\}\).
- Add this set of indices (say \(s\) of them) to \(MC\), giving the extended cover \(\{i_{1},\ldots,i_{r+s}\}\).
Then we can add the extended cover inequality:
\[\delta_{i_{1}}+\delta_{i_{2}}+\cdots+\delta_{i_{r+s}} \leq r-1 \]
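Here is a sketch of the cover-extension recipe in plain Python; the greedy construction of the initial minimal cover and the example data are illustrative choices, not the only way to obtain a cover.

```python
# Sketch: find a minimal cover for a_1*d_1 + ... + a_n*d_n <= a_0 (all a_i > 0),
# then extend it as described above; indices are 0-based.

def minimal_cover(a, a0):
    """Build a cover from the smallest coefficients, then shrink it until
    removing any single member would destroy the cover property."""
    cover, total = [], 0
    for i in sorted(range(len(a)), key=lambda i: a[i]):   # ascending coefficients
        cover.append(i)
        total += a[i]
        if total > a0:
            break
    for i in sorted(cover, key=lambda i: a[i], reverse=True):
        if total - a[i] > a0:          # still a cover without i -> i is redundant
            cover.remove(i)
            total -= a[i]
    return sorted(cover)                # assumes the constraint admits a cover

def extend_cover(a, cover):
    """Add every index outside the cover whose coefficient is at least the
    largest coefficient inside the cover."""
    largest = max(a[i] for i in cover)
    return sorted(cover + [i for i in range(len(a))
                           if i not in cover and a[i] >= largest])

a, a0 = [9, 8, 4, 3, 3], 9
C = minimal_cover(a, a0)   # [2, 3, 4]: 4 + 3 + 3 = 10 > 9, and it is minimal
E = extend_cover(a, C)     # [0, 1, 2, 3, 4]: coefficients 9 and 8 also join
r = len(C)
print(f"cut: sum of delta_i over {E} <= {r - 1}")
```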
Cuts should be considered beneficial only if the computational effort required to:
- derive the cuts and
- solve the modified problem
does not exceed the computational effort of solving the original problem without modification.
Tightening the Bounds
big-\(M\) constraints:
Choosing \(M\) as small as possible (while still valid) has three benefits:
- It reduces the size of the feasible region of the LP relaxation. Consider \(M=M_{1}+M_{2}+\cdots+M_{n}\), and:
\[x_{1}+x_{2}+\cdots+x_{n} \leq M \delta \]then it is preferable to write:
\[\begin{aligned} &x_{1}-M_{1} \delta \leq 0 \\ &x_{2}-M_{2} \delta \leq 0 \\ &\vdots \\ &x_{n}-M_{n} \delta \leq 0 \end{aligned} \]as the LP model is more constrained.
- It tightens the LP relaxation bound. Suppose \(\delta\) has cost \(c\) in the objective function and is linked to \(x\) by a constraint of the form \(x \leq M \delta\). If \(x=1\), then \(\delta=x/M=1/M\) is already sufficient in the LP relaxation, contributing only \(c/M\) to the objective value. This contribution shrinks as \(M\) grows, which leads to a loose bound (see the sketch after this list).
- It avoids numerical problems with the integrality tolerance. With \(M=10000\) and \(x=5\), the value \(\delta=0.0005\) satisfies \(x \leq M \delta\); within typical solver tolerances this is indistinguishable from \(\delta=0\), so the solver may accept the illegal solution \(x=5, \delta=0\) as feasible. A small \(M\) avoids this.
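A small numerical sketch of the second benefit, assuming the PuLP library; the demand, fixed cost, and \(M\) values are made up for illustration.

```python
# Sketch: compare the LP-relaxation bound of a fixed-charge constraint
# x <= M * delta for a loose and a tight M; assumes the PuLP library.
from pulp import LpProblem, LpVariable, LpMinimize, value, PULP_CBC_CMD

def lp_bound(M, demand=5, fixed_cost=10.0):
    # LP relaxation: delta is continuous in [0, 1] instead of binary.
    prob = LpProblem("fixed_charge_relaxation", LpMinimize)
    x = LpVariable("x", lowBound=0)
    delta = LpVariable("delta", lowBound=0, upBound=1)
    prob += fixed_cost * delta            # objective: pay the fixed cost
    prob += x >= demand                   # we must ship `demand` units
    prob += x <= M * delta                # big-M linking constraint
    prob.solve(PULP_CBC_CMD(msg=False))
    return value(prob.objective)

print(lp_bound(M=10_000))  # loose M: bound ~ 0.005 (delta = 5 / 10000)
print(lp_bound(M=5))       # tight M: bound = 10.0 (delta forced to 1)
```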
Symmetry
In many situations, there exists a set of indistinguishable objects for which individual decision variables must be defined. Given any solution to a model for such a problem, several equivalent "symmetric solutions" can be obtained by simply reindexing these indistinguishable objects.
Ways to deal with symmetry:
- Using symmetry-breaking constraints, e.g., lexicographically ordering the rolls: \(y_{1} \geq y_{2} \geq y_{3} \geq \cdots \geq y_{p}\). In this case we do not use roll \(n\) unless roll \(n-1\) is used (see the sketch after this list).
- Perturbing the cost function.
- Reformulating the problem.
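A tiny sketch of the first option, assuming the PuLP library and binary usage variables \(y_{r}\) for the identical rolls; the names and the omitted rest of the cutting-stock model are hypothetical.

```python
# Sketch: lexicographic symmetry-breaking constraints over identical rolls;
# assumes the PuLP library, the rest of the cutting-stock model is omitted.
from pulp import LpProblem, LpVariable, LpMinimize, LpBinary

p = 5
prob = LpProblem("cutting_stock_fragment", LpMinimize)
y = [LpVariable(f"y_{r}", cat=LpBinary) for r in range(p)]  # 1 if roll r is used

# Roll r may only be used if roll r-1 is used, removing permutation symmetry.
for r in range(1, p):
    prob += y[r] <= y[r - 1]
```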
Reformulation
- In the TSP (with binary arc variables \(x_{ij}\)), the subtour elimination constraints \[\sum_{i \in S} \sum_{j \in S} x_{i j} \leq|S|-1 \quad \forall S \subset \mathcal{V},\ 2 \leq|S| \leq|\mathcal{V}|-1 \]number approximately \(2^{|\mathcal V|}\). They are better replaced by the MTZ constraints \[u_{i}-u_{j}+|\mathcal{V}|\, x_{i j} \leq|\mathcal{V}|-1 \quad \forall i, j \in \mathcal{V} \setminus\{0\},\ i \neq j, \]where \(u_{i},\ i\in\mathcal V\setminus\{0\}\), is a continuous variable representing the order in which city \(i\) is visited. This formulation has \(n\) more variables but only about \(n(n-1)\) constraints (a modeling sketch is given at the end of this section).
- In the single-commodity flow (SCF) formulation we have\[y_{ij}\leq |T|\,x_{ij} \]which is much looser than the corresponding constraint of the multi-commodity flow (MCF) formulation:\[w_{ij}^{k}\leq x_{ij} \]
P.s.: For SCF:
For MCF:
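Returning to the MTZ reformulation above, here is a minimal modeling sketch assuming the PuLP library; the distance matrix and all names are made up for illustration.

```python
# Sketch: MTZ formulation of a small TSP instance; assumes the PuLP library,
# distances and names are made up for illustration.
from pulp import LpProblem, LpVariable, LpMinimize, lpSum, LpBinary, PULP_CBC_CMD

dist = [[0, 2, 9, 10],
        [1, 0, 6, 4],
        [15, 7, 0, 8],
        [6, 3, 12, 0]]
n = len(dist)
V = range(n)

prob = LpProblem("tsp_mtz", LpMinimize)
x = {(i, j): LpVariable(f"x_{i}_{j}", cat=LpBinary)
     for i in V for j in V if i != j}
u = {i: LpVariable(f"u_{i}", lowBound=1, upBound=n - 1) for i in V if i != 0}

prob += lpSum(dist[i][j] * x[i, j] for (i, j) in x)       # minimise tour length
for i in V:
    prob += lpSum(x[i, j] for j in V if j != i) == 1      # leave each city once
    prob += lpSum(x[j, i] for j in V if j != i) == 1      # enter each city once
for i in V:
    for j in V:
        if i != 0 and j != 0 and i != j:
            # MTZ constraints: only about n*(n-1) of them, no subset enumeration
            prob += u[i] - u[j] + n * x[i, j] <= n - 1

prob.solve(PULP_CBC_CMD(msg=False))
print([0] + sorted(u, key=lambda i: u[i].value()))        # visiting order from city 0
```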