
Gurobi Days Paris

Advanced Gurobi Algorithms
Roland Wunderling

October 2022

© 2022 Gurobi Optimization, LLC. Confidential, All Rights Reserved


Agenda

• Problem Types
• Presolve
• Algorithms for Continuous Optimization
• Algorithms for Discrete Optimization


Problem Types



Problem Types and Algorithms

Presolve (applied in all cases)

Continuous
• LP: Simplex; Barrier (+ crossover)
• Convex QP: QP Simplex; Barrier
• Convex QCP: Barrier
• Non-convex QP: same as for non-convex MIQCP

Mixed Integer
• MILP: Branch-and-Cut
• MIQP (convex), MIQCP (convex): Outer approximation
• MIQCP (non-convex): Spatial branching


Presolve



Presolve

Original Model → Presolve → Presolved Model → Optimize → Solution → Unpresolve → Solution

• Purpose of Presolve
  • Reduce the model size
  • Improve linear algebra during solve
  • Tighten the formulation (for MIP)
  • Identify problem sub-structure

• Some Presolve Reductions
  • Bound tightening
  • Aggregation
  • Coefficient strengthening
  • …


Presolve – Coefficient Strengthening

Given a constraint ax ≤ b, where l ≤ x ≤ u and some x_j integral, replace it with a′x ≤ b′ such that
• It is valid for all x ∈ X
• It dominates the original constraint

Example
• Knapsack constraint with binary variables: 10b1 + 5b2 + 10b3 + 11b4 ≤ 23
• After strengthening: b1 + b2 + b3 + b4 ≤ 2
• Is it valid?
  • Rewrite as 10(b1 + b2 + b3 + b4) − 5b2 + b4 ≤ 23
  • Infeasible for b1 + b2 + b3 + b4 ≥ 3
• Is it stronger?
  • Consider (b1, b2, b3, b4) = (1, 3/5, 1, 0): it satisfies the original constraint but violates the strengthened one
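The validity and dominance claims above can be verified by brute force over the 16 binary points. A minimal sketch in plain Python (the helper names are mine, and this is of course not Gurobi's presolve code):

```python
# Check the coefficient-strengthening example: the strengthened constraint
# b1+b2+b3+b4 <= 2 keeps exactly the binary points of 10b1+5b2+10b3+11b4 <= 23
# (validity), while a fractional point shows it is strictly stronger.
from itertools import product

original = lambda b: 10*b[0] + 5*b[1] + 10*b[2] + 11*b[3] <= 23
strengthened = lambda b: b[0] + b[1] + b[2] + b[3] <= 2

# Validity: the two constraints agree on all 16 binary points.
valid = all(original(b) == strengthened(b) for b in product((0, 1), repeat=4))

# Dominance: (1, 3/5, 1, 0) satisfies the original but not the new constraint.
frac = (1, 3/5, 1, 0)
stronger = original(frac) and not strengthened(frac)
```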
Impact of Presolve

For MIP, presolve is more powerful than for LP:


• Exploit integrality
• Round fractional bounds and right-hand sides
• Lifting/coefficient strengthening
• Probing
• Does not need to preserve duality
• We only need to “uncrush” a primal solution
• Neither a dual solution nor a basis needs to be “uncrushed”
• Larger work limits
• For a given problem size solving a MIP takes much more time than solving an LP
• We can spend more time in presolve



Presolve – Performance Impact on MIP

[Bar chart: performance impact of individual presolve reductions, with bar values 42%, 17%, 14%, 13%, 10%, 9%, 6%, 6%, 6%, 5%, 4%, 4%, 3%, 3%, 3%, 3%, 1%, 1%, 1%, 0%, 0%, 0%, 0%.]

Time limit: 10000 sec. Test set has 3182 models; degradation measured individually on the >10s bracket (~1200 models).
Intel Xeon CPU E3-1240 v3 @ 3.40GHz, 4 cores, 8 hyper-threads, 32 GB RAM. Benchmark data based on Gurobi 6.5.
Results from Achterberg, Bixby, Gu, Rothberg, Weninger (2020): “Presolve Reductions in Mixed Integer Programming”


Presolve Log

gfd-schedulen180f7d50m30k18: 457985 rows, 227535 columns, 1233372 nonzeros


Thread count: 8 physical cores, 16 logical processors, using up to 8 threads
Optimize a model with 457985 rows, 227535 columns and 1233372 nonzeros
Model fingerprint: 0x14c90069
Variable types: 33102 continuous, 194433 integer (0 binary)
Coefficient statistics:
Matrix range [1e+00, 2e+02]
Objective range [1e+00, 1e+00]
Bounds range [1e+00, 1e+05]
RHS range [1e+00, 3e+02]
Presolve removed 211497 rows and 111352 columns (presolve time = 7s) ...
Presolve removed 227114 rows and 113578 columns
Presolve time: 7.59s
Presolved: 230871 rows, 113957 columns, 661696 nonzeros
Variable types: 0 continuous, 113957 integer (113246 binary)

• Model reduction: 50%
• Solve time reduction: 8 minutes instead of 15 hours (and counting)


Continuous Optimization



Continuous Algorithms for LP / QP / QCP

Primal & dual simplex method
• Numerically stable (if you are careful)
• Easy to restart after a model modification
• Does not parallelize well
• Basic solutions are often desirable

Barrier method
• Numerically more challenging
• Restart remains an open research question
• Can effectively exploit multiple cores
• Crossover to a simplex (basic) solution

Concurrent optimization
• Run both simplex and barrier simultaneously
• The solution is reported by the first one to finish
• Great use of multiple CPU cores
• Best mix of speed and robustness
• Deterministic and non-deterministic versions available
Simplex Algorithms



Linear Program

• Problem statement

  min c′x
  s.t. Ax = b
       x ≥ 0

• An optimal solution can be found at a vertex
  • Intersection of n constraints satisfied with equality
  • Pick Ax = b and x_N = 0, with |N| = n − m
  • Then x_B = A_B⁻¹(b − A_N x_N), B = {1, …, n} \ N
• Basis
  • Partition: {1, …, n} = B ∪ N, B ∩ N = ∅
  • Such that A_B is non-singular


Linear Program

• Primal feasibility
  • All constraints must be satisfied: x_B ≥ 0

• Dual feasibility (optimality)
  • Consider the rays of the recession cone
  • Their scalar products with the objective function are the reduced costs z_j = c_j − c_B^T A_B⁻¹ A_j
  • Dual feasible if z_N ≥ 0


Simplex Algorithm

Primal Simplex
• Start with a primal feasible (but dual infeasible) basis
• Pricing: pick an improving direction, i.e. a nonbasic j with z_j < 0
• Ratio Test: pick the feasible intersection
• Update the basis
• Repeat
• Repeat? Degeneracy: no progress

Dual Simplex
• Start with a dual feasible (but primal infeasible) basis
• Pricing: pick a violated constraint, i.e. a basic i with x_Bi < 0
• Ratio Test: pick the dual feasible intersection
• Update the basis
• Repeat
• Repeat? Degeneracy: no progress in the objective

If you can get stuck due to degeneracy, why not move through the interior?


Computational Steps of the Simplex Algorithms

LU decomposition: A_B = LU

Primal
• Primal vector: LU x_B = b − A_N x_N
• Pricing: select j with z_j < 0
• Primal update vector: LU Δx_B = A_j
• Ratio test: select i
• Dual update vectors: Δy^T LU = e_Bi^T, Δz_N = Δy^T A_N

Dual
• Dual vectors: y^T LU = c_B^T, z_N = c_N − y^T A_N
• Pricing: select i with x_Bi < 0
• Dual update vectors: Δy^T LU = e_Bi^T, Δz_N = Δy^T A_N
• Ratio test: select j
• Primal update vector: LU Δx_B = A_j

Both
• Update the LU decomposition: LU → LU′
• Update the primal vector: x_B′ = x_B + σ Δx_B
• Update the dual vector: z_N′ = z_N + τ Δz_N


Simplex Log

Iteration    Objective       Primal Inf.    Dual Inf.      Time
       0    4.3000000e+01   7.750000e+01   0.000000e+00    37s
[...]
  412980    1.8708334e+03   1.382705e+02   0.000000e+00   516s
  413737    1.8710001e+03   1.087931e+03   0.000000e+00   521s
  414379    1.8710001e+03   1.228713e+02   0.000000e+00   527s
  415021    1.8710001e+03   2.513642e+01   0.000000e+00   530s
  415481    1.8710001e+03   4.105177e+01   0.000000e+00   538s
  415921    1.8710001e+03   1.100249e+02   0.000000e+00   545s
  416261    1.8710001e+03   9.163224e+02   0.000000e+00   553s
  416621    1.8710001e+03   5.824055e+00   0.000000e+00   560s
  416881    1.8710001e+03   5.413714e+00   0.000000e+00   568s
  417121    1.8710001e+03   1.704219e+01   0.000000e+00   577s
  417351    1.8710001e+03   3.007301e+00   0.000000e+00   585s
Perturb objective with value 0.0006 at iteration 417581
  417581    1.8710001e+03   3.461285e-01   0.000000e+00   594s
  417816    1.8710001e+03   0.000000e+00   0.000000e+00   601s
Perturb rhs with value 1e-05
  419742    1.8710000e+03   0.000000e+00   1.960654e-01   608s
  420384    1.8710000e+03   0.000000e+00   2.572093e-01   613s
[...]
  443092    1.8710000e+03   0.000000e+00   2.761620e-01   807s
Perturbation ends
  443913    1.8710000e+03   0.000000e+00   0.000000e+00   812s
Violations(dual): const 0.000000e+00, bound 0.000000e+00, rc 3.254286e-12
Scaled violations: const 0.000000e+00, bound 0.000000e+00, rc 5.206857e-11

• Dual feasible → dual simplex
• Degeneracy is real: no more progress in the objective
• Gurobi removes degeneracy by perturbing; it gets out of degeneracy and solves the perturbed model
• But it then needs to solve the unperturbed model using primal simplex
• The primal also runs into degeneracy and perturbs the problem (but less)
• It solves the primal perturbed model
• The final basis happens to be primal and dual feasible for the unperturbed problem


Barrier Algorithm



Barrier Algorithm

• Start with an interior point
• Move along the central path:
  • Predictor: take a step of a certain size along the tangent direction of the central path
  • Corrector: move back to the central path
• Iterate until close enough
• Converges to the analytic center of the optimal face
• How do we know when we are done?
• Degeneracy does no harm


Crossover

• Basic solutions are often desired since they are much sparser (more variables are at their bounds)
• Crossover:
  • Move from the interior point solution to a vertex solution
  • In theory O(n) pivot operations
  • In practice, numerical inaccuracies (of the barrier solution) may require cleanup with simplex


Barrier Algorithm

• Dikin’s Algorithm: apply an affine transformation to stay away from the boundary at each iteration
• Karmarkar’s Algorithm: apply a projective transformation to re-center the solution at each iteration
• Logarithmic Barrier Algorithm: use a logarithmic penalty function on the variable bounds to stay centered
• Degeneracy does no harm


Foundation: Duality

• Primal Linear Program: min c^T x s.t. Ax = b, x ≥ 0
• Weighted combination of constraints (y) and bounds (z):
  y^T Ax + z^T x ≥ y^T b (with z ≥ 0), i.e. (y^T A + z^T) x ≥ y^T b
• Dual Linear Program: max b^T y s.t. A^T y + z = c, z ≥ 0

Obvious: Weak Duality
  c^T x* ≥ y*^T b (if primal and dual are both feasible)

Strong Duality Theorem:
  c^T x* = y*^T b (if primal and dual are both feasible)


2 Interpretations of the Barrier Algorithm

#1: Interior Point Method that follows the central path

• Start with the equations that define the optimality conditions for the primal and dual LPs:

  Ax = b, x ≥ 0 (primal feasibility)
  A^T y + z = c, z ≥ 0 (dual feasibility)
  c^T x − b^T y = 0 ⟺ c^T x − (Ax)^T y = 0 ⟺ (c^T − y^T A) x = 0 ⟺ z^T x = 0
  (strong duality ⟺ complementary slackness)

  where X = Diag(x), Z = Diag(z), e = (1, 1, …, 1)

• Can be solved with Newton’s method, but the iterates need not be interior points yet…
2 Interpretations of the Barrier Algorithm

#1: Interior Point Method that follows the central path

• Adjust the complementary slackness conditions to consider only interior point solutions:

  x_j · z_j = μ > 0, j = 1, …, n

• Start with μ > 0 and systematically reduce it to 0 to converge to an optimal primal-dual pair of the LP
• Now we have an interior point method, but what makes it a barrier method?
2 Interpretations of the Barrier Algorithm

#2: Logarithmic Barrier Algorithm

• Primal Barrier Algorithm

  min c^T x s.t. Ax = b, x ≥ 0

  Introduce a barrier function to force an interior point (log x_j → −∞ as x_j → 0):
  min c^T x − μ Σ_{j=1..n} log x_j s.t. Ax = b

  Lagrange to create an unconstrained optimization:
  min c^T x − μ Σ_{j=1..n} log x_j − y^T (Ax − b)

• Dual Barrier Algorithm

  max b^T y s.t. A^T y + z = c, z ≥ 0
  max b^T y + μ Σ_{j=1..n} log z_j s.t. A^T y + z = c
  max b^T y + μ Σ_{j=1..n} log z_j − x^T (A^T y + z − c)

• In both cases, differentiate the unconstrained problem and apply Newton’s method
2 Interpretations of the Barrier Algorithm

#2: Logarithmic Barrier Algorithm – Primal-Dual Barrier Algorithm

• Optimality conditions for min c^T x − μ Σ log x_j − y^T (Ax − b):

  c − A^T y − μ X⁻¹e = 0   ∇x (1)
  Ax − b = 0               ∇y (2)

• Optimality conditions for max b^T y + μ Σ log z_j − x^T (A^T y + z − c):

  c − A^T y − z = 0        ∇x (3)
  Ax − b = 0               ∇y (4)
  μ Z⁻¹e − x = 0           ∇z (5)

• These are duals of each other: (4) is (2), and combining (3) with (5) gives (1). From either system one can derive (e.g., multiply (5) by Z) the condition XZe = μe, i.e. x_j z_j = μ. Look familiar? It does not matter how we got here; we can use Newton’s Method.
Applying Newton’s Method

Newton’s method:

  x_{k+1} = x_k − [J(x_k)]⁻¹ f(x_k), where J(x)_{ij} = ∂f_i/∂x_j (x)

Apply it to the optimality system above (the ∇x, ∇y, ∇z equations): pick an interior starting point, and at each iteration solve J(x_0) Δ = f(x_0) for the step (Δx, Δy, Δz).
Applying Newton’s Method

At each iteration:
1. Solve (A Z_0⁻¹ X_0 A^T) Δy = b − A Z_0⁻¹ μe − A Z_0⁻¹ X_0 (A^T y_0 + z_0 − c)   ← most of the work
2. Compute Δz = (A^T y_0 + z_0 − c) − A^T Δy
3. Compute Δx = Z_0⁻¹ (X_0 Z_0 e − μe − X_0 Δz)
4. Update x_1 = x_0 − Δx, y_1 = y_0 − Δy, z_1 = z_0 − Δz; reduce μ

Everything else is cheap: matrix-vector multiplications, vector additions and subtractions.


Termination of the Barrier Algorithm

• Recall
  Ax = b, x ≥ 0 (primal feasibility)
  A^T y + z = c, z ≥ 0 (dual feasibility)
  c^T x − b^T y = 0 ⟺ z^T x = 0 (duality gap ⟺ complementary slackness)

• At each iteration
  • The duality gap can be shown to decrease
  • Albeit neither the primal nor the dual objective need change monotonically

• Terminate when the normalized residuals, duality gap, and complementary slackness are within tolerance:

  ‖Ax − b‖ / ‖x‖ < ε and ‖A^T y + z − c‖ / ‖y‖ < ε
  (c^T x − b^T y) / (1 + |b^T y|) < ε and z^T x / (1 + |b^T y|) < ε

• Gurobi BarConvTol parameter, default 1e-8
• Qualitatively different from the simplex tolerances
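The termination test can be sketched in a few lines. This is an illustrative check, not Gurobi's actual code; the function name is mine, and the `max(..., 1.0)` guard against tiny norms is my own assumption:

```python
# Hedged sketch of the barrier stopping test: stop once the scaled primal and
# dual residuals, the normalized duality gap, and the normalized
# complementarity all fall below a tolerance (BarConvTol defaults to 1e-8).

def barrier_converged(rp, rd, ctx, bty, ztx, x_norm, y_norm, eps=1e-8):
    """rp = ||Ax - b||, rd = ||A^T y + z - c||, ctx = c^T x, bty = b^T y,
    ztx = z^T x; x_norm, y_norm are the iterate norms."""
    return (rp / max(x_norm, 1.0) < eps and          # primal residual
            rd / max(y_norm, 1.0) < eps and          # dual residual
            abs(ctx - bty) / (1.0 + abs(bty)) < eps and  # duality gap
            ztx / (1.0 + abs(bty)) < eps)            # complementarity
```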


Computational Steps of the Barrier Algorithm

• Compute a fill-reducing ordering of A^T A (Gurobi BarOrder, GURO_PAR_BARDENSETHRESH parameters)
• Compute a starting point x_0, z_0 > 0 (*)
• At each iteration k:
  • Form A Z_k⁻¹ X_k A^T — the nonzero structure is unchanged throughout the solve; Z_k⁻¹ and X_k are diagonal matrices
  • Factor A Z_k⁻¹ X_k A^T = LDL^T — the expensive step; parallelized (Gurobi Threads parameter)
  • Solve LDL^T Δy = w
  • Compute Δz and Δx
  • Update x, y, and z

(*) Lustig, I.J. (1990). “Feasibility issues in a primal-dual interior point method for linear programming,” Mathematical Programming, 49(2), 145-162
Barrier Log

Ordering time: 0.00s

Barrier statistics:
 Dense cols : 1
 AA' NZ     : 1.056e+03
 Factor NZ  : 4.200e+03 (roughly 1 MB of memory)
 Factor Ops : 8.603e+04 (less than 1 second per iteration)
 Threads    : 1

Iter       Primal            Dual          Primal    Dual      Compl    Time
   0   -2.88769591e+03  -0.00000000e+00   5.74e+03  0.00e+00  2.93e+01    0s
   1   -2.93201530e+03  -2.32719862e+02   6.17e+02  2.33e-02  7.21e+00    0s
   2   -1.20511641e+03  -2.09346815e+02   0.00e+00  2.08e-17  7.34e-01    0s
   3   -3.12189924e+02  -2.58701130e+02   0.00e+00  5.55e-17  3.94e-02    0s
   4   -3.00241585e+02  -2.99695549e+02   0.00e+00  5.55e-17  4.02e-04    0s
   5   -3.00000246e+02  -2.99999695e+02   0.00e+00  2.08e-17  4.06e-07    0s
   6   -3.00000000e+02  -3.00000000e+02   0.00e+00  1.14e-16  4.06e-10    0s
   7   -3.00000000e+02  -3.00000000e+02   7.11e-15  1.44e-15  4.06e-16    0s

Barrier solved model in 7 iterations and 0.36 seconds (0.83 work units)
Optimal objective -3.00000000e+02

Crossover log...

      24 DPushes remaining with DInf 0.0000000e+00                 0s
       0 DPushes remaining with DInf 0.0000000e+00                 0s

       1 PPushes remaining with PInf 0.0000000e+00                 0s
       0 PPushes remaining with PInf 0.0000000e+00                 0s

  Push phase complete: Pinf 0.0000000e+00, Dinf 0.0000000e+00      0s

Iteration    Objective       Primal Inf.    Dual Inf.      Time
      28    -3.0000000e+02   0.000000e+00   0.000000e+00     0s

• The “Primal” and “Dual” objective columns report c^T x and b^T y; the residual columns report ‖Ax − b‖, ‖A^T y + z − c‖, and |z^T x|
• Statistics describe the most expensive operation (the factorization)
• Termination when complementarity is small enough
• Continue with crossover
• Finish up with simplex


Essential Differences – Simplex vs Barrier

Simplex
• Thousands/millions of iterations on extremely sparse matrices; each iteration extremely cheap
• Few opportunities to exploit parallelism
• The nonzero structure of the matrices changes with every iteration
• Computational effort spread out over several procedures, none of which typically dominates the work in an iteration
• Can be warm-started; effectively handles problem modifications
• Primal or dual degeneracy can be problematic

Barrier
• Dozens of iterations on denser matrices; each iteration is expensive
• Multiple opportunities to exploit parallelism
• The nonzero structure of the matrices is unchanged
• Computational effort focused in two or three procedures that dominate the overall run time
• Barrier warm-start is still an open research topic
• No issues with degenerate extreme points


LP Performance

[Bar chart: relative performance of LP algorithms — Primal, Dual, Barrier, Concurrent, Det. Concurrent, Det. Concurrent Simplex — with bar values 2.65, 1.83, 1.68, 1.21, 1.09, and 1.]

Performance results:
• Gurobi 9.5, Intel(R) Xeon(R) E3-1240 v5 (4 cores at 3.5GHz)
• Simplex on 1 core, Barrier on 4 cores
• Concurrent with 1 thread dual, 3 threads barrier
• Results for models that take >1s


Improving Default Performance

How can I use my newfound knowledge about Gurobi’s algorithmic features to get better LP performance?

• Typically, the defaults are well suited
  • Concurrent LP is used if the problem is large enough
  • Using 8-16 threads is often the sweet spot for performance, but you may want to play with that
• If your models are reliably best solved by one of the concurrent algorithms, use only that algorithm
  • Avoid memory bus contention
  • Avoid synchronization (for the deterministic case)
• Mostly, performance issues are due to degeneracy or numerical difficulties – consider your model!
• For problems with special structure:
  • Sifting when cols >> rows: parameter Sifting=1,2
  • Network algorithm (new in 10.0.0): parameter NetworkAlg=1
  • Gurobi heuristically chooses these by default
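As a concrete starting point, the algorithm choice discussed above is controlled through Gurobi's Method parameter. A hedged configuration sketch (it assumes gurobipy is installed and that a file named "model.mps" exists; the file name is purely illustrative):

```python
import gurobipy as gp

m = gp.read("model.mps")    # hypothetical model file
m.Params.Method = 3         # 3 = concurrent LP (simplex and barrier race);
                            # 4 = deterministic concurrent
m.Params.Threads = 8        # often a sweet spot per the slide above
m.optimize()
```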
Discrete Optimization



LP-based Branch-and-Bound

• Solve the LP relaxation of the problem (root)
  • Gives rise to the first lower bound
  • v = 3.5 (fractional)
  • Branch on v: v ≤ 3 and v ≥ 4
• Solve a child node:
  • x = 2.3
  • Branch on x: x ≤ 2 and x ≥ 3
• Solve a child node:
  • The solution is integer feasible
  • Gives rise to the first upper bound
  • Gives rise to the optimality gap
• Solve a child node:
  • y = 0.7
  • Branch on y: y ≤ 0 and y ≥ 1
LP-based Branch-and-Bound

• Solve a child node:
  • The node is infeasible
• Solve a child node:
  • z = 0.4
  • Branch on z: z ≤ 0 and z ≥ 1
• Solve a child node:
  • Cut off the node: the node objective exceeds the current upper bound
• Solve a child node:
  • z = 0.3
  • Branch on z: z ≤ 0 and z ≥ 1
• Solve a child node:
  • The node is infeasible


LP-based Branch-and-Bound

• Solve more nodes:
  • The minimum of the objectives of all active node relaxations gives rise to a new lower bound
• When the gap reaches 0, the current incumbent solution is proven to be optimal
• In practice, “good” solutions are good enough, since proving optimality may take a long time
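The mechanics above — relax, bound, branch, cut off — can be illustrated on a toy problem. This sketch substitutes the greedy fractional-knapsack bound for a general LP relaxation; it is in no way Gurobi's implementation, and all names are mine:

```python
# Toy LP-based branch-and-bound for a 0/1 knapsack (maximization): the
# relaxation bound prunes nodes that cannot beat the incumbent, and we branch
# on the single fractional variable of the relaxation.

def lp_bound(values, weights, cap, fixed):
    """Greedy fractional relaxation with some variables fixed to 0/1.
    Returns (bound, index of the fractional item or None)."""
    items = sorted((i for i in range(len(values)) if fixed.get(i) is None),
                   key=lambda i: values[i] / weights[i], reverse=True)
    total = sum(values[i] for i, v in fixed.items() if v == 1)
    room = cap - sum(weights[i] for i, v in fixed.items() if v == 1)
    if room < 0:
        return float("-inf"), None          # node is infeasible
    for i in items:
        if weights[i] <= room:
            room -= weights[i]
            total += values[i]
        else:                               # item fits only fractionally
            return total + values[i] * room / weights[i], i
    return total, None                      # relaxation is integral

def branch_and_bound(values, weights, cap):
    best = 0                                # incumbent value (lower bound)
    stack = [{}]                            # nodes = partial 0/1 fixings
    while stack:
        fixed = stack.pop()
        bound, frac = lp_bound(values, weights, cap, fixed)
        if bound <= best:
            continue                        # cut off: cannot beat incumbent
        if frac is None:
            best = int(round(bound))        # integral relaxation: new incumbent
        else:                               # branch on the fractional variable
            stack.append({**fixed, frac: 0})
            stack.append({**fixed, frac: 1})
    return best
```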


MIP Building Blocks

• Presolve (Presolve, PrePasses, AggFill, Aggregate, DualReductions, PreSparsify, ...)
  • Tighten the formulation and reduce the problem size
• Node selection (BranchDir, VarHints)
  • Select the next subproblem to process
• Node presolve
  • Additional presolve for the subproblem
• Solve continuous relaxations (Method, NodeMethod, DegenMoves)
  • Gives a bound on the optimal integral objective
• Conflict analysis
  • Learn from infeasible subproblems
• Cutting planes (Cuts, CutPasses, GomoryPasses, CliqueCuts, ...)
  • Tighten the relaxation by cutting off relaxation solutions
• Primal heuristics (Heuristics, MinRelNodes, PumpPasses, RINS, SubMIPNodes, ZeroObjNodes, VarHints, (NoRel))
  • Find integer feasible solutions
• Branching variable selection (VarBranch, BranchDir)
  • Crucial for limiting the search tree size
• Also: symmetry handling


MIP Building Blocks

• Each box represents a giant bag of tricks
• To cover everything would take weeks – a sampling of techniques instead
• One from each of the most important boxes

[1] Achterberg and Wunderling: "Mixed Integer Programming: Analyzing 12 Years of Progress" (2013)
[2] Achterberg: "Constraint Integer Programming" (2007)
[3] [Link]




Cutting Planes

• A cut (cutting plane) is a constraint that reduces the feasible region of the continuous relaxation but not its integer hull
• Separation of cuts: given an x that is feasible for the relaxation, find a cut for which x is infeasible and add it to the relaxation
• Thus, the relaxation more closely approximates the integer hull

The cut menu: Gomory, Mixed Integer Rounding (MIR), StrongCG cuts, Lift and Project, Infeasibility cuts, Flow cover, Flow path, Network, MIP separation cuts, Relax and Lift, User Cuts, Cover, Implied bound, Projected Implied bound, Clique, GUB Cover, Zero-half, Mod-K, RLT, BQP, SubMIP, Outer Approximation
Chvatal-Gomory Cuts

Given A ∈ ℚ^(m×n), b ∈ ℚ^m, consider the rational polyhedron

  P = { x ∈ ℝ^n | Ax ≤ b, x ≥ 0 }

We want to find the integer hull

  P_I = conv{ x ∈ ℤ^n | Ax ≤ b, x ≥ 0 }

Chvatal-Gomory procedure:
• Choose non-negative multipliers λ ∈ ℝ^m, λ ≥ 0
• The aggregated inequality λ^T A x ≤ λ^T b is valid for P because λ ≥ 0
• The relaxed inequality ⌊λ^T A⌋ x ≤ λ^T b is still valid for P because x ≥ 0
• The rounded inequality ⌊λ^T A⌋ x ≤ ⌊λ^T b⌋ is still valid for P_I because x ∈ ℤ^n

The CG procedure suffices to generate all non-dominated valid inequalities for P_I in a finite number of iterations!
• P^(0) = P, P^(k) = P^(k-1) ∩ {CG cuts for P^(k-1)}: the k-th CG closure of P – is a polyhedron!
• CG rank of a valid inequality for P_I: the minimum k s.t. the inequality is valid for P^(k)
• Higher rank cuts get more and more dense and numerically unstable
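One CG rounding step can be carried out in exact rational arithmetic. A sketch (the function name and example data are mine, not from the slides):

```python
# Hedged sketch of a Chvatal-Gomory rounding step: aggregate the rows of
# Ax <= b with non-negative multipliers lam, then round the coefficients and
# the right-hand side down. Fractions avoid floating-point rounding issues.
from fractions import Fraction
from math import floor

def cg_cut(A, b, lam):
    """Return (coeffs, rhs) of the CG cut floor(lam^T A) x <= floor(lam^T b)."""
    lam = [Fraction(l) for l in lam]
    assert all(l >= 0 for l in lam), "multipliers must be non-negative"
    agg = [sum(l * Fraction(A[i][j]) for i, l in enumerate(lam))
           for j in range(len(A[0]))]
    rhs = sum(l * Fraction(b[i]) for i, l in enumerate(lam))
    return [floor(c) for c in agg], floor(rhs)
```

For the rows 2x1 + x2 ≤ 4 and x1 + 2x2 ≤ 4 with λ = (1/3, 1/3), the aggregated inequality is x1 + x2 ≤ 8/3, and rounding yields the cut x1 + x2 ≤ 2: the fractional vertex (4/3, 4/3) of P violates it, while every integer point of P still satisfies it.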


Knapsack Cover Cuts

A (binary) knapsack is a constraint ax ≤ b with
• a_i ≥ 0 the weight of item i, i = 1, …, n
• b ≥ 0 the capacity of the knapsack

An index set C ⊆ {1, …, n} is called a cover if Σ_{i∈C} a_i > b
• You can’t fit all items from C in the knapsack

A cover C implies a cover inequality: Σ_{i∈C} x_i ≤ |C| − 1
• You must leave out at least one of them

Interesting for cuts: minimal covers
• Σ_{i∈C} a_i > b and Σ_{i∈C′} a_i ≤ b for all C′ ⊂ C, C′ ≠ C
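These definitions translate directly into code. A sketch with hypothetical helper names (0-based indices; since the weights are non-negative, checking only the subsets missing one item suffices for minimality):

```python
# Hedged sketch: test whether an index set C is a cover / a minimal cover for
# the binary knapsack a.x <= b, and build the corresponding cover inequality.

def is_cover(a, b, C):
    return sum(a[i] for i in C) > b

def is_minimal_cover(a, b, C):
    # With a_i >= 0, it is enough that dropping any single item makes C fit:
    # every smaller subset then weighs no more than that.
    return is_cover(a, b, C) and all(
        sum(a[i] for i in C if i != j) <= b for j in C)

def cover_inequality(C):
    # sum_{i in C} x_i <= |C| - 1
    return sorted(C), len(C) - 1
```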


Knapsack Cover Cuts – Example

Consider the knapsack 3x1 + 5x2 + 8x3 + 10x4 + 17x5 ≤ 24, x ∈ {0,1}^5

A minimal cover is C = {1,2,3,4}
Resulting cover inequality: x1 + x2 + x3 + x4 ≤ 3

Lifting
• If x5 = 1, then x1 + x2 + x3 + x4 ≤ 1
• Hence, x1 + x2 + x3 + x4 + 2x5 ≤ 3 is valid
• Consider (1, 1, 1, 0, 1/17): it satisfies the plain cover inequality but violates the lifted one
• To find the lifting coefficient for variable xj, solve the knapsack problem αj := d0 − max{dx | ax ≤ b − aj}
• Use dynamic programming to solve the knapsack problem
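The lifting formula above can be sketched directly, with the inner 0/1-knapsack maximum computed by the classic dynamic program over capacities (function names are mine; this is an illustrative sketch, not Gurobi's separator):

```python
# Hedged sketch of cover-inequality lifting: alpha_j = d0 - max{ d.x : a.x <=
# b - a_j, x binary }, where d, d0 describe the inequality being lifted.

def knapsack_max(d, a, cap):
    """max sum d_i x_i subject to sum a_i x_i <= cap, x binary (d_i >= 0)."""
    if cap < 0:
        return None                      # x_j = 1 alone is already infeasible
    best = [0] * (cap + 1)               # best[c] = max profit with capacity c
    for di, ai in zip(d, a):
        for c in range(cap, ai - 1, -1):
            best[c] = max(best[c], best[c - ai] + di)
    return best[cap]

def lifting_coefficient(d, d0, a, b, aj):
    """Lifting coefficient alpha_j for a variable x_j with weight aj."""
    inner = knapsack_max(d, a, b - aj)
    return d0 if inner is None else d0 - inner

# Slide example: lift x5 (weight 17) into x1+x2+x3+x4 <= 3 for the knapsack
# 3x1+5x2+8x3+10x4+17x5 <= 24; the coefficient comes out as 2.
alpha5 = lifting_coefficient([1, 1, 1, 1], 3, [3, 5, 8, 10], 24, 17)
```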


Cutting Planes – Performance

[Bar chart: performance impact of cutting plane families, with bar values 48%, 28%, 14%, 8%, 7%, 6%, 4%, 4%, 3%, 2%.]

Achterberg and Wunderling: "Mixed Integer Programming: Analyzing 12 Years of Progress" (2013);
benchmark data based on CPLEX 12.5


Cuts Statistics Log

• At the end of a MIP log, we usually find statistics about which cuts have been added to the LP relaxation:

43287 0 cutoff 56 4.7096e+07 4.7092e+07 0.01% 375 1366s

Cutting planes:
Gomory: 37
Lift-and-project: 3
Cover: 8
Implied bound: 19
MIR: 326
StrongCG: 14
Flow cover: 624
Inf proof: 4
Zero half: 19
Mod-K: 1

Explored 44197 nodes (16447802 simplex iterations) in 1366.22 seconds (2785.50 work units)
Thread count was 8 (of 16 available processors)





Branching Variable Selection

Given a node relaxation solution x*:
• Any integer variable with fractional x*_j can be branched on
• Branching on j creates two child nodes: x_j ≤ ⌊x*_j⌋ and x_j ≥ ⌈x*_j⌉
• How to choose j? The choice has a dramatic impact on the size of the search tree

What’s a good branching variable?
• Superb: a fractional variable infeasible in both branch directions – immediately prune the node as infeasible
• Great: infeasible in one direction
• Good: both directions move the objective

It is expensive to predict which branches lead to infeasibility or big objective moves:
• Strong branching: a truncated LP solve for every possible branch at every node – rarely cost effective
• We need a quick estimate


Pseudo-Costs

Use historical data to predict the impact of a branch:
• Record cost(xj) = Δobj / Δxj for each branch in a pseudo-cost table
• Two entries per integer variable
  • Average down cost
  • Average up cost
• Use the table to predict the cost of a future branch

Example: branching at a node with c* = 13 on x* = 2.7 yields children with c* = 20 (down) and c* = 19 (up):
• Down pseudo-cost update: Δobj/Δx = 7/0.7 = 10
• Up pseudo-cost update: Δobj/Δx = 6/0.3 = 20

Later, at a node with c* = 17 and x* = 5.4, the table (downcost(x) = 10, upcost(x) = 20) predicts:
• Down estimate: c′ = 17 + 0.4 · 10 = 21
• Up estimate: c′ = 17 + 0.6 · 20 = 29
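The table and estimate above can be sketched as a small class (the class and method names are mine; real solvers keep running averages and more state than this):

```python
# Hedged sketch of a pseudo-cost table: record the observed objective gain per
# unit of bound change for each branch, then estimate future child objectives.
from collections import defaultdict
import math

class PseudoCosts:
    def __init__(self):
        self.hist = defaultdict(lambda: {"down": [], "up": []})

    def record(self, j, direction, delta_obj, delta_x):
        self.hist[j][direction].append(delta_obj / delta_x)

    def estimate(self, j, c_star, x_star):
        """Predicted child objectives (down, up) for branching on x_j = x_star."""
        f = x_star - math.floor(x_star)               # fractional part
        down = sum(self.hist[j]["down"]) / len(self.hist[j]["down"])
        up = sum(self.hist[j]["up"]) / len(self.hist[j]["up"])
        return c_star + f * down, c_star + (1 - f) * up

pc = PseudoCosts()
pc.record("x", "down", 20 - 13, 0.7)   # slide example: 7/0.7 = 10
pc.record("x", "up", 19 - 13, 0.3)     # slide example: 6/0.3 = 20
```

With these two observations, `pc.estimate("x", 17, 5.4)` reproduces the slide's down/up estimates of 21 and 29.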
Pseudo-Costs

What do you do when there is no history?
• E.g., at the root node

Initialize pseudo-costs [Linderoth & Savelsbergh, 1999]
• Always compute the up/down cost (using strong branching) for new fractional variables
• Initialize pseudo-costs for every fractional variable at the root

Reliability branching [Achterberg, Koch & Martin, 2005]
• Do not rely on historical data until the pseudo-cost for a variable has been recomputed r times


Branching Rules – Performance

[Bar chart: slowdown relative to the reliability-branching baseline — most fractional 736%, random 548%, pseudo-costs with SB init and reliability within 0-2% of each other.]

Achterberg and Wunderling: "Mixed Integer Programming: Analyzing 12 Years of Progress" (2013); benchmark data based on CPLEX 12.5
Achterberg, Koch, and Martin: "Branching Rules Revisited" (2005)




MIP Heuristics

MIP solvers find new feasible solutions in two ways


• Branching
• Primal heuristics
Properties of a good heuristic
• Quick
• Finds solutions earlier than branching
• Captures problem structure
• Exploits structure more effectively than branching
• General
• Finds solutions for lots of models
Gurobi has more than 30 heuristic types
• Adaptive strategies decide when to apply each

© 2022 Gurobi Optimization, LLC. Confidential, All Rights Reserved | 66


Types of MIP Heuristics

Constructive heuristics
• No knowledge about other solutions needed
• Goal is to find solution early and to define “starting” point for improvement
heuristics
• May produce poor quality solutions
• Typically fast (but not always, e.g. NoRel, ZeroObj)
Improvement heuristics
• Can be more expensive
• Need at least one known solution to work on
• High quality solutions
• Provide better cutoff bound to prune tree
• Can be effective even on low quality solutions

© 2022 Gurobi Optimization, LLC. Confidential, All Rights Reserved | 67


Example MIP Heuristic - Rounding

Rounding Heuristic
• Start with:
• Solution of relaxation
• Round integer variables

Quick?
• Very quick
Captures problem structure?
• No
General?
• Finds solutions to lots of easy models

© 2022 Gurobi Optimization, LLC. Confidential, All Rights Reserved | 68
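The rounding heuristic above can be sketched in pure Python (no solver; the data and function names below are illustrative, not Gurobi's implementation):

```python
# Minimal sketch of a rounding heuristic: take the relaxation solution,
# round the integer variables, and check whether the result is feasible.
def round_and_check(x_relax, int_vars, constraints, tol=1e-9):
    """constraints: list of (coeffs dict, rhs) meaning sum coeffs[j]*x[j] <= rhs."""
    x = dict(x_relax)
    for j in int_vars:
        x[j] = round(x[j])                 # nearest-integer rounding
    for coeffs, rhs in constraints:
        if sum(c * x[j] for j, c in coeffs.items()) > rhs + tol:
            return None                    # rounding broke a constraint
    return x

# Relaxation solution x1 = 0.4, x2 = 1.6 for the constraint x1 + x2 <= 2:
sol = round_and_check({"x1": 0.4, "x2": 1.6}, ["x1", "x2"],
                      [({"x1": 1, "x2": 1}, 2)])
# → {"x1": 0, "x2": 2}
```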


Example MIP Heuristic - RINS

Relaxation Induced Neighborhood Search


• Start with:
• Node relaxation solution
• Best known integer feasible solution
• Fix integer variables whose values agree in both
• Solve a MIP on the rest
Quick?
• No – solves a MIP
• The number of integer infeasibilities at the node where RINS is called is a lower bound on the number of unfixed variables

Captures problem structure?


• Yes – searches a neighborhood of the relaxation
• No – the neighborhood is defined by numerical agreement between solutions, not by problem structure
General?
• Yes – effective on a variety of models
© 2022 Gurobi Optimization, LLC. Confidential, All Rights Reserved | 69
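The RINS fixing step can be sketched in pure Python (inside Gurobi this happens internally; the data below is made up for illustration):

```python
# Sketch of the RINS fixing step: fix the integer variables on which the
# node relaxation and the incumbent agree, leaving a smaller sub-MIP over
# the remaining ones.
def rins_fixings(relax_sol, incumbent, int_vars, tol=1e-6):
    fixed, free = {}, []
    for j in int_vars:
        if abs(relax_sol[j] - incumbent[j]) <= tol:
            fixed[j] = incumbent[j]   # values agree -> fix in the sub-MIP
        else:
            free.append(j)            # values disagree -> left free
    return fixed, free

relax = {"x1": 1.0, "x2": 0.3, "x3": 0.0}   # node relaxation (x2 fractional)
inc = {"x1": 1.0, "x2": 1.0, "x3": 0.0}     # best known integer solution
fixed, free = rins_fixings(relax, inc, ["x1", "x2", "x3"])
# → fixed = {"x1": 1.0, "x3": 0.0}, free = ["x2"]
```

Note how every fractional variable necessarily disagrees with the (integer) incumbent, which is why the IntInf count at the node is a lower bound on the sub-MIP's unfixed variables.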
Example MIP Heuristic - NoRel

No Relaxation Heuristic
• Start from some (feasible or infeasible) vector
• Constructed by quick heuristic
• Solve smaller sub-MIPs (with fixed variables) to decrease infeasibility or objective value
• Use multiple threads to solve sub-MIPs in parallel
• Various neighborhood strategies
• adaptive to spend more time on more successful ones

Quick?
• No – runs “forever”, until work (NoRelHeurWork) or time (NoRelHeurTime) limit is reached
Captures problem structure?
• No – ad-hoc generation of sub-MIPs
General?
• No – main use when relaxations solve too slowly
• Yes – can be successful at finding good solutions in limited time
© 2022 Gurobi Optimization, LLC. Confidential, All Rights Reserved | 70
Heuristic Log Lines

Found heuristic solution: objective 1.377283e+07


Presolve removed 918 rows and 26705 columns
Presolve time: 2.50s
Presolved: 1269 rows, 21712 columns, 398388 nonzeros
Found heuristic solution: objective 2770423.8600
Variable types: 0 continuous, 21712 integer (21709 binary)
Found heuristic solution: objective 2762983.8600

Root relaxation: objective 1.010525e+06, 1504 iterations, 0.06 seconds (0.13 work units)

Nodes | Current Node | Objective Bounds | Work


Expl Unexpl | Obj Depth IntInf | Incumbent BestBd Gap | It/Node Time

0 0 1010524.73 0 269 2762983.86 1010524.73 63.4% - 3s


H 0 0 2736029.7100 1010524.73 63.1% - 3s
H 0 0 2727449.4800 1010524.73 62.9% - 3s
H 0 0 2693681.7100 1011189.12 62.5% - 3s
0 0 1038921.12 0 523 2693681.71 1038921.12 61.4% - 3s
H 0 0 2670522.0600 1038921.12 61.1% - 4s
Callouts:
• "H" lines: heuristic found a better incumbent solution
• "IntInf" column: lower bound on the number of unfixed integer variables in the RINS heuristic
© 2022 Gurobi Optimization, LLC. Confidential, All Rights Reserved | 71
Improving Default Performance

How can I use my newfound knowledge about Gurobi's algorithmic features to get better MIP performance?

Examine node logs to identify likely areas of performance bottlenecks
• Does the lack of progress involve the upper bound, lower bound, or both?
• Assess whether node LP solve time is the primary bottleneck
• Do not: adjust branching parameters if Gurobi is in the root cut loop
• Do: consider Gurobi's No Relaxation heuristic (NoRelHeurTime / NoRelHeurWork parameters) if node LP solve times are highly problematic

Distinguish parameters that primarily help the upper bound from those that primarily help the lower bound
• Example: don't raise the intensity of heuristics when the log indicates that lack of progress in the lower bound is the performance problem
• Aggressive cuts, MIPFocus = 2 or 3 are more likely to help

© 2022 Gurobi Optimization, LLC. Confidential, All Rights Reserved | 72


Improving Default Performance

How can I use my newfound knowledge about Gurobi's algorithmic features to get better MIP performance?

Sometimes reducing or completely disabling default parameter settings can help
• Some features may not succeed, so don't spend time on them
• Some features may be better covered by others
• Turning off heuristics (or reducing their intensity) can help even when the node log indicates they are effectively finding good solutions
• Branching may be able to find equally good solutions; the resulting faster node throughput will give branching more opportunity to succeed
• Turning off cuts (or reducing their intensity) can help when finding good solutions is important and progress in the lower bound is modest
• Gurobi's ConcurrentMIP feature can help
• 2 or more runs in parallel on the same model with different parameter settings

Consider the fundamental tradeoff between node processing rate and computation in each node

© 2022 Gurobi Optimization, LLC. Confidential, All Rights Reserved | 73
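The parameters named on these slides can be collected in a Gurobi parameter (.prm) file. The values below are only illustrative starting points for experimentation, not recommendations:

```
# example.prm -- illustrative values only, tune per model
# shift effort toward improving the lower bound
MIPFocus 2
# reduce the fraction of time spent in heuristics
Heuristics 0.05
# more aggressive cut generation
Cuts 2
# run the No Relaxation heuristic for the first 60 seconds
NoRelHeurTime 60
# two independent runs in parallel with different settings
ConcurrentMIP 2
```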


Thank You

For more information: [Link]

Roland Wunderling
Senior Developer

© 2022 Gurobi Optimization, LLC. Confidential, All Rights Reserved | 74


Parallelization
Parallelization opportunities
• Parallel probing during presolve
• Almost no improvement
• Use barrier or concurrent LP for initial LP relaxation solve
• Only helps for large models
• Run heuristics or other potentially useful algorithms in parallel to the
root cutting plane loop
• Moderate performance improvements: 20-25%
• Does not scale beyond a few threads
• Solve branch-and-bound nodes in parallel
• Main speed-up for parallel MIP
• Performance improvement depends a lot on shape of search tree
• Typically scales relatively well up to 8 to 16 threads

© 2022 Gurobi Optimization, LLC. Confidential, All Rights Reserved | 75


Parallelization
Parallelization issues
• Determinism
• Load balancing
• CPU heat and memory bandwidth
• Additional threads slow down main thread
• Root node does not parallelize well
• Sequential runtime of root node imposes limits on parallelization speed-up
• Amdahl's law
• A dive in the search tree cannot be parallelized
• Parallelization only helps if significant number of dives necessary to solve
model

© 2022 Gurobi Optimization, LLC. Confidential, All Rights Reserved | 76
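Amdahl's law, referenced above, quantifies how the sequential root node limits parallel speed-up. A minimal sketch (the 20% figure below is a made-up example, not benchmark data):

```python
# Amdahl's law: if a fraction s of the total work is inherently sequential
# (e.g. the root node and each individual dive), the best possible speed-up
# on p threads is 1 / (s + (1 - s) / p).
def amdahl_speedup(s, p):
    return 1.0 / (s + (1.0 - s) / p)

# Even with only 20% sequential work, 12 threads cannot beat 3.75x:
bound = amdahl_speedup(0.2, 12)   # ≈ 3.75
```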


Parallel MIP – Performance
[Bar chart, by thread count (both series relative to 1 thread):]

Threads:     2      4      6      8      10     12
Speed-up:    54%    113%   144%   178%   186%   194%
Node count:  5%     19%    25%    31%    37%    39%

Achterberg and Wunderling: "Mixed Integer Programming: Analyzing 12 Years of Progress" (2013)
benchmark data based on CPLEX 12.5, models with ≥ 10 seconds solve time

© 2022 Gurobi Optimization, LLC. Confidential, All Rights Reserved | 77


Presolve - Coefficient Strengthening
Strictly Stronger?

Consider:
• 5 b1 + 3 b2 + 3 b3 + 3 b4 + 8 b5 ≤ 8
Stronger…?
• 4 b1 + 4 b2 + 4 b3 + 4 b4 + 8 b5 ≤ 8
Probably, but…
• Doesn't strictly dominate original
• (1, 0, 0, 0, 0.5) satisfies second, but not first
• Could weaken relaxation
• No definitive metrics
• Hippocratic oath of presolve
• "First, do no harm"
• Lots of cases where it hurts

© 2022 Gurobi Optimization, LLC. Confidential, All Rights Reserved | 78
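The slide's counterexample is easy to verify numerically — this check simply evaluates both inequalities at the point from the slide:

```python
# Numeric check of the slide's example: the point (1, 0, 0, 0, 0.5) satisfies
# the "strengthened" inequality but violates the original one, so the second
# does not strictly dominate the first over the LP relaxation.
def satisfies(coeffs, rhs, point, tol=1e-9):
    return sum(c * x for c, x in zip(coeffs, point)) <= rhs + tol

orig = ([5, 3, 3, 3, 8], 8)   # 5 b1 + 3 b2 + 3 b3 + 3 b4 + 8 b5 <= 8
new = ([4, 4, 4, 4, 8], 8)    # 4 b1 + 4 b2 + 4 b3 + 4 b4 + 8 b5 <= 8
p = (1, 0, 0, 0, 0.5)

violates_orig = not satisfies(*orig, p)   # 5*1 + 8*0.5 = 9 > 8
meets_new = satisfies(*new, p)            # 4*1 + 8*0.5 = 8 <= 8
```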


Presolve - Cost Versus Benefit
Trade off cost of reduction vs benefit
Why worry about cost?
• Only one presolve for each MIP model
A few reasons:
• One presolve can still be expensive
• Aggressive use of sub-MIP heuristics (RINS)
• Lots of “truncated” MIP solves
• Multiple presolves, on smaller models
• Presolve can be dominant cost on each
• Strengthening as a cut separator

© 2022 Gurobi Optimization, LLC. Confidential, All Rights Reserved | 79
