Adaptive Large Neighborhood Search For The Commodity Constrained Split Delivery VRP
Article history:
Received 29 October 2018
Revised 26 July 2019
Accepted 26 July 2019
Available online 26 July 2019

Keywords: Vehicle routing problem; Multiple commodities; Adaptive large neighborhood search; Local search

Abstract

This paper addresses the commodity constrained split delivery vehicle routing problem (C-SDVRP), in which customers require multiple commodities. This problem arises when customers accept separate deliveries. All commodities can be mixed in a vehicle as long as the vehicle capacity is respected. Multiple visits to a customer are allowed, but each commodity must be delivered in a single delivery.
In this paper, we propose a heuristic based on adaptive large neighborhood search (ALNS) to solve the C-SDVRP, with the objective of efficiently tackling medium and large sized instances. We take into account the distinctive features of the C-SDVRP and adapt several local search moves to improve a solution. Moreover, a mathematical programming based operator (MPO) that reassigns commodities to routes is used to improve a new global best solution.
Computational experiments have been performed on benchmark instances from the literature. The results assess the efficiency of the algorithm, which provides a large number of new best-known solutions in short computational times.

© 2019 Published by Elsevier Ltd.
2 W. Gu, D. Cattaruzza and M. Ogier et al. / Computers and Operations Research 112 (2019) 104761
programming (MIP) based operator is developed. Finally, we provide a large number of new best-known solutions for medium and large sized C-SDVRP instances with up to 100 customers and 3 commodities per customer, within about 10 min of computing time.
The rest of this paper is organized as follows. Section 2 defines the C-SDVRP. The proposed algorithm for the C-SDVRP is described in Section 3. Section 4 reports the computational results. Section 5 concludes the paper and suggests new directions for future research.

2. Problem definition

The C-SDVRP is defined on a directed graph G = (V, A), where V = {0} ∪ VC is the set of vertices and A is the set of arcs. More precisely, VC = {1, ..., n} represents the set of customer vertices, and 0 is the depot. A cost cij is associated with each arc (i, j) ∈ A and represents the non-negative cost of traveling from i to j. Let M = {1, ..., M} be the set of available commodities. Each customer i ∈ VC requires a quantity dim ≥ 0 of commodity m ∈ M. Note that a customer i ∈ VC may request any subset of commodities, that is, dim may be equal to zero for some m ∈ M.
A fleet of identical vehicles with capacity Q is based at the depot; each vehicle is able to deliver any subset of commodities.
The problem is to find a solution that minimizes the total transportation cost and that involves two related decisions: finding a set of vehicle routes that serve all customers, and selecting the commodities that each vehicle route delivers to each customer. Moreover, each solution must be such that:

1. Each route starts and ends at the depot;
2. The total quantity of commodities delivered by each vehicle does not exceed the vehicle capacity;
3. Each commodity requested by each customer is delivered by a single vehicle;
4. The demands of all customers are satisfied.

We use the example presented in Fig. 1 to illustrate an optimal solution of a C-SDVRP instance; the solution is provided in Fig. 3. In the C-SDVRP case, two vehicle routes are required to deliver all the commodities required by the customers. One route (black line) delivers all the commodities of customer 1 (6 units) and commodities 1 and 3 of customer 3; the cost of this route is 13. The other route (purple line) delivers all the commodities of customer 2 (8 units) and commodity 2 of customer 3 (2 units); the cost of this route is 11.5. The total cost of the solution is 24.5. For the solutions obtained under the other three delivery strategies (separate routing, mixed routing and split delivery mixed routing), the interested reader is referred to Archetti et al. (2014).

3. Adaptive large neighborhood search

In order to tackle the C-SDVRP for medium and large instances, we propose a heuristic method based on the ALNS framework of Ropke and Pisinger (2006). We make use of local search, and we develop a mathematical programming based operator to improve the quality of solutions.
Due to the characteristics of the problem under study, each node can be duplicated as many times as the number of commodities required by the associated customer (Archetti et al., 2014). With each duplicated node, we then associate the demand of the customer for the corresponding commodity. However, the simple duplication of customers, without further consideration of the customer location, can produce several equivalent solutions, as illustrated previously in the paper. To enhance the performance of the algorithm, we explicitly consider both the customer replications for each commodity and the customer as a single entity associated with its total demand. To this intent, and for the sake of clarity, in the following we will call the duplicated nodes customer-commodities, and we will use the term customer to refer to the customer associated with its total demand.
We represent a solution of the C-SDVRP as the set of routes needed to serve all customers. In order to take into account the specific features of the C-SDVRP, a route can be represented by (1) a sequence of customers, or by (2) a sequence of customer-commodities. Note that, because a customer can be served by several vehicles, in the representation with customers it is possible that a customer appears in several routes (with different commodities).
The second representation gives more flexibility but increases complexity. To clarify this, consider the case of removing a customer from a solution. In the first representation, when removing a customer from a route, the customer is removed together with all the commodities delivered by that route. In the second representation, it is possible to remove only one commodity. We illustrate the two representations in Figs. 4 and 5, using the example presented in Fig. 1. Fig. 4 shows a set of routes represented as two sequences of customers. If customer 1 is removed from route 1, then customer 1 with all three commodities is removed. If customer 3 is removed from route 2, then customer 3 with commodity 2 is removed. Once a customer is removed, the remaining capacity of the route increases and the cost decreases. Note that even if customer 3 has been removed from route 2, it is still present in route 1.
Fig. 5 shows a set of routes represented as two sequences of customer-commodities. In order to better highlight this feature of the C-SDVRP, we hide the circle that represents the customer. In fact, the two routes represent the same solution as shown in Fig. 4. If commodity 2 of customer 3 in route 2 is removed, then the remaining capacity of this route increases and the cost decreases. However, if one commodity (such as commodity 1) of customer 1 is removed, then the remaining capacity of this route increases but the cost of this route does not change.
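The two route representations can be kept in sync with a small amount of bookkeeping. The sketch below is our own illustration (the function names are hypothetical, not from the paper): a route is stored as a sequence of (customer, commodity) pairs, the customer-level view is derived from it, and removing a customer or a single customer-commodity are both simple filters.

```python
from collections import defaultdict

def to_customer_view(route_cc):
    """Project a sequence of (customer, commodity) pairs onto a
    sequence of customers with their commodity sets, preserving the
    first-visit order of each customer."""
    order, commodities = [], defaultdict(set)
    for cust, com in route_cc:
        if cust not in commodities:
            order.append(cust)
        commodities[cust].add(com)
    return [(cust, sorted(commodities[cust])) for cust in order]

def remove_customer(route_cc, cust):
    """Customer-level removal: drop every commodity the route
    delivers to `cust` (first representation)."""
    return [(c, m) for (c, m) in route_cc if c != cust]

def remove_customer_commodity(route_cc, cust, com):
    """Customer-commodity removal: drop a single delivery only
    (second representation)."""
    return [(c, m) for (c, m) in route_cc if (c, m) != (cust, com)]

# Illustrative route: customer 2 with commodities 1 and 2, then
# customer 3 with commodity 2 (values are made up for the sketch).
route2 = [(2, 1), (2, 2), (3, 2)]
print(to_customer_view(route2))                  # [(2, [1, 2]), (3, [2])]
print(remove_customer(route2, 3))                # [(2, 1), (2, 2)]
print(remove_customer_commodity(route2, 2, 1))   # [(2, 2), (3, 2)]
```

Deriving the customer view from the customer-commodity sequence, rather than storing both independently, avoids the two representations drifting apart after a move.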
As mentioned above, in this paper we present operators for customers and for customer-commodities that are used according to the specific need. It is important to note that, to deal with both customers and customer-commodities, each route in the solution has two concurrent representations. In the first representation, each route contains a sequence of customers, and a set of commodities is associated with each customer (see Fig. 4 for an example). In the second representation, each route contains a sequence of customer-commodities (see Fig. 5 for an example). When an operator is applied, the corresponding representation (customers or customer-commodities) is used. The following considerations are taken into account when translating one representation into the other:

• When customer-commodities associated with the same customer appear on the same route, it is always optimal to group them (since there is zero cost to travel between these customer-commodities). Thus, in the first representation, a customer will not appear twice on the same route.
• When dealing with the second representation with customer-commodities, it is possible that a customer appears in different routes (with different commodities). Hence, it is possible in the first representation with customers that a customer appears in several routes (e.g., customer 3 in Fig. 4(a)).
• When dealing with the first representation with customers, moving a customer means moving that customer with the commodities associated with it in the current route. These commodities may be a subset of the commodities required by this customer.

3.1. General framework

The basic idea of ALNS is to improve the current solution by destroying it and rebuilding it. It relies on a set of removal and insertion heuristics which iteratively destroy and repair solutions. The removal and insertion heuristics are selected using a roulette wheel mechanism; the probability of selecting a heuristic is dynamically influenced by its performance in past iterations (Pisinger and Ropke, 2010). A sketch of the method is outlined in Algorithm 1.
In Algorithm 1, sbest represents the best solution found during the search, while s is the current solution at the beginning of an iteration. The cost of a solution s is denoted by f(s).
A removal heuristic hrem and an insertion heuristic hins are applied to the current solution s. We indicate by srem and sins the intermediate solutions obtained after applying hrem and hins, respectively. hrem removes and hins inserts ρ customers or customer-commodities, where ρ is a parameter that varies between ρmin and ρmax. We adapt the strategy proposed by François et al. (2016) to set the value of ρ: we only slightly destroy the current solution when a new solution has just been accepted (small values of ρ), and we increase the value of ρ proportionally to the number of iterations without improvement. The probabilities of selecting a removal or an insertion heuristic are dynamically adjusted during the search (Section 3.8).
Once the heuristics hrem and hins have been applied, the solution obtained, sins, is possibly improved by applying a local search. The resulting solution is denoted by s′ (Section 3.3).
We allow the insertion heuristics to propose solutions that violate the capacity constraint. This is done with the aim of reducing the number of routes in the solution. Infeasibility is then penalized in the objective function by adding a term proportional to the violation. Details on the penalization of the load violation are given in Section 3.9. We then try to recover feasibility by applying the local search.
Whenever a new best solution is obtained, a mathematical programming based operator (MPO) is applied to further improve the new best solution (Section 3.7). This can be seen as an intensification phase of the algorithm.
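The general framework can be condensed into a short skeleton. The sketch below is our own illustration under simplifying assumptions (uniform heuristic weights, a generic destroy-and-repair callable per heuristic pair); the adaptive weight update of Section 3.8 and the penalization of Section 3.9 are omitted.

```python
import math
import random

def alns(s_init, cost, pairs, local_search, iterations=1000,
         T0=100.0, gamma=0.999, seed=0):
    """Illustrative skeleton of the ALNS loop: select a destroy/repair
    pair by roulette wheel, apply local search, then decide acceptance
    with a simulated-annealing rule that rejects equal-cost solutions."""
    rng = random.Random(seed)
    s, s_best = s_init, s_init
    T = T0
    # Kept uniform here; a full implementation updates these weights
    # per segment from the recorded scores.
    weights = [1.0 / len(pairs)] * len(pairs)
    for _ in range(iterations):
        # Roulette-wheel selection of a (removal, insertion) pair.
        k, = rng.choices(range(len(pairs)), weights=weights)
        s_new = local_search(pairs[k](s, rng))
        if cost(s_new) < cost(s):
            s = s_new                      # improving: always accepted
            if cost(s) < cost(s_best):
                s_best = s
        elif cost(s_new) > cost(s) and \
                rng.random() < math.exp(-(cost(s_new) - cost(s)) / T):
            s = s_new                      # worse: accepted with SA probability
        # equal cost: rejected, to avoid cycling among equivalent solutions
        T *= gamma                         # geometric cooling
    return s_best

# Toy illustration: "solutions" are integers, cost is distance to 7.
perturb = lambda x, r: x + r.choice([-2, -1, 1, 2])
best = alns(40, lambda x: abs(x - 7), [perturb], lambda x: x)
```

The toy run only demonstrates the control flow; in the actual algorithm the solution object is a set of routes and the destroy/repair callables are the removal and insertion heuristics of Sections 3.4 and 3.5.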
The new solution s′ is then subject to an acceptance rule. If accepted, the new solution becomes the current solution; otherwise, the current solution does not change. This is repeated until a stopping criterion is met, and the best solution found, sbest, is returned.
In the following, each component of this algorithm is explained in detail.

3.2. Initial solution

Let ncc be the number of customer-commodities. A feasible initial solution is constructed as follows. First, we randomly determine a sequence of customer-commodities and construct a giant tour S = (S0, S1, ..., Sncc), where S0 represents the depot and Si is the ith customer-commodity in the sequence. Then, we apply a split procedure to obtain a feasible solution. This procedure is inspired by the works of Beasley (1983) and Prins (2004). It works on an auxiliary graph H = (X, Acc, Z), where X contains ncc + 1 nodes indexed from 0 to ncc: node 0 is a dummy node and node i, i > 0, represents customer-commodity Si. Acc contains one arc (i, j), i < j, if a route serving customer-commodities Si+1 to Sj is feasible with respect to the capacity Q of the vehicle. The weight zij of arc (i, j) is equal to the cost of the trip that serves (Si+1, Si+2, ..., Sj) in this order. The optimal splitting of the giant tour S corresponds to a minimum cost path from 0 to ncc in H. Finally, this feasible solution is improved by applying local search (Algorithm 2).

3.3. Local search

In order to improve a solution, a local search procedure (LS) is applied. LS is based on a set of classical operators that work on customers and on customer-commodities.
Let us first introduce some notation that will be useful in the remainder of the section. Let u and v be two different nodes. They are associated with a customer or a customer-commodity, or one of them may be the depot, depending on the operator. These nodes may belong to the same route or to different routes. Let x and y be the successors of u and v in their respective routes. R(u) denotes the route that visits node u.

Operators on customers

Here we present the operators that are defined for customers. These operators consider a customer together with all the commodities delivered to this customer in a given route. The different operators are illustrated in Fig. 6 (where we only represent the intra-route cases).
Insert customer: this operator removes a customer u and inserts it after customer v.
Swap customers: this operator swaps the positions of customer u and customer v.
2-opt on customers: if R(u) = R(v), this operator replaces arcs (u, x) and (v, y) with (u, v) and (x, y).
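Each of these moves can be evaluated in constant time from the arc costs alone. The helpers below are our own illustration (assuming a symmetric cost function c; names are hypothetical): they compute the cost change of an insert and of a 2-opt move before it is applied.

```python
def delta_insert(c, u, pred_u, succ_u, v, succ_v):
    """Cost change of removing u from between pred_u and succ_u and
    reinserting it between v and succ_v."""
    removal = c(pred_u, succ_u) - c(pred_u, u) - c(u, succ_u)
    insertion = c(v, u) + c(u, succ_v) - c(v, succ_v)
    return removal + insertion

def delta_two_opt(c, u, x, v, y):
    """Cost change of the 2-opt move replacing arcs (u, x) and (v, y)
    with (u, v) and (x, y). Valid for symmetric costs, where reversing
    the segment between x and v changes no internal arc cost."""
    return c(u, v) + c(x, y) - c(u, x) - c(v, y)

# Toy instance: points on a line, cost = distance between coordinates.
coord = {0: 0.0, 'a': 1.0, 'b': 4.0, 'c': 5.0}
c = lambda i, j: abs(coord[i] - coord[j])

# Route 0 -> b -> a -> c -> 0 contains a crossing; the 2-opt move with
# u = 0, x = 'b', v = 'a', y = 'c' untangles it into 0 -> a -> b -> c -> 0.
print(delta_two_opt(c, 0, 'b', 'a', 'c'))   # -6.0: the move improves
```

A negative delta means the move improves the route, so only improving moves need to be applied; this is the usual way such neighborhoods are explored without recomputing whole route costs.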
Operators on customer-commodities

Here we present the operators that are defined for customer-commodities. The different operators are illustrated in Fig. 7.
Insert customer-commodity: this operator removes a customer-commodity u and inserts it after customer-commodity v.
Swap customer-commodities: this operator swaps customer-commodity u and customer-commodity v.
Notice that the insert and swap operators for customer-commodities allow some moves that are not feasible when we consider only customers. In most cases, inserting a customer or inserting only one of the commodities to be delivered has the same cost. However, during the search we allow violations of the vehicle capacity, and the infeasibility is penalized (see Section 3.9). In an infeasible solution with some overcapacity in a route, it may happen that inserting customers into other routes also leads to infeasibility, while inserting only a customer-commodity decreases the infeasibility or leads to a feasible solution. Similarly, a swap of customers may not be feasible (or may increase the cost) due to the vehicle capacity restriction. On the contrary, when the commodities of some customers are delivered in several routes, swapping customer-commodities may be feasible with respect to the capacity and decrease the solution cost. As an example, in Fig. 7, case (b), v and w are two commodities of the same customer, while u is a commodity of another customer who requires two commodities. Swapping u with both v and w may not be possible because of the vehicle capacity, while swapping customer-commodities u and v decreases the cost of the solution.
Note that we do not consider a 2-opt operator based on customer-commodities. Since this operator works on elements of the same route, and it is never beneficial to split apart customer-commodities associated with the same customer, its behavior would be the same as that of the operator 2-opt on customers.
LS is applied when the split procedure has generated an initial solution or when the removal and insertion heuristics have modified the solution. In the first case, we do not allow the solution to be infeasible with respect to the vehicle capacity. In the second case, the insertion heuristic can produce infeasible solutions. As a consequence, LS (and thus the operators) may have to address the
infeasibility of the current solution. LS is depicted in Algorithm 2. Given a solution, four local search operators are first applied: insert customer, insert customer-commodity, swap customers and swap customer-commodities. They are invoked iteratively until there is no further improvement. Then the operator 2-opt on customers is applied; if it improves the solution, we reiterate with the first four operators. After exploring the neighborhood defined by each operator, the move that improves the solution the most is implemented. When all the local search operators fail, the routes of the current solution are concatenated and the split algorithm is applied again. This strategy is inspired by Prins (2009). If this provides a better solution, the whole procedure is repeated.

3.4. Removal heuristics

This section describes the set of removal heuristics we propose to destroy the current solution. The heuristics Shaw removal and worst removal use the representation of a solution with customers, while random removal can be applied to both representations, with customers and with customer-commodities. Another operator that randomly removes one route is also considered.
Shaw removal: this heuristic aims to remove a set of customers that are similar according to a specified criterion (e.g., location or demand). When customers with very different characteristics are removed, it is likely that each customer is then reinserted at the same position in the solution. Hence, by removing similar customers, Shaw removal aims to provide a different solution once an insertion heuristic has been applied.
Here, we define the similarity between two customers as the distance between them. The heuristic works as follows: a first customer i is randomly selected and removed. We then compute the similarity between customer i and the other customers (here, the distance), and sort the customers in a list L according to their similarity with customer i. A determinism parameter pd (pd ≥ 1) is used to introduce some randomness in the selection of the customer to be removed from L, with a higher probability for the first customer. The removed customer then plays the role of customer i, and the procedure is repeated until ρ customers have been removed. The interested reader can find detailed Shaw removal pseudocode in Ropke and Pisinger (2006).
Worst removal: this heuristic aims at removing the customers that induce a high cost in the solution. More precisely, at each iteration we first calculate, for each customer, the cost decrease obtained if it is removed from the solution. Customers are then sorted in decreasing order of these values. As in Shaw removal, a determinism parameter pd controls the randomization in the choice of the worst customer to remove.
Random removal: this heuristic randomly chooses ρ customers and removes them from the current solution. It can also be applied to customer-commodities, by randomly removing ρ customer-commodities.
Route removal: in this heuristic, an entire route of the current solution is randomly selected, and all the customer-commodities on this route are removed.

3.5. Insertion heuristics

In this section, we describe the insertion heuristics implemented in the proposed ALNS algorithm. In this work, all insertion heuristics consider customer-commodities.
Greedy insertion: at each iteration, for each removed customer-commodity i, we first compute the best insertion cost fi1, that is, the cost of inserting the customer-commodity at its best position in the solution (the insertion that minimizes the increase in the cost of the solution). Then, the customer-commodity with the minimum insertion cost is selected and inserted at its best position. After each iteration, the insertion costs of the remaining removed customer-commodities are recomputed. This process stops when all customer-commodities have been inserted.
Regret insertion: at each iteration, this heuristic chooses the removed customer-commodity that would produce the largest regret if it were not inserted at its best position at the current iteration. In the regret-k heuristic, at each iteration we first calculate, for each removed customer-commodity i, the cost fi1 of inserting i at its best position and the costs fiη (η ∈ {2, ..., k}) of inserting i at its ηth best position. Then, for each customer-commodity i, the regret value is computed as regi = Ση=2..k (fiη − fi1). This value represents the extra cost incurred if i is not inserted at its best position at the current iteration. Finally, the customer-commodity with the highest regret value regi is inserted at its best position in the solution. The heuristic continues until all customer-commodities have been inserted. For the regret-k insertion heuristics in this work, we considered the values k = 2 and k = 3.
Random insertion: this heuristic randomly chooses a removed customer-commodity and randomly chooses its insertion position in the solution.
Note that when inserting customer-commodities, violations of the vehicle capacity are allowed and penalized in the cost function (see Section 3.9). However, we also impose a maximum capacity violation on each route. Hence, it is possible that a customer-commodity cannot be inserted in any route of the current solution. In this case, we create an additional route which includes this customer-commodity.

3.6. Acceptance and stopping criterion

When the removal, insertion and LS steps have been applied, we use a simulated annealing criterion to determine whether the new solution s′ is accepted. However, a deterministic decision rule is applied in two cases. At each iteration of the algorithm, if s′ has a lower cost than the current solution s (f(s′) < f(s)), then s′ is accepted. The solution s′ is rejected if the costs f(s′) and f(s) are equal; we reject solutions with the same cost in order to avoid working with equivalent solutions in which some customer-commodities belonging to the same customer have been exchanged. When f(s′) > f(s), s′ is accepted with probability e^(−(f(s′)−f(s))/T), where T > 0 is the temperature. As proposed in Ropke and Pisinger (2006), the initial temperature is set such that a solution which is w% worse than the initial solution sinit is accepted with probability paccept. More formally, T is chosen such that e^(−(w·f(sinit))/T) = paccept. Then, at each iteration of the ALNS, the temperature T is decreased according to T ← T·γ, where γ ∈ [0, 1] is the cooling factor.
The stopping criterion for the whole procedure is a fixed number of ALNS iterations.

3.7. Mathematical programming based operator to reassign commodities

When a new best solution is identified, we intensify the search by applying a mathematical programming based operator (MPO). Its main purpose is to assign the visits to a specific customer to the solution routes in a different way, by solving a capacitated facility location problem.
We use Figs. 8(a) and (b) to explain the idea behind MPO. In the example, we assume the vehicle capacity is 10 units. The number dim in an ellipse is the demand of commodity m required by customer i. We focus on the commodities of customer 2: the ellipses with a solid line in Fig. 8. We assume that Fig. 8(a) is a solution obtained after LS. Customer 2 has two commodities: the first is delivered on route 2, and the second is delivered on route 3.
Inserting or swapping one of these two customer-commodities does not provide a better solution. However, the deliveries to customer 2 can be reassigned to the first route so that the total cost decreases, as shown in Fig. 8(b). Note that the operators implemented in LS do not consider this kind of move.
Let us introduce the notation needed to formally present MPO. We assume that customer i is considered, and we indicate by:

• Mi the set of commodities required by customer i;
• si the solution obtained from the current solution by removing all the visits to customer i;
• Ri the set of routes in si;
• cri the cost of inserting customer i in route r ∈ Ri (best insertion);
• Qri the remaining capacity of route r ∈ Ri.

Then, we introduce the following binary decision variables:

ximr = 1 if the delivery of commodity m of customer i is assigned to route r ∈ Ri, and 0 otherwise;
xir = 1 if at least one commodity required by customer i is assigned to route r ∈ Ri, and 0 otherwise.

The mathematical program that we solve in order to apply MPO is the following:

(IPMPO)  min  Σr∈Ri cri xir                                  (1)
s.t.     Σr∈Ri ximr = 1,             ∀ m ∈ Mi                (2)
         Σm∈Mi dim ximr ≤ Qri xir,   ∀ r ∈ Ri                (3)
         ximr ∈ {0, 1},              ∀ m ∈ Mi, r ∈ Ri        (4)
         xir ∈ {0, 1},               ∀ r ∈ Ri                (5)

This mathematical program corresponds to a capacitated facility location problem in which only the costs related to the inclusion of a new route in the solution (the fixed costs) are taken into account. The objective function (1) minimizes the total insertion cost. Constraints (2) require that the delivery of each commodity (i.e., of each customer-commodity) be assigned to exactly one route. Constraints (3) impose that the total quantity of commodities assigned to a selected route does not exceed its remaining capacity. Constraints (4)–(5) define the decision variables.
(IPMPO) is solved for each i ∈ VC, but only the reassignment of visits associated with the highest cost reduction is effectively implemented.

3.8. Adaptive weight adjustment

A roulette wheel is used to give more or less importance to the removal and insertion heuristics. The procedure implemented is based on the principles described in Ropke and Pisinger (2006) and François et al. (2016); the latter variant includes a normalization of the scores. The main difference in our approach is that the removal and insertion heuristics are not selected independently: a pair of removal and insertion heuristics hp = {hrem; hins} is chosen at each iteration. We denote by Hp the set of pairs of heuristics to be used. Each pair hp is associated with a weight ωhp, a score πhp, and a counter θhp of the number of times the pair has been used. Initially, all pairs of heuristics have the same weight, and Σhp∈Hp ωhp = 1.
We define a segment as a fixed number of ALNS iterations. During a segment, the weights of all pairs are kept constant. Before starting a new segment, for each pair hp ∈ Hp, the score πhp and the usage counter θhp are reset to 0. During a segment, each time a pair hp is used, θhp is increased by 1, and if the new solution s′ is accepted after using pair hp, the score πhp is updated as πhp ← πhp + σμ, where σμ (μ ∈ {1, 2, 3}) reflects different cases. That is, the score πhp is increased by σ1 when s′ is a new best solution, by σ2 when s′ is an improved solution (f(s′) < f(s)) but not a new best solution, or by σ3 when s′ is not an improved solution but has been accepted according to the simulated annealing criterion. The values σμ (σμ ∈ [0, 1]) are normalized so that σ1 + σ2 + σ3 = 1.
At the end of each segment, we update all the weights of the pairs of heuristics based on the recorded scores. First, the score πhp is updated as πhp ← πhp/θhp, where θhp is the number of times that pair hp was used in this segment. If θhp = 0, we set πhp to the same value as in the previous segment. Then, the scores of all pairs of heuristics are normalized:

π̄hp = πhp / Σh∈Hp πh.                                       (6)

Let ωhp,q be the weight of pair hp used in segment q, and let λ ∈ [0, 1] be a factor that determines how quickly the weights react to changes in the performance of the pairs of heuristics. At the end of segment q, the weight of each pair of heuristics hp to be applied in segment q + 1 is updated as:

ωhp,q+1 = (1 − λ) ωhp,q + λ π̄hp.                             (7)

3.9. Infeasibility penalization scheme

In our implementation of the ALNS, we allow some violations of the vehicle capacity in order to reduce the number of routes. Let Kinit be the number of vehicles used in the initial solution sinit.
Then, the vehicle capacity Q can be extended by an amount Qextra = Q/Kinit. If a vehicle delivers more than Q units of products, we penalize the infeasibility by adding to the solution cost a term proportional to the load violation, namely β·l(s), where l(s) is the total load violation of solution s and β is the penalty rate.
The penalty rate β is related to capacity violations. Initially, β is set to a minimum value βmin computed as

βmin = 10 · f(sinit) / Σi∈VC, m∈M dim,

where f(sinit) is the cost of the initial solution obtained after applying the split procedure and LS. This ratio approximates an average transportation cost per unit of demand.
The penalty rate β is dynamically modified during the search, since a high rate may eliminate infeasibility quickly but may also prevent the search from exploring other promising regions. We keep track of the number of consecutive infeasible or feasible solutions obtained during the ALNS iterations. If Iinf infeasible solutions are obtained consecutively, the value of β is increased to 2β. Similarly, if Ifeas feasible solutions are generated consecutively, the value of β is decreased to max{βmin; β/2}.

When n = 15, 64 instances are available, one for each combination of parameters in the set P; these are referred to as the small instances. When n ∈ {20, 40, 60, 80}, 80 mid-size instances are available, 20 for each value of n. For the mid-size instances, the combination of parameters in P is restricted to M = 3, α = 1.5 and the demand interval [1, 100]; hence I and p can take two values each, leading to four combinations of parameters. For each combination, five instances have been generated, leading to 20 instances in total. When n = 100, 320 large instances are available, that is, five instances for each combination of parameters in the set P.
In the following sections, we present the parameter setting of our algorithm and the results obtained. We provide detailed results for each of the 464 instances in Appendix A. Due to the high number of instances, we only report average results here. In particular, we present results for each group of instances, where a group is defined by a quartet (n, I, M, p); the results for a group are averaged over the values of α and the demand interval.
We summarize all the notations used to present the results in Table 1.
Table 1
Notations for computational results. (Columns: Symbol, Meaning.)
Table 2
LNS configurations compared to ALNS configurations. (Columns: conf., Shaw, RandC, RandCC, Worst, Route, Greedy, Regret-2, Regret-3, Rand, avg.B, avg.t(s), nbNBK.)
heuristic, which corresponds to designing a large neighborhood search (LNS). This makes it possible to assess the efficiency of the individual insertion and removal heuristics, and also to point out the benefit of the adaptive approach included in the design of the ALNS when choosing the removal and insertion heuristics.
This analysis is performed to tune the proposed algorithm. We only use the 20 instances with n = 80 customers. A summary of these experiments is reported in Table 2. The first column shows which configuration is considered (LNS or ALNS). The second column enumerates the configurations that we tested. Columns 3 to 7 indicate which removal heuristics are used in a specific configuration. Among them, 'RandC' and 'RandCC' represent the random removal heuristic considering customers and customer-commodities, respectively. Columns 8 to 11 indicate which insertion heuristics are used. In columns 12 to 14 we report the following average statistics for each configuration: the gap with the best-known solutions, the computational time in seconds, and the number of new best-known solutions (see Table 1).
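In the ALNS of Ropke and Pisinger (2006), on which the algorithm is based, removal and insertion heuristics are typically chosen by roulette-wheel selection, with weights periodically updated from scores collected during the search. The following sketch illustrates that mechanism; parameter names such as `reaction_factor` are illustrative, not taken from the paper:

```python
import random

def select_heuristic(weights, rng):
    """Roulette-wheel selection: heuristic h is chosen with probability
    weights[h] / sum(weights.values())."""
    r = rng.uniform(0, sum(weights.values()))
    acc = 0.0
    for name, w in weights.items():
        acc += w
        if r <= acc:
            return name
    return name  # guard against floating-point round-off

def update_weight(weight, score, reaction_factor=0.1):
    """At the end of a segment, blend the old weight with the score the
    heuristic collected (higher scores for new best or improving solutions)."""
    return (1.0 - reaction_factor) * weight + reaction_factor * score
```

Heuristics that recently produced good solutions thus get picked more often, which is the adaptive behavior the configurations above evaluate.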
Table 3
Impact of the number of iterations on small instances.

                                 ALNS (100 iter.)       ALNS (1000 iter.)      ALNS (3000 iter.)      ALNS (5000 iter.)
Inst.  n   M  p    avg.ncc nbIns  avg.O avg.t(s) nbOPT   avg.O avg.t(s) nbOPT   avg.O avg.t(s) nbOPT   avg.O avg.t(s) nbOPT
C101  15   2  0.6  22      8      0.00   0.93    8       0.00   2.82    8       0.00   7.24    8       0.00  11.65    8
C101  15   2  1    30      8      0.01   1.64    7       0.00   4.45    8       0.00  10.63    8       0.00  16.86    8
C101  15   3  0.6  28      8      0.00   1.13    8       0.00   3.92    8       0.00  10.40    8       0.00  17.12    8
C101  15   3  1    45      8      0.47   3.72    4       0.33   9.81    7       0.21  22.26    7       0.21  34.62    7
R101  15   2  0.6  22      8      0.00   1.07    8       0.00   2.89    8       0.00   7.02    8       0.00  11.05    8
R101  15   2  1    30      8      0.00   2.10    8       0.00   4.75    8       0.00  10.76    8       0.00  16.99    8
R101  15   3  0.6  28      8      0.00   1.26    8       0.00   3.88    8       0.00   9.65    8       0.00  15.59    8
R101  15   3  1    45      8      0.35   3.36    6       0.00   8.63    8       0.00  20.21    8       0.00  31.57    8

Table 4
Impact of the number of iterations on mid-20 instances.

                                 ALNS (100 iter.)         ALNS (1000 iter.)        ALNS (3000 iter.)        ALNS (5000 iter.)
Inst.  n   M  p    avg.ncc nbIns  avg.O/B avg.t(s) nbOPT   avg.O/B avg.t(s) nbOPT   avg.O/B avg.t(s) nbOPT   avg.O/B avg.t(s) nbOPT
C101  20   3  0.6  37.4    5      0.26     2.41    3       0.10     8.11    4       0.10    20.66    4       0.00    33.38    5
C101  20   3  1    60      5      0.66     5.75    1       0.11    21.13    2       0.04    49.66    4       0.00    77.14    5
R101  20   3  0.6  37.4    5      0.01     3.87    4       0.00     9.60    5       0.00    22.09    5       0.00    34.87    5
R101  20   3  1    60      5      0.82    10.68    1       0.16    25.66    2       0.01    54.17    3       0.01    81.72    3
In the first five configurations, we study the influence of the removal heuristic. To this end, the insertion heuristic is kept fixed. Using the Shaw removal heuristic, the LNS is able to improve all the best-known solutions.
Configurations 6 to 8 point out the impact of the insertion heuristic. We fixed the removal heuristic to the Shaw removal since it provided the best results in the previous experiments. These tests show that all insertion heuristics perform well except the random insertion. To further evaluate the behavior of the random insertion heuristic, we consider configuration 9, where we modify the removal heuristic. It can be observed that the results do not improve. Among the LNS configurations, combining the Shaw removal with the regret-3 insertion provides the best results.
In the second part of Table 2, we consider different configurations for the ALNS framework, with several removal and insertion heuristics. Configuration 10 shows the performance of the ALNS algorithm using all proposed heuristics. In other configurations, only a subset of them is invoked in the ALNS. As we can see, using all the proposed removal and insertion heuristics does not provide the best results. The best configuration is configuration number 13: an ALNS with two removal heuristics, Shaw and random removal of customers, and three insertion heuristics, greedy, regret-2 and regret-3. In the following sections, all reported computational experiments have been conducted using this configuration.

4.4. Analysis with respect to the number of iterations

We examine the impact of the number of iterations for different instance sizes. Preliminary computational experiments showed that the algorithm converges after a certain number of iterations. Thus, we aim to determine the number of iterations that would be a good compromise between the solution quality and the computational time.
For small instances and medium instances with 20 customers (mid-20), we run our algorithm with the number of iterations iter limited to 100, 1000, 3000 and 5000. We compare the results with those reported in Archetti et al. (2014) and Archetti et al. (2015). In Tables 3 and 4 we report average statistics for different values of iter: the gap with respect to the optimal values, the computational time in seconds, and the number of optimal solutions obtained (see Table 1). Results over the testbed are indicated in bold. Detailed results are provided in Appendix A.
According to Table 3, the proposed algorithm solves to optimality 57 out of 64 small instances when 100 iterations are allowed, within an average computational time of 2 s. When we increase the number of iterations to 3000, the number of optimal solutions reaches 63, with an average computational time of 12 s. The average gap with the optimal solutions obtained by Archetti et al. (2015) varies from 0.10% to 0.03%. Increasing the number of iterations to 5000 does not improve the quality of the results.
When n = 20, results reported in Table 4 indicate that our method identifies an optimal solution within 5000 iterations for 18 out of the 19 instances for which Archetti et al. (2015) provided optimal values. The average gap with the best-known values is less than 0.01%. Over the whole set, the average CPU time is less than 1 min.
Since large instances usually require more iterations to obtain high-quality results, we compare the ALNS algorithm behavior with 5000 and 10,000 iterations on mid-80 instances. In Table 5, we report average results and the gap with respect to the best-known values. Detailed results are provided in Appendix A. ALNS can provide best-known solutions for the 20 mid-80 instances in the benchmark. Within 5000 iterations, the best-known solutions can be improved by about 0.79% on average, and the CPU time is less than 9 min. When 10,000 iterations are executed, the solutions are on average 0.92% better than the best-known values. On the other hand, the average computational times are almost doubled.

4.5. Computational experiments on the whole testbed

In this section, we consider the results obtained on the whole set of instances for the C-SDVRP with the designed ALNS algorithm. As determined in the previous sections, the removal and insertion heuristics are the ones of configuration 13, and the number of iterations is equal to 3000 (resp. 5000) on small (resp. medium and large) instances. The algorithm is run once on each instance.
The comparison is made with the results reported in Archetti et al. (2014) and Archetti et al. (2015). Archetti et al. (2015) propose a branch-and-price algorithm that can solve to
Table 5
Impact of the number of iterations on mid-80 instances.
Table 6
Summary of results on mid-40 sized instances.
Table 7
Summary of results on mid-60 sized instances.
optimality instances with up to 40 customers, but that can systematically close only instances with up to 20 customers. When the optimal value is not available, we compare to the best-known solution (BKS) given either by Archetti et al. (2014) or Archetti et al. (2015).
For the 64 small instances, an optimal solution is known. Results reported in Table 3 indicate that our algorithm finds an optimal solution for 63 out of 64 instances. The average optimality gap is 0.03%.
For the mid-20 instances (n = 20), 19 out of 20 instances have a known optimal value. The proposed algorithm can find 18 out of 19 optimal values. On average, the gap with respect to the BKS (either optimal or feasible) is lower than 0.01%.
For the mid-40 instances (n = 40), average results are reported in Table 6. Only 5 optimal values are known. On average, the proposed ALNS finds solutions that are 0.26% better than the best-known solutions (either optimal or feasible). Our algorithm finds 4 out of the 5 known optimal solutions, while 9 new best-known solutions have been identified. The algorithm runs in less than 2 min on average.
For instances with n ≥ 60, no optimal solution is available. For the mid-60 instances (n = 60) (see Table 7), the ALNS finds solutions that are on average 0.49% better than the BKS. Among them, 15 new best-known values are obtained. The CPU times are less than 5 min on average.
As presented in Table 5, on mid-80 instances (n = 80) the ALNS identifies solutions that are 0.79% better than the BKS, and new best-known values are found for all instances.
Table 8 reports the results on the 320 large instances (n = 100). The ALNS finds 300 new best-known values with an average improvement of 0.70%. The computational time is around 10 min, which is very reasonable considering the instance size.

4.6. Effectiveness of MPO operator in the ALNS algorithm

As described in Section 3.7, we developed a mathematical programming based operator (MPO) to re-assign commodities for one customer to the routes of a solution. Due to the increase observed in computational time consumption, we decided to use it only to
improve a new global best solution further. In this section, we analyze the results on medium and large instances to prove the effectiveness of MPO in our algorithm. In Table 9, we indicate the average number of times that MPO is called for each group of instances, as well as the average number of times that MPO improves the new best solution for each group of instances. Detailed results are reported in Appendix A (Tables 13 and 14).
For medium instances (namely for n = 20, 40, 60, 80), the impact of MPO on the quality of the solution increases as the instance size increases. In particular, when p = 1, MPO has a greater impact than when p = 0.6. The main reason is that, when p = 1, all customers require all the commodities, while a smaller number of commodities are required when p = 0.6. As a consequence, for the same number of customers in the instance, the number of customer-commodities is larger. Therefore, when p = 1, the difficulty of solving the problem increases, as well as the possibilities to reassign commodities with MPO. For large instances (n = 100), the same conclusions on the performance of MPO can be drawn.

Table 8
Summary of results on large sized instances.

Table 9
Average number of MPO calls (avg.nbMPO) and of MPO improvements (avg.nbMPOimp) for each group of instances.
Inst.  n    M  p    avg.ncc nbIns  avg.nbMPO  avg.nbMPOimp
C101   20   3  0.6  37.4    5       2.60       0.00
C101   20   3  1    60      5       7.00       0.20
R101   20   3  0.6  37.4    5       4.00       0.20
R101   20   3  1    60      5       9.80       0.20
C101   40   3  0.6  76.2    5       8.00       0.20
C101   40   3  1    120     5      16.40       2.00
R101   40   3  0.6  76.2    5      12.40       0.00
R101   40   3  1    120     5      18.60       1.80
C101   60   3  0.6  110     5      20.80       0.20
C101   60   3  1    180     5      30.80       1.60
R101   60   3  0.6  110     5      18.00       0.80
R101   60   3  1    180     5      24.60       1.60
C101   80   3  0.6  150.4   5      29.00       1.60
C101   80   3  1    240     5      32.00       6.40
R101   80   3  0.6  150.4   5      23.80       1.00
R101   80   3  1    240     5      29.60       2.40
C101   100  2  0.6  136.4   40     27.05       0.85
C101   100  2  1    200     40     29.65       2.00
C101   100  3  0.6  188.4   40     31.60       1.33
C101   100  3  1    300     40     37.28       3.33
R101   100  2  0.6  136.4   40     27.15       0.53
R101   100  2  1    200     40     30.15       2.10
R101   100  3  0.6  188.4   40     32.40       2.23
R101   100  3  1    300     40     34.48       3.58

4.7. Importance of the local search

In this section, we evaluate the importance of the local search in our ALNS algorithm. We run, on mid-80 instances, the ALNS algorithm without LS and MPO with a limit of 400,000 iterations. We then compare the results obtained by the ALNS with LS and MPO on 5000 iterations. This choice is made to make the comparison fair: we remove two optimization components, but we allow a large number of iterations.
We compare the results obtained by the proposed algorithm, indicated by ALNS+LS+MPO, with the same algorithm when LS and MPO are deactivated. This last version is indicated as ALNS-LS-MPO. The performance comparison of both variants is reported in Table 10 for mid-80 and large instances.
It is clear from Table 10 that, after 400,000 iterations, the results obtained by the ALNS without LS and MPO are of lower quality than those obtained by the original ALNS. We only obtain 15 new best-known values for the 20 mid-80 instances. Moreover, the average improvement (0.39%) does not compete with the improvement obtained with the proposed algorithm (ALNS+LS+MPO), while the computational times are similar. The same observations can be made from the results for large instances. We therefore conclude that the LS and the MPO are important components of the proposed ALNS algorithm.

4.8. Trend between instance size and computational time

Last, we examine how the CPU time required by the ALNS varies according to the instance size. We consider the results obtained with 5000 iterations (even for the small instances) to perform the analysis. We sort the 464 (small, medium and large) instances according to the number of customer-commodities instead of the number of customers. The variation of the computational time avg.t(s) according to the number of customer-commodities ncc is depicted in Fig. 9.
When the size of the instances increases, the average computational time increases significantly (not in a linear fashion). This behavior is not surprising since, when the size of the instances increases, it takes more time to operate the LS operators, whose computational complexity is O(ncc²). Nevertheless, the average computational time of our ALNS algorithm for the large instances (ncc = 300) remains reasonable, at less than 19 min.

4.9. Characteristics of split customers

As mentioned above, considering customer-commodities makes the problem more complex, with equivalent solutions, since several commodities are related to the same customer and then have
Table 10
Comparison between two ALNS variants on mid-80 and large instances.
                                        ALNS+LS+MPO                              ALNS-LS-MPO
Inst.  n    M  p    avg.ncc nbIns  avg.B  min.B  max.B  avg.t(s)  nbNBK  avg.B  min.B  max.B  avg.t(s)  nbNBK
C101 80 3 0.6 150.4 5 −0.77 −1.48 −0.01 373.36 5 −0.70 −1.50 0.11 330.77 4
C101 80 3 1 240 5 −0.75 −1.36 −0.24 704.20 5 −0.49 −1.08 0.02 686.68 4
R101 80 3 0.6 150.4 5 −0.84 −1.24 −0.05 319.83 5 −0.49 −1.03 0.57 348.14 4
R101 80 3 1 240 5 −0.81 −1.39 −0.50 646.61 5 0.11 −0.83 1.71 712.20 3
C101 100 2 0.6 136.4 40 −0.66 −1.69 0.24 330.02 38 −0.23 −1.52 1.21 360.30 28
C101 100 2 1 200 40 −0.77 −1.64 0.03 569.11 39 −0.30 −1.58 1.30 606.03 26
C101 100 3 0.6 188.4 40 −0.77 −2.04 0.73 557.46 39 −0.28 −1.66 1.66 576.18 25
C101 100 3 1 300 40 −1.17 −2.88 0.32 1106.23 37 −0.57 −2.47 1.20 1256.61 26
R101 100 2 0.6 136.4 40 −0.53 −1.50 1.05 323.48 36 0.00 −0.75 1.12 366.70 20
R101 100 2 1 200 40 −0.55 −1.28 0.29 557.74 36 −0.08 −1.24 1.83 614.35 25
R101 100 3 0.6 188.4 40 −0.53 −1.23 0.02 548.02 39 0.21 −0.94 1.61 576.27 16
R101 100 3 1 300 40 −0.62 −1.72 0.25 1078.70 36 0.10 −1.78 2.67 1211.77 21
Total 320 −0.70 −2.88 1.05 633.85 300 −0.14 −2.47 2.67 696.03 187
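Both variants above rely on the MPO described in Section 4.6, which reassigns the commodities required by one customer to the routes of the current solution. The decision it makes can be illustrated with a brute-force stand-in; the real operator solves this as a MIP, and the data structures below (`demands`, per-route `capacity` and `cost`) are illustrative assumptions:

```python
from itertools import product

def reassign_commodities(demands, routes):
    """Illustrative brute-force stand-in for the MPO: reassign the
    commodities of ONE customer to the routes of the current solution.

    `demands[c]` is the demand of commodity c; each route r is a dict with
    its residual `capacity` and `cost[c]`, the extra routing cost of
    delivering commodity c on route r. Here we enumerate all assignments,
    which is only viable for tiny examples.
    """
    commodities = list(demands)
    best, best_cost = None, float("inf")
    for assign in product(range(len(routes)), repeat=len(commodities)):
        load = [0.0] * len(routes)
        for c, r in zip(commodities, assign):
            load[r] += demands[c]
        if any(load[r] > routes[r]["capacity"] for r in range(len(routes))):
            continue  # vehicle capacity violated
        cost = sum(routes[r]["cost"][c] for c, r in zip(commodities, assign))
        if cost < best_cost:
            best, best_cost = dict(zip(commodities, assign)), cost
    return best, best_cost
```

Because each commodity must be delivered in a single delivery, the decision is a pure assignment of commodities to routes, which is what makes a compact MIP formulation possible.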
Fig. 9. The avg.t(s) of the ALNS with respect to ncc.
Table 11
Characteristics of split customers in the best solutions of mid-80 instances (5000 iterations).
Inst.  n   p    id  nbSplit  nb2-split  nb3-split  nbNearDepot  nbLargeDemand  nbCluster
C101 80 0.6 1 7 7 0 2 3 7
2 5 5 0 3 1 5
3 7 7 0 4 2 5
4 9 8 1 3 5 7
5 10 10 0 1 7 6
1 1 20 19 1 5 5 18
2 14 14 0 4 4 14
3 18 18 0 6 4 13
4 14 14 0 4 4 14
5 18 18 0 6 2 16
R101 80 0.6 1 6 6 0 4 6 4
2 5 5 0 3 2 2
3 7 7 0 2 3 4
4 5 5 0 3 1 3
5 6 6 0 6 3 3
1 1 13 13 0 3 3 7
2 14 13 1 7 1 4
3 13 13 0 4 2 6
4 14 14 0 4 4 6
5 17 17 0 6 5 7
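The indicator columns of Table 11 can be computed as in the following sketch. This is an assumed reading of the definitions given in the text (near depot = the 25% of customers closest to the depot; large demand = the 25% with the largest total demand; in a cluster = at least five other customers within 30% of the average depot-customer distance), not the authors' code:

```python
import math

def customer_features(coords, demands, depot=(0.0, 0.0)):
    """Return the sets of near-depot, large-demand, and clustered customers."""
    dist = lambda a, b: math.hypot(a[0] - b[0], a[1] - b[1])
    names = list(coords)
    d_depot = {i: dist(coords[i], depot) for i in names}
    q = max(1, len(names) // 4)  # the 25% thresholds
    near = set(sorted(names, key=d_depot.get)[:q])
    large = set(sorted(names, key=demands.get, reverse=True)[:q])
    # neighborhood radius: 30% of the average depot-customer distance
    radius = 0.3 * sum(d_depot.values()) / len(names)
    cluster = {i for i in names
               if sum(1 for j in names
                      if j != i and dist(coords[i], coords[j]) < radius) >= 5}
    return near, large, cluster
```

Applied to a solution, intersecting these sets with the set of split customers yields the nbNearDepot, nbLargeDemand, and nbCluster counts.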
the same location. Identifying the customers who are good candidates for being split into customer-commodities can be beneficial for decision-makers. In this section, we propose to study the characteristics of the customers who are delivered by more than one vehicle. We name these customers split customers. We provide detailed results on the 20 mid-80 instances with 5000 iterations for the ALNS algorithm.
Table 11 reports these detailed results for several characteristics of the split customers in the best solution obtained on each instance. The first three columns report the characteristics of the instance, and the fourth column indicates the identification number of the instance. Column nbSplit reports the number of split customers (out of 80), and columns nb2-split and nb3-split indicate the number of customers delivered by 2 and 3 vehicles, respectively. Columns nbNearDepot and nbLargeDemand report the number of split customers located near the depot and the number of split customers with a large demand, respectively. As in Nagy et al. (2015), we consider that a customer is located near the depot if he or she is one of the 25% of customers closest to the depot, and we consider that a customer has a large demand if he or she is one of the 25% of customers with the largest demand. Note that the demand of a customer is the sum of the demands for the commodities they need. Column nbCluster reports the number of split customers located inside a cluster of customers. As in Nagy et al. (2015), we consider a customer to be in a cluster if at least five other customers are inside its neighborhood. Two customers are neighbors if the distance between them is less than 30% of the average distance between the depot and all the customers in the instance. Using this definition, in clustered instances (C101) more than 70% of the customers are in a cluster, while in random instances (R101) less than 30% of the customers are in a cluster.
From the results in Table 11, it is clear that there are more split customers when customers require more commodities. Recall that p denotes the probability that a customer requires a commodity. When p = 0.6, the average number of split customers is 6.7, while when p = 1, there are 15.5 split customers on average. Moreover, very few split customers are delivered by three vehicles, i.e., one vehicle for each of the required commodities. In the 20 instances, there are only three split customers in this case. In addition, one important characteristic of split customers is to be located inside a cluster. Indeed, for C101 instances, 86% of the split customers are inside a cluster. For R101 instances, 46% of the split customers are inside a cluster, while less than 30% of the customers are in a cluster in these instances. Proximity to the depot is also an important feature of split customers, since 36% of them are near the depot. Linking large demands to split customers is less obvious, since only 30% of split customers have a large demand.

5. Conclusions

In this paper, we presented a dedicated heuristic algorithm for the C-SDVRP. The proposed algorithm is based on the adaptive large neighborhood search framework introduced by Ropke and Pisinger (2006). This is the first heuristic specifically designed with the aim of providing high-quality solutions for medium and large size instances. According to the main feature of the C-SDVRP, i.e., the requirement of different commodities, we adapted several classical local search moves to operate either on a customer (i.e., a customer and all its commodities) or on a customer-commodity (namely, a single commodity required by a customer). We developed a mathematical programming based operator to intensify the search and further improve the best solutions. The results show that our ALNS algorithm is very effective in finding high-quality solutions on large size instances. In particular, our method outperforms the algorithms proposed in Archetti et al. (2014) and in Archetti et al. (2015).
The proposed ALNS algorithm could be adapted to tackle other variants of routing problems with split deliveries. One of these variants is the case with multiple depots and available quantities at each depot. In this case, because of the limited quantities available for each commodity at each depot, it is worth considering splitting deliveries to find feasible solutions. Another interesting variant of the problem is the VRP with divisible deliveries and pickup (Gribkovskaia et al., 2007; Nagy et al., 2015): in this case, delivery and pickup naturally represent two different commodities, but a pickup operation increases the use of the vehicle capacity, and it could then be optimal to visit the same customer twice in one route.

Acknowledgments

This work is partially supported by the CSC (China Scholarship Council) and by the ELSAT 2020 project. The authors thank C. Archetti and N. Bianchessi for providing the benchmark instances. Thanks are also due to the referees for their valuable comments.

Appendix A. Detailed results on the benchmark instances

Tables 12–14 report the detailed results for the small, medium and large instances, respectively. We report values in bold whenever we improve on, match, or reach the optimal value of the respective instance.
Table 12
Detailed computational results for the small sized instances (n = 15).
Table 13
Detailed computational results for the mid sized instances (n ∈ {20; 40; 60; 80}).
n p ncc CC1 CC2 CC3 id OPT/BKS Cost O / B t(s) nbMPO nbMPOimp nbR
Table 14
Detailed computational results for the large sized instances (n = 100).
M p ncc CC1 CC2 CC3 α BKS Cost B t(s) nbMPO nbMPOimp nbR
C101 2 1 200 100 100 [40,60] 1.5 3913.4546 3849.0963 −1.64 507.87 29 1 62
200 100 100 3909.5761 3867.0017 −1.09 482.07 28 2 63
200 100 100 3976.2127 3959.2347 −0.43 502.91 27 1 64
200 100 100 3942.3417 3898.8312 −1.10 583.69 47 0 63
200 100 100 3867.9698 3839.8436 −0.73 598.34 44 1 62
C101 2 1 200 100 100 [40,60] 2 3343.1458 3328.8877 −0.43 613.39 30 0 45
200 100 100 3446.6946 3391.5541 −1.60 609.40 27 2 45
200 100 100 3402.9488 3402.1518 −0.02 593.52 25 1 47
200 100 100 3459.2005 3446.1003 −0.38 604.41 29 1 47
200 100 100 3400.1668 3384.4928 −0.46 628.07 30 1 45
C101 2 1 200 100 100 [40,60] 2.5 2734.4477 2717.2918 −0.63 680.71 33 2 35
200 100 100 2788.0824 2776.1339 −0.43 596.97 22 0 36
200 100 100 2780.4184 2761.2009 −0.69 616.17 23 3 36
200 100 100 2815.0422 2786.8485 −1.00 589.10 25 1 36
200 100 100 2767.8571 2751.4713 −0.59 680.74 33 0 35
C101 3 0.6 179 65 50 64 [1,100] 1.1 2233.2625 2199.3967 −1.52 447.71 38 2 34
194 67 63 64 2406.5116 2357.4534 −2.04 509.65 32 3 36
186 62 58 66 2499.3959 2487.8220 −0.46 505.21 45 1 37
193 69 57 67 2266.5132 2283.0043 0.73 448.84 27 0 36
190 61 56 73 2529.7072 2509.7369 −0.79 471.85 39 4 37
C101 3 0.6 179 65 50 64 [1,100] 1.5 1724.7339 1698.9351 −1.50 449.14 32 7 25
194 67 63 64 1812.1988 1807.9754 −0.23 579.95 28 3 27
186 62 58 66 1908.2163 1893.4346 −0.77 521.23 38 2 27
193 69 57 67 1763.0602 1745.0116 −1.02 568.39 47 1 26
190 61 56 73 1938.0044 1930.7389 −0.37 576.69 26 4 28
C101 3 0.6 179 65 50 64 [1,100] 2 1651.6968 1626.3891 −1.53 633.94 38 1 19
194 67 63 64 1654.5853 1652.1369 −0.15 677.73 30 2 20
186 62 58 66 1712.4417 1684.7036 −1.62 648.54 40 2 20
193 69 57 67 1702.0681 1677.4419 −1.45 584.07 22 0 20
190 61 56 73 1719.8367 1714.1925 −0.33 600.65 25 1 21
C101 3 0.6 179 65 50 64 [1,100] 2.5 1413.2271 1411.6091 −0.11 681.11 34 0 15
194 67 63 64 1442.7602 1424.9209 −1.24 810.87 45 0 16
186 62 58 66 1452.9146 1428.4270 −1.69 712.86 42 2 16
193 69 57 67 1458.4966 1442.4131 −1.10 780.48 36 0 16
190 61 56 73 1446.4444 1444.4045 −0.14 756.13 31 1 17
C101 3 0.6 179 65 50 64 [40,60] 1.1 3318.3873 3257.0030 −1.85 401.09 32 0 53
194 67 63 64 3509.4788 3489.2080 −0.58 435.08 26 1 56
186 62 58 66 3603.3698 3582.1768 −0.59 399.24 25 0 55
193 69 57 67 3518.1872 3484.3878 −0.96 446.43 32 0 56
190 61 56 73 3649.5231 3615.4109 −0.93 496.11 48 0 57
C101 3 0.6 179 65 50 64 [40,60] 1.5 2424.2712 2407.4264 −0.69 419.67 21 0 37
194 67 63 64 2525.9423 2523.9967 −0.08 558.51 28 0 40
186 62 58 66 2610.6052 2600.9273 −0.37 452.04 26 2 39
193 69 57 67 2535.7810 2525.7262 −0.40 468.16 32 0 39
190 61 56 73 2673.1778 2640.8231 −1.21 479.02 25 0 40
C101 3 0.6 179 65 50 64 [40,60] 2 2270.3526 2265.3978 −0.22 482.40 25 1 28
194 67 63 64 2346.2918 2326.5017 −0.84 643.35 26 1 30
186 62 58 66 2316.4770 2313.2239 −0.14 540.93 21 0 29
193 69 57 67 2367.4327 2352.4767 −0.63 574.32 27 4 30
190 61 56 73 2339.7488 2321.6905 −0.77 560.01 38 3 30
C101 3 0.6 179 65 50 64 [40,60] 2.5 1875.4922 1857.8147 −0.94 501.48 23 1 22
194 67 63 64 1971.8409 1948.0161 −1.21 643.45 28 2 24
186 62 58 66 1927.4449 1917.8823 −0.50 606.26 40 1 23
193 69 57 67 1974.0424 1972.7209 −0.07 618.31 30 0 24
190 61 56 73 1929.9052 1918.9720 −0.57 607.40 16 1 24
C101 3 1 300 100 100 100 [1,100] 1.1 3458.3101 3378.7279 −2.30 952.16 50 5 53
300 100 100 100 3314.9537 3245.1446 −2.11 914.82 43 6 51
300 100 100 100 3294.7994 3265.0504 −0.90 865.51 33 8 52
300 100 100 100 3430.0465 3370.6534 −1.73 825.87 34 0 55
300 100 100 100 3214.8478 3162.0426 −1.64 900.59 39 7 51
C101 3 1 300 100 100 100 [1,100] 1.5 2610.2355 2561.2706 −1.88 972.71 30 6 39
300 100 100 100 2514.8470 2459.3592 −2.21 1024.42 35 3 38
300 100 100 100 2532.1347 2476.9681 −2.18 1083.84 38 6 38
300 100 100 100 2641.7530 2565.6834 −2.88 923.07 37 2 40
300 100 100 100 2455.7585 2404.8723 −2.07 968.00 28 5 37
C101 3 1 300 100 100 100 [1,100] 2 2300.4077 2280.2733 −0.88 1211.44 42 5 29
300 100 100 100 2192.9203 2193.6158 0.03 1153.59 43 3 28
300 100 100 100 2260.6841 2253.2807 −0.33 1142.33 28 3 28
R101 2 1 200 100 100 [1,100] 1.1 2771.6431 2756.7887 −0.54 531.18 38 6 49
200 100 100 2834.4928 2827.3512 −0.25 426.29 19 2 49
200 100 100 3016.8985 2985.2441 −1.05 507.52 35 3 53
200 100 100 3048.9983 3031.0145 −0.59 533.16 35 3 53
200 100 100 2800.6961 2784.0475 −0.59 496.69 29 4 49
R101 2 1 200 100 100 [1,100] 1.5 2128.3373 2122.9233 −0.25 530.35 29 2 35
200 100 100 2194.4059 2183.9167 −0.48 578.51 42 3 36
200 100 100 2299.5101 2295.0886 −0.19 501.07 25 3 39
200 100 100 2307.0220 2297.4593 −0.41 582.93 39 3 38
200 100 100 2149.6086 2137.8987 −0.54 507.42 24 1 36
R101 2 1 200 100 100 [1,100] 2 1715.9710 1698.3738 −1.03 551.73 25 1 27
200 100 100 1730.2647 1722.1094 −0.47 589.96 33 3 27
200 100 100 1813.5162 1816.3983 0.16 547.78 31 4 29
200 100 100 1827.3575 1832.7354 0.29 582.61 32 7 29
200 100 100 1717.4555 1704.9316 −0.73 512.40 16 1 27
R101 2 1 200 100 100 [1,100] 2.5 1461.4911 1452.8434 −0.59 682.70 32 2 21
200 100 100 1466.1825 1460.9983 −0.35 705.62 43 9 22
200 100 100 1533.0524 1534.9745 0.13 633.60 33 4 23
200 100 100 1545.9473 1541.2999 −0.30 702.58 34 4 23
200 100 100 1431.3942 1432.8548 0.10 631.85 38 0 21
R101 2 1 200 100 100 [40,60] 1.1 4878.3554 4815.8420 −1.28 477.78 28 1 93
200 100 100 4885.1949 4854.3103 −0.63 533.46 46 0 95
200 100 100 4954.1386 4928.0506 −0.53 396.24 18 0 97
200 100 100 4906.3430 4886.1829 −0.41 357.43 15 0 95
200 100 100 4812.6715 4778.5328 −0.71 390.36 16 0 93
R101 2 1 200 100 100 [40,60] 1.5 3515.9354 3489.7450 −0.74 602.15 39 3 62
200 100 100 3584.7753 3544.2844 −1.13 554.11 32 0 64
200 100 100 3592.7577 3551.5178 −1.15 609.51 45 0 64
200 100 100 3583.5791 3546.3662 −1.04 512.74 25 3 63
200 100 100 3495.4743 3472.3544 −0.66 586.66 41 3 62
R101 2 1 200 100 100 [40,60] 2 2640.2882 2622.6570 −0.67 636.79 37 2 45
200 100 100 2669.1453 2657.6846 −0.43 485.80 15 1 46
200 100 100 2690.6244 2665.3118 −0.94 578.84 29 1 46
200 100 100 2704.5457 2695.0898 −0.35 620.18 28 0 47
200 100 100 2655.6363 2641.4628 −0.53 550.42 24 1 46
R101 2 1 200 100 100 [40,60] 2.5 2153.4354 2141.4426 −0.56 644.84 25 0 35
200 100 100 2213.9592 2189.4736 −1.11 571.24 23 0 36
200 100 100 2222.5353 2202.7464 −0.89 675.78 38 1 36
200 100 100 2204.7443 2203.0546 −0.08 650.99 34 3 36
200 100 100 2159.9779 2149.3260 −0.49 538.39 16 0 35
R101 3 0.6 179 65 50 64 [1,100] 1.1 2091.9773 2082.4706 −0.45 448.50 37 2 34
194 67 63 64 2248.9343 2221.3245 −1.23 534.88 43 2 36
186 62 58 66 2137.6592 2134.4081 −0.15 447.11 27 2 37
193 69 57 67 2131.7891 2131.5677 −0.01 458.80 24 1 36
190 61 56 73 2205.0653 2183.8628 −0.96 478.30 31 0 38
R101 3 0.6 179 65 50 64 [1,100] 1.5 1638.1808 1628.8003 −0.57 507.92 39 8 25
194 67 63 64 1735.8727 1726.5884 −0.53 564.75 38 5 26
186 62 58 66 1688.8930 1678.2108 −0.63 563.18 48 7 27
193 69 57 67 1695.4234 1683.4501 −0.71 502.74 27 3 26
190 61 56 73 1708.9017 1698.7518 −0.59 508.70 24 0 27
R101 3 0.6 179 65 50 64 [1,100] 2 1344.1715 1333.3018 −0.81 579.85 38 4 19
194 67 63 64 1417.4800 1409.0886 −0.59 638.09 33 4 20
186 62 58 66 1382.2839 1366.0804 −1.17 554.83 24 2 20
193 69 57 67 1385.4181 1380.4631 −0.36 628.49 23 3 20
190 61 56 73 1377.7779 1372.9908 −0.35 612.43 32 3 21
R101 3 0.6 179 65 50 64 [1,100] 2.5 1146.9121 1144.9686 −0.17 703.87 39 4 15
194 67 63 64 1214.4350 1211.4598 −0.24 739.74 37 7 16
186 62 58 66 1188.0089 1183.2168 −0.40 707.54 47 2 16
193 69 57 67 1190.1513 1188.4567 −0.14 750.52 30 0 16
190 61 56 73 1203.4458 1199.0058 −0.37 676.18 27 1 17
R101 3 0.6 179 65 50 64 [40,60] 1.1 3048.1305 3027.5465 −0.68 455.96 37 2 54
194 67 63 64 3278.5334 3244.0645 −1.05 444.34 25 2 57
186 62 58 66 3035.4818 3030.9153 −0.15 451.00 25 0 55
193 69 57 67 3126.1265 3115.1466 −0.35 432.35 24 1 56
190 61 56 73 3144.6563 3112.9713 −1.01 469.27 37 1 57
References

Archetti, C., Bianchessi, N., Speranza, M.G., 2015. A branch-price-and-cut algorithm for the commodity constrained split delivery vehicle routing problem. Comput. Oper. Res. 64, 1–10.
Archetti, C., Campbell, A.M., Speranza, M.G., 2014. Multicommodity vs. single-commodity routing. Transp. Sci. 50 (2), 461–472.
Archetti, C., Savelsbergh, M.W., Speranza, M.G., 2008. To split or not to split: that is the question. Transp. Res. Part E 44 (1), 114–123.
Archetti, C., Speranza, M.G., 2012. Vehicle routing problems with split deliveries. Int. Trans. Oper. Res. 19 (1–2), 3–22.
Azi, N., Gendreau, M., Potvin, J.-Y., 2014. An adaptive large neighborhood search for a vehicle routing problem with multiple routes. Comput. Oper. Res. 41, 167–173.
Beasley, J., 1983. Route-first cluster-second methods for vehicle routing. Omega 11, 403–408.
Dror, M., Trudeau, P., 1989. Savings by split delivery routing. Transp. Sci. 23 (2), 141–145.
Dror, M., Trudeau, P., 1990. Split delivery routing. Nav. Res. Logist. 37 (3), 383–402.
François, V., Arda, Y., Crama, Y., Laporte, G., 2016. Large neighborhood search for multi-trip vehicle routing. Eur. J. Oper. Res. 255 (2), 422–441.
Gribkovskaia, I., Halskau, Ø., Laporte, G., Vlček, M., 2007. General solutions to the single vehicle routing problem with pickups and deliveries. Eur. J. Oper. Res. 180 (2), 568–584.
Masson, R., Lehuédé, F., Péton, O., 2013. An adaptive large neighborhood search for the pickup and delivery problem with transfers. Transp. Sci. 47 (3), 344–355.
Nagy, G., Wassan, N.A., Speranza, M.G., Archetti, C., 2015. The vehicle routing problem with divisible deliveries and pickups. Transp. Sci. 49 (2), 271–294. doi:10.1287/trsc.2013.0501.
Pisinger, D., Ropke, S., 2010. Large neighborhood search. In: Handbook of Metaheuristics. Springer, pp. 399–419.
Prins, C., 2004. A simple and effective evolutionary algorithm for the vehicle routing problem. Comput. Oper. Res. 31 (12), 1985–2002.
Prins, C., 2009. A GRASP × evolutionary local search hybrid for the vehicle routing problem. In: Bio-inspired Algorithms for the Vehicle Routing Problem, pp. 35–53.
Ropke, S., Pisinger, D., 2006. An adaptive large neighborhood search heuristic for the pickup and delivery problem with time windows. Transp. Sci. 40 (4), 455–472.
Shaw, P., 1997. A new local search algorithm providing high quality solutions to vehicle routing problems. APES Group, Dept. of Computer Science, University of Strathclyde, Glasgow, Scotland, UK.
Shaw, P., 1998. Using constraint programming and local search methods to solve vehicle routing problems. In: International Conference on Principles and Practice of Constraint Programming. Springer, pp. 417–431.
Solomon, M., 1987. Algorithms for the vehicle routing and scheduling problems with time window constraints. Oper. Res. 35 (2), 254–265.
Sze, J.F., Salhi, S., Wassan, N., 2016. A hybridisation of adaptive variable neighbourhood search and large neighbourhood search: application to the vehicle routing problem. Expert Syst. Appl. 65, 383–397.
Sze, J.F., Salhi, S., Wassan, N., 2017. The cumulative capacitated vehicle routing problem with min-sum and min-max objectives: an effective hybridisation of adaptive variable neighbourhood search and large neighbourhood search. Transp. Res. Part B 101, 162–184.
Toth, P., Vigo, D., 2014. Vehicle Routing: Problems, Methods, and Applications, 2nd ed. MOS-SIAM Series on Optimization. SIAM, Philadelphia.