MPLS Lab Setup
• 1 MPLS Overview
• 2 Example network
• 3 Prerequisites for MPLS
o 3.1 "Loopback" IP address
o 3.2 IP connectivity
• 4 Configuring LDP
• 5 Using traceroute in MPLS networks
• 6 Drawbacks of using traceroute in MPLS network
o 6.1 Label switching ICMP errors
o 6.2 Penultimate hop popping and traceroute source address
• 7 Configuring VPLS
o 7.1 Configuring VPLS interfaces
o 7.2 Penultimate hop popping effects on VPLS tunnels
o 7.3 Bridging ethernet segments with VPLS
o 7.4 Split horizon bridging
• 8 Optimizing label distribution
o 8.1 Label binding filtering
o 8.2 Effects of label binding filtering on data forwarding in network
• 9 See also
MPLS Overview
For an overview of MPLS and the MPLS features that RouterOS supports, see MPLS Overview.
Example network
Consider a network service provider that connects 3 remote sites of Customer A (A1, A2 and A3) and 2 remote sites of Customer B (B1 and B2) over its routed IP core network, consisting of routers R1-R5:
The customers require a transparent ethernet segment connection between their sites. So far this has been implemented by bridging EoIP tunnels with physical ethernet interfaces.
Note that there are no IP addresses configured on R1, R4 and R5 interfaces that face customer
networks.
Enabling MPLS forwarding can speed up the packet forwarding process in such a network. Using one of the MPLS applications - VPLS - can further increase the efficiency of ethernet frame forwarding, because ethernet frames no longer have to be encapsulated in IP packets, removing the IP header overhead.
This guide gives step by step instructions that lead to an implementation of VPLS providing the necessary service.
Prerequisites for MPLS
"Loopback" IP address
Each router needs a "loopback" IP address - a /32 address configured on an interface that is always up, such as a bridge without ports (lobridge in the outputs below; a configuration sketch follows this list). The loopback address serves two purposes:
• as there is only one LDP session between any 2 routers, no matter how many links connect them, the loopback IP address ensures that the LDP session is not affected by interface state or address changes
• use of the loopback address as the LDP transport address ensures proper penultimate hop popping behaviour when multiple labels are attached to a packet, as in the case of VPLS
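The outputs later in this guide show the loopback address configured as a /32 address on a bridge interface named lobridge. A minimal sketch of this configuration, using R5 (9.9.9.5) as an example - the bridge is assumed to have no ports added:
/interface bridge add name=lobridge
/ip address add address=9.9.9.5/32 interface=lobridge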
IP connectivity
As LDP distributes labels only for active routes, an essential prerequisite is properly configured IP routing. By default LDP distributes labels for active IGP routes (that is, connected, static, and routing protocol learned routes, except BGP).
In the example setup OSPF is used to distribute routes. For example, on R5 OSPF can be configured with commands along these lines:
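The original commands are not preserved in this copy; a minimal sketch, assuming a single backbone area and the R5 networks shown in the /ip address print output below (the syntax follows the /routing ospf style used in the lab section at the end of this page):
/routing ospf set router-id=9.9.9.5
/routing ospf network add area=backbone network=4.4.4.0/24
/routing ospf network add area=backbone network=5.5.5.0/24
/routing ospf network add area=backbone network=9.9.9.5/32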
Configuring LDP
In order to distribute labels for routes, LDP must be enabled. On R1 this is done with commands along these lines (interface ether3 faces network 1.1.1.0/24):
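The commands are missing from this copy; a sketch following the /mpls ldp syntax shown in the lab section below - the lsr-id value is an assumption (set to the loopback address), while the transport address matches the note that follows:
/mpls ldp set enabled=yes lsr-id=9.9.9.1 transport-address=9.9.9.1
/mpls ldp interface add interface=ether3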
Note that transport-address is set to 9.9.9.1. This makes the router originate LDP session connections from this address and also advertise it as the transport address to LDP neighbors.
The other routers are configured in a similar way - LDP is enabled on the interfaces that connect routers and not on the interfaces that connect customer networks. For example, on R5:
[admin@R5] > /ip address print
Flags: X - disabled, I - invalid, D - dynamic
# ADDRESS NETWORK BROADCAST INTERFACE
0 4.4.4.5/24 4.4.4.0 4.4.4.255 ether1
1 5.5.5.5/24 5.5.5.0 5.5.5.255 ether2
2 9.9.9.5/32 9.9.9.5 9.9.9.5 lobridge
[admin@R5] > /mpls ldp interface print
Flags: I - invalid, X - disabled
# INTERFACE HELLO-INTERVAL HOLD-TIME
0 ether1 5s 15s
1 ether2 5s 15s
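The commands producing this state are not shown; a likely sketch, assuming the lsr-id is also set to the loopback address:
/mpls ldp set enabled=yes lsr-id=9.9.9.5 transport-address=9.9.9.5
/mpls ldp interface add interface=ether1
/mpls ldp interface add interface=ether2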
/mpls local-bindings shows the labels that this router has assigned to routes and the peers it has distributed each label to. It shows that R5 has distributed labels for all of its routes to both of its neighbors - R3 and R4.
/mpls remote-bindings shows the labels that are allocated for routes by neighboring routers and advertised to this router:
Here we can observe that R5 has received label bindings for all routes from both of its neighbors - R3 and R4, but only those for which the particular neighbor is the next hop are active. For example:
From the above we see that R3, which is the next hop for network 9.9.9.1/32 from R5's perspective, has assigned label 17 for traffic going to 9.9.9.1/32. This means that when R5 routes traffic to this network, it will impose label 17.
Label switching rules can be seen in /mpls forwarding-table. On R3, for example, the rule for this route says that a packet received with label 17 will have it swapped to label 17 - the label assigned by R2 for network 9.9.9.1/32 (R2 being the next hop for 9.9.9.1/32 from R3's perspective).
Notice that the corresponding forwarding rule on R2 does not have any out-labels. The reason for this is that R2 is doing penultimate hop popping for this network. R1 does not assign any real label for the 9.9.9.1/32 network, because it is known that R1 is the egress point for it (a router is the egress point for networks that are directly connected to it, because the next hop for such traffic is not an MPLS router), and therefore it advertises the "implicit null" label for this route.
This tells R2 to forward traffic for 9.9.9.1/32 to R1 unlabelled, which is exactly what the R2 mpls forwarding-table entry does. Penultimate hop popping ensures that routers do not have to do an unnecessary label lookup when it is known in advance that the router will have to route the packet anyway.
Using traceroute in MPLS networks
An MPLS label carries not only a label value, but also a TTL field. When a label is imposed on an IP packet, the MPLS TTL is set to the value in the IP header; when the last label is removed from an IP packet, the IP TTL is set to the value from the MPLS TTL field. Therefore an MPLS switching network can be diagnosed with a traceroute tool that supports the MPLS extension.
Traceroute results show the MPLS labels that were on the packet at the moment it triggered the ICMP Time Exceeded message (see the traceroute output with src-address=9.9.9.5 further below): when R3 received the packet with MPLS TTL 1, it carried label 17, which matches the label advertised by R3 for 9.9.9.1/32. In the same way R2 observed label 17 on the packet in the next traceroute iteration - R3 switched label 17 to label 17, as explained above. R1 received the packet without labels, because R2 did penultimate hop popping, as explained above.
Drawbacks of using traceroute in MPLS network
Label switching ICMP errors
One of the drawbacks of using traceroute in MPLS networks is the way MPLS handles the ICMP errors it produces. In IP networks ICMP errors are simply routed back to the source of the packet that caused the error. In an MPLS network it is possible that the router that produces the error message does not even have a route to the source of the IP packet (for example, in the case of asymmetric label switching paths or some kind of MPLS tunneling, e.g. to transport MPLS VPN traffic).
Because of this, produced ICMP errors are not routed back to the source of the packet that caused the error, but switched further along the label switching path, with the assumption that when the label switching path endpoint receives the ICMP error, it will know how to properly route it back to the source.
This means that traceroute in an MPLS network cannot be used the same way as in an IP network - to determine the failure point in the network. If the label switched path is broken anywhere in the middle, no ICMP replies will come back, because they never make it to the far endpoint of the label switching path.
Penultimate hop popping and traceroute source address
A thorough understanding of penultimate hop popping behaviour and routing is necessary to understand and avoid the problems that penultimate hop popping causes for traceroute.
In the example setup, a regular traceroute from R5 to R1 (using the default source address) gets no replies from the first hop (R3), compared to a traceroute that uses the R5 loopback address as the source:
[admin@R5] > /tool traceroute 9.9.9.1 src-address=9.9.9.5
ADDRESS STATUS
1 4.4.4.3 15ms 5ms 5ms
mpls-label=17
2 2.2.2.2 5ms 3ms 6ms
mpls-label=17
3 9.9.9.1 6ms 3ms 3ms
The reason why the first traceroute gets no response from R3 is that by default traceroute on R5 uses source address 4.4.4.5 for its probes, because 4.4.4.5 is the preferred source address of the route over which the next hop to 9.9.9.1/32 is reachable.
When the first traceroute probe is transmitted (source 4.4.4.5, destination 9.9.9.1), R3 drops it and produces an ICMP error message (source 4.4.4.3, destination 4.4.4.5) that is switched all the way to R1. R1 then sends the ICMP error back - it gets switched along the label switching path towards 4.4.4.5.
R2 is the penultimate hop popping router for network 4.4.4.0/24, because 4.4.4.0/24 is directly connected to R3. Therefore R2 removes the last label and sends the ICMP error to R3 unlabelled.
R3 drops the received IP packet, because its source address is R3's own address. The ICMP errors produced by the following probes come back correctly, because R3 receives unlabelled packets with source addresses 2.2.2.2 and 9.9.9.1, which it has no problem routing.
Specifying the R5 loopback address as the source address for the traceroute probes (src-address=9.9.9.5), as in the second example above, avoids this problem.
Configuring VPLS
Configuring VPLS interfaces
A VPLS interface can be considered a tunnel interface just like an EoIP interface. To achieve transparent ethernet segment forwarding between the customer sites, the following tunnels need to be established:
• R1-R5 (customer A)
• R1-R4 (customer A)
• R4-R5 (customer A)
• R1-R5 (customer B)
Note that each tunnel setup involves creating VPLS interfaces on both endpoints of the tunnel. Negotiation of VPLS tunnels is done by the LDP protocol - both endpoints of the tunnel exchange the labels they are going to use for the tunnel. Data forwarding in the tunnel then happens by imposing 2 labels on packets: the tunnel label and the transport label - the label that ensures traffic delivery to the other endpoint of the tunnel.
VPLS tunnels are configured in the /interface vpls menu. The vpls-id parameter identifies the tunnel and must be unique for every tunnel between this router and the remote peer.
• on R1 (see the sketch after this list)
• on R4 (see the sketch after this list)
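The original commands are not preserved; a minimal sketch for the customer A tunnel between R1 and R4, where the interface names and the vpls-id value are illustrative assumptions (the other tunnels follow the same pattern with the respective remote peers):
on R1:
/interface vpls add name=vpls-r1-r4-A vpls-id=10 remote-peer=9.9.9.4 disabled=no
on R4:
/interface vpls add name=vpls-r4-r1-A vpls-id=10 remote-peer=9.9.9.1 disabled=no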
Configuring a VPLS tunnel causes a dynamic LDP neighbor to be created and a "targeted" LDP session to be established. A targeted LDP session is a session established between two routers that are not direct neighbors. After this setup the R1 LDP neighbors are:
[admin@R1] /mpls ldp neighbor> print
Flags: X - disabled, D - dynamic, O - operational, T - sending-targeted-hello, V - vpls
# TRANSPORT LOCAL-TRANSPORT PEER SEND-TARGETED ADDRESSES
0 DO 9.9.9.2 9.9.9.1 9.9.9.2:0 no
1.1.1.2
2.2.2.2
9.9.9.2
1 DOTV 9.9.9.5 9.9.9.1 9.9.9.5:0 yes
4.4.4.5
5.5.5.5
9.9.9.5
2 DOTV 9.9.9.4 9.9.9.1 9.9.9.4:0 yes
3.3.3.4
5.5.5.4
9.9.9.4
Note that labels for IP routes are also exchanged between VPLS peers, although there is little chance any of them will be used. For example, without additional links, R4 will never become the next hop for any route on R1, so the labels learned from R4 are unlikely to ever be used. Still, routers maintain all exchanged labels so that they are ready for immediate use if needed. This default behaviour can be overridden with filtering, which is discussed later.
igp-prefix shows the route that is used to reach the remote endpoint of the tunnel. This means that when forwarding traffic to the remote endpoint of the tunnel this router will impose the transport label - the label distributed by the next hop (shown as igp-nexthop) for the 9.9.9.4/32 route. This can be confirmed on R2:
The tunnel label imposed on packets will be the one assigned by the remote router (R4) for this tunnel. imposed-labels reflects this setup: packets produced by the tunnel will have 2 labels on them: 21 and 24.
Penultimate hop popping effects on VPLS tunnels
Penultimate hop popping of the transport label causes packets to arrive at the VPLS tunnel endpoint with just one label - the tunnel label. This way the VPLS tunnel endpoint has to do just one label lookup to find out what to do with the packet. Transport label behaviour can be observed with the traceroute tool between the tunnel endpoints. For example, traceroute from R1 to R4 looks like this:
The requirement to deliver the packet with the tunnel label to the endpoint of the tunnel explains the configuration advice to use "loopback" IP addresses as tunnel endpoints. If in this case R4 established LDP sessions from its address 3.3.3.4, penultimate hop popping would happen not at R3, but at R2, because R3 has network 3.3.3.0/24 as a connected network (and therefore advertises the implicit null label for it). This would cause R3 (and not R4) to receive the packet with just the tunnel label on it, yielding unpredictable results - the frame would either be dropped, if R3 does not recognize the label, or be forwarded the wrong way.
Another issue arises when VPLS tunnel endpoints are directly connected, as in the case of R4 and R5. There are no transport labels they can use between themselves, because each instructs the other to be the penultimate hop popping router for its tunnel endpoint address. For example on R5:
This causes the VPLS tunnel to use only the tunnel label when sending packets:
Bridging ethernet segments with VPLS
VPLS tunnels provide a virtual ethernet link between routers. To transparently connect two physical ethernet segments, they must be bridged with the VPLS tunnel. In general this is done the same way as with EoIP interfaces - the customer-facing ethernet interface and the VPLS interface are added to the same bridge on R1 and, likewise, on R5 (a sketch of both follows):
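The commands are missing here; a minimal sketch, in which the bridge name, the customer-facing interfaces (assumed ether2 on R1 and ether3 on R5) and the VPLS interface names are illustrative assumptions:
on R1:
/interface bridge add name=bridgeB
/interface bridge port add bridge=bridgeB interface=ether2
/interface bridge port add bridge=bridgeB interface=vpls-r1-r5-B
on R5:
/interface bridge add name=bridgeB
/interface bridge port add bridge=bridgeB interface=ether3
/interface bridge port add bridge=bridgeB interface=vpls-r5-r1-B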
Note that there is no need to run the (R)STP protocol on this bridge, as there are no links between segments B1 and B2 other than the single VPLS tunnel between R1 and R5.
Split horizon bridging
In the example setup there are 3 tunnels set up to connect segments A1, A2 and A3, establishing a so called "full mesh" of tunnels between the involved segments. If bridging without (R)STP was enabled, a traffic loop would occur. There are a few solutions to this:
• enabling (R)STP to eliminate the loop. This approach has a drawback - the (R)STP protocol would disable forwarding through one of the tunnels and keep it just for backup purposes. That way traffic between 2 of the segments would have to go through 2 tunnels, making the setup inefficient
• using the bridge firewall to make sure that traffic does not get looped - this involves firewall rule setup, making bridging less efficient
• using the bridge horizon feature
The basic idea of split horizon bridging is that traffic arriving over some port is never sent out over a certain set of ports. For VPLS purposes this means never sending a packet that arrived over one VPLS tunnel out over another VPLS tunnel, as it is known in advance that the sender of the packet has a connection to the target network itself.
For example, if a device in A1 sent a packet to a broadcast or unknown MAC address (which causes bridges to flood all interfaces), it would get sent over the VPLS tunnels to both R5 and R4. In a regular setup R5, when receiving such a packet over the VPLS tunnel, would send it into the A2 segment connected to it and also over the VPLS tunnel to R4. This way R4 would get 2 copies of the same packet, and traffic would keep looping.
The bridge horizon feature allows bridge ports to be configured with a horizon setting so that a packet received over a port with horizon value X is not forwarded or flooded to any other port with the same horizon value X. So in the case of a full mesh of VPLS tunnels, each router must be configured with the same horizon value on the VPLS tunnels that are bridged together.
For example, configuration commands for R1 to enable bridging for customer A are:
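The exact commands are not preserved; a minimal sketch, in which the bridge name, the customer A facing interface (assumed ether1) and the VPLS interface names are illustrative assumptions - note that only the VPLS ports are given the horizon value:
/interface bridge add name=bridgeA
/interface bridge port add bridge=bridgeA interface=ether1
/interface bridge port add bridge=bridgeA interface=vpls-r1-r4-A horizon=1
/interface bridge port add bridge=bridgeA interface=vpls-r1-r5-A horizon=1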
The bridge should be configured in a similar way on R4 and R5. Note that the physical ethernet port is not configured with a horizon value. If it was, the bridge would not forward data at all.
Also note that the horizon value has only local meaning - it does not get transmitted over the network, so it does not matter whether the same value is used on all routers participating in the bridged network.
Optimizing label distribution
During the implementation of the given example setup it has become clear that not all label bindings are necessary. For example, there is no need to exchange IP route label bindings between R1 and R5 or between R1 and R4, as there is no chance they will ever be used. Also, if the given network core provides connectivity only for the mentioned customer ethernet segments, there is no real use in distributing labels for the networks that interconnect the routers - the only routes that matter are the /32 routes to the endpoints of the VPLS tunnels.
Label binding filtering
Label binding filtering can be used to distribute only specified sets of labels, reducing resource usage and network load. There are 2 kinds of label binding filters:
• which label bindings should be advertised to LDP neighbors, configured in /mpls ldp advertise-filter
• which label bindings should be accepted from LDP neighbors, configured in /mpls ldp accept-filter
Filters are organized in an ordered list; each entry specifies a prefix (which must include the prefix that is tested against the filter) and a neighbor (or a wildcard).
In the given example setup all routers can be configured so that they advertise labels only for the routes needed to reach the tunnel endpoints. For this, 2 advertise filters need to be configured on all routers (sketched below):
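The filter entries themselves are not preserved here; a sketch of the likely rules in /mpls ldp advertise-filter form (the exact parameter names should be treated as assumptions):
/mpls ldp advertise-filter add prefix=9.9.9.0/24 advertise=yes
/mpls ldp advertise-filter add prefix=0.0.0.0/0 advertise=no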
These filters cause the routers to advertise only bindings for routes covered by the 9.9.9.0/24 prefix, which includes the tunnel endpoints (9.9.9.1/32, 9.9.9.4/32, 9.9.9.5/32). The second rule is necessary because the default filter result, when no rule matches, is to allow the action in question.
In the given setup there is no need to set up an accept filter, because with the convention introduced by the 2 abovementioned rules no LDP router will distribute unnecessary bindings.
Note that filter changes do not affect existing mappings, so for the filters to take effect the connections between neighbors need to be reset. This can be done by removing them:
[admin@R1] /mpls ldp neighbor> print
Flags: X - disabled, D - dynamic, O - operational, T - sending-targeted-hello, V - vpls
# TRANSPORT LOCAL-TRANSPORT PEER SEND-TARGETED ADDRESSES
0 DO 9.9.9.2 9.9.9.1 9.9.9.2:0 no
1.1.1.2
2.2.2.2
9.9.9.2
1 DOTV 9.9.9.5 9.9.9.1 9.9.9.5:0 yes
4.4.4.5
5.5.5.5
9.9.9.5
2 DOTV 9.9.9.4 9.9.9.1 9.9.9.4:0 yes
3.3.3.4
5.5.5.4
9.9.9.4
[admin@R1] /mpls ldp neighbor> remove [find]
There are still unnecessary bindings - this time the bindings distributed over the targeted LDP sessions established with the remote endpoints of the VPLS tunnels (bindings from 9.9.9.5 and 9.9.9.4). To filter these out, we configure the routers not to distribute any IP bindings to any of the tunnel endpoint routers. For example, on R1 the filter table should look like this:
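The table itself is not shown; a sketch of what the R1 advertise-filter list would contain, with the per-neighbor deny rules placed before the general rules (parameter names assumed as above):
/mpls ldp advertise-filter add prefix=0.0.0.0/0 neighbor=9.9.9.4 advertise=no
/mpls ldp advertise-filter add prefix=0.0.0.0/0 neighbor=9.9.9.5 advertise=no
/mpls ldp advertise-filter add prefix=9.9.9.0/24 advertise=yes
/mpls ldp advertise-filter add prefix=0.0.0.0/0 advertise=no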
This causes routers to have minimal label binding tables, for example on R1:
Note that IP binding distribution should not be disabled between R4 and R5, although they are tunnel endpoints. Doing so would not harm the regular case, because R4 and R5 do not need IP bindings from each other to forward VPLS tunnel data, but if the link between R3 and R5 went down, all traffic from R1 to R5 would have to be rerouted through R4. In that case, R5 not distributing IP bindings to R4 would leave R4 unable to forward MPLS traffic to R5.
Effects of label binding filtering on data forwarding in network
Note the traceroute results after these changes. A traceroute from R1 to R5 using the R1 loopback address as the source address still behaves the same - each hop reports the received labels:
A traceroute to one of R5's interface addresses, on the other hand, involves no label switching at all, since labels are no longer distributed for those networks, and it works just like in a network without MPLS.
See also
• BGP Based VPLS
• EXP_bit_behaviour
• 1 Introduction
• 2 Lab Setup
o 2.1 Network Diagram
o 2.2 Router Setup
2.2.1 Loopback Interface
2.2.2 IP Addressing
2.2.3 Dynamic Routing Setup
Introduction
This page is an attempt to put together a lab setup for the testing of MPLS / VPLS as well as
Traffic Engineering. This is not an attempt to explain how MPLS works, rather it is to promote
discussion around the operation of MPLS. Before working through this lab you should first
familiarize yourself with the concepts in this WIKI article
http://wiki.mikrotik.com/wiki/MPLSVPLS as most of the setup has been based around those
concepts. As my understanding of MPLS is also rather limited please feel free to edit and correct
where required. If you want the original network diagram (in Visio format) please email me on
david [at] mikrotiksa dot com. I can also export to some other formats. There is also a discussion
on the forum about this wiki. Please check for updates.
Lab Setup
Network Diagram
The setup was created using 6 RB532's, but anything with 3 network interfaces and 32MB of memory should be able to do the job. P1 - P3 are the Provider (MPLS backbone) routers. PE1 - PE3 are the Provider Edge routers, which do the label popping.
Router Setup
Loopback Interface
Each router is set up with a loopback adapter lobridge which holds the loopback address. From http://wiki.mikrotik.com/wiki/MPLSVPLS we can see this serves 2 purposes:
• as there is only one LDP session between any 2 routers, no matter how many links
connect them, loopback IP address ensures that the LDP session is not affected by
interface state or address changes
• use of loopback address as LDP transport address ensures proper penultimate hop
popping behavior when multiple labels are attached to packet as in case of VPLS
P1
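The commands are not preserved in this copy; a minimal sketch, assuming the bridge interface name lobridge mentioned above and the loopback address from the diagram:
/interface bridge add name=lobridge
/ip address add address=10.255.255.1/32 interface=lobridge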
The other routers are set up with 10.255.255.2 - 10.255.255.6 as per the diagram above, repeating the same pair of commands with the respective address:
P2
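The same assumed pattern with P2's loopback address:
/interface bridge add name=lobridge
/ip address add address=10.255.255.2/32 interface=lobridge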
P3
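The same assumed pattern with P3's loopback address:
/interface bridge add name=lobridge
/ip address add address=10.255.255.3/32 interface=lobridge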
PE1
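The same assumed pattern with PE1's loopback address:
/interface bridge add name=lobridge
/ip address add address=10.255.255.4/32 interface=lobridge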
PE2
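The same assumed pattern with PE2's loopback address:
/interface bridge add name=lobridge
/ip address add address=10.255.255.5/32 interface=lobridge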
PE3
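The same assumed pattern with PE3's loopback address:
/interface bridge add name=lobridge
/ip address add address=10.255.255.6/32 interface=lobridge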
IP Addressing
We then set up the links between the core routers and the core-edge routers as per the diagram:
P1
/ip address
add address=10.0.255.1/30 interface=ether1
add address=10.0.255.5/30 interface=ether2
add address=10.1.0.254/24 interface=ether3
P2
/ip address
add address=10.0.255.6/30 interface=ether1
add address=10.0.255.9/30 interface=ether2
add address=10.2.0.254/24 interface=ether3
P3
/ip address
add address=10.0.255.10/30 interface=ether1
add address=10.0.255.2/30 interface=ether2
add address=10.3.0.254/24 interface=ether3
PE1
/ip address
add address=10.1.0.1/24 interface=ether1
PE2
/ip address
add address=10.2.0.1/24 interface=ether1
PE3
/ip address
add address=10.3.0.1/24 interface=ether1
Dynamic Routing Setup
OSPF is used to distribute routes between the routers:
P1
/routing ospf
set distribute-default=never redistribute-connected=as-type-1 router-id=10.255.255.1
/routing ospf network
add area=backbone network=10.0.255.0/30
add area=backbone network=10.0.255.4/30
add area=backbone network=10.1.0.0/24
P2
/routing ospf
set distribute-default=never redistribute-connected=as-type-1 router-id=10.255.255.2
/routing ospf network
add area=backbone network=10.0.255.8/30
add area=backbone network=10.0.255.4/30
add area=backbone network=10.2.0.0/24
P3
/routing ospf
set distribute-default=never redistribute-connected=as-type-1 router-id=10.255.255.3
/routing ospf network
add area=backbone network=10.0.255.0/30
add area=backbone network=10.0.255.8/30
add area=backbone network=10.3.0.0/24
PE1
/routing ospf
set distribute-default=never redistribute-connected=as-type-1 router-id=10.255.255.4
/routing ospf network
add area=backbone network=10.1.0.0/24
PE2
/routing ospf
set distribute-default=never redistribute-connected=as-type-1 router-id=10.255.255.5
/routing ospf network
add area=backbone network=10.2.0.0/24
PE3
/routing ospf
set distribute-default=never redistribute-connected=as-type-1 router-id=10.255.255.6
/routing ospf network
add area=backbone network=10.3.0.0/24
MPLS Setup
The next step is to add and configure the MPLS system. In order to distribute labels for routes,
LDP needs to be enabled. Then all interfaces that participate in MPLS need to be added.
P1
/mpls ldp
set enabled=yes lsr-id=10.255.255.1 transport-address=10.255.255.1
/mpls ldp interface
add interface=ether1
add interface=ether2
add interface=ether3
P2
/mpls ldp
set enabled=yes lsr-id=10.255.255.2 transport-address=10.255.255.2
/mpls ldp interface
add interface=ether1
add interface=ether2
add interface=ether3
P3
/mpls ldp
set enabled=yes lsr-id=10.255.255.3 transport-address=10.255.255.3
/mpls ldp interface
add interface=ether1
add interface=ether2
add interface=ether3
PE1
/mpls ldp
set enabled=yes lsr-id=10.255.255.4 transport-address=10.255.255.4
/mpls ldp interface
add interface=ether1
PE2
/mpls ldp
set enabled=yes lsr-id=10.255.255.5 transport-address=10.255.255.5
/mpls ldp interface
add interface=ether1
PE3
/mpls ldp
set enabled=yes lsr-id=10.255.255.6 transport-address=10.255.255.6
/mpls ldp interface
add interface=ether1