Cisco Press
Hoboken, New Jersey
Published by:
Cisco Press
All rights reserved. This publication is protected by copyright, and permission must be obtained from the
publisher prior to any prohibited reproduction, storage in a retrieval system, or transmission in any form
or by any means, electronic, mechanical, photocopying, recording, or likewise. For information regarding
permissions, request forms, and the appropriate contacts within the Pearson Education Global Rights &
Permissions Department, please visit www.pearson.com/global-permission-granting.html.
No patent liability is assumed with respect to the use of the information contained herein. Although
every precaution has been taken in the preparation of this book, the publisher and author assume no
responsibility for errors or omissions. Nor is any liability assumed for damages resulting from the use of
the information contained herein.
Library of Congress Control Number: 2024945941
ISBN-13: 978-0-13-823093-7
ISBN-10: 0-13-823093-5
The information is provided on an “as is” basis. The authors, Cisco Press, and Cisco Systems, Inc. shall
have neither liability nor responsibility to any person or entity with respect to any loss or damages
arising from the information contained in this book or from the use of the discs or programs that may
accompany it.
The opinions expressed in this book belong to the authors and are not necessarily those of
Cisco Systems, Inc.
Trademark Acknowledgments
All terms mentioned in this book that are known to be trademarks or service marks have been
appropriately capitalized. Cisco Press or Cisco Systems, Inc., cannot attest to the accuracy of this
information. Use of a term in this book should not be regarded as affecting the validity of any trademark
or service mark.
Feedback Information
At Cisco Press, our goal is to create in-depth technical books of the highest quality and value. Each book
is crafted with care and precision, undergoing rigorous development that involves the unique expertise of
members from the professional technical community.
Readers’ feedback is a natural continuation of this process. If you have any comments regarding how we
could improve the quality of this book, or otherwise alter it to better suit your needs, you can contact us
through email at [email protected]. Please make sure to include the book title and ISBN in your
message.
GM K12, Early Career and Professional Learning: Soo Kang
Technical Editors: Jakub Horn, Christian Schmutzer, Luc André Burdet, Johan Gustawsson, Bram van der Zwet
Alliances Manager, Cisco Press: Caroline Antonio
Editorial Assistant: Cindy Teeters
Director, ITP Product Management: Brett Bartow
Designer: Chuti Prasertsith
Managing Editor: Sandra Schroeder
Composition: codeMantra
Development Editor: Ellie C. Bru
Indexer: Timothy Wright
Senior Project Editor: Mandie Frank
Proofreader: Barbara Mack
Copy Editor: Kitty Wilson
Cisco has more than 200 offices worldwide. Addresses, phone numbers, and fax numbers are listed on the Cisco Website at www.cisco.com/go/offices.
Cisco and the Cisco logo are trademarks or registered trademarks of Cisco and/or its affiliates in the U.S. and other countries. To view a list of Cisco trademarks,
go to this URL: www.cisco.com/go/trademarks. Third party trademarks mentioned are the property of their respective owners. The use of the word partner does
not imply a partnership relationship between Cisco and any other company. (1110R)
Florian Deragisch, CCIE #47970, is a Technical Leader, working with large service
provider and carrier-grade enterprise customers. He joined Cisco in 2012 as part of a
graduate program, where he discovered his passion for service provider designs and tech-
nologies. After gaining extensive exposure to MPLS-based networks and services, he
embraced the evolution toward segment routing with his first SR-MPLS deployment in
2018. More recently, he has focused on the migration and deployment of L2VPN/L3VPN
SRv6 services to build simple and highly scalable network architectures. He holds a mas-
ter’s degree in electrical engineering and information technology from the Swiss Federal
Institute of Technology in Zurich and a Cisco Internetwork Expert certification (CCIE
#47970). When not busy with work, he enjoys traveling to explore new places, cultures,
and food.
Leonir Hoxha, CCIE #49534, has been with Cisco Systems since 2013, taking on
various roles on the Professional Services team and later on the Pre-sales team—from
troubleshooting to designing and implementing large-scale networks with a focus on
service provider technologies, specifically MPLS services. In his current role as a
Solutions Architect, he supports service providers and enterprise customers by under-
standing their requirements and providing cutting-edge solutions. An active speaker at
Cisco Live conferences, he has delivered numerous sessions on segment routing across
Europe, the United States, and Australia. He holds a bachelor’s degree in computer
science and a Cisco Internetwork Expert certification (CCIE #49534). In his free time,
he enjoys electronic music, a nod to his first job as a DJ during his teenage years.
Rene Minder, CCIE #8003, is a Senior Program Advisor and Solution Architect with
over 25 years of experience in the IT industry. He has been responsible for architecture
and delivery in more than 70 customer engagements, evolving their networks and manage-
ment infrastructures as well as the processes for developing, testing, and deploying them.
He has led end-to-end IT architecture projects encompassing everything from portals
offering service self-administration capabilities, to IT applications that automatically
configure and test changes, to invoicing. His efforts have led to significant improvements
in customer satisfaction, operational efficiency, and overall agility. He holds Lifetime
Emeritus status for his Cisco Internetwork Expert certification (CCIE #8003).
Matthys “Thys” Rabe, CCIE #4237, a Lifetime Emeritus Cisco Certified Internetwork
Expert (CCIE #4237), is a Technical Leader at Cisco Systems and holds a diploma in
electrical engineering (telecommunications). With more than 25 years of experience in IP
and MPLS operations with various service providers in South Africa and Switzerland, he
has spent the past 10 years as a Technical Support Engineer focused on Swisscom. Prior
to working at Cisco Systems, he was part of the Core IP Engineering team with a Swiss-
based mobile provider. When he’s not working, he enjoys fishing with his brothers in
various southern African countries.
Kateel Vijayananda is a Solutions Architect at Cisco Systems and has more than 30 years
of experience in the networking industry. His expertise includes IPv6, IP services, design
of large-scale networks for enterprise and service provider customers, and QoS assurance
in IP networks. He has been with Cisco Systems since 2001, involved in several projects
for service providers to deploy IP-based services using MPLS and segment routing.
Before joining Cisco, he worked at Swisscom, a service provider in Switzerland, where
he was responsible for developing MPLS VPN services. He is the co-author of the book
Developing IP-Based Services: Solutions for Service Providers and Vendors. He
holds a master’s degree in computer science from the University of Maryland at College
Park and a PhD in computer science from the Swiss Federal Institute of Technology at
Lausanne (EPFL). In his spare time, he enjoys traveling and cooking.
Christian Schmutzer is a Distinguished Engineer at Cisco Systems and has been with the
company since 1998. Early in his career, he primarily worked on the design and deploy-
ment of large service provider backbones. He has been part of a business unit since 2005,
serving as a technical subject matter expert for market-leading routing platforms such as
the Cisco 7600 and ASR 9000. Since 2013, he has focused on packet/optical network
architectures, future product definition, technology innovation, and leading customer
deployments. He is the holder of several patents and the author of a series of IETF
standards documents.
Bram van der Zwet is the Lead Architect for Network & Infrastructure at Swisscom, where
he has been shaping the network architecture and technical strategy for Swisscom’s IP and
optical networks. His responsibilities extend to overseeing the physical infrastructure from
Swisscom’s IT and data centers down to the central offices in regional networks. He holds a
degree from Delft University of Technology and is based in Bern, Switzerland. With more
than 25 years of experience at Swisscom and a history of strategic roles driving innovation
and excellence, he has become a key figure in the telecommunications industry.
Johan Gustawsson is a Senior Director within Cisco Data Center and Service Provider,
focusing on driving the direction and strategy for routing and architectures. He has spent
his entire career operating and building mass-scale networks, pioneering and driving
market disruptions across routing and optical domains. Prior to joining Cisco, Johan was
the Head of Network Architecture, Strategy, and Engineering at Arelion (formerly Telia
Carrier), leading a globally distributed organization at the world’s number-one-ranked
Internet backbone. Johan holds a degree in Engineering from the KTH Royal Institute of
Technology in Stockholm.
Luc André Burdet is a Senior Technical Leader in Engineering at Cisco, where he has
been instrumental in driving innovation and strategic initiatives since May 2012. With
more than 12 years of experience at Cisco, he focuses on advancing the company’s engi-
neering capabilities and leading key technical projects. He holds a master’s degree from
ETH Zürich and is based in Ottawa, Ontario. Luc André’s technical expertise and leader-
ship have established him as a pivotal figure in the networking industry, significantly
contributing to Cisco’s engineering excellence.
Acknowledgments
First and foremost, we would like to thank our main reviewers, Jakub Horn and Christian
Schmutzer, for their meticulous reviews and invaluable feedback. Their dedication and
attention to detail have significantly enhanced the quality of this book.
We also extend our thanks to Luc André Burdet for his expertise in the chapter focused
on Layer 2 VPN technologies, and to Bram van der Zwet and Johan Gustawsson for
reviewing the chapters on business opportunities and organizational considerations. Your
feedback has been instrumental in ensuring the accuracy and relevance of the information
presented.
Special thanks to Marcel Witmer for all the support around PLE and integrated visibility
and to Christian Schmutzer for his solid insight and input on PLE. We also appreciate
Kaela Loffler and Ramiro Nobre for providing an overview on how micro-drops can
influence overall service performance. Similarly, we would like to express our gratitude
to Carmine Scarpitta and Ahmed Abdelsalam for their guidance on FRRouting’s SRv6
implementation.
The authors had the pleasure of collaborating with Swisscom, a leading service provider
based in Switzerland, on several aspects covered in this book. The insights gained from
Swisscom’s exposure to engineering, migrations, and operations have enriched the
content, providing field perspectives that are invaluable for readers.
This book wouldn’t have been possible without the support of many people on the Cisco
Press team. Brett Bartow, Product Line Manager of the Pearson IT Professional Group,
was instrumental in sponsoring the book and driving it to execution. Sandra Schroeder,
Managing Editor, was masterful with book graphics. Ellie Bru, Development Editor, has
done a wonderful job in the technical review cycle; it has been a pleasure working with
you. Mandie Frank, Senior Project Editor, thank you for leading the book to success
through the production cycle. Kitty Wilson, Copy Editor, thank you for polishing up the
book and making the content shine. Also, many thanks to the numerous Cisco Press
unknown soldiers working behind the scenes to make this book happen.
We would like to express our deepest gratitude to our Cisco management for supporting
and encouraging us in creating this book. Thank you to everyone who has contributed to
this book. Your support and expertise have made this project possible.
Finally, we would like to extend our heartfelt thanks to our families. Your unwavering
support, patience, and understanding have been our pillars of strength throughout the
writing process. The countless hours spent away from you to work on this book have not
gone unnoticed, and we are deeply grateful for your encouragement and understanding.
Contents at a Glance
Introduction xx
Part I Introduction
Chapter 1 MPLS in a Nutshell 1
Index 1115
Online Element:
Reader Services
Register your copy at www.ciscopress.com/title/ISBN for convenient access to
downloads, updates, and corrections as they become available. To start the registration
process, go to ciscopress.com/register and log in or create an account*. Enter the product
ISBN 9780138230937 and click Submit. When the process is complete, you will find
any available bonus content under Registered Products.
*Be sure to check the box that you would like to hear from us to receive exclusive
discounts on future editions of this product.
For access to any available bonus content associated with this title, visit ciscopress.com/sr,
sign in or create a new account, and register ISBN 9780138230937 by December 31, 2027.
Contents
Introduction xx
Part I Introduction
Scale 1007
Routed Optical Networks 1008
Benefit 1: Simplified Long-Distance Connectivity 1008
Benefit 2: Easier and Cost-Effective Scaling 1009
Benefit 3: Simplified Redundancy 1009
Private Line Emulation 1011
Integrated Visibility 1017
Intent-Driven Configuration of Visibility Features 1018
Intent-/Model-Based Assurance 1019
High-Precision Probing 1020
Path Tracing 1023
New Hardware Generation 1025
CapEx Savings 1026
OpEx Savings 1030
Business Case Guidance 1032
Summary 1039
References and Additional Reading 1040
Index 1115
Online Element:
VPP Ubuntu
■■ Boldface indicates commands and keywords that are entered literally as shown.
In actual output (not general command syntax), boldface indicates commands that
are manually input by the user (such as a show command).
Introduction
Welcome to the future of MPLS and the realm of advanced networking technologies,
where efficiency, scalability, and reliability are paramount. This book is your gateway to
mastering segment routing (SR), a revolutionary technology that transforms IP data trans-
port and network operations. From the foundational principles of MPLS to state-of-the-
art implementations of SR over MPLS (SR-MPLS) and SR over IPv6 (SRv6), this book
offers a comprehensive guide that bridges the gap between theory and practice.
The chapters cover the entire spectrum of SR, providing a holistic understanding of
the technology. They feature practical examples for SR on both IOS XR and IOS XE
platforms, ensuring that you have the knowledge to implement SR in different network
environments.
This book also goes beyond technical details. It delves into the business opportunities
and organizational implications of adopting SR, offering valuable insights into how SR
can drive growth, improve customer experience, and streamline operations. Dedicated
sections on the SRv6 ecosystem in data centers and cloud environments showcasing
network functions virtualization (NFV) prepare you for the next wave of networking
innovations.
Available online content enables you to gain hands-on practice and reinforce the theories
covered. A business case template provides a tool to justify investments in SR
technologies and calculate potential returns.
By the end of this book, you’ll be equipped with the knowledge and tools to implement
and manage SR technologies effectively, helping you stay ahead in this ever-evolving field.
Two reference lab topologies, one for SR-MPLS and one for SRv6, are consistently
referenced throughout the technical chapters. To make the learning process interactive
and engaging, the downloadable lab support material enables you to set up your own lab
so you can replicate and apply the described theory.
■■ Chapter 3, “What Is Segment Routing over IPv6 (SRv6)?”: This chapter explores
SRv6 in the control plane and data plane and provides an overview of the evolution
and simplification of SR-driven networks.
■■ Chapter 8, “Service Assurance”: This chapter presents procedures and processes for
improving customer experience and satisfaction. It includes a discussion of tools and
protocols for SLA monitoring and fault management in the transport network layer,
as well as in the L2VPN and L3VPN service overlays.
■■ Chapter 12, “SRv6 Ecosystem Deployment Use Cases”: This online-only chapter
discusses the potential of SRv6 in data centers and cloud environments. It provides
examples and interoperability scenarios involving open-source software.
Downloadable Content
Readers can access downloadable content using the companion website as per the
instructions below:
1. The user enters ciscopress.com/sr in their browser or clicks the hyperlink in the online
book version.
2. The user signs in or creates a new account.
3. The user confirms the prepopulated ISBN of the book and answers a proof-of-purchase
challenge question to access the additional content.
4. The user clicks on the desired attachment.
The following attachments can be downloaded for use with this book:
■■ SRv6-Linux-Lab.zip: These files, which are meant to be used with Chapter 12,
include a container-based lab topology definition and initialization script required to
spin up the SRv6 Linux lab topology.
■■ SRv6-VPP-Lab.zip: These files, which are meant to be used with Chapter 12,
include a bash script to spin up the SRv6 VPP topology, VPP instances, and startup
configurations.
■■ SRv6-Interop-Lab.zip: These files, which are meant to be used with Chapter 12,
include a Cisco CML topology definition and a running configuration of PE1
(IOS XR), P (IOS XE), and PE3 (IOS XE), as well as FRR settings and configuration
for PE2 (FRR).
Note This book contains references to the companion website in later chapters that
leverage the previously listed downloadable content.
Chapter 1
MPLS in a Nutshell
In the late 1990s, the first implementations of MPLS marked a significant advancement,
making it possible to attach tags (labels) to data packets and route them through a chain of
nodes. MPLS was revolutionary because it enabled fixed-length lookups on a 4-byte
label header, making packet forwarding much more efficient than the variable-prefix-
length IP lookups used in traditional IP routing. Initially, MPLS implementations used the
Tag Distribution Protocol (TDP) to distribute label bindings; they later evolved to use the
Label Distribution Protocol (LDP) for more sophisticated label management. MPLS can
operate on a BGP-free core, significantly reducing the memory requirements for core routers. A BGP-free
core means only the provider edge (PE) nodes use BGP, and the core (P) nodes are BGP
free. The use of MPLS labels for forwarding enabled a wide variety of data types to be
carried across a single network infrastructure. These innovations made MPLS a powerful
tool for optimizing network performance and scalability.
With traditional MPLS technology, LDP labels need to follow whatever the interior
gateway protocol (IGP) chooses as the best path. The advantage of MPLS is that no IP
lookup is conducted on the transit nodes (core nodes); instead, only label-based forward-
ing is performed. Because a router does not have to do an IP lookup in the routing
table but instead forwards packets based on the label itself, many new possibilities opened
up in how packets are processed, resulting in higher router performance and, most
significantly, the ability to offer differentiated services on top of the MPLS-based
network.
MPLS is a shared network infrastructure used for a wide range of services. With it, there
is no need for distinct physical network infrastructure to serve different services for end
customers. Providing various services and solutions using a shared network infrastructure
lowers the overall cost for network operators. Network operators can provide multiple
services over MPLS networks for their end customers, making deployment of these ser-
vices technically independent from the MPLS layer. MPLS makes it possible to offer
services such as Internet access for private and public customers as well as virtual
private network (VPN) services. From a high-level point of view, VPN
services can be classified into the following categories:
L3VPN services offer an efficient way to provide any-to-any mesh connectivity at the
IP address level. Each customer edge (CE) device attaches to a provider edge (PE) node,
establishing IP-based neighborship through an IGP, Border Gateway Protocol (BGP), or
static configuration. PE nodes connect to multiple CE devices and maintain separate
routing tables for each customer using virtual routing and forwarding (VRF) instances.
VRF instances ensure local routing table separation, while L3VPN enables deterministic
connectivity among different VRF instances.
In an L3VPN setup, each PE node creates a unique VRF instance for every customer,
ensuring isolation of each customer’s routing information. The PE nodes receive IP pre-
fixes from their connected CE devices and use Multiprotocol BGP (MP-BGP) to distrib-
ute these prefixes to other PE devices in the network. MP-BGP extends BGP to support
multiple address families, enabling the PE nodes to handle both IPv4 and IPv6 prefixes as
well as VPN-IPv4 and VPN-IPv6 prefixes.
The process begins when a CE router advertises its routes to the connected PE router.
The PE router assigns a VPN label to each route and includes this label when advertising
the route to other PE routers using MP-BGP. When a remote PE router receives these
routes, it updates its VRF tables accordingly and distributes the prefixes to the appropri-
ate CE devices.
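As a rough mental model (not an excerpt from this book or any platform's implementation), this exchange can be reduced to two steps: the ingress PE builds a VPNv4 route by combining the RD with the customer prefix, attaching a locally allocated VPN label and its own loopback as the next hop, and the receiving PE imports that route into the matching VRF table. The following short Python sketch illustrates the idea; the RD, prefix, label, and next-hop values are invented for illustration.

# Illustrative only: a VPNv4 route advertised by an ingress PE and imported
# into a VRF table on a remote PE. All values are made up for this example.

def advertise_vpnv4(rd, prefix, vpn_label, next_hop):
    # Ingress PE: combine RD and prefix, attach the VPN (service) label and next hop.
    return {"vpnv4_prefix": f"{rd}:{prefix}",
            "vpn_label": vpn_label,
            "next_hop": next_hop}

def import_into_vrf(vrf_table, route):
    # Remote PE: install the received route into the customer's VRF table.
    vrf_table[route["vpnv4_prefix"]] = (route["next_hop"], route["vpn_label"])

vrf_customer_a = {}
route = advertise_vpnv4(rd="65000:100", prefix="192.168.1.0/24",
                        vpn_label=150, next_hop="10.0.10.1")
import_into_vrf(vrf_customer_a, route)
print(vrf_customer_a)
# {'65000:100:192.168.1.0/24': ('10.0.10.1', 150)}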
L3VPN leverages MPLS to ensure that data packets follow the predetermined paths
across the provider’s network, ensuring quality of service (QoS) and optimizing resource
utilization. MPLS labels are used to steer the packets through the provider network,
enabling features such as traffic engineering and fast rerouting in the event of node or
link failure.
VRF instances also enable the use of overlapping IP address spaces from different cus-
tomers, eliminating the risk of IP address conflicts. This capability is beneficial for net-
work operators hosting multiple customers with their own private IP address spaces.
L2VPN services extend Layer 2 connectivity across an MPLS network, enabling enter-
prises to link dispersed sites as if they were on the same LAN. Each CE device connects
to a PE node, establishing point-to-point or multipoint Layer 2 connections to remote CE
devices.
mVPN with mLDP, as detailed in RFC 6513, optimizes multicast data distribution in
MPLS networks. Between the PE and CE routers, protocols such as Protocol Independent
Multicast (PIM) and Internet Group Management Protocol (IGMP) are used. Within the
MPLS core, mLDP establishes multicast distribution trees. These trees, constructed as
point-to-multipoint (P2MP) and multipoint-to-multipoint (MP2MP) label-switched paths
(LSPs), enable efficient multicast traffic delivery. mLDP extends LDP to dynamically sig-
nal and create these LSPs, forming a scalable multicast routing framework. This approach
eliminates the need for traditional GRE tunnels. By leveraging MPLS infrastructure, net-
work operators can deliver multicast services within an MPLS VPN. mLDP simplifies the
control plane by utilizing existing LDP mechanisms, seamlessly integrating with current
MPLS deployments, and provides the capability to support large-scale multicast deploy-
ments across MPLS networks.
Apart from the VPN services, other use cases for MPLS are traffic engineering and traf-
fic protection and restoration. Large service provider and enterprise networks need a
scalable and efficient way to steer and protect traffic with less effort and lower
operational costs. Traffic engineering in MPLS allows network operators to control the
flow of data through the network by optimizing the path that packets take. MPLS Traffic
Engineering (MPLS TE) enables the creation of LSPs that can be dynamically adjusted
based on current network conditions and policies. MPLS supports fast reroute (FRR)
techniques that can switch traffic to a precalculated backup path in the event of a failure.
This is important for maintaining service continuity, especially in networks that sup-
port critical services. Chapter 9, “High Availability and Fast Convergence,” is dedicated
to a traffic-protection methodology called Topology-Independent Loop-Free Alternate
(TI-LFA) with segment routing. TI-LFA provides link and node protection in any topol-
ogy, which is not the case with other legacy protection mechanisms.
■■ Reduced cost: MPLS eliminates the necessity to operate dedicated routers and
WAN circuits to suit various customers. It provides a more cost-effective way to
manage and provision network services.
■■ Improved scalability: The P nodes do not use BGP because they are only involved
in label forwarding and do not perform any IP lookups. BGP is used on the PE nodes
to exchange customer routes. Therefore, the core devices remain BGP free.
■■ Improved reliability: MPLS provides several mechanisms for fast rerouting of traffic
in the event of link or node failures, which reduces downtime and improves service
availability.
This introductory chapter provides an overview of MPLS technology. For those with
experience in MPLS, it provides a logical path to understanding segment routing technol-
ogy and Segment Routing over IPv6 (SRv6) throughout coming chapters. For those with-
out experience in MPLS technology, it provides a solid foundation to understand MPLS
fundamentals.
Figure 1-1 shows the device roles and their functions within an MPLS network domain.
[Figure 1-1: Device roles in a service provider MPLS domain — customer CE devices connect to PE routers at the edge of the MPLS domain, while P routers and an RR make up the core, and IP traffic enters and leaves the domain at the PEs]
MPLS networks consist of several types of routers, each with a specific function:
■■ Provider (P) router, also known as label switching router (LSR): P routers are part
of the transport across the MPLS backbone.
■■ All interfaces are MPLS enabled and participate in MPLS label switching.
■■ P node runs an IGP protocol such as OSPF or IS-IS (in the case of segment
routing). With legacy MPLS, the LSR is also LDP enabled for label distribution
and signaling.
■■ P node does not run BGP peering internally or externally (so it has a BGP-free
core), except when the P router also acts as a BGP route reflector (RR).
■■ Backbone routes are part of the global routing table and do not participate in
any VRF.
■■ An LSR does not contain any customer routes, but MPLS-encapsulated customer
traffic is label forwarded on LSRs.
■■ Provider edge (PE) router, also known as label edge router (LER):
■■ Customer routes are part of a VRF instance, and each VRF instance creates its
own routing table separately from other VRF instances.
■■ PE routers peer directly with CE routers via BGP, IGPs, or static routing.
■■ A BGP full mesh between PEs, or peering between PEs and RRs, is mandatory to
advertise and distribute customer routes for end-to-end reachability.
■■ BGP RRs simplify the configuration of BGP peering sessions in large MPLS
networks.
■■ RRs act as central points for BGP route reflection in an MPLS network, reducing
the number of BGP peering sessions required between PE routers.
■■ In an MPLS network, BGP RRs are used to distribute IP routes and VPN routes
among the PE routers.
■■ CE routers are located at the customer sites and are responsible for connecting
a customer’s LAN to the MPLS network. A CE typically has multiple Layer 3
interfaces and is connected to the PE router via a Layer 3 interface. A CE router is
responsible for forwarding customer traffic to the PE router, which then encapsu-
lates it into MPLS packets and adds the appropriate labels before forwarding them
to the P router.
Overall, the roles of PE, P, and CE routers are crucial in the functioning of MPLS net-
works. A PE router connects customer sites to the MPLS network, a P router is respon-
sible for transporting labeled packets across the network, and a CE router is responsible
for connecting the customer’s network to the service provider network. By dividing the
responsibilities of these routers in this way, MPLS is able to efficiently transport traffic
across large-scale networks while still maintaining security and privacy for each
customer site.
MPLS enables the effective exchange of VPN routes using Multiprotocol Border
Gateway Protocol (MP-BGP). Within an MPLS network, the core nodes are BGP free,
whereas the PE nodes have BGP deployed, allowing for end-to-end customer routes trans-
port. The provider network PEs can use MP-BGP to communicate and exchange custom-
er routes dynamically, enhancing the scalability of routing and forwarding features of the
underlying network infrastructure. Because MPLS operates on a BGP-free core network,
BGP RRs are essential to support a large number of iBGP peerings between PE devices.
An RR reduces the iBGP peerings required for PE full mesh connectivity and IP/VPN/
EVPN/mVPN route propagation over MPLS networks, where PE devices only peer with
the RRs, and no direct peering between PE devices is required.
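The session-count savings are easy to quantify. The following back-of-the-envelope Python calculation (not from the book) compares an iBGP full mesh, which needs n(n-1)/2 sessions for n PE routers, with a single RR, which needs only one session per PE.

# iBGP session count for n PE routers: full mesh versus a single route reflector.
def full_mesh_sessions(n_pe):
    return n_pe * (n_pe - 1) // 2

def rr_sessions(n_pe, n_rr=1):
    return n_pe * n_rr  # each PE peers only with the RR(s)

for n in (10, 100, 500):
    print(n, full_mesh_sessions(n), rr_sessions(n))
# 10 PEs: 45 vs. 10 sessions; 100 PEs: 4950 vs. 100; 500 PEs: 124750 vs. 500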
An MPLS header is a 32-bit value inserted between the Layer 2 and Layer 3 headers of an
IP packet by an ingress LER when the packet enters the MPLS network. The ingress LER
adds a label to the packet to identify a specific LSP for packet forwarding across the net-
work. Each LSR along the path looks up the label to determine the next LSR and swaps
the label with a new one before forwarding the packet. This process continues until the
packet reaches the egress LER. The path from the ingress LER, through the LSRs, to the
egress LER is known as the LSP.
To help you understand the MPLS header, Figure 1-2 provides a detailed overview, and
the following list describes the fields in the header:
■■ MPLS Label Value: This field consists of 20 bits. The label value range in this field
starts from 0 and goes up to the maximum of 1,048,575, hence the limitation of
~1 million MPLS labels. The first 16 label values, from 0 to 15, are reserved for oper-
ational reasons and have a special meaning to an MPLS-enabled node, as specified in
RFC 3032 and RFC 7274.
■■ Experimental (EXP) bits: This field consists of 3 bits. Initially, those bits were not
used but were specified for later use as MPLS development advanced. Today, this
field is used for QoS treatment on a packet.
■■ Bottom of Stack (BoS): This field consists of only 1 bit and has two possible values:
0 or 1. If the value is 0, there is more than one label in the stack, and a node must
continue processing the next label from the stack of labels. If the value is 1, it means
the label is the last one (bottom) in the stack of labels, and the node must do an IP
lookup or L2 Ethernet lookup afterward, but only if there is no additional label, such
as a service label.
■■ Time to Live (TTL): This field provides the same function as a TTL field in an IP
header: It helps avoid endless packet loops. Since intermediate MPLS nodes do not
perform IP lookups, there is no way to check the TTL of an IP packet, so we need
the TTL field in the MPLS header. For each hop a labeled packet travels, the TTL
value decreases by one. The default TTL value is 255.
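Because the header is a fixed 32-bit value, its encoding is easy to reproduce. The following short Python sketch (a generic illustration of the RFC 3032 bit layout, not code from this book) packs and unpacks the four fields just described.

import struct

def encode_mpls_header(label, exp, bos, ttl):
    # Label (20 bits), EXP (3 bits), BoS (1 bit), and TTL (8 bits) packed into 4 bytes.
    value = (label << 12) | (exp << 9) | (bos << 8) | ttl
    return struct.pack("!I", value)

def decode_mpls_header(raw):
    value = struct.unpack("!I", raw)[0]
    return {"label": value >> 12,
            "exp": (value >> 9) & 0x7,
            "bos": (value >> 8) & 0x1,
            "ttl": value & 0xFF}

header = encode_mpls_header(label=150, exp=0, bos=1, ttl=255)
print(decode_mpls_header(header))
# {'label': 150, 'exp': 0, 'bos': 1, 'ttl': 255}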
Packet forwarding using MPLS labels can be as simple as imposing a single label on top
of an incoming packet or as complex as imposing a set of labels, where this set of labels
forms a label stack. A label stack (see RFC 3032) is a container that keeps MPLS
labels in last-in, first-out (LIFO) order. In this case, a top label from the label stack, as
shown in Figure 1-3, is processed first. Depending on the necessary MPLS label operation,
the MPLS-enabled node processing the packet either swaps the top label and forwards the
labeled packet to an outgoing interface without touching the rest of the stack, or a more
complex operation occurs, resulting in either a PUSH (adding an additional label) or a
POP (removing a label) operation.
As shown in Figure 1-3, the top label and any label below it have a value of 0 in the BoS
field, but the bottom label always has the value 1, indicating that it is the last label in the
stack. Once the last label from the stack is processed and popped,
an IP lookup is commonly done as the last step to forward the packet to a next hop.
See the section “MPLS Label Operations,” later in this chapter, for a detailed explanation
of MPLS label operations.
We briefly mentioned the reserved label range (0–15), which has a special meaning for an
MPLS-enabled node. As of this writing, only some of the reserved labels are officially
assigned for any use:
■■ IPv4 explicit null label (0): The advantage of the explicit null label is that MPLS
EXP bits are preserved throughout the MPLS network. When a PE node receives a
labeled packet with an explicit null label, the label stack is stripped off, and IPv4 for-
warding is performed. From a PHP (P) node perspective, it performs the swap opera-
tion, essentially replacing the top label with a null label, and the service label remains
untouched. Figure 1-4 shows an example of the explicit null label.
[Figure 1-4: Explicit null — the PHP node swaps the transport label for the explicit null label, so the labeled packet arrives at the egress PE with the service label and EXP bits still intact]
■■ Router alert label (1): The label value 1 is assigned as the top label or anywhere in
the label stack except as the bottom label. The router alert label does not hold any
forwarding path information, and the primary purpose is to have the node examine
the packet thoroughly before forwarding it to the next hop. A practical use case is an
LSP ping, which intermediate nodes must process up to the destination. Once a node
finishes the packet examination, the next label in the stack is checked for forwarding
information. Before the labeled packet is put on the outgoing interface for forwarding
to the next hop, the RA label (1) is added to the label stack as the top label.
■■ IPv6 explicit null label (2): The label value 2 is assigned to the last MPLS label in the
stack, also known as the bottom label. It signals to the node that the label stack must
be removed, and packet forwarding is now based on the IPv6 header. Much as with
label value 0, the label stack is stripped off when the node receives a labeled packet
with explicit null label value 2.
■■ Implicit null label (3): The label value 3 is assigned by an egress PE device to inform
the directly connected P node to perform a POP operation whenever the P node
forwards traffic to the egress PE device. This method is also called penultimate hop
popping (PHP). Egress PE devices benefit from PHP because the incoming packet
is unlabeled; the egress PE device performs only a simple IP lookup to forward the
packet. Implicit null does not mean the incoming packet is always unlabeled (in the
case of MPLS/VPN); it just means that the top label from the label stack is popped
(removed). The advantage of implicit null is the performance of the PE router, which
does not need to do a label lookup but only a single IP lookup. The disadvantage
of implicit null is mainly related to QoS: If the transport label contains MPLS EXP
bits for QoS, then this information is lost. Figure 1-5 shows an example of an MPLS
VPN packet where the transport label is popped, and the service label remains until
it reaches the egress PE router.
[Figure 1-5: Implicit null (PHP) — the penultimate P node pops the transport label, and the packet reaches the egress PE carrying only the service label]
Note The MPLS reserved labels that are registered with IANA are listed at https://
www.iana.org/assignments/mpls-label-values/mpls-label-values.txt.
The control plane is crucial in establishing and managing a network, performing tasks
such as topology discovery and assigning label values to be used by the network devices.
The control plane of a network device is responsible for managing routing protocols, cal-
culating optimal paths, maintaining routing tables, handling network topology updates,
assigning MPLS labels, enforcing routing policies, and maintaining neighbor relationships.
As shown in Figure 1-6, IGPs are part of the control plane, and network devices running
the same IGP can distribute their IGP database (topology, nodes, links, and prefixes) to
each other so that there is a common view of the network. Next, the network devices
analyze the IGP database, pick all necessary prefixes from it, and populate the Routing
Information Base (RIB) with destination addresses, the outgoing interfaces, and possibly
the next-hop address. In the case of MPLS based on LDP, every routing entry in the RIB
is assigned a label and stored in a separate database called the Label Information Base
(LIB), which is also part of the control plane.
So you have seen that the network devices learn all necessary prefixes (IGP) and labels
(LDP) and store them in their respective databases (RIB and LIB). Let us briefly look into
the data plane. For a network device to forward IP or labeled packets, the forwarding
information is written to the hardware itself. Any fixed router or line card of a modular
router contains a Forwarding Information Base (FIB) table stored locally, which pro-
vides destination IP prefixes, next hops, and outgoing interfaces. Finally, label-based
forwarding is done by looking up the forwarding information in the Label Forwarding
Information Base (LFIB). The LFIB provides the incoming and outgoing label information
with the respective outgoing interfaces to forward any labeled traffic. LDP feeds the LFIB
with all the necessary label information.
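Conceptually, the LFIB is simply a table keyed by the incoming label, returning the label operation, the outgoing label, and the outgoing interface. The following toy Python model (labels and interface names are invented for illustration, not taken from any platform) shows how a labeled packet is forwarded from that table.

# A toy LFIB: incoming label -> (operation, outgoing label, outgoing interface).
lfib = {
    19: ("swap", 22, "interface-to-P2"),
    22: ("pop", None, "interface-to-PE2"),  # downstream node advertised implicit null
}

def forward_labeled_packet(incoming_label):
    operation, out_label, out_interface = lfib[incoming_label]
    if operation == "swap":
        return f"swap {incoming_label} -> {out_label}, forward via {out_interface}"
    return f"pop {incoming_label}, forward via {out_interface}"

print(forward_labeled_packet(19))  # swap 19 -> 22, forward via interface-to-P2
print(forward_labeled_packet(22))  # pop 22, forward via interface-to-PE2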
[Figure 1-6: Control plane (OSPF/IS-IS and LDP populating the RIB and LIB) and data plane (FIB and LFIB holding the incoming and outgoing label information)]
■■ Unsolicited downstream: This method is used when a node has label information to
share with its neighbors. In this scenario, the node sends a label mapping message to
its neighbors, which they can then use to update their forwarding tables.
MPLS nodes use LDP to distribute labels along normally routed paths to support MPLS
forwarding. The nodes create a label forwarding table from this information, which maps
each incoming data packet to its corresponding label. MPLS forwarding differs from IP
forwarding, where a device examines the destination address in the IP header and per-
forms a route lookup. In MPLS forwarding, the nodes look at the incoming label, consult
the LFIB, and forward the packet to the next hop. In MPLS networks, a group of packets
assigned to a specific LSP are transported by being associated with a label mapping or
binding that contains a label. This label serves as an identifier for the group of packets,
which are collectively referred to as a forwarding equivalence class (FEC).
LDP was standardized by the Internet Engineering Task Force (IETF) in RFC 5036. As
shown in Figure 1-7, neighbor discovery in LDP uses UDP port 646, and session estab-
lishment with the discovered neighbor uses TCP port 646. Nodes initiate the discovery
phase by transmitting, at regular intervals, hello packets on UDP port 646 to the multi-
cast address 224.0.0.2; they listen to this port for potential hello messages from other LDP
nodes. This process allows all directly connected nodes to learn about each other and
establish hello adjacencies. If two LDP speakers agree on the shared parameters,
they establish neighbor adjacency by using TCP. An LDP ID, a unique identifier assigned
to each node in the network, is typically assigned based on the node’s IP address and
included in all label distribution messages that the node exchanges. LDP neighbors initi-
ate the exchange of label bindings using the already established TCP connection once the
LDP adjacency is up and running—assuming that label allocation has already been com-
pleted independently by each node.
LDP can discover peers that are not directly connected if you provide a node with the IP
address of one or more peers. The node sends targeted hello messages to UDP port 646
on each remote peer. If the targeted peer responds with a targeted hello message to the
initiator, an LDP targeted adjacency is created, and session establishment can proceed.
Targeted LDP sessions are used in specific scenarios such as for traffic protection, traffic
engineering, and L2VPN point-to-point circuits, also known as pseudowires.
[Figure 1-7: LDP neighbor discovery between P1 (Loopback0 10.0.10.1/32) and P2 (Loopback0 10.0.10.2/32) over link 10.50.50.0/30 — (1) LDP hellos are sent to multicast address 224.0.0.2 with UDP port 646 as destination; (2) a TCP connection is then created between the nodes, and the LDP ID is used for unicast messages between the LDP adjacencies]
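Purely as an illustration of the transport-layer mechanics just described (this is not an LDP implementation and not code from this book), the discovery phase can be mimicked with standard Python sockets: a hello-style datagram is sent to 224.0.0.2 on UDP port 646, and the subsequent session runs over TCP port 646.

import socket

LDP_PORT = 646                   # used for both UDP hellos and the TCP session
ALL_ROUTERS_GROUP = "224.0.0.2"  # link-local multicast group used for LDP hellos

# Discovery: send a placeholder "hello" datagram the way an LDP speaker announces
# itself. A real hello carries a proper LDP PDU; this payload is just a stand-in.
udp_sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
udp_sock.setsockopt(socket.IPPROTO_IP, socket.IP_MULTICAST_TTL, 1)
udp_sock.sendto(b"ldp-hello-placeholder", (ALL_ROUTERS_GROUP, LDP_PORT))
udp_sock.close()

# Session: once two speakers have discovered each other, the session is a TCP
# connection to port 646 on the peer's transport address, for example:
# tcp_sock = socket.create_connection(("192.0.2.1", LDP_PORT))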
The MPLS domain requires end-to-end label switching for any traffic entering the domain until it
reaches the egress PE device. The node generates a label and allocates it to an IP prefix,
and it stores this information locally in the LIB. The LFIB, which is part of the data plane
for MPLS traffic, also stores the same information. The responsibility for generating a
label does not lie with an IGP. Instead, LDP, Resource Reservation Protocol (RSVP), and
BGP are the mechanisms that generate and distribute labels to neighbors within the
MPLS domain or to external domains.
In summary, the label allocation process involves assigning a label for each prefix in the
network domain and distributing these labels to all network nodes.
Label allocation and distribution with LDP was the standard for many years. Figure 1-8
provides a visual representation of this process.
[Figure 1-8: Label allocation and distribution with LDP for prefix 10.0.10.1/32 (a loopback on PE2) — PE2 advertises "use label 3," P2 advertises "use label 22," and P1 advertises "use label 19" to its upstream neighbor PE1]
Step 1. PE2 has a loopback interface configured with IP address 10.0.10.1/32. PE2
assigns a local label 3 (implicit null) to IP prefix 10.0.10.1/32. This is called a
label mapping, and it is stored in the LIB (control plane). The same informa-
tion is also stored in the LFIB (data plane).
Step 2. PE2 uses LDP to propagate the label mapping (10.0.10.1 to 3) to all directly
connected peers—in this case, to P2.
Step 3. P2 receives the label mapping from step 2 and stores it in the LIB and LFIB.
From P2’s point of view, label 3 will be used as the outgoing label for 10.0.10.1.
Step 4. P2 assigns its own label for 10.0.10.1—in this case, label 22—to advertise it
further to P1.
Step 5. P1 receives label 22 for prefix 10.0.10.1, stores it in the LIB and LFIB as an
outgoing label, and assigns an incoming label for prefix 10.0.10.1—in this case,
label 19—and propagates it to PE1.
Step 6. PE1 receives label 19 from P1 and stores it as an outgoing label in the LIB and
LFIB.
To summarize, label allocation is performed when every node in the MPLS domain
assigns a label to each prefix in the routing table independently of other nodes, making it
locally significant. The node then dynamically propagates label mapping to other directly
connected peers using LDP.
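These six steps can be condensed into a small simulation. In the following Python sketch, the node names and label values (3 on PE2, 22 on P2, and 19 on P1) mirror the example above, while the data structures are simplified stand-ins for the real LIB and LFIB.

# Downstream label propagation for prefix 10.0.10.1/32 along PE2 -> P2 -> P1 -> PE1.
PREFIX = "10.0.10.1/32"
chain = ["PE2", "P2", "P1", "PE1"]             # direction in which the mapping is advertised
local_labels = {"PE2": 3, "P2": 22, "P1": 19}  # label each node allocates locally

lib = {node: {} for node in chain}             # per-node view: local and outgoing label

outgoing = None                                # PE2 originates the prefix, so no outgoing label
for node in chain:
    local = local_labels.get(node)             # PE1 allocates nothing here; it only learns label 19
    lib[node][PREFIX] = {"local": local, "outgoing": outgoing}
    outgoing = local                           # advertised via LDP to the next upstream neighbor

for node in chain:
    print(node, lib[node][PREFIX])
# PE2 {'local': 3, 'outgoing': None}
# P2  {'local': 22, 'outgoing': 3}
# P1  {'local': 19, 'outgoing': 22}
# PE1 {'local': None, 'outgoing': 19}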
[Figure: An IP packet carried across the MPLS network with label 25 imposed at the ingress node and removed again before delivery as a plain IP packet]
■■ PUSH: For any incoming packet, a label is added on top of the packet. If the incom-
ing packet already has a label, an additional label is added on top of that label, thus
creating a label stack. Usually, a PE node performs the PUSH operation. However,
there are instances in which a P node also performs a PUSH operation during a fail-
ure event, when the fast reroute mechanism is activated.
■■ SWAP: For any incoming labeled packet, the node inspects the incoming label and
swaps it with the outgoing label. SWAP operations are usually performed by
P nodes.
■■ POP: For any incoming labeled packet, the node inspects the incoming label, pops
(removes) the top label from the label stack, and forwards the packet. In cases where
the top label is the last label, the node removes the label stack and forwards the pack-
et based on a standard IP lookup—except when the label allocation mode on a PE
node is set to per-prefix or per-CE, in which case no IP lookup is needed. The POP
operation is usually performed by both PE and P nodes.
Apart from the label operations mentioned previously, in certain circumstances, an MPLS-
enabled node can also push two or more labels on top of the label stack. This is espe-
cially important when using MPLS applications such as MPLS/VPN, traffic engineering,
and FRR. In the case of MPLS/VPN, a node imposes one BGP label for the VPN prefix,
also called a service label, and another label (LDP) for the transport prefix, also called
transport label. In case of traffic engineering, where a strict end-to-end path is desired,
the ingress PE node must push all the necessary labels in the label stack. In the case of
FRR, if a node or a link in the path fails, the node must react immediately by rerouting
the traffic through a backup path, and this is achieved by manipulating the original label
stack of the labeled packet, essentially removing and adding more than one label on top
of the packet.
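Because the label stack is processed in last-in, first-out order, the three operations map naturally onto simple list manipulations. The following sketch (with arbitrarily chosen label values, not taken from the book's lab) shows a service label and a transport label being pushed at the ingress PE, the transport label being swapped at a transit node, and the transport label being popped at the penultimate hop.

# The label stack modeled as a list; index 0 is the top (outermost) label.
def push(stack, label):
    return [label] + stack

def swap(stack, new_top):
    return [new_top] + stack[1:]

def pop(stack):
    return stack[1:]

stack = push([], 150)    # ingress PE: service (VPN) label first      -> [150]
stack = push(stack, 19)  # ingress PE: transport label on top         -> [19, 150]
stack = swap(stack, 22)  # transit P node swaps the transport label   -> [22, 150]
stack = pop(stack)       # penultimate hop pops the transport label   -> [150]
print(stack)             # [150]; the egress PE pops the service label and routes the IP packet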
Usually, only the destination address field in the header is important. However, in some
instances, other header fields are crucial. Therefore, each router must independently ana-
lyze the header as the packet passes through.
In MPLS label forwarding, the incoming packet’s Layer 3 header analysis is performed
once, at the ingress PE node. The Layer 3 header is transformed into an unstructured,
fixed-length value called a label. Multiple headers can be mapped to the same label if
they lead to the same selection of the next hop. Essentially, a label is assigned to each
forwarding equivalence class (FEC) that includes a group of packets that are indistinguish-
able by the forwarding function.
The initial label choice does not need to be based entirely on the contents of the
Layer 3 packet header. For instance, routing policies can influence forwarding decisions
at subsequent hops. After a label is assigned, a short label header is added to the front of
the Layer 3 packet, which is carried along with the packet across the network. At subse-
quent hops through each MPLS router in the network, labels are swapped, and forward-
ing decisions are made using the MPLS forwarding table lookup for the label carried in
the packet header.
There is no need to reevaluate the packet header as it moves across the network. Because
the label is of a fixed length and unstructured, the MPLS forwarding table lookup pro-
cess is simple and fast.
[Figure: Forwarding a packet toward VPN prefix 10.0.10.0/24 across the MPLS network from PE1 through P1 and P2 to PE2]
A PE router normally has at least one CE node directly connected. The CE router adver-
tises its own IP prefixes to the PE router dynamically, using a protocol such as BGP
or OSPF. Open Shortest Path First (OSPF) is a popular IGP used in large enterprise
networks. In contrast, BGP is more prevalent in large service provider networks. When
L3VPN is employed to serve customers, any protocol can be used as a VRF-aware
instance between the PE and CE nodes.
PE1 receives an IP packet from a VPN customer and performs an IP lookup based on
the destination IP address in the IP header of the incoming packet. Because this is a
VPN-based service, PE1 creates a label stack and imposes two labels on the label stack:
The bottom one is a BGP label 150, also called a service label, and the top label is the
transport label created by LDP. No label operation is performed on the bottom label
until it reaches the egress PE node—in this case PE2. The forwarding mechanism in the
MPLS transport is performed using the top label. After the label stack is imposed on the
IP packet, via a PUSH operation, PE1 forwards the data traffic to the next hop, which
is P1. At the ingress port of P1, the data traffic arrives with the top label 19. P1 checks
its own LFIB and recognizes the need to perform the SWAP operation, so it replaces
the label 19 with label 22 and forwards the labeled packet to P2 as the next hop via the
outgoing interface. P2 performs the POP operation on the incoming label 22 due to the
implicit null and forwards it via the outgoing interface to the next hop, PE2. PE2 checks
the incoming service label 150 and recognizes that this label has to be removed, so a POP
operation is performed, and the IP packet is routed normally to the CE node via a non-
labeled outgoing interface.
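The hop-by-hop behavior just described can also be traced programmatically. In the following simplified Python sketch, the per-node tables are stand-ins for real LFIB entries, and the label values (19, 22, and service label 150) follow the example above.

# Per-node forwarding for the PE1 -> P1 -> P2 -> PE2 example.
lfib = {
    "P1":  {19: ("swap", 22)},    # transit node swaps the transport label
    "P2":  {22: ("pop", None)},   # penultimate hop popping (implicit null from PE2)
    "PE2": {150: ("pop", None)},  # egress PE removes the service label, then routes the IP packet
}

def trace(initial_stack):
    stack = list(initial_stack)
    for node in ("P1", "P2", "PE2"):
        top = stack[0]
        action, new_label = lfib[node][top]
        stack = ([new_label] + stack[1:]) if action == "swap" else stack[1:]
        print(f"{node}: {action} label {top}, stack is now {stack}")

trace([19, 150])  # PE1 has already pushed service label 150 and transport label 19
# P1: swap label 19, stack is now [22, 150]
# P2: pop label 22, stack is now [150]
# PE2: pop label 150, stack is now []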
MPLS VPNs operate at Layer 3 and adopt a peer model, allowing the service provider
and customer to exchange routing information at this layer. This peer model facilitates
data transmission between customer sites through the service provider’s network without
customer intervention. Consequently, when a new site integrates into the MPLS VPN, the
PE to which the new site is connected is the only one that needs an update with the new
configuration for that particular VPN customer.
For completeness, Figure 1-11 shows the end-to-end MPLS VPN architecture from a
high-level point of view.
[Figure 1-11: End-to-end MPLS VPN architecture — Customer-A and Customer-B CE devices exchange IP routing with their PE routers, the PEs exchange VPN routes via MP-iBGP through an RR, and P routers forward labeled traffic across the MPLS backbone]
VRF is a technique used in MPLS VPNs to separate the routing tables of different cus-
tomers at the same PE device. VRF maintains a separate routing table for each VPN cus-
tomer, allowing for secure and efficient routing between sites.
■■ VRF instances: Each VRF instance represents a different VPN network. A VRF
instance contains its own routing table, which is separate from the global routing
table (GRT) of the PE router. This separation allows for secure routing between dif-
ferent VPNs.
■■ MP-BGP: This protocol is used for the distribution of VPN routes between the PE
routers. Each PE router maintains a BGP session with other PE routers or a route
reflector in the network. When a new VPN route is learned, it is advertised to other
PE routers using BGP. BGP also allows for the exchange of VPN-related information,
such as the route distinguisher (RD).
■■ VPNv4/VPNv6 Address Family: The VPNv4/VPNv6 address family is used for the
distribution of VPN routes between the PE routers. Each VPN route contains an
RD and a VPNv4/VPNv6 prefix. The VPNv4/VPNv6 prefix is a combination of the
customer’s IP prefix and the RD. The PE routers use the RD to distinguish between
routes belonging to different VPNs. This essentially allows the same customer pri-
vate IP addresses to exist in different VRF instances. In this case, IP address overlap
does not cause an issue due to the different RD values appended.
■■ Route distinguisher (RD): The RD is a 64-bit value that is used to uniquely identify
the VPN or customer prefix. An RD is prepended to each IPv4 prefix, which results in a
96-bit VPN route and allows the PE router to distinguish between routes that belong
to different VPNs.
■■ Route target (RT): The RT is a 64-bit BGP extended community that is used to con-
trol the distribution of VPN routes within the MPLS VPN. The RT is assigned to
each VPN route by the PE router at the ingress point of the VPN. The RT is used by
the other PE routers to determine which VRF table to insert the VPN route into.
In summary, VRF is a key component of MPLS VPNs that enables separate and indepen-
dent routing between different VPNs. The use of RDs and VPN-IPv4 addresses allows
for the unique identification of VPN routes, and BGP is used to distribute VPN routes
between PE routers. The RT is used to control the distribution of VPN routes within the
MPLS VPN, ensuring that each VPN route is inserted into the correct VRF table.
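To make the interplay of RD and RT concrete, the following sketch (with invented RD and RT values) shows two customers advertising the same private prefix: the RD keeps the VPNv4 routes unique, and the RT decides which VRF imports each route. This is an illustration of the concept rather than any platform's data structures.

# Two customers advertise the same private prefix; the RD keeps the VPNv4 routes
# unique, and the RT controls which VRF imports each route. Values are invented.
routes = [
    {"rd": "65000:100", "prefix": "10.1.1.0/24", "rt": "65000:100"},  # Customer A
    {"rd": "65000:200", "prefix": "10.1.1.0/24", "rt": "65000:200"},  # Customer B
]

vrfs = {
    "CUSTOMER-A": {"import_rt": "65000:100", "table": {}},
    "CUSTOMER-B": {"import_rt": "65000:200", "table": {}},
}

for route in routes:
    vpnv4_prefix = f"{route['rd']}:{route['prefix']}"  # unique despite identical IPv4 prefixes
    for vrf in vrfs.values():
        if vrf["import_rt"] == route["rt"]:
            vrf["table"][vpnv4_prefix] = route["prefix"]

print(vrfs["CUSTOMER-A"]["table"])  # {'65000:100:10.1.1.0/24': '10.1.1.0/24'}
print(vrfs["CUSTOMER-B"]["table"])  # {'65000:200:10.1.1.0/24': '10.1.1.0/24'}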
[Figure: MPLS fast reroute link protection and node protection — traffic between PE routers is switched onto a precalculated backup path around the failed link or node]
Each prefix in the network has a backup path calculated by every router in the network.
Let’s briefly look at how routers calculate their backup paths:
Step 1. To calculate a backup path, a router must determine the next-best available
path to reach the destination prefix in the case of a primary path failure. This
is done using the loop-free alternate (LFA) traffic protection technique. LFA
finds a backup path that does not result in a forwarding loop and that is guar-
anteed to be loop free. However, it does not protect all possible topologies.
Step 2. After calculating the backup path, the router generates a new label stack that
represents the backup path and installs it into the router’s forwarding table.
The label stack has one or more labels, with the first label pointing to the next
hop router on the backup path.
Step 3. When a failure occurs on the primary path, the router swaps the primary
path’s label stack with the backup path’s label stack. This action causes the
packets to be forwarded along the backup path instead of the primary path.
The label stack of the backup path includes the necessary node labels (PQ
nodes) along the path to the destination prefix. When the packet reaches the
next hop router on the backup path, the router pops the top label and for-
wards the packet to the following next hop router based on the label stack
information.
Step 4. The router switches back to the primary path by swapping the label stack of
the backup path with the primary path’s label stack after the primary path is
restored. This process enables the packets to be forwarded along the primary
path again.
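For step 1, the basic LFA inequality (see RFC 5286) says that a neighbor N of router S is a loop-free alternate toward destination D if dist(N, D) < dist(N, S) + dist(S, D). The following Python sketch runs that check on a small made-up four-node topology; it is a simplified illustration of the computation, not an implementation used by any router.

import heapq

# Link costs for a small invented topology; S's primary path to D goes via P.
graph = {
    "S": {"P": 10, "N": 10},
    "P": {"S": 10, "D": 10},
    "N": {"S": 10, "D": 15},
    "D": {"P": 10, "N": 15},
}

def dist(src, dst):
    # Plain Dijkstra shortest-path cost from src to dst.
    best, heap = {}, [(0, src)]
    while heap:
        cost, node = heapq.heappop(heap)
        if node in best:
            continue
        best[node] = cost
        for neighbor, weight in graph[node].items():
            if neighbor not in best:
                heapq.heappush(heap, (cost + weight, neighbor))
    return best[dst]

def is_lfa(s, neighbor, d):
    # Loop-free condition: dist(N, D) < dist(N, S) + dist(S, D).
    return dist(neighbor, d) < dist(neighbor, s) + dist(s, d)

print(is_lfa("S", "N", "D"))  # True: 15 < 10 + 20, so N is a loop-free backup toward D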
In the realm of networking, FRR presents a temporary solution to minimize packet loss.
The computation of a backup path, however, may not guarantee sufficient bandwidth, lead-
ing to potential congestion on the alternate routes. It is crucial to note that the ingress
router possesses complete knowledge of LSP policy constraints, making it the only
entity capable of generating appropriate long-term alternate paths.
Apart from MPLS LFA (IGP and LDP), backup paths can also be created through RSVP
sessions and, as with all RSVP sessions, require additional state and network overhead.
Consequently, a node creates at most one backup path for every LSP that has FRR capa-
bility. Formulating multiple backup paths for each LSP leads to unnecessary overhead,
without considerable additional benefits. Also, an important aspect of FRR technology is
that it is considered a temporary solution to an intermittent network problem. A network
operator should not rely permanently on a backup path. When a link or node fails, traf-
fic from the primary path takes a detour toward a backup path, and it might happen that
now the backup path is congested due to heavy traffic. This is the main reason traffic
should always be routed back to the primary path after the primary path has recovered
and links/nodes are stable enough to carry the original traffic.
■■ Inter-AS limitations
■■ LDP–IGP synchronization
Each packet entering the network is assigned a 20-bit number called an MPLS label, and
this assignment process creates LSPs through the network. The ingress router assigns the
labels that the network uses to forward the packets through the network. Once a packet
reaches a core (P) router, the core router uses the label to determine the next hop for the
packet. The core router then swaps the incoming label with the outgoing label and for-
wards the packet to the next hop.
The MPLS label space limitation is a challenge that arises because there is a finite num-
ber of labels that can be used in an MPLS network. The label space is limited to 2^20
(1,048,576) labels, which may seem like a large number, but it can be easily exhausted in
large networks—and this can cause several problems. One of the most significant prob-
lems is that it limits the scalability of the network when there may be requirements for
additional nodes, links, or VPN prefixes. In large networks, the number of labels required
can quickly exceed the available label space. When this happens, no new transport or
service label can be generated, the network may become unstable, and packets may be
dropped, leading to poor network performance.
Another problem caused by the label space limitation is that it can limit the number
of VPN routes (L2VPN/L3VPN services) that can be supported in the network. VPNs
can be created in MPLS networks in two ways: by assigning different labels to different
VPNs (per-VRF label allocation mode) or by assigning different labels per prefix per VPN
(per-prefix label allocation mode). When the label space is exhausted, it may not be pos-
sible to create new VPNs or assign labels to new customer prefixes, limiting the network’s
ability to support new customers or expand its services.
When the ASBR label allocation is on a per-prefix basis, even when the remote PE device
is advertising a label per VRF instance (per-VRF label allocation), the egress ASBR allo-
cates a label for each received prefix, resulting in faster label space depletion in the ASBR.
Similarly, at this writing, the global Internet routing table covers roughly 970,000 IPv4
and 200,000 IPv6 prefixes. Offering IPv4 Internet service as part of a VPN would be
problematic using per-prefix label allocation since it would exhaust more than 90% of the
available label space on the egress PEs providing this service.
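A quick back-of-the-envelope check of that figure, using the approximate prefix counts quoted above (these counts change over time), shows why per-prefix allocation is problematic while per-VRF allocation is not.

label_space = 2 ** 20          # 1,048,576 labels available in the platform label space
ipv4_internet = 970_000        # approximate IPv4 Internet prefixes quoted above

per_prefix = ipv4_internet / label_space
print(f"per-prefix allocation would consume {per_prefix:.0%} of the label space")
# per-prefix allocation would consume 93% of the label space

# With per-VRF allocation, the same service needs a single label per VRF instance,
# regardless of how many prefixes the VRF carries.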
There is no complete solution to the MPLS label space limitation. However, a couple of optimizations can be considered for managing label allocation in large-scale networks:
■■ Allocating labels on a per-VRF basis instead of per prefix can significantly decrease
the number of required service labels.
■■ With label allocation for host-only (/32) routes, LDP assigns labels only for loopback interfaces instead of for physical links.
■■ Assessing alternative inter-AS or intra-AS connectivity models can help you reduce
label allocation.
MPLS LSPs face a challenge when summarized routes are advertised instead of more
specific routes, such as /32 host routes. Sometimes network operators may want to adver-
tise summarized routes to the rest of the network, and although this scenario may make
sense in some instances from a network design perspective, it is not recommended in an
MPLS environment. PE and P routers must know each other’s /32 loopback addresses
and potentially their physical interfaces to have a complete view of the underlay network.
This allows each service node (PE node) to create an LSP with remote service node PEs
and ensures appropriate end-to-end label forwarding.
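Conceptually, the problem is that label bindings exist per exact FEC: a summary route does not yield a label binding for the /32 loopback that serves as the BGP next hop, so the end-to-end LSP breaks. A simplified Python sketch, in which all prefixes and label values are made up:

# Hypothetical label bindings learned via LDP: one binding per exact FEC (/32 loopback).
label_bindings = {
    "10.0.0.1/32": 24001,
    "10.0.0.2/32": 24002,
}

def resolve_lsp(bgp_next_hop):
    # An LSP toward the next hop exists only if there is a binding for the exact /32 FEC.
    return label_bindings.get(f"{bgp_next_hop}/32")

print(resolve_lsp("10.0.0.1"))   # 24001 -> LSP intact
print(resolve_lsp("10.0.0.9"))   # None  -> loopback hidden behind a summary, LSP broken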
Inter-AS Limitations
Inter-AS routing refers to the exchange of routing information and traffic between
different ASs that are managed by the same or different organizations or service providers. In an MPLS network, communication between ASs is facilitated by BGP, the standard protocol used to exchange routing information between different ASs, while
MPLS VPNs provide a way to extend a private network across multiple ASs using
end-to-end LSPs.
However, MPLS IPv4 and IPv6 networks face several limitations when it comes to inter-
domain routing, which can impact the performance and scalability of the network. The
following sections examine some of these challenges and shortcomings.
In inter-AS scenarios, traffic may traverse multiple ASs, each with its own QoS policies
and capabilities. This can result in inconsistent QoS behavior across the network and
affect the performance of delay-sensitive applications such as voice and video. To address
this limitation, network operators must coordinate with other ASs to ensure that QoS
policies are aligned and consistent across the entire traffic path.
BGP/MPLS VPNs require careful planning and configuration to ensure that the routing
information is exchanged correctly between ASs and that the MPLS tunnels are estab-
lished and maintained properly. Moreover, changes in the network topology or routing
policies may impact the BGP/MPLS VPN configuration, requiring network operators to
constantly monitor and adjust the configuration. During network migrations or mergers, there is usually a high likelihood that this issue will arise, and proper transport and services layer planning and design are mandatory.
Technical implementation differences might occur, which may result in different outcomes. To address operational complexity, network operators can use automated network
management and orchestration tools that can simplify network design, deployment, and
management. Service providers can use tools such as network controllers, intent-based
networking systems, and software-defined networking (SDN) platforms to automate net-
work operations and reduce the risk of errors.
One of the key features of RSVP-TE is its ability to signal traffic engineering tunnels, which are used to optimize network performance and avoid congested links.
When a traffic engineering tunnel is requested, RSVP-TE initiates a signaling process that
involves the exchange of messages between the nodes in the network along the path of
the tunnel. The nodes along the path reserve the necessary resources for the tunnel, and
the tunnel is established. Once the tunnel is established, traffic is directed along the path
of the tunnel.
RSVP-TE uses several different messages for signaling LSPs and traffic engineering tun-
nels. These messages include the Path message, which is used to advertise the path of
the LSP or tunnel, and the Reservation message, which is used to reserve the necessary
resources for the LSP or tunnel. RSVP-TE also uses other messages for error handling and
other functions.
In addition to its signaling functions, RSVP-TE also provides support for traffic engineer-
ing metrics such as link utilization. This allows network operators to monitor and opti-
mize network performance based on various traffic engineering criteria.
When a traffic engineering tunnel is established using RSVP-TE, it involves the signaling
of an LSP through the underlay network. This LSP is used as the traffic engineering
tunnel, and traffic is forwarded along this LSP according to the traffic engineering
requirements.
■■ RSVP-TE Path message: The first step in tunnel signaling is the sending of an RSVP-
TE Path message from the headend node of the tunnel to the tailend node. The Path
message carries information about the source and destination of the tunnel, along
with any traffic engineering requirements, such as bandwidth and latency.
■■ Path message processing: When the Path message reaches a node in the network,
it is processed and forwarded to the next node along the path of the tunnel. During
this processing, the node checks if it has enough resources to accommodate the
tunnel, and if it does, it reserves the necessary resources for the tunnel. If there are
insufficient resources, the node may send an RSVP-TE PathErr message back toward the headend node, indicating that the tunnel cannot be established.
■■ RSVP-TE Resv message: Once the necessary resources have been reserved, the
tailend node sends an RSVP-TE Resv message back to the headend node, indicating
that the tunnel has been established. The Resv message includes information about
the path of the tunnel, along with any labels that have been assigned to the LSP.
■■ Resv message processing: When the Resv message reaches a node in the network,
it is processed and forwarded to the previous node along the path of the tunnel.
During this processing, the node installs any labels that have been assigned to the
LSP and establishes an FEC for the LSP.
■■ Tunnel forwarding: Once the LSP has been established and labeled, traffic can be
forwarded along the path of the tunnel using label swapping and forwarding. Each
node along the path of the tunnel forwards traffic based on the labels assigned to
the LSP, ensuring that traffic is directed along the path of the tunnel according to the
traffic engineering requirements.
Throughout this process, each node in the network is responsible for signaling the next
node in the path of the tunnel. This hop-by-hop signaling ensures that each node has the
necessary resources reserved for the tunnel and that labels are correctly assigned to the
LSP at each hop.
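The hop-by-hop exchange can be condensed into a short Python sketch: the Path message checks and reserves resources toward the tailend, and the Resv message assigns labels back toward the headend. The node names, bandwidth figures, and label values below are illustrative only, not any implementation's actual behavior.

def signal_tunnel(path, bandwidth, link_capacity):
    # Simulate RSVP-TE Path/Resv processing along an explicit path (headend ... tailend).
    # Path message: each node checks and reserves resources toward the tailend.
    for node in path:
        if link_capacity[node] < bandwidth:
            return None                      # node would answer with an error, tunnel not established
        link_capacity[node] -= bandwidth     # resources reserved for this LSP

    # Resv message: labels are assigned hop by hop from the tailend back to the headend.
    return {node: 24000 + i for i, node in enumerate(reversed(path))}

capacity = {"P1": 10_000, "P2": 10_000, "PE2": 10_000}   # Mbps, hypothetical
print(signal_tunnel(["P1", "P2", "PE2"], bandwidth=1_000, link_capacity=capacity))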
While RSVP-TE has been used in the past for traffic engineering, it has several limitations
and drawbacks, which have led to its reduced popularity in modern networks. These are
some of the limitations of RSVP-TE:
■■ Scalability: RSVP-TE requires maintenance of LSP state information along the path,
which can lead to scalability issues in large networks with a large number of LSPs.
Due to these limitations, RSVP-TE is not always the best choice for implementing traffic
engineering in modern networks.
LDP–IGP Synchronization
MPLS LDP–IGP synchronization is a critical mechanism for ensuring reliable packet for-
warding in an MPLS network. Before any MPLS traffic is forwarded, IGP and LDP must
be in sync to avoid packet loss. Packet loss can occur if a node begins forwarding traffic
using a new IGP adjacency before the LDP label exchange completes between the peers
on that link. Similarly, if an LDP session closes, the device may continue to forward traf-
fic using the link associated with the LDP peer, which can lead to packet loss.
To prevent these issues, MPLS LDP–IGP synchronization ensures that LDP is fully established before the IGP path is used for traffic forwarding. This is accomplished by enabling LDP–IGP synchronization on each interface associated with an OSPF or IS-IS process. Network operators can then prevent traffic blackholing and ensure reliable packet forwarding across their MPLS networks.
When LDP–IGP synchronization is enabled on an interface, LDP checks whether any peer
connected to the interface is reachable by looking up the peer’s transport address in the
routing table. If there’s a routing entry (including longest match or default routing entry)
for the peer, LDP assumes that LDP–IGP synchronization is required for the interface and
notifies the IGP to wait for LDP convergence. However, LDP–IGP synchronization with
peers requires that the routing table be accurate for the peer’s transport address. If the
routing table shows a summary route, a default route, or a statically configured route for
the peer, it may not be the correct route for the peer. Thus, it’s essential to verify that the
route in the routing table can reach the peer’s transport address to prevent traffic black-
holing due to a missing label for that prefix.
Inaccurate routes in the routing table can cause issues with LDP session establishment,
which causes the IGP to wait for LDP convergence unnecessarily for the sync hold-down
time. This can lead to delays in forwarding traffic and potential packet loss.
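The behavior can be summarized as a simple decision: while the LDP session on a link is not yet synchronized, the IGP keeps the link unattractive (typically by advertising a maximum metric) until LDP converges or the hold-down timer expires. A schematic Python sketch, with the max-metric behavior and values treated as assumptions rather than any platform's exact implementation:

MAX_METRIC = 0xFFFFFF           # assumed "avoid this link" metric advertised during sync

def igp_metric(configured_metric, ldp_synchronized, holddown_expired):
    # Return the metric the IGP should advertise for a link with LDP-IGP sync enabled.
    if ldp_synchronized or holddown_expired:
        return configured_metric            # safe to attract traffic: labels are in place
    return MAX_METRIC                       # keep traffic away until the LDP label exchange completes

print(igp_metric(10, ldp_synchronized=False, holddown_expired=False))  # 16777215
print(igp_metric(10, ldp_synchronized=True,  holddown_expired=False))  # 10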
ECMP is a routing strategy in which a route is reachable via multiple best paths. As the
name implies, those routes have to be equal to qualify, which in the scope of IGP means
that the cumulative cost or metric along the path is the same. ECMP is common in highly
symmetrical networks such as backbones where this kind of behavior is usually desired.
Aggregating several links into a single virtual interface (LAG or bundle) is another com-
mon strategy in backbone networks to facilitate smooth bandwidth extension. The vir-
tual interface is treated as a large pipe, where each physical bundle member is considered
equal, assuming that the transmission speed is the same. Some service providers refrain
from using bundles because the members are not equal with respect to the distance in
the underlying optical transport network. To be able to distinguish the interfaces, it may be preferable to have several ECMP links instead of a single large pipe.
ECMP and LAG are often used in parallel to simplify capacity planning, flatten traffic
bursts, and improve network availability in the event of node or link failures. ECMP load
balancing and LAG hashing work identically on most routing platforms. The idea is that
packets are evenly distributed across multiple paths or links. This is done by calculating
an n-tuple hash, where several of the following fields are usually taken into account:
■■ Router ID
The exact fields and number of fields used for hashing depend on the traffic type and the
underlying hardware architecture. It is important that packets belonging to the same flow
are hashed along the same path. If they’re not, per-packet load balancing may be required,
which can lead to issues such as buffering and retransmissions on endpoints due to out-
of-order packets or jitter or latency caused by different distances between available paths
in the optical transport network.
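A minimal illustration of n-tuple hashing follows: all packets of one flow hash to the same index, so they stay on one path, while different flows spread across the available paths. The field choice and hash function here are placeholders, not any platform's actual load-balancing algorithm.

import hashlib

def ecmp_member(src_ip, dst_ip, proto, src_port, dst_port, num_paths):
    # Pick an ECMP path (or LAG member) from an n-tuple; the same flow always maps to the same index.
    key = f"{src_ip}|{dst_ip}|{proto}|{src_port}|{dst_port}".encode()
    digest = hashlib.sha256(key).digest()
    return int.from_bytes(digest[:4], "big") % num_paths

# The same 5-tuple always lands on the same member; a different flow may pick another one.
print(ecmp_member("192.0.2.1", "198.51.100.7", 6, 49152, 443, num_paths=4))
print(ecmp_member("192.0.2.1", "198.51.100.7", 6, 49153, 443, num_paths=4))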
Figure 1-13 shows some sample packets of different L2VPN and L3VPN services with
two transport labels and a service label. It should become evident that extracting the
proper Layer 2, Layer 3, or Layer 4 fields for MPLS services is nontrivial. On top of this,
the presence of MPLS labels may lead to poor hashing diversity or even incorrect hashing
parameters.
■■ RFC 4385: Pseudowire Emulation Edge-to-Edge (PWE3) Control Word for Use over
an MPLS PSN
By default and depending on the platform capability, standard L3VPN provides good
hashing results, except with services where the vast majority of fields are the same and
the customer-specific information is hidden too deeply in the packet. GRE over L3VPN
over BGP-LU over LDP is such a problematic service, where an overlay using GRE is
spanned between two CE nodes over an L3VPN. With a multitude of customer services being transported through a single GRE tunnel, deeper packet inspection would be required to extract all customer-specific information. Without this inspection, hashing would be poor because all customers would share a single hash.
RFC 6790 defines the concept of an Entropy label, which is applicable to both L2VPN
and L3VPN services and eliminates the need for deep inspection on transit routers.
Instead, the ingress PE device extracts the relevant field of the service before the MPLS
encapsulation takes place; then the device computes the hash and pushes the result as
an additional label onto the stack. Transit routers no longer need to guess the underlying
packet structure and can instead effectively load balance traffic by relying on the packet’s
MPLS label stack. For this to work, all devices in the path must support the Entropy
label.
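The ingress PE behavior just described can be sketched in a few lines of Python: the flow hash is computed once at the edge and encoded as an extra label so that transit routers can load balance on the label stack alone. Per RFC 6790 the Entropy label is preceded by an Entropy Label Indicator; the specific label values and stack placement below are illustrative assumptions.

import hashlib

ELI = 7   # Entropy Label Indicator, a reserved label value defined by RFC 6790

def push_entropy_label(transport_labels, service_label, flow_tuple):
    # Ingress PE: hash the customer flow once and encode the result as an Entropy label.
    digest = hashlib.sha256("|".join(map(str, flow_tuple)).encode()).digest()
    entropy = 16 + int.from_bytes(digest[:3], "big") % (2 ** 20 - 16)   # stay out of the reserved range
    # Transit routers then hash on the label stack only; no deep packet inspection required.
    return transport_labels + [ELI, entropy, service_label]

print(push_entropy_label([16051], 24010, ("10.1.1.1", "10.2.2.2", 6, 1234, 80)))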
The inspection of L2VPN services is more challenging than the inspection of L3VPN ser-
vices. As presented in the beginning of this chapter, the MPLS header does not specify
the payload that follows after popping the Bottom of Stack (BoS) label. Instead, the
router has to make an educated guess. In order not to sacrifice too much performance or
too many network processor cycles, it is common to inspect the first nibble that follows
the BoS label.
If the first nibble after the BoS label is 0100 (0x4), the payload is assumed to be an IPv4 header, whereas a nibble of 0110 (0x6) would point to an IPv6 header. Other values are treated as Layer 2 payload. This pragmatic approach falls apart as soon as MAC addresses start with 0x4 or 0x6, which causes the packet to be misinterpreted as an IP packet. The resulting load balancing would be nondeterministic and would most likely negatively impact the end-user experience of the service.
RFC 4385 defines the Ethernet control word, which solves the misinterpretation issue by inserting a 4-byte control word whose first nibble is 0000. In essence, it ensures that transit routers do not falsely interpret the L2VPN pseudowire payload as IPv4 or IPv6 traffic and possibly extract source and destination IP addresses. For this to work, the ingress and egress PE devices have to agree on the usage of the control word.
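The first-nibble heuristic and the fix provided by the control word can be shown in a few lines of Python. The payloads here are fabricated byte strings used only to demonstrate the misclassification and how a leading 0000 nibble avoids it.

def classify_payload(payload: bytes) -> str:
    # Transit-router guess at what follows the BoS label, based on the first nibble.
    nibble = payload[0] >> 4
    if nibble == 0x4:
        return "IPv4"
    if nibble == 0x6:
        return "IPv6"
    return "Layer 2"

ethernet_frame = bytes.fromhex("4a1b2c3d4e5f")           # destination MAC starting with 0x4...
print(classify_payload(ethernet_frame))                   # "IPv4" -> misinterpreted

control_word = bytes.fromhex("00000000")                  # first nibble 0000 (RFC 4385)
print(classify_payload(control_word + ethernet_frame))    # "Layer 2" -> correctly treated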
The third improvement relates to multiple flows going over a single pseudowire. In this
case, there may be one or more transport labels and a pseudowire label present (refer to
Figure 1-13). Transit routers may not be able to inspect the Layer 2 information to distin-
guish the different flows. Instead, the P nodes compute the same hash for all flows, pre-
venting proper load balancing. This problem may sound familiar from the GRE use case introduced earlier, and it, too, can be solved by using the Entropy label.
There is yet another L2VPN-exclusive solution to this problem. RFC 6391 introduces an
additional label in the MPLS label stack called the Flow label. The Flow label is imposed
by the ingress PE node based on the relevant fields of the flow, which for an L2VPN
could be source/destination addresses of Layer 2 and Layer 3 (if present). It is important
to note that all packets belonging to the same flow must be transported using the same
Flow label to guarantee that the same path is taken across the network for all packets
belonging to the same flow. For this to work, ingress and egress PE devices have to agree
on the usage of the Flow label.
Note The Entropy label is applicable to both L2VPN and L3VPN services and must be
supported on all nodes in the path, whereas the Flow label is limited to L2VPN services
but requires support on the PE nodes only.
Beyond MPLS
Many of the protocols and technologies that power the Internet today have their origin
in the research and development conducted at the Defense Advanced Research Projects
Agency (DARPA). The ARPANET, an early predecessor of the Internet, became opera-
tional in 1969, and the Internet Protocol suite (TCP/IP) was initially specified in 1973.
Unsurprisingly, the native Internet Protocol (IP) lacks many of the capabilities required by modern networks, which evolved gradually over time and grew at an unprecedented pace.
Consequently, existing protocols had to be augmented, and completely new protocols
became necessary to fill gaps. The following use cases are examples:
■■ Solutions: Flow-Aware Transport (FAT) label (RFC 6391), Entropy label (RFC
6790), VXLAN UDP
MPLS is a mature technology and still heavily used in service provider and carrier-grade
enterprise networks more than 20 years after its initial deployment. It overcomes several
of the shortcomings of native IP routing, especially related to VPN services, but has its
fair share of suboptimal traits, as discussed in the previous section. MPLS support in
certain network areas, such as the access or data center network, has traditionally been
rather limited. As of today, there is no de facto standard for a unified underlay data plane
that connects endpoints in the access network, core, and data center.
Software-defined networking (SDN) is an approach that delivers a centralized and programmable network that is more flexible and easier to
manage. The brain of the SDN architecture is a controller that enables centralized man-
agement and control, automation, and policy enforcement across both physical and vir-
tual network elements. SDN solutions are often limited to a single network domain. For
instance, Cisco’s SDN portfolio includes the following solutions:
Some third-party SDN solutions, especially in the field of SD-WAN, make bold claims
around the demise of MPLS and position themselves as superior successors. This kind
of marketing talk should be taken with a grain of salt. Comparing SD-WAN and MPLS is
like comparing apples to oranges. One is an SDN solution that usually spans an overlay
network over one or more transport networks, and the other is a packet-forwarding
technique in the underlay transport network. In fact, many SD-WAN deployments rely on
a mix of MPLS and low-cost broadband to interconnect the different sites.
It should become clear that MPLS is not the silver bullet to all network requirements and
use cases. As a wise man once said, “The art of prophecy is very difficult, especially
with respect to the future.” However, looking into the crystal ball, it is almost certain
that MPLS will remain an integral part of any service provider network in the foreseeable
future. At the same time, a promising new technology may cause major disruptions in the networking industry and become the MPLS successor: Segment Routing IPv6 (SRv6). The details of SRv6 will be introduced in Chapter 3, “What Is Segment Routing IPv6 (SRv6)?” but for now, it is just important to understand that SRv6 relies on the IPv6 data plane and not on MPLS. In a way, it is a step back to the roots (or the OSI model) and addresses many of the challenges and shortcomings of MPLS:
At the same time, the potential of SRv6 goes much further and does not stop at replacing
MPLS. Recall that MPLS support in some network domains is rather limited, and alterna-
tive overlay techniques such as VXLAN are heavily used (for instance, in the data center).
Due to the IPv6 data plane of SRv6, there is an opportunity to unify the underlay end-
to-end and use a single BGP-based control plane layer to provision services between the
access and the data center network.
The journey to SRv6 is still at an early stage but continues to gain momentum. This book provides an in-depth treatment of both theory and practice to prepare you for the transition from MPLS to SRv6, with or without an intermediate stop at SR-MPLS. Either way, the final destination of this network transition should be SRv6.
Summary
MPLS is a protocol that enables efficient forwarding of data packets across a network by
assigning labels to the packets. Labels are used to identify paths through the network so
packets can be quickly routed between nodes. MPLS operates at Layer 2.5, between the
network layer (Layer 3) and the data link layer (Layer 2) of the OSI model.
MPLS has a label structure that consists of a 20-bit label value, a 3-bit Experimental
(EXP) field, a 1-bit Bottom of Stack (BoS) indicator, and an 8-bit Time to Live (TTL) field.
A label is inserted between the Layer 2 and Layer 3 headers of a packet. The Label Value
field contains a unique identifier for the label, the EXP field is used to prioritize packets,
and the TTL field is used to limit the lifespan of the packet, which also helps to break a
routing loop in the core.
The control plane and the data plane are the two planes in MPLS. The control plane sets
up the label-switched paths (LSPs) and manages the label distribution among routers. It
exchanges label information between routers using Label Distribution Protocol (LDP) or
Resource Reservation Protocol (RSVP). The data plane forwards the packets based on the
labels assigned in the control plane.
LDP is used to distribute labels between routers in the network and is therefore a proto-
col that runs between MPLS-enabled routers. When a router receives a packet, it checks
the label assigned to the packet and uses the label to forward the packet along the LSP.
LDP also supports label stacking, where multiple labels can be assigned to a packet,
allowing it to be routed through a more specific path or to follow a specific constraint.
When a packet enters an MPLS network, the router performs a label imposition (with a
PUSH operation). As it traverses the network, its label is swapped at each LSR. When it reaches its final destination, the label is removed (with a POP
operation). Traffic forwarding using labels enables MPLS networks to quickly and effi-
ciently forward traffic based on the labels assigned to each packet. Each node in the net-
work maintains a label forwarding table that maps incoming labels to outgoing labels—
essentially outgoing interfaces.
Traffic protection is a widespread use case for MPLS in the backbone and core networks.
When combined with segment routing, the fast-reroute mechanism called TI-LFA is a key
benefit for network operators using SR MPLS. TI-LFA is built explicitly to cover 100%
traffic protection in the event of a node or link failure in an SR-enabled network. TI-LFA
precalculates the backup path for any failure scenario, from any node in the network, so
if a link failure is detected, the backup path takes over in less than 50 ms.
MPLS VPN services are a popular use case for MPLS, where VPN services are provided
to customers over a shared service provider network. MPLS VPNs allow customers to
securely connect their geographically dispersed sites by using virtual connections that
are separate from the public Internet. The transport label is part of the MPLS forwarding
in the core, whereas the service label is used to represent customer routes that are part of
a VRF instance. Various categories of VPNs are available: L2VPN, L3VPN, and Multicast
VPN (mVPN).
While MPLS has proven to be a reliable and scalable backbone solution for service pro-
viders and large enterprise networks, challenges and shortcomings limit its potential. One
of the challenges of MPLS is its label space limitation. MPLS labels have a fixed length,
which limits the number of labels that can be used in a network to 2^20.
Inter-AS limitations are another challenge of MPLS. MPLS was initially designed to work
within a single administrative domain, which can make it difficult to connect networks
from different domains. More recently, BGP Labeled Unicast was introduced to help with
inter-AS, 6PE, unified MPLS, and CSC scenarios. BGP-LU is key to extending MPLS VPN services across AS boundaries.
Service chaining complexities are another challenge of MPLS. Service chaining involves
forwarding packets through a sequence of network functions or services, which can be
complex to implement in MPLS networks. SRv6 has a far greater applicability for service
chaining compared to MPLS technology.
In the long run, while MPLS has proven to be a reliable and scalable solution for trans-
port networks for many years, network operators are adopting segment routing and SRv6
as the new standard deployment. Greenfield and brownfield deployments are both appro-
priate candidates for implementing SR MPLS or SRv6. In the case of SRv6, it can be used
not only in traditional service provider architecture deployments but also in other areas,
such as data centers, where it is necessary to push traffic through virtual machines, con-
tainers, and different virtual functions. The only requirement in that case is a plain IPv6
data plane without MPLS.