BGP Scaling Strategies and Solutions
• Goal
• Scale Challenges
• Memory Utilization
• Update Groups
• Slow Peer
• Deployment
• Multi-Session
• MPLS VPN
• OS Enhancements
• Conclusion
Goal of This Session
• Present causes of scale challenges
• Present solutions for scaling
• There are no scaling numbers
• What you can control the most:
• Buy a bigger box
• Design the network properly
5
Scale Challenges
• BGP is robust, simple and well-known
• We need to overcome the following:
• Newer services: new AFs
• More prefixes
• Larger scale: more (BGP) routers
• More multipath
• More resilience
• PIC Edge, Best External, leading to more prefixes/paths
6
(Chart: growth of BGP ASNs, from roughly 10k to 50k over the period shown.)
7
More Services Using BGP
1990–2015 timeline: IPv4 IDR → IPv4 enterprise → MPLS VPN → BGP FC → PIC → BGP flowspec
8
Service Address Families (For Your Reference)
• IPv4 unicast, IPv6 unicast, vpnv4 unicast, nsap unicast, IPv4 Flowspec
• IPv4 multicast, IPv6 multicast, vpnv4 multicast, l2vpn vpls, IPv6 Flowspec
• IPv4 MVPN, IPv6 MVPN, vpnv6 unicast, l2vpn evpn, vpnv4 Flowspec
9
Memory Utilization
10
High Memory Utilization - Solutions
• Partial routing table
11
High Memory Utilization
Route refresh:
• inbound filter
• Filtered prefixes are dropped
• Support needed on the peer, but this is a very old feature
• Changed filter: the router sends a route refresh request to the peer to get the full table from the peer again
Soft reconfiguration inbound:
• inbound filter
• Filtered prefixes are stored: much more memory used
• Support needed only on the router itself
• Changed filter: re-apply the policy to the table with filtered prefixes
12
Full Mesh iBGP
13
Is Full Mesh iBGP Scalable?
• Per BGP standard: iBGP needs to be full mesh
• Total iBGP sessions = n * (n-1) / 2
• Sessions per BGP speaker = n - 1
• Two solutions
1. Confederations
2. Route reflectors
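The quadratic growth that motivates confederations and route reflectors is easy to check. A standalone sketch (not from the slides) of the two formulas above:

```python
def full_mesh_sessions(n: int) -> int:
    """Total iBGP sessions in a full mesh of n speakers: n * (n - 1) / 2."""
    return n * (n - 1) // 2

def sessions_per_speaker(n: int) -> int:
    """Each speaker must peer with every other speaker: n - 1 sessions."""
    return n - 1

# 10 routers need 45 sessions; 500 routers already need 124750
for n in (10, 100, 500):
    print(n, full_mesh_sessions(n), sessions_per_speaker(n))
```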
14
Confederations
• Create a number of sub-ASes (e.g. subAS 65001–65004) inside the larger confederation AS (AS 100)
• The confederation AS looks like a normal AS to the outside
(Diagram: AS 100 split into subAS 65001–65004 containing routers R1–R13.)
15
Route Reflectors
• A route reflector (RR) is an iBGP speaker that reflects routes learned from iBGP peers to other iBGP peers, called RR clients
• The iBGP full mesh is turned into hub-and-spoke
• RR is the hub in a hub-and-spoke design
16
Route Reflector
What’s Possible?
(Diagram: AS 100 with an RR; clients R1–R4 grouped in clusters, non-client R5; eBGP peering to AS 101.)
17
Route Reflector - Cluster
• Redundancy needed: minimum of 2 RRs per cluster
• Full mesh between RRs
• Cluster = an RR and its clients; clusters should be kept small
• RRs can be arranged hierarchically (Tier 2, Tier 3, ...); there is no limit to the number of tiers
19
Route Reflector – Same Cluster-ID or Not?
• Using the same or a different cluster-ID for RRs in one set?
– Different cluster-ID: additional memory and processor overhead on the RR
– Same cluster-ID: fewer redundant paths (e.g. RR1 has only 1 path for routes from RRC2 when RR1 and RR2 share a cluster-ID)
20
Picking RRs
How many? Where? Which kind?
• Platform examples: 7200, ASR1K
IOS-XR configuration:
route-policy block-into-fib
  if destination in (...) then
    drop
  else
    pass
  endif
end-policy
!
router bgp 1
 address-family ipv4 unicast
  table-policy block-into-fib
22
Multi-Cluster ID
router bgp 1
no bgp client-to-client reflection intra-cluster cluster-id [Link]
no bgp client-to-client reflection intra-cluster cluster-id [Link]
Confederations vs Route Reflectors
• Policy control: confederations – along outside borders and between sub-ASes; RRs – along the outside border
• Dampening: possible on eBGP confederation sessions
• Scalability: confederations – medium, still requires a full iBGP mesh within each sub-AS; RRs – very high
• Migration: confederations – very difficult (impossible in some situations), but easy when merging two companies; RRs – moderately easy
• Deployment of (new) features: confederations – decentralized; RRs – central on the RR
• RRs are transparent to the AS; the next hop is preserved
26
Grouping of BGP Neighbors: Optimization
Configuration/administration Performance/scalability
27
Update Group on RR
• Update groups are very useful on all BGP speakers
– but mostly on RRs, due to
• the number of peers
• equal outbound policy
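A hypothetical sketch of the grouping idea: peers with identical outbound policy land in one update group, so each update is formatted once and replicated to all members (peer addresses and policy tuples below are invented for illustration):

```python
from collections import defaultdict

def build_update_groups(peers: dict) -> dict:
    """Group peer names by their (hashable) outbound policy."""
    groups = defaultdict(list)
    for peer, policy in peers.items():
        groups[policy].append(peer)
    return dict(groups)

peers = {
    "10.0.0.1": ("rr-client", "no-filter"),
    "10.0.0.2": ("rr-client", "no-filter"),
    "10.0.0.3": ("ebgp", "filter-A"),
}
groups = build_update_groups(peers)
# the two RR clients share one update group; the eBGP peer gets its own
```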
29
IOS
Update Group Replication
• On the RR, one BGP update is formatted according to the update-group leader's policy, then replicated once per group member

RR#show ip bgp replication
Index  Members  Leader   MsgFmt  MsgRepl  Csize   CurrentVer/NextVer
2      101      [Link]   2013    24210    0/2000  3201/0

• Update group 2 has 101 members; 2013 messages were formatted according to the leader's policy and replicated 24210 times; the message cache (size 2000) currently holds 0 messages
30
IOS
Adaptive Message Cache Size
• Cache = the place formatted BGP messages are stored before they are sent
• The update message cache size throttles update groups during update generation and controls transient memory usage
• The cache size is now adaptive
• Variable (changes over time) queue depth from 100 to 5000, based on:
• Number of peers in an update groups
• Installed system memory
• Type of address family
• Type of peers in an update group
• Benefits
• Update groups with large number of peers get larger update cache
• Allows routers with bigger system memory to have *appropriately* bigger cache size and thereby queue
more update messages
• vpnv4 iBGP update groups have larger cache size
• Old cache sizing scheme could not take advantage of expanded memory available on new platforms
• Results in faster convergence
31
Parallel Processing of Route-refresh (and New Peers)
Refresh Group Re-announcements Transient Updates
32
IOS-XR
Update Groups in IOS XR
RP/0/6/CPU0:router#show bgp vpnv4 unicast update-group
33
Slow Peer
34
IOS
Slow Peer
• A slow peer cannot keep up with the rate at which we generate update messages, over a prolonged period of time (on the order of minutes)
• A filled-up cache blocks all peers in the update group
• Possible causes: high CPU; transport issues (packet loss / loaded links / TCP)
• Detection phase: track the peer's queue
• Protection phase: move the peer to a "slow" update group
• Recovery phase: when the slow update group is no longer slow, the peer is moved back; convergence speed of the original update group stays OK
%BGP-5-SLOWPEER_DETECT: Neighbor IPv4 Unicast [Link] has been detected as a slow peer
• Allows fast and slow peers to proceed at their own speed
35
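The detection phase can be pictured as watching a peer's update queue over time. An illustrative sketch (thresholds and sample rates are invented, not the IOS implementation):

```python
def detect_slow_peer(queue_depth_samples, threshold=1000, sustained=120):
    """Flag a peer as slow when its update queue depth stays above
    `threshold` for `sustained` consecutive samples (e.g. seconds)."""
    run = 0
    for depth in queue_depth_samples:
        run = run + 1 if depth > threshold else 0
        if run >= sustained:
            return True
    return False

# a queue that drains even once resets the "prolonged period" counter
slow = detect_slow_peer([1500] * 130)
recovering = detect_slow_peer([1500] * 60 + [0] + [1500] * 60)
```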
Slow Peer CLI
Detection configuration: per AF, per VRF, per peer, per peer policy template
Static protection configuration: per AF, per peer(-group), per peer policy template
36
Old Slow Peer Solution
Solution before this feature: manual movement
router bgp 1
address-family vpnv4
neighbor [Link] advertisement-interval 1
37
Slow Peer Mechanism Details (For Your Reference)
• Identifying Slow Peer
38
RR Problems & Solutions
40
Best Path Selection & Route Advertisement on RR
(Diagram: the RR holds two paths for prefix Z — Path 1, NH PE1, best; Path 2, NH PE2 — but reflects only the best path, so the ingress PE does not learn the 2nd path.)
• The BGP4 protocol specifies the selection and propagation of a single best path for each prefix
• If RRs are used, only the best path will be propagated from RRs to ingress BGP speakers
• Multipath on the RR does not solve the issue of RR only sending best path
• This behavior results in number of disadvantages for new applications and services
41
Why Having Multiple Paths?
• Convergence
• BGP Fast Convergence (multiple paths in local BGP table)
• BGP PIC Edge (backup paths ready in forwarding plane)
• Prevent oscillation
• The additional info on backup paths leads to local recovery as opposed to relying on iBGP
• Stop persistent route oscillations caused by comparison of paths based on MED in
topologies where route reflectors or the confederation structure hide some paths
42
Diverse BGP Path Distribution
Overview
43
Unique RD for MPLS VPN
(Diagram: with a unique RD per VRF per PE, prefix Z from PE1 is advertised as Z/RD1; the same prefix from another PE carries a different RD, so the RR sees distinct vpnv4 routes, keeps both paths, and the ingress PE learns Path 1 (NH PE1) and Path 2 (NH PE2).)
44
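The reason unique RDs preserve path diversity is that vpnv4 routes are keyed by (RD, prefix). A small sketch (RD and next-hop values invented) of one-best-path-per-key behavior:

```python
def rr_table(routes):
    """Keep one best path per route key, like a BGP table keyed on NLRI."""
    table = {}
    for key, nexthop in routes:
        table.setdefault(key, nexthop)   # first path kept as "best"
    return table

# same RD on both PEs: the two advertisements collide on one key
shared_rd = rr_table([(("1:1", "Z"), "PE1"), (("1:1", "Z"), "PE2")])
# unique RD per PE: two distinct keys, both paths survive on the RR
unique_rd = rr_table([(("1:1", "Z"), "PE1"), (("1:2", "Z"), "PE2")])
```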
Shadow Route Reflector (aka RR Topologies)
(Diagram: PE1 and PE2 both advertise prefix Z; RR1 reflects the best path (NH PE1) while the shadow RR2 computes and reflects the 2nd best path (NH PE2), so the ingress PE learns both.)

router bgp 1
 address-family ipv4
  bgp additional-paths select backup
  neighbor [Link] advertise diverse-path backup

• Easy deployment
• One additional “shadow” RR per cluster
• RR2 announces the 2nd best path, which differs from the primary best path on RR1 by next hop
45
Shadow Route Reflector
• Note: primary RRs do not need diverse-path code
• If the RR and shadow RR are co-located (same VLAN, equal IGP metric towards the prefix), they do not need to turn off the IGP metric check
• If the RR and shadow RR are not co-located and all links have the same IGP cost, RR2 would advertise the same path as RR1; solution:
RR(config-router-af)#bgp bestpath igp-metric ignore
46
Shadow Session
• A second session from the RR to the RR client (PE3) carries the diverse-path command in order to advertise the 2nd best path
(Diagram: the RR holds Path 1 (NH PE1, best) and Path 2 (NH PE2, 2nd best); over the shadow session, PE3 learns both paths.)
• Easy deployment – only the RR needs diverse-path code and one new iBGP session per extra path (CLI knob on RR)
48
BGP PIC (Prefix Independent Convergence) Edge
Goals
• Improved convergence
• Reduced packet loss
• The same convergence time for all BGP prefixes (PIC)
49
MPLS VPN Dual Homed CE - No PIC Edge
(Diagram: CE1 is dual-homed to PE1 and PE2; without PIC Edge, the ingress PE learns only the best path (NH PE1) and has no precomputed backup.)
50
MPLS VPN Dual-Homed CE - PIC Edge
router bgp 1
 address-family vpnv4
  bgp additional-paths install

(Diagram: the ingress PE now installs Path 2 (NH PE2) as a backup/repair path next to the best path (NH PE1); PE2 holds P: Z with NH CE1, localpref 100, external, best.)
52
No BGP Best External - Changed BGP Policy
(Diagram: PE1 holds the external best path via CE1 with localpref 200; PE3 learns only the internal path via PE1 and has no backup/repair path, because PE2 does not advertise its non-best external path.)
53
BGP Best External - Changed BGP Policy
(Diagram: with best-external, PE2 advertises its best external path (NH CE1) even though the internal path via PE1 (localpref 200) is best; PE3 then holds Path 2 (NH PE2, localpref 100, internal) as a backup/repair path.)

router bgp 1
 address-family vpnv4
  bgp additional-paths install
  bgp additional-paths select best-external
  neighbor x.x.x.x advertise best-external
54
ADD Path

RR config:
router bgp 1
 address-family ipv4
  bgp additional-paths select best 2
  bgp additional-paths send
  neighbor PE3 advertise additional-paths best 2

PE config:
router bgp 1
 address-family ipv4
  bgp additional-paths receive
  bgp additional-paths install

(Diagram: the RR selects Path 1 (NH PE1, best) and Path 2 (NH PE2, best2) and sends both to PE3, which installs NH PE2 as backup/repair.)

• PE routers need to run newer code in order to understand the second path
• A path identifier is used to track the different paths
55
Add Path - Possibilities

add-all-path
• The RR does the first best path computation and then sends all paths to the border routers
• This is the only mandatory selection mode
• Pros: all paths are available on the border routers
• Cons: all paths stored; more BGP info is exchanged
• Use case: ECMP, hot potato routing

add-n-path
• The RR does a best path computation for up to n paths and sends n paths to the border routers (n is limited to 3 (IOS) or 2 (IOS-XR) to preserve CPU power)
• Pros: less storage used for paths; less BGP info exchanged
• Cons: more best path computation
• Use case: primary + n-1 backup scenario = fast convergence

multipath (IOS-XR only)
• The RR does the first best path computation and then sends all multipaths to the border routers
• Use case: load balancing, and primary + backup scenario
56
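The difference between the modes is just how many of the ranked paths the RR hands out. A sketch (path attributes and the single "pref" ranking key are invented simplifications of BGP best-path selection):

```python
def select_paths(paths, mode="add-n-path", n=2):
    """Rank paths and return the set an RR would advertise downstream."""
    ranked = sorted(paths, key=lambda p: p["pref"], reverse=True)
    if mode == "add-all-path":
        return ranked          # everything, ranking cost paid once
    return ranked[:n]          # add-n-path: best n only (n <= 3 IOS, <= 2 IOS-XR)

paths = [{"nh": "PE1", "pref": 300},
         {"nh": "PE2", "pref": 200},
         {"nh": "PE3", "pref": 100}]
best2 = select_paths(paths, "add-n-path", 2)   # primary + 1 backup
```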
Add-Path - IOS-XR Example Config (For Your Reference)
• Path selection is configured in a route-policy
• A global command, per address family, turns on add-path in BGP
• Configuration in VPNv4 mode applies to all VRF IPv4-unicast AF modes unless overridden at individual VRFs

router bgp 1
 address-family vpnv4
  additional-paths install backup   (deprecated)
  additional-paths advertise
  additional-paths receive
  additional-paths selection route-policy apx

Example RPL configs:

route-policy ap1   (add-n-path)
  if community matches-any (1:1) then
    set path-selection backup 1 install
  elseif destination in ([Link]/16, [Link]/16) then
    set path-selection backup 1 advertise install
  endif
end-policy

route-policy ap2   (add-all-path)
  set path-selection all advertise
end-policy

route-policy ap3   (multipath)
  set path-selection multipath advertise
end-policy

route-policy ap4   (multipath-protect: needed to have a non-multipath path as backup path)
  set path-selection backup 1 install multipath-protect advertise
end-policy
57
Hot Potato Routing - No RR
• Hot potato routing = packets are passed on (to next AS) as soon as received
• Shortest path though own AS must be used
• In transit AS: same prefix could be announced many times from many eBGP peers
(Diagram: the transit AS learns prefix Z over eBGP at PE1, PE2, and PE3; with a full iBGP mesh, each router — e.g. PE4 — picks the exit closest to itself, here NH PE3, as best.)
58
Hot Potato Routing - With RR
• Introducing RRs breaks hot potato routing
• Solutions: unique RD for MPLS VPN, or Add Path
(Diagram: the RR holds all three paths for prefix Z but picks and reflects only its own closest exit (NH PE1) as best, so PE4 no longer uses its nearest exit PE3.)
59
Hot Potato Routing in Large Transit SP
(Diagram: a large transit SP with many border routers (BR), local RRs per region, and a full mesh between regional RRs.)
• Large transit ISPs run a full iBGP mesh between regional RRs and hub/spoke between local BRs and their RR
• A full mesh gives global hot potato routing
• add-all-path could be deployed between centralized and regional RRs
• Also possible: remove the need for regional RRs if all BR routers support add-path
60
Deployment
61
BGP Selective Download
• The access router RIB holds the full Internet routing table, but the FIB holds fewer routes
• The FIB holds a default route plus selected more-specific routes
• Example platforms: ME switches, ASR900
• Enterprise CPE devices still receive full Internet routes through their BGP peering with the access router(s)
(Diagram: ASBRs hold full Internet routes in RIB and FIB; the access router holds a full RIB but a partial FIB.)
62
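The RIB/FIB split can be pictured as a filter applied when installing routes. A sketch (the prefixes and the selection predicate are invented; real selective download uses configured policy):

```python
def build_fib(rib, keep):
    """Install a default route plus only the routes the policy keeps."""
    fib = {"0.0.0.0/0"}                     # default route always installed
    fib.update(p for p in rib if keep(p))
    return fib

rib = {"0.0.0.0/0", "192.0.2.0/24", "198.51.100.0/24", "203.0.113.0/24"}
# hypothetical policy: only one customer-facing more-specific goes to hardware
fib = build_fib(rib, keep=lambda p: p == "192.0.2.0/24")
```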
Path MTU Discovery (PMTUD)
• MSS (Max Segment Size) – limit on the largest segment that can traverse a TCP session
• Anything larger must be fragmented & reassembled at the TCP layer
• MSS is 536 bytes by default for BGP without PMTUD
• Enable PMTUD for BGP with
• Older command “ip tcp path-mtu-discovery”
• Newer command “bgp transport path-mtu-discovery” (PMTUD is now on by default)
• 536 bytes is inefficient for Ethernet (MTU 1500) or POS (MTU 4470) networks
• TCP is forced to break large segments into 536-byte chunks
• Adds overhead
• Slows BGP convergence and reduces scalability
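The arithmetic behind those bullets: the segment count for a given volume of BGP updates at the default MSS versus an Ethernet-sized MSS (the 10 MB table size below is an invented illustration):

```python
import math

def segments(update_bytes: int, mss: int) -> int:
    """TCP segments needed to carry `update_bytes` at a given MSS."""
    return math.ceil(update_bytes / mss)

table_bytes = 10_000_000              # illustrative full-table dump
small = segments(table_bytes, 536)    # default MSS without PMTUD
large = segments(table_bytes, 1460)   # typical MSS on a 1500-byte MTU path
# roughly 2.7x fewer segments (and headers, and ACKs) with PMTUD on Ethernet
```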
63
Session/Timers
• Timers = keepalive and holdtime
• The defaults are OK; the smallest values are 3/9 for keepalive/holdtime
• Scaling <> small timers
• Use BFD instead
• BFD is built for speed: when a failure occurs, it notifies the BFD client (in 10s of msecs)
• Do not use Fast Session Deactivation (FSD)
– neighbor x.x.x.x fall-over tracks the route to the BGP peer
– A temporary loss of the IGP route will kill off the iBGP session: very dangerous for iBGP peers
– The IGP may not have a route to a peer for a split second; FSD would tear down the BGP session
– FSD is off by default
– Next Hop Tracking (NHT), enabled by default, does the job fine
64
IOS
Dynamic Neighbors
• Remote peers are defined by an IP address range
• Less configuration for defining neighbors (e.g. a DMVPN hub with many spokes)
65
IOS-XR
BGP Attribute Download

bgp attribute-download

• Attributes (originating AS, communities, extended communities, AS-path) are downloaded to the RIB & FIB
• Used, for example, by NetFlow
66
Multisession
67
IOS
Multisession
• BGP Multisession = multiple BGP (TCP) sessions between 2 BGP speakers, instead of a single session carrying all AFs
• Even if there is only one BGP neighbor statement defined between the BGP speakers in the configuration
68
IOS
Multisession (For Your Reference)

BGP: [Link] passive rcvd OPEN w/ optional parameter type 2 (Capability) len 3
BGP: [Link] passive OPEN has CAPABILITY code: 131, length 1
BGP: [Link] passive OPEN has MULTISESSION capability, without grouping
70
MPLS VPN
71
RR-groups
• Use one RR (or a set of RRs) for a subset of prefixes
• By carving up the range of RTs
(Diagram: RR1 serves rr-group 1; RR2 serves another group.)
72
RR-groups Configuration Example
• Dividing of RTs is done with a simple ext community list (1–99) or an ext community list with a regular expression (100–500)

address-family vpnv4
 bgp rr-group 100
address-family vpnv6
 bgp rr-group 100
73
Route Target Constraint (RTC)
• Current behavior:
• RR sends all vpnv4/6 routes to the PE
• PE routers drop vpnv4/6 routes for which there is no importing VRF
• RTC behavior: the RR sends only “wanted” vpnv4/6 routes to the PE
• “wanted”: the PE has a VRF importing the Route Targets of the specific routes
• RFC 4684
• New AF “rtfilter”
• Received RT filters from neighbors are translated into outbound filtering policies for vpnv4/6 prefixes
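The RR-side effect of RTC can be sketched as simple set intersection: advertise a vpn route to a PE only if the route carries an RT that PE imports (route data below is invented):

```python
def rtc_advertise(routes, imported_rts):
    """Return only routes whose RT set intersects the PE's imported RTs."""
    return [r for r in routes if set(r["rts"]) & imported_rts]

routes = [{"prefix": "10.1.0.0/16", "rts": {"1:1"}},
          {"prefix": "10.2.0.0/16", "rts": {"2:2"}}]

# the PE advertised (via the rtfilter AF) that it imports only RT 1:1
sent = rtc_advertise(routes, imported_rts={"1:1"})
# the 2:2 route is filtered on the RR instead of being dropped on the PE
```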
74
Route Target Constraint (RTC)
(Diagram: PE1, RR, and PE2 exchange the RTFilter capability (AFI/SAFI 1/132) in the BGP OPEN message, for vpnv4 & vpnv6.)
75
Route Target Constraint (RTC)
• Results
• Eliminates the waste of processing power on the PE and the waste of bandwidth
• The number of vpnv4 formatted messages is reduced by 75%
• BGP Convergence time is reduced by 20 - 50%
• The more sparse the VPNs (few common VPNs on PEs), the more performance gain
• Note
• RTC clients of RR with different set of importing RTs will be in the same update group on the RR
• In IOS-XR, different filter group under same subgroup
76
Legacy PE RT Filtering
• Problem: if one PE does not support RTC (a legacy PE), then all RRs in one cluster must store and advertise all vpn prefixes to that PE
• Solution: the legacy PE sends special prefixes to mimic RTC behavior, without RTC code

Legacy PE:
• Collects import RTs
• Creates a route-filter VRF (same RD for all these VRFs across all PEs)
• Originates special route-filter route(s) with:
• the import RTs attached
• one of 4 route-filter communities
• the NO-ADVERTISE community

RR:
• The presence of the community triggers the RR to extract the RTs and build RT membership information
• The RR only advertises wanted vpn prefixes towards the legacy PE

4 route-filter communities:
0xFFFF0002 ROUTE_FILTER_TRANSLATED_v4
0xFFFF0003 ROUTE_FILTER_v4
0xFFFF0004 ROUTE_FILTER_TRANSLATED_v6
0xFFFF0005 ROUTE_FILTER_v6
77
Legacy PE RT Filtering
(Diagram: legacy PE2 and CE2 import RT 1:1; after the AF vpnv4/6 prefix exchange, PE1 sends all its vpnv4/6 prefixes to the RR, and the RR sends only the matching (RT 1:1) vpnv4/6 prefixes on to PE2.)
78
Legacy PE RT Filtering - Configuration (For Your Reference)

Legacy PE config:
ip vrf route-filter
 rd 9999:9999
 export map SET_RT
!
router bgp 1
 address-family ipv4 vrf route-filter
  network [Link] mask [Link]

RR config:
router bgp 1
 address-family vpnv4
  neighbor [Link] route-reflector-client
  neighbor [Link] accept-route-legacy-rt
  neighbor [Link] route-map legacy_PE out

PRO:
• Removes the Internet routing table from P routers
• Security: moves the Internet into a VPN, out of the global table
• Added flexibility; more flexible DDoS mitigation
CON:
• Increased memory and bandwidth consumption
81
Full Internet in a VRF?
• Considerations
(Diagram: PE1 uses RD 1:1 and PE2 uses RD 1:2 for the Internet VRF; both advertise via the RR to PE3 and PE4.)
82
Per-CE Label
• One unique label per prefix is always the default
• Per-CE: one MPLS label per next hop (so per connected CE router); 2 CEs = 2 labels
• No IP lookup is needed after the label lookup
• Caveats:
• No granular load balancing, because the bottom label is the same for all prefixes from one CE, if the platform load balances on the bottom label
• eBGP load balancing & BGP PIC are not supported (they make use of label diversity), unless resilient per-CE label is used
• Only single-hop eBGP is supported, no multihop
• Since the number of prefixes (n) is much larger than the number of CE routers (x) per VPN, the number of MPLS labels used is very low
83
Per-VRF Label
• Per-VRF : one MPLS label per VRF (all CE routers in the VRF)
- Con: IP lookup needed after label lookup
- Con: No granular load balancing because the bottom label is the same for all prefixes, if platform
load balances on bottom label
- Potential forwarding loop during local traffic diversion to support PIC
- No support for EIBGP multipath
Number of MPLS labels used per VRF is 1 !
85
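The label-count trade-off between the three allocation modes is just arithmetic. A sketch with invented VRF numbers:

```python
def labels_needed(mode: str, n_prefixes: int, n_ces: int) -> int:
    """MPLS labels a PE allocates for one VRF under each allocation mode."""
    if mode == "per-prefix":   # the default: one label per prefix
        return n_prefixes
    if mode == "per-ce":       # one label per next hop (connected CE)
        return n_ces
    if mode == "per-vrf":      # one label for the whole VRF
        return 1
    raise ValueError(mode)

n, x = 10_000, 4               # hypothetical VRF: 10k prefixes over 4 CEs
counts = {m: labels_needed(m, n, x)
          for m in ("per-prefix", "per-ce", "per-vrf")}
```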
OS Enhancements
86
ASR9K: Scaling Enhancement
• BGP RIB Scale enhancement in 5.1.1
• Only for RSP440-SE
• Reload is needed
87
Multi-Instance BGP
• A new IOS-XR BGP architecture to support multiple BGP instances
• Each BGP instance is a separate process running on the same or a different RP/DRP node
• Different prefix tables per instance
• Multiple ASNs are possible
• Solves the 32-bit OS virtual memory limit
(Diagram: RR1 runs multi-instance BGP with separate vpnv4 and IPv4 instances; PE1 peers with both.)
OS Enhancements per release
• BGP Keepalive Enhancements (IOS): priority queues for reading/writing Keepalive/Update messages; result = avoids neighbor flaps
• BGP Generic Scale Enhancements (IOS)
• BGP PE Enhancements (IOS-XR): optimised BGP processing of labels on the PE router; result = reduced CPU usage
• BGP PE-CE Scale Enhancements (IOS, IOS-XR): result = considerable memory savings / greater prefix scalability
Measure
• Prefix instability, traffic drops: show bgp all summary, show bgp table
• Table versions, timestamps: show bgp process performance-statistics detail
• This is a forced clear of the slow-peer status; the peer is moved to the original
update group
• Needed when the permanent keyword is configured
• clear bgp AF {unicast|multicast} * slow
• clear bgp AF {unicast|multicast} <AS number> slow
• clear bgp AF {unicast|multicast} peer-group <group-name> slow
• clear bgp AF {unicast|multicast} <neighbor-address> slow
96
Slow Peer Mechanism Details (For Your Reference)
Clearing
• CLI to clear: see the clear ... slow commands listed above
97
Route-Refresh Update Group: When Is a Route Refresh Request Sent? (For Your Reference)
• A route refresh request is sent, when:
• a user types clear ip bgp [AF] {*|peer} in
• a user types clear ip bgp [AF] {*|peer} soft in
• adding or changing the inbound filtering on the BGP neighbor
• via route-map
• configuring allowas-in for the BGP neighbor
• configuring soft-reconfiguration inbound on the BGP neighbor
• in MPLS VPN (for AFI/SAFI 1/128)
• a user adds a route-target import to a VRF
• in 6VPE (for AFI/SAFI 2/128)
• a user adds a route-target import to a VRF
98
Route Reflector Loop Prevention (For Your Reference)
• Because we have RRs and same prefix can be advertised multiple times within iBGP
cloud: loop prevention needed in iBGP cloud
• Two BGP attributes, two ways
• Originator ID
• Set to the router ID of the router injecting the route into the AS
• Set by the RR
• Cluster List
• Each route reflector the route passes through adds their cluster-ID to this list
• Cluster-id = Router ID by default
• “bgp cluster-id x.x.x.x” command to set the cluster-id
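The two checks can be sketched as a simple accept/reject predicate on a reflected route (route attributes below are invented values, not a real BGP implementation):

```python
def accept_reflected(route, my_cluster_id, my_router_id):
    """Reject a reflected route that has looped back to us."""
    if route["originator_id"] == my_router_id:
        return False              # route came back to its originator
    if my_cluster_id in route["cluster_list"]:
        return False              # route already passed through this cluster
    return True

route = {"originator_id": "10.0.0.1",
         "cluster_list": ["1.1.1.1", "2.2.2.2"]}
```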
Loop Prevention
(Diagram: RRC1 injects a route with AS path {65000}; as it passes RR1 and RR2 they set Originator-ID RRC1 and build Cluster-List {RR1, RR2}; when the route is reflected back, the loop is detected.)
100
Route Reflector Route Advertisement (RR & RR client)
• Prefix coming from an eBGP peer: the RR sends the prefix to clients and non-clients (and to other eBGP peers)
• Prefix coming from an RR client: the RR reflects the prefix to clients and sends it to non-clients (and to eBGP peers)
• Prefix coming from a non-client: the RR reflects the prefix to clients (and sends it to eBGP peers)
101
IOS-XR
Update Groups in IOS XR
RP/0/6/CPU0:router#show bgp vpnv4 unicast update out update-group 0.2
VRF "default", Address-family "VPNv4 Unicast"
Update-group 0.2
 Flags: 0x0010418b
 Sub-groups: 1 (0 throttled)
 Refresh sub-groups: 0 (0 throttled)
 Filter-groups: 3
 Neighbors: 3 (0 leaving)
 Update OutQ: 0 bytes (0 messages)
 Update generation recovery pending ? [No]
 Last update timer start: Apr 3 [Link].425
 Last update timer stop: ---
 Last update timer expiry: Apr 3 [Link].435 (1w4d ago)
 Update timer running ? [No] (0.000 sec remaining; last started for 0.010 sec)

• Hierarchy: address family > update groups > sub-groups > refresh sub-groups
103