Datacenter Networking With Aruba CX
June 17, 2020
HPE IS THE EDGE-TO-CLOUD PLATFORM AS-A-SERVICE COMPANY
Everything as-a-service, for developers, lines of business, data scientists, and IT ops: security, device detection, location, identity, IaaS & PaaS, cost and compliance. Aruba delivers the Intelligent Edge and HPE the Hybrid Cloud, on a platform that is open, cloud-native, intelligent, autonomous, and secure.
ARUBA DCN STRATEGY AND PRIORITIES
– Invest in innovation: cloud-native Aruba CX
– Rationalize the portfolio: big bets on ASIC, cloud, NVMe-oF
– Deliver pan-HPE solutions: make it easy and rewarding to sell
Aruba CX Unique Value
AOS-CX: Accessible from System, NMS, or Cloud
Built on cloud-native principles:
– Modularity: faster innovation with independent processes
– Programmability: 100% REST APIs; simplified operations through automation
– Resiliency and elasticity
Key building blocks: Aruba Network Analytics Engine, time-series database, state database, and a microservices architecture.
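Programmability in practice: the REST API is served by the switch itself and can be turned on from the CLI. A minimal sketch (the management VRF and read-write access mode are assumptions; defaults vary by AOS-CX release):

! Sketch: enable the on-switch REST API (VRF name is an assumption)
https-server rest access-mode read-write
https-server vrf mgmt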
AOS-CX SWITCHING FOR THE ENTERPRISE
– CX 8400: modular, deep buffers, large tables, carrier-class HA
– CX 6400: modular, high-density access, core and aggregation
High Availability with Virtual Switching Extension (VSX)
What Makes for a Good High Availability Solution?
Comparison of Virtualization Solutions
Control planes per virtualized pair:
– HPE FlexFabric 129xx/59xx/57xx (with IRF): 1
– Cisco Nexus 3/5/7/9xxx (with vPC): 2
– Aruba 8400(1)/8320 (with VSX): 2
(1) Requires dual supervisor in each chassis.
(2) Intended for a future software release.
VSX Meets Our Customers' Needs for High Availability
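For orientation, a minimal VSX pair sketch on AOS-CX: a LAG carries the inter-switch link (ISL), and the vsx context sets the role and keepalive. Port numbers, addresses, and the system MAC are illustrative assumptions; the peer switch mirrors this with role secondary and swapped keepalive addresses:

! Sketch: ISL on LAG 256 (member ports are assumptions)
interface lag 256
    no shutdown
    no routing
    vlan trunk native 1
    vlan trunk allowed all
    lacp mode active
interface 1/1/47
    no shutdown
    lag 256
interface 1/1/48
    no shutdown
    lag 256
! VSX pairing; system MAC and keepalive addresses are illustrative
vsx
    system-mac 02:00:00:00:10:00
    inter-switch-link lag 256
    role primary
    keepalive peer 192.168.0.2 source 192.168.0.1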
NetEdit workflow: Discover, Search, Edit, Validate, Deploy, Audit, Monitor, Notify.
NetEdit VRDs:
– See Airheads for the latest Data Center VRDs with NetEdit
– See the YouTube Airheads Broadcasting Channel for the latest videos with NetEdit
Distributed Analytics in Every CX Switch with Network Analytics Engine (NAE)
Overview: monitor, deploy a fix, and prevent the issue from occurring again.
Intelligent Embedded Distributed Pre-Processing with Aruba Network Analytics Engine (NAE)
Other monitoring approaches:
– Probes and show commands: needle in the haystack; difficult to recreate and/or identify issues
– Telemetry streaming: latency and large, unfiltered data sets; delays in data processing and analysis mean longer MTTR
– Third-party monitoring tools: manual correlation and limited actionable insights; resource intensive
The Aruba CX approach (NAE integrated everywhere in the network, surfaced through Aruba NetEdit):
– Real-time, network-wide visibility with actionable data
– Automated monitoring for rapid detection of issues
– A 24/7 network technician built in to every switch
Problem Statement: VSX
ESXi Server1 is dual-homed to a VSX ToR pair (port 1/1/1 on each switch).
vCenter tasks:
1. Create LAG with Server1 vmnic2/3 (1/1/1 TOR-1, 1/1/1 TOR-2)
2. Assign VLANs 1-80, then update the network ticket
3. Wait for the ticket update
Network team, some time later:
1. VSX LAG created for Server1 vmnic2/3 (1/1/1 TOR-1, 1/1/1 TOR-2)
2. VLANs 1-80 assigned
Completed, ticket closed. The manual hand-off is where the elapsed time goes.
Automated Connectivity Set-up (CLI Config)
Same topology, now with an NAE vCenter agent on the VSX pair (port 1/1/1 on each ToR): when the LAG and VLANs are created in vCenter, the agent configures the switches automatically and posts a message to a Slack channel. Result: reduced time.
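What the agent pushes to each ToR is an ordinary VSX multi-chassis LAG. A minimal sketch using the VLAN range from the example (the LAG number is an assumption):

! Sketch: LAG 10 is illustrative; VLANs 1-80 match the vCenter task
interface lag 10 multi-chassis
    no shutdown
    no routing
    vlan trunk native 1
    vlan trunk allowed 1-80
    lacp mode active
interface 1/1/1
    no shutdown
    lag 10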
DC Architecture: Optimizing Application and Server Performance
Aruba DC Network Architectures
– Campus Attached: security gateway and Internet/WAN edge at a shared campus/DC core, VSX DC ToRs, VSF in campus buildings. Fits small server rooms: K-12 school districts, small universities, retail.
– Dedicated Data Centers:
– Traditional 2-Tier DC (L3 core over L2 access with VSX): fits small to medium data centers in education, local government, and enterprises.
– Layer 3 Spine and Leaf: fits large universities, financial services, and large enterprises.
Campus/DC Core Layer with VSX – Why?
– Simple design: fewer switches and easier management
– Maximum leverage of the core investment
– VSX Live Upgrade capability at the Top of Rack (ToR)
– Low latency for workloads between DC and campus
Caution areas:
– Core and DC located in separate buildings (cabling with growth)
– Limited growth capabilities
– Increased failure domain: DC and core are combined in an outage
(Topology: L3 at the shared campus/DC core running VSX, L2 VSX DC ToRs, VSF in campus buildings.)
Designing: Campus Attached DCs
– Design oversubscription to meet the demands of the environment
– Recommended scale: ~10 racks
(Topology: security gateway and Internet/WAN edge at the core, DC ToRs, campus buildings.)
Aruba Traditional 2-Tier DC
L2 access with VSX – Why?
– Simple design: simple L2 from rack to rack
– Ease of management
– VSX Live Upgrade capability at the core and Top of Rack (ToR)
Caution areas:
– Headroom on growth capabilities (core port count)
– MAC scale in very large DCs
Designing: 2-Tier Dedicated DC
Modest scale, simplified solution, easy to manage, with L2 connectivity.
Small 2-Tier DC Pod (fixed switches):
– Core layer = 2 x 8325 VSX, each with 32 x 100GbE (QSFP28) ports
– Access layer = 16 racks, each with 2 x 832x VSX
– Total server ports = 1,536 (16 racks x 48 ports x 2)
– Oversubscription = 2.4:1 [960G/400G]; 400G = 4 x 100G uplinks per rack, 960G = 48 x 10G x 2 ports
Large 2-Tier DC Pod (modular core switches):
– Core layer = 2 x 6410 VSX, each with 120 x 100GbE (QSFP28) ports
– Access layer = 60 racks, each with 2 x 832x VSX
– Total server ports = 5,760 (60 racks x 48 ports x 2)
– Oversubscription = 6:1 [2,400G/400G]; 400G = 4 x 100G uplinks per rack, 2,400G = 48 x 25G x 2 ports
Aruba Spine & Leaf DC
Caution areas:
– Complexity due to BGP and overlay/underlay technology
Designing: Spine and Leaf DC
L3 fabric, greater bandwidth/performance, often demanding EVPN/VXLAN.
– Design oversubscription to meet the demands of the environment
– Server interfaces are usually 10/25G; uplink interfaces are usually 100G
– Remember the spine layer cannot add interfaces once its ports/slots are used
– Leaf-layer density is determined by the number of physical interfaces in a single spine switch
– Ensure the table scale of the chosen devices will meet needs
– 3 spines provide added HA value, even during upgrades
Small Spine and Leaf Zone (fixed switches):
– Spine (core) layer = 2 x 8325 VSX, each with 32 x 100GbE (QSFP28) ports; leaf (access) layer = 16 racks, each with 2 x 832x VSX
– Total server ports = 1,536 (16 racks x 48 ports x 2)
– 2-spine oversubscription = 2.4:1 [960G/400G]; 400G = 4 x 100G uplinks per rack, 960G = 48 x 10G x 2 ports
– 4-spine oversubscription = 1.2:1 [960G/800G]; 800G = 8 x 100G uplinks per rack
Large Spine and Leaf Zone (modular switches):
– Spine (core) layer = 2 x 8400 VSX, each with 48 x 100GbE (QSFP28) ports; leaf (access) layer = 24 racks, each with 2 x 832x VSX
– Total server ports = 2,304 (24 racks x 48 ports x 2)
– 2-spine oversubscription = 6:1 [2,400G/400G]; 2,400G = 48 x 25G x 2 ports
– 4-spine oversubscription = 3:1 [2,400G/800G]; 800G = 8 x 100G uplinks per rack
Overlay Architectures: Flattening Server Fabrics
Virtual Extensible LAN (VXLAN) – Overview
– Standards-based data plane: point-to-point tunnels that provide L2 network overlay connectivity across an L3 underlay network
– Any device that supports VXLAN encapsulation/decapsulation is considered a VXLAN Tunnel End Point (VTEP)
– Supports multi-tenancy via VXLAN Network Identifiers (VNIs) and traffic load sharing across Equal-Cost Multi-Path (ECMP) routes between VTEPs
– VXLAN tunnels can be built via one of these methods:
– Static VXLAN
– Centralized control plane (controller based)
– Distributed control plane (non-controller based, MP-BGP EVPN), which provides MAC address advertisements between VTEPs via underlay-network BGP peering and scales better than static VXLAN (which relies on MAC flood-and-learn in the data-plane tunnel)
(Diagram: point-to-point L2 overlay VXLAN tunnels over an L3 IP underlay network.)
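To make the static method concrete, a minimal AOS-CX sketch maps a VLAN to a VNI and statically lists the remote VTEP. All addresses and IDs are illustrative assumptions, and the exact static-VXLAN syntax varies by platform and software release:

! Sketch: loopback-sourced VTEP; VNI 10011 carries VLAN 11 to a statically defined peer
interface vxlan 1
    source ip 10.0.0.1
    no shutdown
    vni 10011
        vlan 11
        vtep-peer 10.0.0.2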
#1: Centralized L3 Gateway with VXLAN/EVPN
When to use it?
– Provides L2 connectivity over an L3 fabric
– Provides inter-VXLAN routing
– Suitable when the Pod is considered trusted
(Topology: L3 DC core in AS#65100; Zone1 fabric in AS#65001 with an OSPF Area 0 underlay, route reflectors RR1/RR2, and overlay VXLAN tunnels; border leafs connect traffic in and out of the zone.)
#2: Centralized L2 Gateway with VXLAN/EVPN
When to use it?
– Suitable for Pods that require a higher level of security
– Extends L2 to the gateway for bridging to a firewall; the firewall can route and inspect traffic
Why?
– VNIs can be shared across any number of leaf pairs (L2 VTEPs)
– Best practice is to leverage a single VSX pair of switches for HA
– Ensures that traffic on the same subnet between VTEPs does not need to traverse the border leaf
(Topology: L3 DC core in AS#65100; Zone1 fabric in AS#65001 with an OSPF Area 0 underlay, RR1/RR2, and overlay VXLAN tunnels; L2 VTEPs bridge to the firewall over an 802.1Q trunk carrying VLANs 11/12. The firewall holds the default gateways 10.1.1.1/24 (VLAN 11) and 10.1.2.1/24 (VLAN 12); VMs sit at 10.1.1.10-11/24 and 10.1.2.10-11/24.)
See the NetEdit DC VXLAN solution config; a sketch of the L2 VTEP side follows.
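On the VSX pair that bridges to the firewall, the L2 VTEP piece might look like this sketch (the VNIs, trunk port, and source address are assumptions; VLANs 11/12 follow the example topology):

! Sketch: map the shared VLANs to VNIs on the L2 VTEP
interface vxlan 1
    source ip 10.0.0.3
    no shutdown
    vni 10011
        vlan 11
    vni 10012
        vlan 12
! 802.1Q trunk toward the firewall, which holds the default gateways
interface 1/1/49
    no shutdown
    no routing
    vlan trunk allowed 11-12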
#3: VMware NSX-V/T 8325 Integration
– Used in environments with NSX, VMs, and bare-metal servers
– Provides L2 network connectivity between VMs (on ESXi hosts) and bare-metal servers connected to the hardware VTEP switch (8325)
– VMware recommends new deployments utilize NSX-T
– VMware does not have any plans to integrate NSX-T with hardware VTEPs
Designing: EVPN/VXLAN
– Identify the VXLAN use case:
– Centralized L3 gateway with VXLAN/EVPN
– Centralized L2 gateway with VXLAN/EVPN
(Diagram: point-to-point L2 overlay VXLAN tunnels.)
A minimal leaf sketch for the EVPN control plane follows.
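Either use case rides on the same MP-BGP EVPN plumbing. A minimal AOS-CX leaf sketch, assuming the underlay (e.g., OSPF) already provides loopback reachability; the AS number, neighbor address, and VNI values are illustrative:

! Sketch: iBGP EVPN peering to a route reflector at 10.0.0.11
router bgp 65001
    neighbor 10.0.0.11 remote-as 65001
    address-family l2vpn evpn
        neighbor 10.0.0.11 activate
        neighbor 10.0.0.11 send-community extended
! VTEP sourced from the loopback; VLAN 11 rides VNI 10011
interface vxlan 1
    source ip 10.0.0.1
    no shutdown
    vni 10011
        vlan 11
! EVPN instance: automatic route distinguisher and route targets
evpn
    vlan 11
        rd auto
        route-target both auto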
Data Center Network Architecture Summary
Platforms: CX 6300, CX 6400, CX 8320/8325, CX 8400
General guidance:
– Check that interfaces, features, bandwidth, and scale meet customer requirements
– Front-to-back (port-to-power) airflow: 6300M/F, 6400, 8320, 8400
– Back-to-front (power-to-port) airflow: 8325, one model of 6300
– VSX (dual control plane) should be recommended for maximum network uptime/HA and live upgrades
– VSF is not recommended for DC core switches due to its single control plane
Aruba and HPE Compute and Storage – Better Together
HPE & Aruba – Better Together
GreenLake, storage, hybrid cloud, networking, Azure, SDS.
– 25GbE switch ports continue to see strong growth, with port shipments rising 57.1% year over year in the quarter
– 100GbE switch revenues grew 24.7% year over year
HPE Hybrid IT Integration Near-term Roadmap
Complete – ready to sell:
– HPE SimpliVity (HCI) integration
– Aruba CX VRD: SimpliVity – 8320 interop
– HPE server DAC cable cross-referencing
– HPE GreenLake for Aruba – Network-as-a-Service
– HPE Nimble dHCI (8320, 8325, 6300M)
– Aruba CX VRD: Nimble and CX interop
New or existing HPE customers updating their network infrastructure can benefit from these integrations.
Aruba Integrated SimpliVity Deployment Architecture
(Diagram: the Deployment Manager (DM) communicates via Thrift and REST over the management network with the DL 380 hosts' DI components.)
– 8320 switch ports toward SimpliVity servers are detected via LLDP and configured with VLANs, trunk mode, and MTU:
interface 1/1/1
    no shutdown
    mtu 9000
    no routing
    vlan trunk native 1
    vlan trunk allowed 5
Easier Solutions Design with Aruba 83xx Switches
– Aruba solution architects and SEs can now rely on an Aruba + HIT certified cable list, cross-referenced by Aruba and the Volume BU, to design end-to-end HIT solutions.
– HIT link: https://h20195.www2.hpe.com/v2/getpdf.aspx/A00002507ENW.pdf?
– Updated Aruba optical guide downloadable from https://asp.arubanetworks.com/downloads
– More to come in 2020:
– 10G, 25G, 40G, 100G AOCs
– 40G, 100G DACs
– Breakout 4x10G, 4x25G cables
– With more switches: 6400, 8400
– Against more HPE solutions: DL servers, Synergy
10/25Gb HPE server adapters tested:
– 817749-B21: HPE Ethernet 10/25Gb 2-port 640FLR-SFP28 Adapter
– 817753-B21: HPE Ethernet 10/25Gb 2-port 640SFP28 Adapter
– 817709-B21: HPE Ethernet 10/25Gb 2-port 631FLR-SFP28 Adapter
– 817718-B21: HPE Ethernet 10/25Gb 2-port 631SFP28 Adapter
– P11338-B21: HPE Ethernet 10Gb 2-port 548SFP+ Adapter
– P08446-B21: HPE Ethernet 10Gb 2-port 524SFP+ Adapter
– 727055-B21: HPE Ethernet 10Gb 2-port 562SFP+ Adapter
– 727054-B21: HPE Ethernet 10Gb 2-port 562FLR-SFP+ Adapter
10Gb Base-T HPE server adapters tested:
– 656596-B21: HPE Ethernet 10Gb 2-port 530T Adapter
– 700759-B21: HPE FlexFabric 10Gb 2-port 533FLR-T Adapter
– 817745-B21: HPE Ethernet 10Gb 2-port 562FLR-T Adapter
– 817738-B21: HPE Ethernet 10Gb 2-port 562T Adapter
DAC cables tested:
– 10G DAC 487655-B21: HPE BLc 10G SFP+ SFP+ 3m DAC Cable
– 10G DAC 537963-B21: HPE BLc 10G SFP+ SFP+ 5m DAC Cable
– 25G DAC 844477-B21: HPE 25Gb SFP28 to SFP28 3m DAC
– 25G DAC 844480-B21: HPE 25Gb SFP28 to SFP28 5m DAC
Split Cables and Split Ports Using Transceivers – 10.5
HPE split cables(1):
– SKU# 721064-B21: HPE BladeSystem c-Class 40G QSFP+ to 4x10G SFP+ 3m Direct Attach Copper Splitter Cable – US list price $529
Fixed spine options:
– 8320: 32 x 40G
– 8325: 32 x 40/100G
– 8360: 12 x 100G
ToR: 6300M 48P, power-to-port airflow, 1G RJ-45 OOBM
Customer concerns:
– Business agility: "I need to move a lot faster – if IT could only be ahead of business initiatives for once."
– Lower IT costs: "I need to align our costs to business benefits, and I'm constrained by our budget."
– Proper control: "I'm worried about our ability to control performance, security, compliance, and our data."
Introducing Aruba CFM: Aruba Integration with Composable Fabric
The on-site orchestration system.
Key features & benefits:
• Zero-touch deployment, provisioning & orchestration
• Manage and monitor global network configuration
• Complex workflow automation
• Visualize data center infrastructure
• Integrate with 3rd-party data center orchestration systems
• Integration with HPE infrastructure hardware and software
• Automate lifecycle events in the data center
The data center solution that serves the application
(Diagram: a click-through, contextual HTML5 GUI fronts Aruba CFM over HTTPS; CFM's APIs integrate with HPE OneView and the wider ecosystem, and drive Aruba CX switches.)
Value across the data center
GUI and automation drive it all: end-to-end host visibility, node inventory, workflow automations, and guided setup.
Where to Find the Latest Presales Resources
– Airheads Data Center community: https://community.arubanetworks.com/t5/Data-Center/bd-p/DataCenter
– Airheads Broadcasting on YouTube: https://www.youtube.com/channel/UCFJCnuXFGfEbwEzfcgU_ERQ
– Aruba Solution Exchange: https://ase.arubanetworks.com/solutions?products=19
– NAE scripts: https://github.com/aruba/nae-scripts