
Datacenter Networking With Aruba CX
June 17, 2020
HPE IS THE EDGE-TO-CLOUD PLATFORM AS-A-SERVICE COMPANY
Everything as-a-service for developers, lines of business, data scientists, and IT ops.

[Slide graphic: the Aruba Intelligent Edge and HPE Hybrid Cloud platform, spanning security, device detection, location, identity, IaaS & PaaS, and cost and compliance – open, cloud-native, autonomous, and secure.]
ARUBA DCN STRATEGY AND PRIORITIES
– Invest in innovation: cloud-native Aruba CX; big bets on ASIC, cloud, NVMeoF
– Rationalize the portfolio: lead with Aruba; integrate Composable Fabric; support legacy Comware; sunset Arista, Altoline, Nuage
– Deliver pan-HPE solutions: better-together HPE & Aruba; make it easy & rewarding to sell
Agenda

– Introduction to the modern Aruba CX software system
– Introduction to Aruba Virtual Switching Extension (VSX) for superior HA
– Introduction to the Aruba CX Mobile App and NetEdit for easy provisioning
– Introduction to the Aruba CX Network Analytics Engine for traffic/device monitoring
– Discussion of reference architectures and use cases used by Aruba in the DC
– Discussion of how Aruba CX solutions can help extend a flat network for critical Layer 2 connectivity
– Discussion of Aruba's data center roadmap, which includes HPE Hybrid IT (HIT) integration initiatives
– Q & A: What do you want and need from Aruba data center networking solutions?

4
Aruba CX Unique Value
AOS-CX: Accessible from System, NMS, or Cloud

BUILT ON CLOUD-NATIVE PRINCIPLES
– Modularity: faster innovation with independent processes
– Programmability: simplified operations through automation; 100% REST APIs
– Resiliency: stable and reliable microservices design
– Elasticity: one operating model from edge access to data center

[Slide diagram: AOS-CX microservices architecture with a state database, a time-series database, and the Aruba Network Analytics Engine.]
6
AOS-CX SWITCHING FOR THE ENTERPRISE

New platforms: a complete end-to-end switching portfolio spanning campus (access, aggregation, core) and data center (spine, leaf), all running AOS-CX.

– CX 8400: modular; deep buffers, large tables, carrier-class HA
– CX 6400: modular; high-density access, core and aggregation
– CX 83xx: top of rack, small core, campus aggregation
– CX 6300: stackable; access and aggregation, diverse closet scale

One Operating System. One Operating Model.
7
High Availability with Virtual Switching Extension (VSX)
What Makes for a Good High Availability Solution?

Customer requirement → Solution objective

Redundancy
1. HW redundancy – management modules, fabric, power, fans
2. SW redundancy – dual vs. single control planes

Resiliency
3. Link virtualization – virtualize multiple links into one logical link
4. Process resiliency – self-restart to last known good state

Performance
5. Fast link failovers – minimize duration of traffic outage
6. Fast upgrade time – minimize time-at-risk during upgrade

Simplicity
7. Easy to configure – number of entities to configure, number of CLI commands
8. Low risk of error – config sync & consistency checks
9
Switch Virtualization Solutions

– VSX (comparable to vPC / MLAG): two chassis, each with its own management, control, and routing planes, synchronized over an inter-switch link; Ethernet links from both chassis form one logical link.
– VSF (comparable to VSS / IRF / Virtual Chassis): multiple chassis share a single management, control, and routing plane.

10
Comparison of Virtualization Solutions

Feature | Aruba 8400(1)/8320 (with VSX) | FlexFabric 129xx/59xx/57xx (with IRF) | Cisco Nexus 3/5/7/9xxx (with vPC)
IPs for switch management | 2 | 1 | 2
Control planes | 2 | 1 | 2
Live upgrades / HA during upgrades | Built in by design | ISSU within major code branches | Built in by design
Active-active unicast & multicast(2) | Supported | Supported | Supported
Config simplicity and troubleshooting | Extensive support | Single config | Limited
MC port-channel/LAG | L2 and L3 | L2 and L3 | L2 only (except 5K)
First-hop redundancy | Eliminates need for VRRP – less configuration | Eliminates need for VRRP – less configuration | Needs VRRP/HSRP

(1) Requires dual supervisor in each chassis.
(2) Intended for a future software release.
11
VSX Meets Our Customers' Needs for High Availability

HA requirement → VSX solution capabilities

Redundancy: dual control planes – ideal solution
– Minimal scope of outage; each box operates independently, yet in concert

Resiliency: enhanced link virtualization – ideal solution
– VSX support for multi-chassis LAGs

Performance: support for failover and live upgrades – ideal solution
– VSX inherently enables failover and non-stop upgrades, one switch (50% of the pair) at a time

Simplicity: operational simplicity enhancements – good solution
– VSX pair configuration and troubleshooting from a single switch
12
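To make the pieces above concrete, here is a minimal VSX configuration sketch for one member of a ToR pair; the peer mirrors it with "role secondary" and swapped keepalive addresses. The LAG numbers, port numbers, addresses, and system MAC are illustrative rather than taken from this deck, so treat it as a starting point and verify the exact commands against the AOS-CX VSX guide for your release.

    interface lag 256
        no shutdown
        no routing
        vlan trunk allowed all
        lacp mode active
    interface 1/1/31
        no shutdown
        lag 256
    interface 1/1/32
        no shutdown
        lag 256

    vsx
        system-mac 02:01:00:00:01:00
        inter-switch-link lag 256
        role primary
        keepalive peer 192.168.0.2 source 192.168.0.1

    interface lag 10 multi-chassis
        no shutdown
        no routing
        vlan trunk allowed 1-100
        lacp mode active
    interface 1/1/1
        no shutdown
        lag 10

The first block dedicates two physical links to the inter-switch link (lag 256); the vsx context sets the role, the out-of-band keepalive, and the shared system MAC for the pair; the multi-chassis lag 10 is the downstream MC-LAG facing a dual-homed server, presenting both switches' links as one logical LACP bundle.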
Improving Operator Experience with the CX Mobile App and NetEdit
ANALYTICS- AND AUTOMATION-POWERED NETWORK OPERATIONS
AUTOMATED CONFIG MANAGEMENT WITH ARUBA NETEDIT

Workflow: Discover – Search – Edit – Validate – Deploy – Audit – Monitor – Notify – Troubleshoot

– Management simplicity: topology with network health; GUI-driven solution configuration
– Auto-change verification: continuous conformance validation
– One-touch deployment with the Aruba CX Mobile App: accelerate day-zero configuration (including VSF)
– Visibility and analytics via NAE: aggregation of embedded analytics status; health reports on devices, apps, and network services; script tags view layer
– Workflow integration with 3rd-party tools: Slack, TOPdesk, ServiceNow, etc.
14
CX Mobile App and NetEdit 2.0 – New Features

CX Mobile App
– Role-based access: admin / technician
– Discovery: network tree from seed devices
– Topology: layered views
– Enhanced visibility with NAE

NetEdit 2.0
– GUI-driven workflows: network topology for a fast view into device health & config issues; express config to simplify common changes via a prompt-driven UI
– Third-party device visibility via SNMP
– Solution configurations with express configs to simplify common changes via a prompt-driven UI
– Tighter workflow integration: Slack, TOPdesk, ServiceNow, etc.
– Diagnostics correlated to events, available for troubleshooting

15
NetEdit VRDs
See Airheads for the latest Data Center VRDs with NetEdit:

– Deploying L3 Spine and Leaf DC Fabric with NetEdit
– Deploying iBGP EVPN VXLAN VSX Centralized L3 Gateway with NetEdit
– Deploying EBGP EVPN (Dual-AS) VXLAN VSX Centralized L3 Gateway with NetEdit

See the YouTube Airheads Broadcasting Channel for the latest videos with NetEdit.

16
Distributed Analytics in Every CX Switch with the Network Analytics Engine (NAE)
Overview

An end user reports an issue
• An IT ticket is generated

IT needs to pinpoint the root cause
• Determine service impact and time; pull logs
• This is where the Network Analytics Engine helps

Deploy the fix

Monitor
• Prevent the issue from occurring again
18
Intelligent Embedded Distributed Pre-Processing with Aruba Network Analytics Engine (NAE)

Other monitoring approaches:
– Probes and show commands: a needle in the haystack; difficult to recreate and/or identify issues
– Telemetry streaming: latency and large, unfiltered data sets; delays in data processing and analysis
– Third-party monitoring tools: manual correlation and limited actionable insights; resource intensive with longer MTTR

Aruba CX approach (NetEdit with NAE in CX core and CX access switches):
– NAE integrated everywhere in the network
– Real-time, network-wide visibility with actionable data
– Automated monitoring for rapid detection of issues
– A 24/7 network technician built in to every switch
19
Problem Statement (VSX): Server-to-ToR Connectivity Set-up Time

Topology: Server1 (ESXi) with vmnic2/vmnic3 dual-homed to a VSX pair of 8325 ToRs (port 1/1/1 on TOR-1 and TOR-2), carrying VLANs 1-100. The server admin works in vCenter; the network admin works on the switches.

Timeline without automation:
– Server admin: create LAG with Server1 vmnics – rejected, port information missing; assign VLANs 1-80 – opens a network ticket and waits for updates.
– Server admin: create LAG with Server1 vmnic2/3 (1/1/1 TOR-1, 1/1/1 TOR-2); assign VLANs 1-80 – updates the network ticket and waits.
– Network admin: VSX LAG created for Server1 vmnic2/3 (1/1/1 TOR-1, 1/1/1 TOR-2); VLANs 1-80 assigned. Completed, ticket closed.
– Server admin: forgot some VLANs; assign missing VLANs 81-100 for Server1 – opens another network ticket and waits.
– Network admin: VLANs 81-100 assigned. Completed, ticket closed.

21
Automated Connectivity Set-up (CLI Config): NAE Using the vCenter API

Same topology: Server1 (ESXi) with vmnic2/vmnic3 dual-homed to the VSX pair of 8325 ToRs (port 1/1/1 on TOR-1 and TOR-2), VLANs 1-100, with the NAE vCenter agent enabled on the switches.

– The server admin performs the vCenter tasks as before.
– The NAE vCenter agent picks up the vCenter update and pushes the corresponding CLI config (VSX LAG and VLANs) to the ToRs.
– A notification message is posted to a Slack channel.
– Result: reduced set-up time, with no network-ticket round trips.
22
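For reference, the configuration the agent pushes to each ToR corresponds to an ordinary multi-chassis LAG plus the VLAN list read from vCenter – roughly the sketch below, where the LAG number is illustrative and port 1/1/1 and VLANs 1-100 come from the diagram above:

    interface lag 10 multi-chassis
        no shutdown
        no routing
        vlan trunk allowed 1-100
        lacp mode active
    interface 1/1/1
        no shutdown
        lag 10

The same commands are applied on TOR-1 and TOR-2, so the server's vmnic2/vmnic3 pair comes up as one LACP bundle without a ticket round trip.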
DC Architecture: Optimizing Application and Server Performance
Aruba DC Network Architectures

Campus Attached – small server rooms (K-12 school districts, small universities, retail): VSX campus/DC core (L3) with VSF access in the campus buildings and VSX DC ToRs at L2. Unified operating model for switching from access to core to DC.

Traditional 2-Tier DC – dedicated data center for small to medium DCs (education, local government, enterprises, universities): security gateway at the Internet/WAN edge, VSX core (L3) over VSX ToRs (L2). Provides the simplicity of a stretched L2 dedicated DC solution with modest scale capabilities.

Layer 3 Spine and Leaf – dedicated data center for large universities, financial services, and large enterprises: L3 spine-and-leaf fabric with VSX leaf pairs (L2 at the rack). Provides a dedicated L3 DC fabric allowing for greater bandwidth/performance; often demands large 100G density, scale routing, and multi-site connectivity.

25
Aruba Campus Attached DC

– When to use it?
– Customer has server rooms or a small data center adjacent to the core
– HCI & cloud evolution is shrinking data center sizes; fewer racks needed for workloads at the edge

– Why?
– Simple design, fewer switches, and easier management
– Maximum leverage of the core investment
– VSX live upgrade capability at the Top of Rack (ToR)
– Low latency for workloads between DC and campus

– Caution areas
– Core and DC located in separate buildings (cabling with growth)
– Limited growth capabilities
– Increased failure domain: DC & core combined in an outage

[Topology: Internet/WAN edge above a VSX campus/DC core layer (L3); VSF campus-building access and VSX DC ToRs attach at L2.]
26
Designing: Campus Attached DCs

– Design oversubscription to meet the demands of the environment
– Server interfaces are usually 10/25G
– Uplink interfaces are usually 100G
– Remember the campus core layer cannot add additional interfaces (once ports/slots are used)
– Access-layer density is determined by the number of physical interfaces in a single core switch
– Recommended size: ~10 racks

[Topology: security gateway at the Internet/WAN edge; VSX core (L3); VSX DC ToRs and VSF campus access at L2.]
27
Aruba Traditional 2-Tier DC

– When to use it?
– Dedicated data center and no customer demand or use-case requirements for spine + leaf
– Meets application requirements

– Why?
– Simple design; simple L2 from rack to rack
– Ease of management
– VSX live upgrade capability at the core & Top of Rack (ToR)

– Caution areas
– Headroom on growth capabilities (core port count)
– MAC scale in very large DCs

[Topology: VSX core (L3) over VSX access/ToR pairs (L2).]
28
Designing: 2-Tier Dedicated DC
Modest scale, simplified solution, easy to manage, with L2 connectivity

– Design oversubscription to meet the demands of the environment
– Server interfaces are usually 10/25G
– Uplink interfaces are usually 100G
– Remember the core layer cannot add additional interfaces (once ports/slots are used)
– Access-layer density is determined by the number of physical interfaces in a single core switch
– Ensure the table scale of the chosen devices will meet needs

Small 2-Tier DC pod – fixed switches
– Core layer = 2 x 8325 VSX, each with 32 x 100GbE (QSFP28) ports
– Access layer = 16 racks, each with 2 x 832x VSX
– Total server ports = 1,536 (16 racks x 48 ports x 2)
– 2.4:1 oversubscription [960G/400G]; 400G = 4 x 100G uplinks per rack, 960G = 48 x 10G x 2 ports

Large 2-Tier DC pod – modular core switches
– Core layer = 2 x 6410 VSX, each with 120 x 100GbE (QSFP28) ports
– Access layer = 60 racks, each with 2 x 832x VSX
– Total server ports = 5,760 (60 racks x 48 ports x 2)
– 6:1 oversubscription [2,400G/400G]; 400G = 4 x 100G uplinks per rack, 2,400G = 48 x 25G x 2 ports

[Topology: VSX core (L3) over VSX access (L2).]
29
Aruba Spine & Leaf DC

– When to use it?
– A spine + leaf CLOS fabric allows for easy expansion & decreased oversubscription when building large data centers

– Why?
– L3 between racks removes any L2 broadcast domain or L2 errors
– Highly scalable; increasing the number of spines decreases oversubscription
– Core for building a VXLAN network for L2 between racks
– Without VXLAN, great for a full NSX data center

– Caution areas
– Complexity due to BGP & overlay/underlay technology

[Topology: L3 spines over VSX leaf pairs (L2 at the rack).]
30
Designing: Spine and Leaf DC
L3 fabric, greater bandwidth/performance, often demanding EVPN/VXLAN

– Design oversubscription to meet the demands of the environment
– Server interfaces are usually 10/25G
– Uplink interfaces are usually 100G
– Remember the spine layer cannot add additional interfaces (once ports/slots are used)
– Leaf-layer density is determined by the number of physical interfaces in a single spine switch
– Ensure the table scale of the chosen devices will meet needs
– 3 spines provide added HA value, even during upgrades

Small spine-and-leaf zone – fixed switches
– Spine layer = 2 x 8325 VSX, each with 32 x 100GbE (QSFP28) ports
– Leaf layer = 16 racks, each with 2 x 832x VSX
– Total server ports = 1,536 (16 racks x 48 ports x 2)
– 2 spines: 2.4:1 oversubscription [960G/400G]; 4 spines: 1.2:1 [960G/800G] (400G/800G = 4/8 x 100G uplinks per rack, 960G = 48 x 10G x 2 ports)

Large spine-and-leaf zone – modular switches
– Spine layer = 2 x 8400 VSX, each with 48 x 100GbE (QSFP28) ports
– Leaf layer = 24 racks, each with 2 x 832x VSX
– Total server ports = 2,304 (24 racks x 48 ports x 2)
– 2 spines: 6:1 oversubscription [2,400G/400G]; 4 spines: 3:1 [2,400G/800G] (400G/800G = 4/8 x 100G uplinks per rack, 2,400G = 48 x 25G x 2 ports)

31
Overlay Architectures: Flattening Server Fabrics
Virtual Extensible LAN (VXLAN) – Overview
– Standards-based data-plane "point to point" tunnels that provide L2 network overlay connectivity across an L3 underlay network
– Any device that supports VXLAN encapsulation/decapsulation is considered a VXLAN Tunnel End Point (VTEP)
– Supports multi-tenancy via VXLAN Network Identifiers (VNIs) and traffic load sharing across Equal Cost Multi-Path (ECMP) routes between VTEPs
– VXLAN tunnels can be built via one of these methods:
– Static VXLAN
– Centralized control plane (controller based)
– Distributed control plane (non-controller based, MP-BGP EVPN)

MP-BGP Ethernet VPN (EVPN) – Overview
– A standards-based control-plane protocol that:
– Builds dynamic VXLAN tunnels between VTEPs
– Provides MAC address advertisements between VTEPs via underlay-network BGP peering
– Scales better than static VXLAN (which relies on MAC flood-and-learn in the data-plane tunnels)

[Diagram: point-to-point L2 overlay VXLAN tunnels between VTEPs across an L3 IP underlay network, with end clients attached to the VTEPs.]
33
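As a point of reference for the data-plane piece, a static VXLAN VTEP on AOS-CX is configured roughly as in the sketch below; the loopback source address, VNI number, VLAN, and peer address are illustrative, and EVPN (covered next) replaces the static vtep-peer entries with dynamically learned tunnels:

    interface vxlan 1
        source ip 192.168.100.1
        no shutdown
        vni 10100
            vlan 100
            vtep-peer 192.168.100.2

Each VTEP maps local VLAN 100 into VNI 10100 and encapsulates traffic toward the listed peers across the L3 underlay.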
#1: Centralized L3 Gateway with VXLAN/EVPN

– When to use it?
– Provides L2 connectivity over an L3 fabric
– Provides inter-VXLAN routing
– Suitable when the pod is considered trusted

– Why?
– Easily supports centralized firewalling of the data center
– VNIs can be shared across any number of leaf pairs
– Best practice is to leverage a single pair of VSX switches for HA
– Ensures that traffic on the same subnet between VTEPs does not need to traverse the border leaf

[Diagram: L3 DC core (AS#65100) above Zone1 (OSPF Area 0, AS#65001) with route reflectors RR1/RR2; overlay VXLAN tunnels run between L2 VTEPs at the leaf pairs (VMs at VRFA 10.1.1.10/.11, VRFB 10.1.2.10/.11 and 10.1.3.10/.11); border leafs connect traffic in/out of the zone and act as L2/L3 VTEPs hosting the VXLAN default gateways (10.1.1.1/24 in VRFA, 10.1.2.1/24 and 10.1.3.1/24 in VRFB) in front of a firewall.]
34
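On the border-leaf VSX pair acting as the centralized L3 gateway, each routed VNI terminates in an SVI with a VSX active gateway, along the lines of the hedged sketch below. The VRF name and gateway address (10.1.1.1/24 in VRFA) come from the diagram; the SVI's own address and the virtual MAC are illustrative, and the exact active-gateway syntax should be checked against the AOS-CX release in use:

    interface vlan 10
        vrf attach VRFA
        ip address 10.1.1.2/24
        active-gateway ip mac 00:00:00:00:01:01
        active-gateway ip 10.1.1.1

Both VSX members answer for 10.1.1.1 with the shared virtual MAC, so routed traffic leaving the overlay does not depend on a single switch.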
#2: Centralized L2 Gateway with VXLAN/EVPN

– When to use it?
– Suitable for pods that require a higher level of security
– Extends L2 to the gateway for bridging to the firewall
– The firewall can route and inspect traffic

– Why?
– Easily supports centralized firewalling of the data center
– VNIs can be shared across any number of leaf pairs
– Best practice is to leverage a single pair of VSX switches for HA
– Ensures that traffic on the same subnet between VTEPs does not need to traverse the border leaf

A NetEdit DC VXLAN solution config is available.

[Diagram: L3 DC core (AS#65100) above Zone1 (OSPF Area 0, AS#65001) with RR1/RR2; overlay VXLAN tunnels between L2 VTEPs (VMs on VLAN 11 at 10.1.1.10/.11 and VLAN 12 at 10.1.2.10/.11); border leafs connect in/out of the zone and hand an 802.1Q trunk (VLANs 11/12) to the firewall, which hosts the default gateways 10.1.1.1/24 and 10.1.2.1/24.]
35
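The hand-off from the border leafs to the firewall in this design is a plain 802.1Q trunk. A minimal sketch of that trunk port follows, using VLANs 11 and 12 from the diagram; the port number is illustrative:

    interface 1/1/48
        no shutdown
        no routing
        vlan trunk native 1
        vlan trunk allowed 11-12

The firewall then hosts the default gateways (10.1.1.1/24 and 10.1.2.1/24) and routes and inspects all inter-VLAN traffic.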
#3: VMware NSX-V/T 8325 Integration
– Used in environments with NSX, VMs and bare metal servers
– Provides L2 network connectivity between VMs (on ESXi hosts) and Bare Metal
Servers connected to the Hardware VTEP switch (8325)
– VMware recommends new deployments utilize NSX-T
– VMware does not have any plans to integrate NSX-T with hardware VTEPs

36
Designing: EVPN/VXLAN

– Identify the VXLAN use case:
– Centralized L3 gateway with VXLAN/EVPN
– Centralized L2 gateway with VXLAN/EVPN
– VMware NSX-V 8325 integration

– Design the underlay L3 leaf/spine network:
– OSPF underlay w/ iBGP EVPN overlay – recommended for the majority of enterprise customers
– eBGP underlay w/ eBGP EVPN overlay – usually customer driven, due to scale and preferences

– Take note of scale; you might need to utilize multiple pods to accommodate a large DC and create multiple failure domains
– The 8325 and 8400 are validated for DC VXLAN/EVPN solutions

[Diagram: point-to-point L2 overlay VXLAN tunnels between VTEPs across an L3 IP underlay network, with end clients attached to the VTEPs.]
37
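For the recommended OSPF underlay with an iBGP EVPN overlay, the per-leaf configuration follows the general shape sketched below. All addresses, interface numbers, and VNI/VLAN values are illustrative (only AS 65001 is taken from the earlier diagrams), and the spine loopback 10.0.0.1 is assumed to act as the route reflector; consult the Aruba EVPN VRDs for a validated configuration:

    router ospf 1
        area 0.0.0.0
    interface loopback 0
        ip address 10.0.0.11/32
        ip ospf 1 area 0.0.0.0
    interface 1/1/49
        no shutdown
        ip address 10.255.1.1/31
        ip ospf 1 area 0.0.0.0

    router bgp 65001
        bgp router-id 10.0.0.11
        neighbor 10.0.0.1 remote-as 65001
        neighbor 10.0.0.1 update-source loopback 0
        address-family l2vpn evpn
            neighbor 10.0.0.1 activate
            neighbor 10.0.0.1 send-community extended

    interface vxlan 1
        source ip 10.0.0.11
        no shutdown
        vni 10100
            vlan 100

    evpn
        vlan 100
            rd auto
            route-target export auto
            route-target import auto

OSPF distributes the loopbacks that terminate the VXLAN tunnels, iBGP carries the EVPN routes (MACs and VTEPs) via the spine route reflectors, and the evpn context maps VLAN 100 into the overlay with auto-derived route distinguishers and targets.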
Know CX Capabilities – 10.4 Scale

Switch model | 6300 | 6400 | 8320 | 8325 | 8400
Switching capacity | 880 Gbps | 14 or 28 Tbps | 2.5 Tbps | 6.4 Tbps | 19.2 Tbps
MAC | 32,768 | 32,768 | 98,304 | 98,304 | 768,000
ARP max | 49,152 | 49,152 | 120,000 | 120,000 | 756,000
ND max | 32,768 | 32,768 | 52,000 | 52,000 | 524,000
ARP / ND max (considering 1:1 MAC:IP) | ARP 32,768 or ND 32,768 | ARP 32,768 or ND 32,768 | ARP 44,542 or ND 44,542 | ARP 48,638 or ND 48,638 | ARP 81,000 & ND 55,000 (1:1), 220,000 (1:4)
Native IPv4 clients | Max 32,768 | Max 32,768 | Max 44,542 | Max 48,638 | Max 81,000
Native IPv6 clients (3x IPv6) | Max 10,922 | Max 10,922 | Max 17,333 | Max 17,333 | Max 55,000
IPv4 unicast routes | 64,000 | 64,000 | 131,072 | 131,072 | 1,011,712
IPv6 unicast routes | 32,000 | 64,000 | 32,732 | 32,732 | 524,288
IPv4 multicast routes | 8,000 | 8,000 | 4,094 | 4,094 | 32,767
IPv6 multicast routes | 8,000 | 8,000 | 4,094 | 4,094 | 32,767
VRFs | 64 | 64 | 64 | 64 | 64
VXLAN VTEP peers | 256 | 256 | NA | 1,000 | NA
VXLAN L2 VNIs (VLANs) | 1,024 | 1,024 | NA | 4,039 (VLAN 1 not supported) | NA
38
Data Center Network Architecture Summary

General guidance (all platforms)
– Check if interfaces, features, bandwidth, and scale meet customer requirements
– Front-to-back (port-to-power) airflow = 6300M/F, 6400, 8320, 8400
– Back-to-front (power-to-port) airflow = 8325, one model of 6300
– VSX dual control plane should be recommended for maximum network uptime/HA and live upgrades
– VSF is not recommended for DC core switches due to its single control plane

2-Tier L2 architecture
– CX 6300: could be deployed as an access switch if 10/100 connectivity is required
– CX 6400: could be deployed as a high-port-density access or core switch
– CX 8320/8325: 8325 recommended as both access or core switch; 8320 recommended for dense 10G copper solutions
– CX 8400: recommended as both access or core switch if high port/table density is required

L3 leaf/spine architecture (without VXLAN/EVPN)
– CX 6300: could be deployed as a leaf switch if 10/100 connectivity is required
– CX 6400: could be deployed as a higher-port-density leaf or spine switch
– CX 8320/8325: 8325 recommended as both leaf or spine switch; 8320 recommended for dense 10G copper solutions
– CX 8400: recommended as both leaf or spine switch if high port/table density is required

OOB
– CX 6300: recommended
– CX 6400: N/A
– CX 8320/8325: 8320 could be used as an L3 core for a larger OOB network
– CX 8400: N/A

Advanced use cases with VXLAN/EVPN

Centralized L3 or L2 gateway with VXLAN/EVPN
– CX 6300 / CX 6400: not tested/validated
– CX 8320/8325: 8325 recommended for leaf and spine (IP routing & iBGP EVPN RR) functionality
– CX 8400: recommended for spine (IP routing & iBGP EVPN RR) functionality

VMware NSX-V 8325 integration
– CX 6300 / CX 6400: not tested/validated
– CX 8320/8325: 8325 recommended for leaf functionality; single-switch certification in 10.4
– CX 8400: recommended for spine functionality (IP routing); not planned for certification
39
Aruba and HPE Compute and Storage – Better Together
HPE & Aruba – Better Together

[Slide graphic: simplified ordering and interoperability across HCI, virtualization/containers, bare metal, GreenLake, storage, hybrid cloud, networking, Azure, SDS, MCS, and HPC.]

25GbE switch ports continue to see strong growth, with port shipments rising 57.1% year over year in the quarter; 100GbE switch revenues grew 24.7% year over year.
41
HPE Hybrid IT Integration Near-term Roadmap
Complete – ready to sell
– HPE SimpliVity (HCI) integration
– Aruba CX VRD: SimpliVity - 8320 Interop
– HPE Server DAC cable cross-referencing
– HPE GreenLake for Aruba - Network-as-a-Service
– HPE Nimble dHCI (8320, 8325, 6300M)
– Aruba CX VRD : Nimble and CX Interop

2H FY20 and Beyond


– HPE Microsoft Azure Stack
– Shasta Super-Computing integration
– HPE CS900 for SAP HANA
– Apollo/HPCM integration
– HPE Synergy
– HPE Synergy DAC cables cross-referencing
42
Aruba Attach To HPE Made Easier w/ Integrated Solutions

Nimble dHCI Interop with ArubaOS-CX Switch


• Nimble and vSphere deployment guide

• 6300M, 8320 & 8325 orderable as a networking option in OCA tool

SimpliVity with ArubaOS-CX Switch


• Validated reference design: SimpliVity - 8320 Interop

• SimpliVity Deployment Manager integration

New or Existing HPE customers updating their network infrastructure can benefit from
these integrations

43
Aruba Integrated SimpliVity Deployment Architecture

– The SimpliVity Deployment Manager (DM), with the new Aruba Integrated Service (AIS), reaches the Aruba CX 8320 pair over the management network (Thrift toward DM, REST toward the switch). It manages switch authentication; evaluates MTU, VLAN, and network settings; and configures the Aruba switch ports.
– Deployment Manager (with the new AIS) evaluates the existing VLANs present on the Aruba switches using certain filter criteria.
– It proposes new VLANs to configure for the storage and federation ports connected to the SimpliVity DI (Deploy Installer) nodes (DL 380s).
– 8320 switch ports toward the SimpliVity servers are detected via LLDP and configured with VLANs, trunk mode, and MTU, for example:

    interface 1/1/1
        no shutdown
        mtu 9000
        no routing
        vlan trunk native 1
        vlan trunk allowed 5

44
Easier Solutions Design With Aruba 83xx Switches

– Aruba solution architects and SEs can now rely on an Aruba + HIT certified cable list, cross-referenced by Aruba and the Volume BU, to design end-to-end HIT solutions.
– HIT link: https://h20195.www2.hpe.com/v2/getpdf.aspx/A00002507ENW.pdf?
– Updated Aruba optical guide downloadable from
https://asp.arubanetworks.com/downloads
– More to come in 2020:
– 10G, 25G, 40G, 100G AOCs
– 40G, 100G DACs
– Breakout 4x10G, 4x25G cables
– With more switches 6400, 8400
– Against more HPE solutions: DL servers, Synergy

45

Aruba CX 83xx Switch – NIC Certified Connectivity
Support for HPE Servers and Systems products

10/25Gb HPE Server adapters tested 10Gb Base-T HPE Server adapters tested
– HPE SKU # SKU Description – HPE SKU # SKU Description

– 817749-B21 HPE Ethernet 10/25Gb 2-port 640FLR-SFP28 Adapter – 656596-B21 HPE Ethernet 10Gb 2-port 530T Adapter

– 817753-B21 HPE Ethernet 10/25Gb 2-port 640SFP28 Adapter – 700759-B21 HPE FlexFabric 10Gb 2-port 533FLR-T Adapter

– 817709-B21 HPE Ethernet 10/25Gb 2-port 631FLR-SFP28 Adapter – 817745-B21 HPE Ethernet 10Gb 2-port 562FLR-T Adapter

– 817718-B21 HPE Ethernet 10/25Gb 2-port 631SFP28 Adapter – 817738-B21 HPE Ethernet 10Gb 2-port 562T Adapter

10Gb SFP+ HPE Server adapters tested HPE DACs

– HPE SKU # SKU Description – Cable Type HIT SKU # Description

– P11338-B21 HPE Ethernet 10Gb 2-port 548SFP+ Adapter – 10G DAC 487655-B21 HPE BLc 10G SFP+ SFP+ 3m DAC Cable

– P08446-B21 HPE Ethernet 10Gb 2-port 524SFP+ Adapter – 10G DAC 537963-B21 HPE BLc 10G SFP+ SFP+ 5m DAC Cable

– 727055-B21 HPE Ethernet 10Gb 2-port 562SFP+ Adapter – 25G DAC 844477-B21 HPE 25Gb SFP28 to SFP28 3m DAC

– 727054-B21 HPE Ethernet 10Gb 2-port 562FLR-SFP+ Adapter – 25G DAC 844480-B21 HPE 25Gb SFP28 to SFP28 5m DAC

Server Networking Transceiver and Cable Compatibility Matrix


Aruba OS-Switch and ArubaOS-CX Transceiver Guide 46
Unsupported Transceiver Mode (UTM) – 10.5

– Applies to transceivers and DACs
– 1G and 10G only
– Unsupported 25G parts are still locked out; the supported list is in the Aruba documentation
– Nothing is guaranteed (differences in DAC characteristics – loss and signal levels are most relevant to issues)
– For transceivers:
– 3rd-party transceivers that adhere to the MSA standard (and properly program the MSA-dictated EEPROM locations) should work without issue
– Other vendors may violate those practices and put information in different locations, or leave it out entirely
– We don't support LX4 or ZR (not an IEEE standard)
– 3rd-party AOCs that "properly" mimic a 10G SR should also work without issue
47
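Assuming the 10.5 behavior described above, unsupported 1G/10G optics are enabled with a single global toggle; a hedged sketch follows (verify the exact command and its scope in the 10.5 release notes before relying on it):

    switch(config)# allow-unsupported-transceiver

Once enabled, third-party MSA-compliant transceivers and DACs should link up, but, as the slide notes, the behavior of any particular part is not guaranteed.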
Split Cables and Split Ports Using Transceivers – 10.5

HPE split cables(1):
– 721064-B21  HPE BladeSystem c-Class 40G QSFP+ to 4x10G SFP+ 3m Direct Attach Copper Splitter Cable – US list price $529
– 845416-B21  HPE 100Gb QSFP28 to 4x25Gb SFP28 3m DAC – US list price $699

Platform, support, and enablement mode:

Split cable type | Platform(1) | Switch-to-server | Switch-to-switch
4x10G DAC | 8320, 8325 | Support: switch (QSFP+) to server (SFP+) | No support: switch (QSFP+) to switch (SFP+); can be enabled using UTM(2)
4x25G DAC | 8325 | Support: switch (QSFP28) to server (SFP28) | No support: switch (QSFP28) to switch (SFP28)
40G SR4/eSR4 4x10G | 8320, 8325, 8400 | Support | Support
100G SR4 4x25G | 8325 | Support | Support
4x10G, 4x25G AOC | Future (10.6) | Future (10.6) | Future (10.6)

48
(1) Split cables for the 6400, specifically for the 100G module, will be in 10.7.
(2) Although the SFP end connected to a switch uses UTM, full support from TAC will be made available.
Aruba CX DC Switching Roadmap (Today → H1CY20 → H2CY20)

– Modular spine/MoR/EoR: 6400, 8400; 8400-32Y module
– Fixed spine: 8320 (32x40G), 8325 (32x40/100G), 8360 (12x100G)
– Fixed leaf/ToR: 8320 (48x10G SFP+ + 6x40G; 48x10G BaseT + 6x40G), 8325 (48x25G SFP28 + 8x40/100G; 32x10/25G + 4x40/100G), 8360 (24x1/10G SFP+ + 2x40/100G; 16x10/25G + 2x40/100G; 48x1/10G BaseT + 4x40/100G)
– OOBM / 1G RJ-45 ToR: 6300M 48P power-to-port

49
Attaching Aruba switches to HPE compute/storage infrastructure

Workload categories: ProLiant Gen10 (DL 360/380 Gen10), SimpliVity and DX hyperconverged, Apollo and SGI (HPC, big data), mission-critical Superdome, and hybrid storage (Primera, Nimble).

Workload performance and latency requirements drive the network I/O ToR requirement.

General guidelines (switch port density and speed will vary based on workload requirements; the switches are also multi-rate 1/10/25GbE capable):
– 10/40 GbE: Aruba CX 8320
– 25/100 GbE: Aruba CX 8325
– GbE out-of-band management / iLO: Aruba CX 6300M
50
Orchestrating and Automating DC Networks with Aruba Composable Fabric Manager (CFM)
What we hear from our customers

– Simpler IT: "If I could get help and expertise with the routine stuff, our people could do so much more."
– Evolving roles: "Our personnel are asked to do more than traditional networking or IT."
– Business agility: "I need to move a lot faster – if IT could only be ahead of business initiatives for once."
– Lower IT costs: "I need to align our costs to business benefits, and I'm constrained by our budget."
– Proper control: "I'm worried about our ability to control performance, security, compliance, and our data."

52
Introducing Aruba CFM
Aruba integration with Composable Fabric – the on-site orchestration system

Key features & benefits
• Zero-touch deployment, provisioning & orchestration
• Manage and monitor global network configuration
• Complex workflow automation
• Visualize data center infrastructure
• Integrate with 3rd-party data center orchestration systems
• Integration with HPE infrastructure hardware and software
• Automate lifecycle events in the data center
53
The data center solution that serves the application

[Diagram: Aruba CFM drives Aruba CX switches through APIs; it is operated click-through from an HTML5 GUI and contextually through ecosystem integration, including HPE OneView and other APIs over HTTPS.]

54
Value across the data center

INFRASTRUCTURE & NETWORK SERVER ADMINS VM / APPLICATION OWNERS


TEAM • Deploy and scale without the need • Launch new applications faster
• Scale and grow non-disruptively for specialized skills • Maintain visibility and control of
• Streamline deployments to deliver • Provision resources in real-time workload performance
higher value to business owners • Remove bottlenecks and boost
• Secure and prioritize critical
• Enhance visibility and control with performance
workloads and data
API integrations

55
GUI and automation drive it all
• End-to-end host visibility
• Node inventory
• Network & ports view
• Workflow automations and guided setup

56
Where to Find the Latest Presales Resources

– Airheads Data Center community: https://community.arubanetworks.com/t5/Data-Center/bd-p/DataCenter
– Airheads Broadcasting Channel: https://www.youtube.com/channel/UCFJCnuXFGfEbwEzfcgU_ERQ
– NAE scripts: https://github.com/aruba/nae-scripts
– Aruba Solution Exchange: https://ase.arubanetworks.com/solutions?products=19

57
