
Considerations for Benchmarking Network Performance in Containerized Infrastructure
draft-dcn-bmwg-containerized-infra-00

Kyoungjae Sun ([email protected]),


Hyunsik Yang, Youngki Park, Younghan Kim
IISTRC, Soong-Sil University
Wangbong Lee
ETRI
Containerized Infrastructure
• Virtualized Network Functions (VNFs) run in containers
• Containers share the same host OS
• Isolation is provided by using different namespaces
• This can reduce
• Processing load imposed by a hypervisor
• Resources needed for a guest OS
• Suitable for micro-service and cloud-native environments
NFV Infrastructure Model
• ETSI GS NFV-TST 009
• For container networking, ETSI has already described its network test architecture
• The host system may use OVS, but there are many other options
• Network plug-ins (CNI, CNM, ...) – a minimal configuration sketch follows
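For reference, a CNI plug-in is selected per network by dropping a JSON configuration file into /etc/cni/net.d/ on each node. A minimal sketch, assuming a macvlan plug-in with host-local IPAM; the network name, parent interface, and subnet are illustrative, not values from the test-bed:

```python
import json

# Minimal CNI network configuration (illustrative values only):
# "type" selects the plug-in binary, "master" the host uplink interface,
# and "ipam" how container IPs are assigned.
cni_conf = {
    "cniVersion": "0.3.1",
    "name": "bench-net",           # hypothetical network name
    "type": "macvlan",             # plug-in under test (could be flannel, sriov, ...)
    "master": "eth0",              # assumed host uplink interface
    "ipam": {
        "type": "host-local",
        "subnet": "10.244.1.0/24"  # illustrative POD subnet
    }
}

# The kubelet/container runtime picks up files from this directory
# (the exact path may differ per distribution).
with open("/etc/cni/net.d/10-bench-net.conf", "w") as f:
    json.dump(cni_conf, f, indent=2)
```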
Benchmarking Considerations
• There are two RFCs about NFV benchmarking
• RFC 8172 : Considerations for Benchmarking Virtual Network Functions and
Their Infrastructure
• Defines the general-purpose platform as VM-based infrastructure
• RFC 8204 : Benchmarking Virtual Switches in the Open Platform for NFV (OPNFV)
• Describes deployment scenarios for vSwitch benchmarking based on VM-based infrastructure

• Are these RFCs applicable to containerized infrastructure?
• Do the test scenarios also cover containerized infrastructure?
Our Experience
• Network performance testing in containerized
infrastructure
• Deployment Environment
• Deploy containers on baremetal
• Deploy containers on VMs

• OpenStack + Kubernetes Hybrid Environment


• Create PODs using Kubernetes (baremetal & VM) – see the Pod-creation sketch at the end of this slide

• Network Feature
• CNI – Flannel, Kuryr Networking, ..
• Network acceleration feature (SR-IOV)

• Network Service Type


• VxLAN, VLAN, SR-IOV, offloaded VxLAN
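As a rough illustration of the "create PODs using Kubernetes" step above, the sketch below feeds a minimal Pod manifest to kubectl; the Pod name, image tag, and node selector are placeholders, not the ones used in the test-bed:

```python
import json
import subprocess

# Minimal Pod manifest (placeholder names); nodeSelector pins the POD
# to a specific baremetal or VM worker so each scenario (BMP/VMP) is reproducible.
pod = {
    "apiVersion": "v1",
    "kind": "Pod",
    "metadata": {"name": "bench-pod"},  # hypothetical POD name
    "spec": {
        "nodeSelector": {"kubernetes.io/hostname": "minion1"},  # assumed node name
        "containers": [{
            "name": "bench",
            "image": "ubuntu:16.04",    # the real setup uses a modified Ubuntu 16.04 image
            "command": ["sleep", "infinity"],
        }],
    },
}

# kubectl apply accepts JSON manifests on stdin.
subprocess.run(["kubectl", "apply", "-f", "-"],
               input=json.dumps(pod).encode(), check=True)
```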
Test-bed Environment #1
Test-bed Environment #2
NODE / Classification / Specification

• Baremetal (Master / Minion1 / Minion2)
  • CPU: Intel(R) Xeon(R) Gold 6148 2.40GHz * 2
  • MEMORY: DDR4 2400 MHz 32GB * 6
  • SR-IOV NIC: Mellanox ConnectX-5 (40G SFP+)
• VM (Minion3 / Minion4)
  • CPU: Virtualized CPU * 8 (host-model applied)
  • MEMORY: Virtualized 32GB
  • NIC: vhost-net, SR-IOV VF, vhost-user
• System Software
  • OS: Ubuntu 16.04 Server LTS
  • Cloud OS: OpenStack Queens (deployed by DevStack)
  • COE: Kubernetes v1.9.0 and Docker 18.06
  • CNI: default CNI plugin driver plus Kuryr, Flannel, SR-IOV, vhost-user, Multus
Testing Scenarios
• BMP2BMP
• Baremetal POD to Baremetal POD (local or remote)

• BMP2VMP
• Baremetal POD to VM POD (local or remote)

• VMP2VMP
• VM POD to VM POD (local or remote)

• Common Configuration
• Container image : Ubuntu 16.04 (modified)
• Bandwidth tool : iperf or iperf3 (https://iperf.fr)
• Latency tool : sockperf (https://github.com/Mellanox/sockperf) – an invocation sketch follows
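The sketch below shows one way the bandwidth and latency tools above can be driven from a test controller; the target POD IP, durations, and message size are placeholders, and an iperf3/sockperf server is assumed to already be running inside the target POD:

```python
import subprocess

TARGET_POD_IP = "10.244.1.10"   # hypothetical IP of the server-side POD

def run_throughput(duration=30):
    """Client side of the bandwidth test; expects `iperf3 -s` in the target POD."""
    out = subprocess.run(
        ["iperf3", "-c", TARGET_POD_IP, "-t", str(duration), "-J"],  # -J: JSON output
        capture_output=True, text=True, check=True)
    return out.stdout

def run_latency(duration=30, msg_size=64):
    """Client side of the latency test; expects `sockperf server` in the target POD."""
    out = subprocess.run(
        ["sockperf", "ping-pong", "-i", TARGET_POD_IP,
         "-t", str(duration), "--msg-size", str(msg_size)],
        capture_output=True, text=True, check=True)
    return out.stdout

if __name__ == "__main__":
    print(run_throughput())
    print(run_latency())
```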
Scenario – BMP2BMP
• Networking Scenario
• OpenStack-Kuryr (OVS bridge)
• Flannel-CNI (docker bridge-Flannel bridge)
• MACVLAN, IPVLAN / data acceleration (SR-IOV) – see the interface-creation sketch below
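For context, macvlan/ipvlan sub-interfaces of the kind used in this scenario can be created on the host with iproute2; a minimal sketch, where the parent interface and sub-interface names are assumptions:

```python
import subprocess

PARENT = "eth0"  # assumed host uplink; in practice the NIC facing the test network

def sh(cmd):
    subprocess.run(cmd.split(), check=True)

# macvlan: each sub-interface gets its own MAC address (bridge mode).
sh(f"ip link add macvlan0 link {PARENT} type macvlan mode bridge")

# ipvlan: sub-interfaces share the parent's MAC address (L2 mode).
sh(f"ip link add ipvlan0 link {PARENT} type ipvlan mode l2")

sh("ip link set macvlan0 up")
sh("ip link set ipvlan0 up")
```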
Scenario – BMP2VMP
• VM based Container Network
• VxLAN and VLAN modules run in the guest VM (OVS bridge)
• The VM network port supports VLAN and SR-IOV – see the VF sketch below
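As a minimal sketch of how SR-IOV virtual functions can be exposed for such VM network ports, using the standard sysfs interface; the PF name and VF count are assumptions (the real test-bed uses a Mellanox ConnectX-5):

```python
# Create SR-IOV virtual functions on the host via sysfs; the resulting VFs
# can then be attached to a VM (or POD) as network ports.
PF = "ens1f0"   # assumed physical-function interface name
NUM_VFS = 4     # assumed number of VFs to create

with open(f"/sys/class/net/{PF}/device/sriov_numvfs", "w") as f:
    f.write(str(NUM_VFS))
```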
Scenario – VMP2VMP
Result – BMP2BMP (local)
• VxLAN results
• ovs-vxlan outperforms flannel-vxlan by up to 10%
• Overhead due to software processing of VxLAN packets
• VLAN results
• Throughput : macvlan > ovs-vlan (20% lower) > SR-IOV > ipvlan
• Latency : SR-IOV (up to 16K) > ovs-vlan > ipvlan > macvlan
Result - BMP2BMP (Remote)
• VxLAN results: ovs-vxlan > flannel-vxlan
• VLAN results: MACVLAN > ovs-vlan > ipvlan
• SR-IOV cannot support RDMA (remote direct memory access)
Result – BMP2VMP
• Performance degradation due to software processing of VxLAN in the VM
• Encap/decap processing of VxLAN (for the internal network)
Result – VMP2VMP
• For VM PODs, the best performance is achieved by applying hardware offload with SR-IOV and VxLAN
• With H/W offloading, the encap/decap process is done in hardware – see the offload-check sketch below
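Whether a NIC actually performs VxLAN encapsulation in hardware can be checked from its offload feature flags; a rough sketch using ethtool, where the interface name is an assumption:

```python
import subprocess

NIC = "ens1f0"  # assumed SR-IOV-capable interface (ConnectX-5 in the test-bed)

# `ethtool -k` lists offload features; UDP-tunnel segmentation offload is the
# flag relevant to VxLAN hardware encapsulation.
features = subprocess.run(["ethtool", "-k", NIC],
                          capture_output=True, text=True, check=True).stdout

for line in features.splitlines():
    if "udp_tnl" in line:       # e.g. "tx-udp_tnl-segmentation: on"
        print(line.strip())
```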
Conclusion
• What we learned
• Containerized infrastructure has a different isolation method
• This may impact the performance of VNF lifecycle management

• Containerized infrastructures have several deployment options
• POD / individual container (depending on the container engine)
• Running on VM / baremetal
• Testing scenarios will differ for each deployment model

• Our initial draft is based on these learnings
• But we need more work to go forward
• Including test scenarios, specific technologies, ...
• Feedbacks and reviews are always welcome
• Thanks Al and Maciek for review before meeting!
Thank you!
Backup slides
Parallel Paths Test
• Using the Message Passing Interface (MPI)
• Apply collective communication (MPI_ALLTOALL)
• 8 PODs in each host server
• Measure the latency of 2-socket processing on each POD (packet size = 16KB) – a minimal MPI sketch follows
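A minimal mpi4py sketch of the MPI_ALLTOALL latency measurement described above; the iteration count is illustrative, and one rank per POD is assumed (the real test ran 8 PODs per host):

```python
# Run e.g.: mpirun -np 8 python alltoall_bench.py
from mpi4py import MPI
import numpy as np

comm = MPI.COMM_WORLD
size = comm.Get_size()
rank = comm.Get_rank()

MSG_SIZE = 16 * 1024      # 16 KB per destination, as in the test
ITERATIONS = 1000         # illustrative iteration count

sendbuf = np.zeros(size * MSG_SIZE, dtype=np.uint8)
recvbuf = np.empty_like(sendbuf)

comm.Barrier()
start = MPI.Wtime()
for _ in range(ITERATIONS):
    comm.Alltoall(sendbuf, recvbuf)   # collective exchange across all PODs
comm.Barrier()
elapsed = MPI.Wtime() - start

if rank == 0:
    print(f"avg MPI_ALLTOALL latency: {elapsed / ITERATIONS * 1e6:.1f} us")
```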

Test Scenario
BMP2BMP
BMP2VMP
VMP2VMP
Testing Results (1)
• VLAN technologies (ovs-vlan, macvlan, sriov) show up to 10% better performance than the overlay network (vxlan) in all test scenarios.

BMP2BMP
Testing Results (2)

BMP2VMP VMP2VMP
Results – Increasing the processes to four
• BMP2BMP – the same-host case shows higher latency as the process load increases
• BMP2VMP – the parallel paths created in the BMP impact latency in both cases (same-host & different-host)
• VMP2VMP
• In the same-host case, latency is low since the parallel paths are processed in the host kernel via a single interface
