Using Hosts with Multiple Physical NICs with VMware Cloud Foundation 3.10
You can find the most up-to-date technical documentation on the VMware website at:
https://docs.vmware.com/
VMware, Inc.
3401 Hillview Ave.
Palo Alto, CA 94304
www.vmware.com
Copyright © 2020 VMware, Inc. All rights reserved. Copyright and trademark information.
Contents

About Using Hosts with Multiple Physical NICs with VMware Cloud Foundation 3.10
1 Network Architecture of VMware Cloud Foundation
2 Use Cases for Multi-NIC Hosts
3 SDDC Deployment on Multi-NIC Hosts
4 API Examples for Extending Your SDDC with Multi-NIC Hosts
About Using Hosts with Multiple Physical NICs with VMware Cloud Foundation 3.10
When deploying an SDDC by using VMware Cloud Foundation™, the integration of the software
stack with your environment occurs at various logical (Active Directory, certificates, and VLANs)
and physical (network uplinks and physical hardware) points. The Using Hosts with Multiple
Physical NICs with VMware Cloud Foundation 3.10 technical note provides guidelines on physical
and logical network integration.
Usually, you deploy physical servers with two physical network interface cards (physical NICs). If your SDDC configuration requires three or more physical NICs per host in a workload domain with VMware NSX® Data Center for vSphere® or with VMware NSX-T® Data Center, you can use this technical note to understand the reasons for such a configuration and learn how to deploy the SDDC on top of it.
Guidance Scope
Hardware
By using VMware Cloud Foundation, you can deploy your SDDC on Dell EMC VxRail, VMware
vSAN ReadyNodes, or other hardware listed on the VMware Compatibility Guide. This
technical note is for vSAN ReadyNodes and hardware listed on the VMware Compatibility
Guide.
For guidance on using multiple physical NICs on VxRail, consult your Dell EMC team.
Software-Defined Networking
This technical note is applicable to virtual infrastructure workload domains with NSX for
vSphere and NSX-T.
Prerequisites
You must have an instance of VMware Cloud Foundation 3.10 deployed in at least one region.
Intended Audience
Using Hosts with Multiple Physical NICs with VMware Cloud Foundation 3.10 is intended for consultants and architects who have a solid understanding of VMware Validated Design™ for building and managing an SDDC that meets the requirements for capacity and scalability.
Update History
Using Hosts with Multiple Physical NICs with VMware Cloud Foundation 3.10 is updated with
software releases or when necessary.
Revision      Description
16 JUN 2020   You can apply this technical note only for specific VMware Cloud Foundation versions. See Prerequisites.
Chapter 1 Network Architecture of VMware Cloud Foundation
The network architecture that is supported for automated SDDC deployment and maintenance
determines the options for integrating hosts with multiple physical NICs.
- VMware Cloud Foundation uses the first two physical NICs in the server, that is, vmnic0 and vmnic1, for all network traffic: ESXi management, vSphere vMotion, storage (VMware vSAN™ or NFS), network virtualization (VXLAN or Geneve), management applications, and so on.
- Traffic is isolated in VLANs, which terminate at the top-of-rack (ToR) switches.
- Load-Based Teaming (LBT), or Route based on physical NIC load, balances traffic independently of the physical switches. The environment does not use LACP, vPC, or MLAG.
- Network I/O Control resolves situations where several types of traffic compete for a common resource.
- At the physical network card level, VMware Cloud Foundation works with any NIC on the VMware Compatibility Guide that is supported by your hardware vendor. While 25-Gb NICs are recommended, 10-Gb NICs are also supported. As a result, you can implement a solution that supports a wide range of network hardware with safeguards for availability, traffic isolation, and contention mitigation.
Traffic Types
In an SDDC, several general types of network traffic exist.
Virtual machine traffic
Traffic for management applications and tenant workloads that are running on a host. Virtual machine traffic might be north-south from the SDDC out to your corporate network and beyond, or east-west to other virtual machines or logical networking devices, such as load balancers.
Virtual machines are typically deployed on virtual wires. Virtual wires are logical networks in NSX that are similar to VLANs in a physical data center. However, virtual wires do not require any changes to your data center because they exist within the NSX SDN.
Overlay traffic
VXLAN or Geneve encapsulated traffic in your data center network. Overlay traffic might
encapsulate tenant workload traffic in workload domains or management application traffic in
the management domain. Overlay traffic consists of UDP packets to or from the VTEP or TEP
interface on your host.
VMkernel traffic
Traffic to support the management and operation of the ESXi hosts including ESXi
management, vSphere vMotion, and storage traffic (vSAN or NFS). This traffic originates from
the hypervisor itself to support management of and operations in the SDDC or the storage
needs of management and tenant virtual machines.
NSX for vSphere uses the capabilities of vSphere Distributed Switch by enabling applications
across clusters or sites to reside on the same logical network segment without the need to
manage or extend that network segment in the physical data center. VXLANs encapsulate these
logical network segments and enable routing across data center network boundaries. The
entities that exchange VXLAN encapsulated packets are the VTEP VMkernel adapters in NSX for
vSphere. vSphere Distributed Switch supports both VLAN- and VXLAN-backed port groups.
Although one host can have one or more vSphere Distributed Switches, you can assign a
physical NIC only to one vSphere Distributed Switch.
N-VDS in NSX-T Data Center is a functional equivalent of vSphere Distributed Switch. Like the vSphere Distributed Switch, it provides logical networking segmentation across clusters without the need to manage or extend a segment in the physical data center network. N-VDS uses Geneve instead of VXLAN, and TEPs as the VMkernel interface instead of VTEPs. Because NSX-T is designed to work with non-vSphere clusters, NSX-T itself is responsible for N-VDS management instead of VMware vCenter Server®.

Like the vSphere Distributed Switch, the N-VDS supports both VLAN-backed and overlay-backed port groups, the latter using Geneve encapsulation. Although a host typically has only a single N-VDS, you can map traffic types to individual physical NICs by using N-VDS uplink teaming policies. The host might also be connected to a vSphere Distributed Switch, but the vSphere Distributed Switch must use a dedicated physical NIC. You cannot share a physical NIC between a vSphere Distributed Switch and an N-VDS.
Chapter 2 Use Cases for Multi-NIC Hosts
When considering the need to deploy an SDDC with more than two physical NICs, evaluate the reasons for such a configuration. Usually, the standard architecture with two physical NICs per host meets the requirements of your customers, data center networks, storage types, and use cases, without the added complexity of operating more physical NICs.
Legacy Practices
When you deploy a virtualized environment, you might need to follow older operational or
environment practices without additional evaluation. In the past, some of the following common
practices existed:
- Use hosts with 8 or more physical NICs with a full separation of management, vSphere vMotion, storage, and virtual machine traffic. Today, these traffic types can be safely integrated on the same fabric using the safeguards noted in Chapter 1 Network Architecture of VMware Cloud Foundation.
- Physically separate virtual machine traffic from management traffic because a physical firewall was required to ensure traffic security. Today, hypervisor-level firewalls can provide even better security without the added complexity and traffic flows of a physical firewall.
Following legacy practices might carry forward additional complexity and risk. Consider modern network performance, VMware congestion mitigation with Network I/O Control, and availability when making design decisions about your data center network.
Usually, faster physical data links provide the best increase in the overall aggregate bandwidth available to individual traffic flows. In some cases, 2 x 25 GbE might not be sufficient, or a 25 GbE physical NIC might not be available or certified for your hardware. You can also move to 2 x 40 GbE or 2 x 100 GbE, but the physical infrastructure costs to support those speeds might be a limitation.
However, for environments that require physical separation, VMware Cloud Foundation can support mapping of management, storage, or virtual machine traffic to a logical switch that is connected to dedicated physical NICs. Such a configuration can separate traffic onto distinct physical links from the host into the data center network, be it to different network ports on the same fabric or to a distinct network fabric.

In this case, your management VMkernel adapters can be on one pair of physical NICs, with all other traffic (vSphere vMotion, vSAN, and so on) on another pair. Another goal might be to isolate virtual machine traffic so that one pair of physical NICs handles all management VMkernel adapters, vSphere vMotion, and vSAN, and a second pair handles the overlay and virtual machine traffic.
- Such a configuration might come from the time when a physical throughput of 100 Mbps or 1 Gbps was a concern, or when congestion mitigation methods were not as advanced as they are today with Network I/O Control.
- Such a configuration can also be related to legacy operational practices for storage area networking (SAN), where storage traffic was always isolated on a separate physical fabric not just for technical reasons, but to ensure that the SAN fabric was treated with an extra high degree of care by data center operations personnel.
Even with modern data center design and appropriately sized physical NICs, some organizations require that you separate traffic with high bandwidth potential or low latency requirements, most commonly storage traffic such as vSAN or NFS, or virtual machine backup traffic. In other cases, the goal is simply physical separation without any performance concerns, such as using a dedicated management fabric or a dedicated fabric for virtual machine traffic.
In these cases, VMkernel network adapters or virtual machine network adapters can be assigned
to dedicated VLAN-backed port groups or overlay segments on a second vSphere Distributed
Switch or N-VDS with dedicated physical NICs.
In certain cases, after the VMware Professional Services team evaluates your environment and workload traffic patterns, the evaluation might reveal benefits from a total separation of storage traffic and virtual machine traffic (VLAN and overlay).
In these cases, you can connect VMkernel adapters or virtual machine port groups to a specific
logical switch with dedicated physical NICs. Usually, you isolate virtual machine traffic including
overlay on a separate physical link.
On some server network interface cards (NICs), Geneve encapsulation might not be fully supported. In such limited use cases, you can continue using vSphere Distributed Switch for VMkernel traffic (vSAN, vSphere vMotion, and so on) and, as an example, use a pair of additional physical NICs for the N-VDS instance while keeping virtual machine traffic on VLAN-backed port groups.
Chapter 3 SDDC Deployment on Multi-NIC Hosts
VMware Cloud Foundation supports specific networking configurations for traffic separation in the management domain and workload domains. Consider also the options that are available to you in the public API compared to the user interface of SDDC Manager.
The user interface of VMware Cloud Foundation does not currently support dedicating a specific
physical NIC to a system traffic type. For Day-1 or Day-2 operations that require assigning system
traffic to physical NICs, you use the public REST API of VMware Cloud Foundation.
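For example, the following Python sketch queries SDDC Manager for unassigned hosts over the public REST API. It is a minimal illustration rather than the documented request: the SDDC Manager FQDN and credentials are placeholders, and the /v1/hosts endpoint, the UNASSIGNED_USEABLE status filter, and the elements response field are assumptions based on the public VMware Cloud Foundation API.

import requests

SDDC_MANAGER = "https://sfo01m01sddc01.sfo01.rainpole.local"  # hypothetical FQDN
AUTH = ("admin", "sddc_manager_admin_password")  # placeholder credentials

# Query hosts that are commissioned but not yet assigned to a workload domain.
response = requests.get(
    f"{SDDC_MANAGER}/v1/hosts",
    params={"status": "UNASSIGNED_USEABLE"},  # assumed status filter value
    auth=AUTH,
    verify=False,  # lab environments with self-signed certificates only
)
response.raise_for_status()

# Record the fqdn and id (GUID) of each host for later use in JSON specifications.
for host in response.json().get("elements", []):
    print(host["fqdn"], host["id"])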
Deployment of hardware in VMware Cloud Foundation follows the high-level process of host
discovery, validation, and installation. This high-level process is used for both initial bring-up and
expansion of workload domains.
Note Deployments based on Dell EMC VxRail have a different bring-up workflow because of the
automation in VxRail Manager. The process described in this technical note is not applicable to
VxRail-based deployments.
Discovery is performed by using the standard host commissioning workflow in the user interface of SDDC Manager. As a result, SDDC Manager becomes aware of the host, assigns the host a specific GUID, and places it in the SDDC Manager inventory.
Important For a workload domain with NFS storage, storage must be accessible through
vmnic0.
Chapter 4 API Examples for Extending Your SDDC with Multi-NIC Hosts
The user interface of SDDC Manager lacks the functionality to directly integrate hosts with
multiple physical NICs in workload domains in VMware Cloud Foundation. Use the public API of
VMware Cloud Foundation to extend the SDDC with workload domains, clusters, and individual
multi-NIC hosts.
In VMware Cloud Foundation 3.10, you can use a public API to automate deployments by using SDDC Manager on hosts with multiple physical NICs. For information on the API, see VMware Cloud Foundation 3.10 API on VMware {code}.
In VMware Cloud Foundation 3.10, you can use the SDDC Manager public API to automate
multiple physical NIC configurations when using NSX for vSphere.
For traffic separation, two vSphere Distributed Switches handle the traffic in the initial cluster of
the example workload domain - one for system traffic and one for application traffic. You assign
a pair of physical NICs to each switch.
Parameter         Value
Number of hosts   3
Procedure
1 Send a query for unassigned hosts to SDDC Manager by using this request.
2 In the response, locate the fqdn parameter for the target hosts.
3 Write down the GUID of each host object for the domain from the id field.
4 Create a JSON specification for the workload domain, following the example below.
{
"domainName": "myWLD01",
"vcenterSpec": {
"name": "vcenter-2",
"networkDetailsSpec": {
"ipAddress": "10.0.0.43",
"dnsName": "vcenter-2.vrack.vsphere.local",
"gateway": "10.0.0.250",
"subnetMask": "255.255.255.0"
},
"rootPassword": "Random0$",
"datacenterName": "new-vi-1"
},
"computeSpec": {
"clusterSpecs": [ {
"name": "Cluster1",
"hostSpecs": [ {
"id": "97fea1a1-107e-4845-bdd1-6a14ab18010a",
"license":"AAAAA-AAAAA-AAAAA-AAAAA-AAAAA",
"hostNetworkSpec": {
"vmNics": [ {
"id": "vmnic0",
"vdsName": "SDDC-Dswitch-Private1"
}, {
"id": "vmnic1",
"vdsName": "SDDC-Dswitch-Private1"
}, {
"id": "vmnic2",
"vdsName": "SDDC-Dswitch-Private2"
}, {
"id": "vmnic3",
"vdsName": "SDDC-Dswitch-Private2"
} ]
}
}, {
"id": "b858eb07-4f07-4e34-8053-d502bf9cfeb0",
"license":"AAAAA-AAAAA-AAAAA-AAAAA-AAAAA",
"hostNetworkSpec": {
"vmNics": [ {
"id": "vmnic0",
"vdsName": "SDDC-Dswitch-Private1"
}, {
"id": "vmnic1",
"vdsName": "SDDC-Dswitch-Private1"
}, {
"id": "vmnic2",
"vdsName": "SDDC-Dswitch-Private2"
}, {
"id": "vmnic3",
"vdsName": "SDDC-Dswitch-Private2"
} ]
}
}, {
"id": "45c3c5e6-6c49-46d2-a027-216d2a20c8d1",
"license":"AAAAA-AAAAA-AAAAA-AAAAA-AAAAA",
"hostNetworkSpec": {
"vmNics": [ {
"id": "vmnic0",
"vdsName": "SDDC-Dswitch-Private1"
}, {
"id": "vmnic1",
"vdsName": "SDDC-Dswitch-Private1"
}, {
"id": "vmnic2",
"vdsName": "SDDC-Dswitch-Private2"
}, {
"id": "vmnic3",
"vdsName": "SDDC-Dswitch-Private2"
} ]
}
} ],
"datastoreSpec": {
"vsanDatastoreSpec": {
"failuresToTolerate": 1,
"licenseKey": "BBBBB-BBBBB-BBBBB-BBBBB-BBBBB",
"datastoreName": "vSanDatastore"
}
},
"networkSpec": {
"vdsSpecs": [ {
"name": "SDDC-Dswitch-Private1",
"portGroupSpecs": [ {
"name": "SDDC-DPortGroup-Mgmt",
"transportType": "MANAGEMENT"
}, {
"name": "SDDC-DPortGroup-VSAN",
"transportType": "VSAN"
}, {
"name": "SDDC-DPortGroup-vMotion",
"transportType": "VMOTION"
} ]
},
{
"name": "SDDC-Dswitch-Private2",
"portGroupSpecs": [ {
"name": "SDDC-DPortGroup-Public",
"transportType": "PUBLIC" } ]
}
],
"nsxClusterSpec": {
"nsxVClusterSpec": {
"vlanId": 0,
"vdsNameForVxlanConfig": "SDDC-Dswitch-Private1"
}
}
}
} ]
},
"nsxVSpec": {
"nsxManagerSpec": {
"name": "nsx-2",
"networkDetailsSpec": {
"ipAddress": "10.0.0.44",
"dnsName": "nsx-2.vrack.vsphere.local",
"gateway": "10.0.0.250",
"subnetMask": "255.255.255.0"
}
},
"nsxVControllerSpec": {
"nsxControllerIps": [ "10.0.0.45", "10.0.0.46", "10.0.0.47"],
"nsxControllerPassword": "Test123456$%",
"nsxControllerGateway": "10.0.0.250",
"nsxControllerSubnetMask": "255.255.255.0"
},
"licenseKey": "CCCCC-CCCCC-CCCCC-CCCCC-CCCCC",
"nsxManagerAdminPassword": "Random0$",
"nsxManagerEnablePassword": "Random0$"
}
}
5 By using the JSON specification, add the workload domain to VMware Cloud Foundation by
sending this request.
6 If the task fails, retry running it from the user interface of SDDC Manager.
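To automate step 5, you can submit the JSON specification with a short script. The following Python sketch assumes the POST /v1/domains endpoint of the public API; the SDDC Manager FQDN and credentials are placeholders, and wld-spec.json is a hypothetical file that contains the specification above.

import json

import requests

SDDC_MANAGER = "https://sfo01m01sddc01.sfo01.rainpole.local"  # hypothetical FQDN
AUTH = ("admin", "sddc_manager_admin_password")  # placeholder credentials

# Load the workload domain specification created in step 4.
with open("wld-spec.json") as spec_file:
    spec = json.load(spec_file)

# Submit the specification; SDDC Manager runs the deployment as a task.
response = requests.post(f"{SDDC_MANAGER}/v1/domains", json=spec, auth=AUTH, verify=False)
response.raise_for_status()
print(response.json())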
For traffic separation, two vSphere Distributed Switches handle the traffic in the example cluster
- one for system traffic and overlay traffic and one for external traffic. You assign a pair of
physical NICs to each switch.
Parameter            Value
Number of hosts      3
NIC teaming policy   Route based on physical NIC load (default)
Procedure
1 Send a query for unassigned hosts to SDDC Manager by using this request.
2 In the response, locate the fqdn parameter for the target hosts.
3 Write down the GUID of each host object for the cluster from the id field.
4 Send a query for workload domains to SDDC Manager by using this request.
5 In the response, locate the name parameter for the target workload domain
6 Write down the GUID of the workload domain from the id property.
7 Create a JSON specification for the cluster, following the example below.
"vlanId": 0
},
"hostSpecs": {
"hostSystemSpec": [
{
"license": "AAAAA-AAAAA-AAAAA-AAAAA-AAAAA",
"id": "c2398611-23cd-4b94-b2e3-9d84848b73cb",
"vmnicToVdsNameMap": {
"vmnic0": "w01-c02-vds01",
"vmnic1": "w01-c02-vds01",
"vmnic2": "w01-c02-vds02",
"vmnic3": "w01-c02-vds02"
}
},
{
"license": "AAAAA-AAAAA-AAAAA-AAAAA-AAAAA",
"id": "8dbe7dcb-f409-4ccd-984b-711e70e9e767",
"vmnicToVdsNameMap": {
"vmnic0": "w01-c02-vds01",
"vmnic1": "w01-c02-vds01",
"vmnic2": "w01-c02-vds02",
"vmnic3": "w01-c02-vds02"
}
},
{
"license": "AAAAA-AAAAA-AAAAA-AAAAA-AAAAA",
"id": "e9ba66e0-4670-4973-bdb1-bc05702ca91a",
"vmnicToVdsNameMap": {
"vmnic0": "w01-c02-vds01",
"vmnic1": "w01-c02-vds01",
"vmnic2": "w01-c02-vds02",
"vmnic3": "w01-c02-vds02"
}
}
]
},
"clusterName": "c02",
"highAvailabilitySpec": {
"enabled": true
},
"domainId": "983840c1-fa13-4edd-b3cb-907a95c29652",
"datastoreSpec": {
"vsanDatastoreSpec": {
"license": "BBBBB-BBBBB-BBBBB-BBBBB-BBBBB",
"ftt": 1,
"name": "w01-c02-vsan01"
}
},
"vdsSpec": [
{
"name": "w01-c02-vds01",
"portGroupSpec": [
{
"name": "w01-c02-vds01-management",
"transportType": "MANAGEMENT"
},
{
"name": "w01-c02-vds01-vmotion",
"transportType": "VMOTION"
},
{
"name": "w01-c02-vds01-vsan",
"transportType": "VSAN"
}
]
},
{
"name": "w01-c02-vds02",
"portGroupSpec": [
{
"name": "w01-c02-vds02-ext",
"transportType": "PUBLIC"
}
]
}
]
}
8 By using the JSON specification, add the cluster to the workload domain in VMware Cloud
Foundation by sending this request.
9 If the task fails, retry running it from the user interface of SDDC Manager.
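Steps 4 to 8 can also be scripted. The following Python sketch assumes the GET /v1/domains and POST /v1/clusters endpoints of the public API; the SDDC Manager FQDN, credentials, domain name, and file name are placeholders.

import json

import requests

SDDC_MANAGER = "https://sfo01m01sddc01.sfo01.rainpole.local"  # hypothetical FQDN
AUTH = ("admin", "sddc_manager_admin_password")  # placeholder credentials

# Steps 4-6: look up the GUID of the target workload domain by its name.
domains = requests.get(f"{SDDC_MANAGER}/v1/domains", auth=AUTH, verify=False).json()
domain_id = next(d["id"] for d in domains.get("elements", []) if d["name"] == "myWLD01")

# Step 8: load the cluster specification and inject the domain GUID.
with open("cluster-spec.json") as spec_file:
    spec = json.load(spec_file)
spec["domainId"] = domain_id

response = requests.post(f"{SDDC_MANAGER}/v1/clusters", json=spec, auth=AUTH, verify=False)
response.raise_for_status()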
For traffic separation, two vSphere Distributed Switches handle the traffic in and out of the
example host - one for system traffic and overlay traffic and one for external traffic. You assign a
pair of physical NICs to each switch.
Procedure
1 Send a query for unassigned hosts to SDDC Manager by using this request.
2 In the response, locate the fqdn parameter for the target hosts.
3 Write down the GUID of the host object for the cluster from the id field.
4 Send a query for clusters to SDDC Manager by using this request.
5 In the response, locate the name parameter for the target cluster.
6 Write down the GUID of the cluster from the id property.
7 Create a JSON specification for the host, following the example below.
{
"clusterExpansionSpec" : {
"hostSpecs" : [ {
"id" : "1d539f62-d85a-48a3-b5db-a165e06ae6ba",
"license": "AAAAA-AAAAA-AAAAA-AAAAA-AAAAA",
"hostNetworkSpec" : {
"vmNics" : [ {
"id" : "vmnic0",
"vdsName" : "SDDC2-Dswitch-Private1"
}, {
"id" : "vmnic1",
"vdsName" : "SDDC2-Dswitch-Private1"
}, {
"id" : "vmnic2",
"vdsName" : "SDDC2-Dswitch-Private2"
}, {
"id" : "vmnic3",
"vdsName" : "SDDC2-Dswitch-Private2"
} ]
}
} ]
}
}
8 By using the JSON specification, add the host to the cluster by sending this request.
9 If the task fails, retry running it from the user interface of SDDC Manager.
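A Python sketch of step 8, assuming that cluster expansion is performed with a PATCH request to /v1/clusters/{id}; the cluster GUID, file name, and credentials are placeholders.

import json

import requests

SDDC_MANAGER = "https://sfo01m01sddc01.sfo01.rainpole.local"  # hypothetical FQDN
AUTH = ("admin", "sddc_manager_admin_password")  # placeholder credentials
CLUSTER_ID = "00000000-0000-0000-0000-000000000000"  # GUID of the target cluster from steps 4-6

# Load the expansion specification and send it to the target cluster.
with open("host-expansion-spec.json") as spec_file:
    spec = json.load(spec_file)

response = requests.patch(f"{SDDC_MANAGER}/v1/clusters/{CLUSTER_ID}", json=spec, auth=AUTH, verify=False)
response.raise_for_status()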
In VMware Cloud Foundation 3.10, you can use the SDDC Manager public API to automate multiple physical NIC configurations when using NSX-T in a workload domain.
In this example workload domain, for traffic separation, one vSphere Distributed Switch handles
the system traffic for vSphere management, vSphere vMotion, and vSAN in the initial cluster of
the example workload domain and one N-VDS handles the workload traffic, for example, NSX-T
overlay. You assign a pair of physical NICs to each switch.
Parameter                                                               Value
Number of hosts                                                         4
NIC teaming policy for vSphere Distributed Switch sfo01-w01-c01-vds02   Route based on physical NIC load (default)
Procedure
1 Send a query to SDDC Manager for unassigned hosts by using this request.
2 In the response, locate the fqdn parameter for the target hosts.
3 Write down the GUID of each host object for the domain from the id field.
4 Create a JSON specification for the workload domain, following the example below.
Caution Provide a valid licenseKey value. Otherwise, a failure in the workflow execution
occurs. To resolve the issue, you must delete the partially created workload domain, and
decommission and recommission the hosts.
{
"domainName": "sfo01-w01",
"orgName": "rainpole",
"vcenterSpec": {
"name": "sfo01w01vc01",
"networkDetailsSpec": {
"ipAddress": "172.16.11.64",
"dnsName": "sfo01w01vc01.sfo01.rainpole.local",
"gateway": "172.16.11.253",
"subnetMask": "255.255.255.0"
},
"licenseKey": "AAAAA-AAAAA-AAAAA-AAAAA-AAAAA",
"rootPassword": "vcenter_server_appliance_root_password",
"datacenterName": "sfo01-dc01"
},
"computeSpec": {
"clusterSpecs": [
{
"name": "sfo01-w01-c01",
"hostSpecs": [
{
"id": "80b1397e-97a5-4cac-a64a-6902d12adaf3",
"licenseKey": "BBBBB-BBBBB-BBBBB-BBBBB-BBBBB",
"hostNetworkSpec": {
"vmNics": [
{
"id": "vmnic0",
"vdsName": "sfo01-w01-c01-vds01"
},
{
"id": "vmnic1",
"vdsName": "sfo01-w01-c01-vds01"
},
{
"id": "vmnic2",
"moveToNvds": true
},
{
"id": "vmnic3",
"moveToNvds": true
}
]
}
},
{
"id": "58dcfb1c-ee55-40cb-888d-e458903fb102",
"licenseKey": "BBBBB-BBBBB-BBBBB-BBBBB-BBBBB",
"hostNetworkSpec": {
"vmNics": [
{
"id": "vmnic0",
"vdsName": "sfo01-w01-c01-vds01"
},
{
"id": "vmnic1",
"vdsName": "sfo01-w01-c01-vds01"
},
{
"id": "vmnic2",
"moveToNvds": true
},
{
"id": "vmnic3",
"moveToNvds": true
}
]
}
},
{
"id": "fe848d6e-7156-41fc-aed3-24dbd0ee3a7b",
"licenseKey": "BBBBB-BBBBB-BBBBB-BBBBB-BBBBB",
"hostNetworkSpec": {
"vmNics": [
{
"id": "vmnic0",
"vdsName": "sfo01-w01-c01-vds01"
},
{
"id": "vmnic1",
"vdsName": "sfo01-w01-c01-vds01"
},
{
"id": "vmnic2",
"moveToNvds": true
},
{
"id": "vmnic3",
"moveToNvds": true
}
]
}
},
{
"id": "29b5b562-4f45-46ea-98a9-512cdd111ab7",
"licenseKey": "BBBBB-BBBBB-BBBBB-BBBBB-BBBBB",
"hostNetworkSpec": {
"vmNics": [
{
"id": "vmnic0",
"vdsName": "sfo01-w01-c01-vds01"
},
{
"id": "vmnic1",
"vdsName": "sfo01-w01-c01-vds01"
},
{
"id": "vmnic2",
"moveToNvds": true
},
{
"id": "vmnic3",
"moveToNvds": true
}
]
}
}
],
"datastoreSpec": {
"vsanDatastoreSpec": {
"failuresToTolerate": 1,
"licenseKey": "CCCCC-CCCCC-CCCCC-CCCCC-CCCCC",
"datastoreName": "sfo01-w01-c01-vsan01"
}
},
"networkSpec": {
"vdsSpecs": [
{
"name": "sfo01-w01-c01-vds01",
"portGroupSpecs": [
{
"name": "sfo01-w01-c01-vds01-mgmt",
"transportType": "MANAGEMENT"
},
{
"name": "sfo01-w01-c01-vds01-vsan",
"transportType": "VSAN"
},
{
"name": "sfo01-w01-c01-vds01-vmotion",
"transportType": "VMOTION"
}
]
}
],
"nsxClusterSpec": {
"nsxTClusterSpec": {
"geneveVlanId": 1634
}
}
}
}
]
},
"nsxTSpec": {
"nsxManagerSpecs": [
{
"name": "sfo01wnsx01a",
"networkDetailsSpec": {
"ipAddress": "172.16.11.82",
"dnsName": "sfo01wnsx01a.sfo01.rainpole.local",
"gateway": "172.16.11.253",
"subnetMask": "255.255.255.0"
}
},
{
"name": "sfo01wnsx01b",
"networkDetailsSpec": {
"ipAddress": "172.16.11.83",
"dnsName": "sfo01wnsx01b.sfo01.rainpole.local",
"gateway": "172.16.11.253",
"subnetMask": "255.255.255.0"
}
},
{
"name": "sfo01wnsx01c",
"networkDetailsSpec": {
"ipAddress": "172.16.11.84",
"dnsName": "sfo01wnsx01c.sfo01.rainpole.local",
"gateway": "172.16.11.253",
"subnetMask": "255.255.255.0"
}
}
],
"vip": "172.16.11.81",
"vipFqdn": "sfo01wnsx01.sfo01.rainpole.local",
"licenseKey": "DDDDD-DDDDD-DDDDD-DDDDD-DDDDD",
"nsxManagerAdminPassword": "nsxt_manager_admin_password"
}
}
5 By using the JSON specification, add the workload domain to VMware Cloud Foundation by
sending this request.
6 If the task fails, retry running it from the user interface of SDDC Manager.
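Workload domain creation runs as a long-running task in SDDC Manager. The following Python sketch polls the task until it finishes, assuming a GET /v1/tasks/{id} endpoint; the task GUID and the terminal status strings are assumptions.

import time

import requests

SDDC_MANAGER = "https://sfo01m01sddc01.sfo01.rainpole.local"  # hypothetical FQDN
AUTH = ("admin", "sddc_manager_admin_password")  # placeholder credentials
TASK_ID = "00000000-0000-0000-0000-000000000000"  # id returned when the creation was submitted

# Poll once a minute until the task reaches a terminal status.
while True:
    task = requests.get(f"{SDDC_MANAGER}/v1/tasks/{TASK_ID}", auth=AUTH, verify=False).json()
    print(task.get("status"))
    if task.get("status") in ("Successful", "Failed"):  # assumed terminal status values
        break
    time.sleep(60)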
In this example, for traffic separation in a second cluster of the example workload domain, one vSphere Distributed Switch handles the system traffic for vSphere management, vSphere vMotion, and vSAN, and one N-VDS handles the workload traffic, for example, NSX-T overlay. You assign a pair of physical NICs to each switch.
Parameter                                                               Value
Number of hosts                                                         4
NIC teaming policy for vSphere Distributed Switch sfo01-w01-c02-vds02   Route based on physical NIC load (default)
Procedure
1 Send a query for unassigned hosts to SDDC Manager by using this request.
2 In the response, locate the fqdn parameter for the target hosts.
3 Write down the GUID of each host object for the cluster from the id field.
4 Send a query for workload domains to SDDC Manager by using this request.
5 In the response, locate the name parameter for the target workload domain.
6 Write down the GUID of the workload domain from the id property.
7 Create a JSON specification for the cluster, following the example below.
{
"computeSpec": {
"clusterSpecs": [
{
"advancedOptions": {
"evcMode": "",
"highAvailability": {
"enabled": true
}
},
"datastoreSpec": {
"vsanDatastoreSpec": {
"datastoreName": "sfo01-w01-c02-vsan01",
"dedupAndCompressionEnabled": false,
"failuresToTolerate": 1,
"licenseKey": "CCCCC-CCCCC-CCCCC-CCCCC-CCCCC"
}
},
"hostSpecs": [
{
"id": "45f035db-03ab-4b86-9406-545dda930541",
"licenseKey": "BBBBB-BBBBB-BBBBB-BBBBB-BBBBB",
"hostNetworkSpec": {
"vmNics": [
{
"id": "vmnic0",
"vdsName": "sfo01-w01-c02-vds01"
},
{
"id": "vmnic1",
"vdsName": "sfo01-w01-c02-vds01"
},
{
"id": "vmnic2",
"moveToNvds": true
},
{
"id": "vmnic3",
"moveToNvds": true
}
]
}
},
{
"id": "d2da8319-b222-43c0-8eb0-bb30c163e1e6",
"licenseKey": "BBBBB-BBBBB-BBBBB-BBBBB-BBBBB",
"hostNetworkSpec": {
"vmNics": [
{
"id": "vmnic0",
"vdsName": "sfo01-w01-c02-vds01"
},
{
"id": "vmnic1",
"vdsName": "sfo01-w01-c02-vds01"
},
{
"id": "vmnic2",
"moveToNvds": true
},
{
"id": "vmnic3",
"moveToNvds": true
}
]
}
},
{
"id": "0db2cbd1-679c-47e0-9759-b2651d1c370f",
"licenseKey": "BBBBB-BBBBB-BBBBB-BBBBB-BBBBB",
"hostNetworkSpec": {
"vmNics": [
{
"id": "vmnic0",
"vdsName": "sfo01-w01-c02-vds01"
},
{
"id": "vmnic1",
"vdsName": "sfo01-w01-c02-vds01"
},
{
"id": "vmnic2",
"moveToNvds": true
},
{
"id": "vmnic3",
"moveToNvds": true
}
]
}
},
{
"id": "5d643de7-57d5-4a21-ac34-31f0537fba3d",
"licenseKey": "BBBBB-BBBBB-BBBBB-BBBBB-BBBBB",
"hostNetworkSpec": {
"vmNics": [
{
"id": "vmnic0",
"vdsName": "sfo01-w01-c02-vds01"
},
{
"id": "vmnic1",
"vdsName": "sfo01-w01-c02-vds01"
},
{
"id": "vmnic2",
"moveToNvds": true
},
{
"id": "vmnic3",
"moveToNvds": true
}
]
}
}
],
"name": "sfo01-w01-c02",
"networkSpec": {
"nsxClusterSpec": {
"nsxTClusterSpec": {
"geneveVlanId": 1644
}
},
"vdsSpecs": [
{
"name": "sfo01-w01-c02-vds01",
"portGroupSpecs": [
{
"name": "sfo01-w01-c02-vds01-mgmt",
"transportType": "MANAGEMENT"
},
{
"name": "sfo01-w01-c02-vds01-vsan",
"transportType": "VSAN"
},
{
"name": "sfo01-w01-c02-vds01-vmotion",
"transportType": "VMOTION"
}
]
}
]
}
}
]
},
"domainId": "sfo01-w01"
}
8 By using the JSON specification, add the cluster to the workload domain in VMware Cloud
Foundation by sending this request.
9 If the task fails, retry running it from the user interface of SDDC Manager.
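Besides retrying from the user interface, some VMware Cloud Foundation releases expose task retry through the API. The following Python sketch is an assumption modeled on PATCH /v1/tasks/{id} in later API versions; verify that the VMware Cloud Foundation 3.10 API reference documents this call before relying on it.

import requests

SDDC_MANAGER = "https://sfo01m01sddc01.sfo01.rainpole.local"  # hypothetical FQDN
AUTH = ("admin", "sddc_manager_admin_password")  # placeholder credentials
TASK_ID = "00000000-0000-0000-0000-000000000000"  # GUID of the failed task

# Assumed retry call; confirm availability in the VCF 3.10 API reference.
response = requests.patch(f"{SDDC_MANAGER}/v1/tasks/{TASK_ID}", auth=AUTH, verify=False)
print(response.status_code)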
In this cluster example, a host is added to the second cluster of the example workload domain.
Procedure
1 Send a query for unassigned hosts to SDDC Manager by using this request.
2 In the response, locate the fqdn parameter for the target hosts.
3 Write down the GUID of the host object for the cluster from the id field.
4 Send a query for clusters to SDDC Manager by using this request.
5 In the response, locate the name parameter for the target cluster.
6 Write down the GUID of the cluster from the id property.
7 Create a JSON specification for the host, following the example below.
{
"clusterUpdateSpec": {
"clusterExpansionSpec": {
"hostSpecs": [
{
"hostNetworkSpec": {
"vmNics": [
{
"id": "vmnic0",
"vdsName": "sfo01-w01-c02-vds01"
},
{
"id": "vmnic1",
"vdsName": "sfo01-w01-c02-vds01"
},
{
"id": "vmnic2",
"moveToNvds": true
},
{
"id": "vmnic3",
"moveToNvds": true
}
],
"id": "8fdb3f46-ef65-46d8-8be3-e9ab484f79ca",
"licensekey": "AAAAA-AAAAA-AAAAA-AAAAA-AAAAA"
}
}
VMware, Inc. 33
Using Hosts with Multiple Physical NICs with VMware Cloud Foundation 3.10
]
}
}
}
8 By using the JSON specification, add the host to the cluster by sending this request.
9 If the task fails, retry running it from the user interface of SDDC Manager.