
Using Hosts with Multiple Physical NICs

with VMware Cloud Foundation 3.10

Modified on 07 MAY 2021


VMware Validated Design
VMware Cloud Foundation 3.10
Using Hosts with Multiple Physical NICs with VMware Cloud Foundation 3.10

You can find the most up-to-date technical documentation on the VMware website at:

https://docs.vmware.com/

VMware, Inc.
3401 Hillview Ave.
Palo Alto, CA 94304
www.vmware.com

© Copyright 2020 VMware, Inc. All rights reserved. Copyright and trademark information.
Contents

About Using Hosts with Multiple Physical NICs with VMware Cloud Foundation 3.10

1 Network Architecture of VMware Cloud Foundation

2 Use Cases for Multi-NIC Hosts

3 SDDC Deployment on Multi-NIC Hosts

4 API Examples for Extending Your SDDC with Multi-NIC Hosts
    API Examples for NSX for vSphere
        Deploy a Workload Domain with NSX for vSphere
        Add a Cluster to a Workload Domain with NSX for vSphere
        Add a Host to a Cluster in a Workload Domain with NSX for vSphere
    API Examples for NSX-T Data Center
        Deploy a Workload Domain with NSX-T Data Center
        Add a Cluster to a Workload Domain with NSX-T Data Center
        Add a Host to a Cluster in a Workload Domain with NSX-T Data Center
About Using Hosts with Multiple Physical NICs with VMware Cloud Foundation 3.10

When deploying an SDDC by using VMware Cloud Foundation™, the integration of the software
stack with your environment occurs at various logical (Active Directory, certificates, and VLANs)
and physical (network uplinks and physical hardware) points. The Using Hosts with Multiple
Physical NICs with VMware Cloud Foundation 3.10 technical note provides guidelines on physical
and logical network integration.

Usually, you deploy physical servers with two physical network interface cards (physical NICs). If your SDDC configuration requires three or more physical NICs per host in a workload domain with VMware NSX Data Center for vSphere or with VMware NSX-T Data Center, you can use this technical note to understand the reasons for such a configuration and learn how to deploy the SDDC on top of it.

Guidance Scope
Hardware

By using VMware Cloud Foundation, you can deploy your SDDC on Dell EMC VxRail, VMware
vSAN ReadyNodes, or other hardware listed on the VMware Compatibility Guide. This
technical note is for vSAN ReadyNodes and hardware listed on the VMware Compatibility
Guide.

For guidance on using multiple physical NICs on VxRail, consult your Dell EMC team.

Software-Defined Networking

This technical note is applicable to virtual infrastructure workload domains with NSX for
vSphere and NSX-T.

Prerequisites
You must have an instance of VMware Cloud Foundation 3.10 deployed in at least one region.

See the VMware Cloud Foundation documentation.


Intended Audience
Using Hosts with Multiple Physical NICs with VMware Cloud Foundation 3.10 is intended for consultants and architects who have a solid understanding of VMware Validated Design™ for building and managing an SDDC that meets the requirements for capacity and scalability.

Required VMware Software


Using Hosts with Multiple Physical NICs with VMware Cloud Foundation 3.10 is compliant with
certain product versions according to the version of VMware Cloud Foundation. See VMware
Cloud Foundation Release Notes for more information about supported product versions. See
VMware Cloud Foundation documentation.

Update History
Using Hosts with Multiple Physical NICs with VMware Cloud Foundation 3.10 is updated with
software releases or when necessary.

7 MAY 2021: Because VMware Cloud Foundation 3.9.1 reached end of support life, the technical note covers only VMware Cloud Foundation 3.10.

21 AUG 2020: Improved wording to promote a more inclusive culture in accordance with VMware values.

16 JUN 2020: You can apply this technical note only for specific VMware Cloud Foundation versions. See Prerequisites.

09 MAR 2020: You can deploy multi-switch configurations in workload domains on hosts with multiple physical NICs. See API Examples for NSX-T Data Center.

14 JAN 2020: Initial release.
1 Network Architecture of VMware Cloud Foundation
The network architecture that is supported for automated SDDC deployment and maintenance
determines the options for integrating hosts with multiple physical NICs.

Standard SDDC Architecture


Aligned with VMware best practices, VMware Cloud Foundation can deploy your SDDC on
physical servers with two or more physical NICs per server. VMware Cloud Foundation
implements the following traffic management configuration:

- VMware Cloud Foundation uses the first two physical NICs in the server, that is, vmnic0 and vmnic1, for all network traffic, that is, ESXi management, vSphere vMotion, storage (VMware vSAN™ or NFS), network virtualization (VXLAN or Geneve), management applications, and so on.

- Traffic is isolated in VLANs, which terminate at the top-of-rack switches (ToRs).

- Load-Based Teaming (LBT), or Route based on physical NIC load, is used for balancing traffic independently of the physical switches. The environment does not use LACP, VPC, or MLAG.

- Network I/O Control supports resolving situations where several types of traffic compete for a common resource.

- At the physical network card level, VMware Cloud Foundation works with any NIC on the VMware Hardware Compatibility Guide that is supported by your hardware vendor. While 25-Gb NICs are recommended, 10-Gb NICs are also supported. As a result, you can implement a solution that supports the widest range of network hardware with safeguards for availability, traffic isolation, and contention mitigation.

Traffic Types
In an SDDC, several general types of network traffic exist.

Virtual machine traffic

Traffic for management applications and tenant workloads that are running on a host. Virtual
machine traffic might be north-south from the SDDC out to your corporate network and
beyond, or east-west to other virtual machines or logical networking devices, such as load
balancers.


Virtual machines are typically deployed on virtual wires. Virtual wires are logical networks in NSX that are similar to VLANs in a physical data center. However, virtual wires do not require any changes to your data center because they exist within the NSX SDN.

Overlay traffic

VXLAN or Geneve encapsulated traffic in your data center network. Overlay traffic might
encapsulate tenant workload traffic in workload domains or management application traffic in
the management domain. Overlay traffic consists of UDP packets to or from the VTEP or TEP
interface on your host.

VMkernel traffic

Traffic to support the management and operation of the ESXi hosts including ESXi
management, vSphere vMotion, and storage traffic (vSAN or NFS). This traffic originates from
the hypervisor itself to support management of and operations in the SDDC or the storage
needs of management and tenant virtual machines.

vSphere Distributed Switch and N-VDS


In a virtualized environment, virtual machine network adapters are connected to a virtual network switch. VMware Cloud Foundation supports vSphere Distributed Switch and NSX-T Virtual Distributed Switch (N-VDS).

NSX for vSphere uses the capabilities of vSphere Distributed Switch by enabling applications
across clusters or sites to reside on the same logical network segment without the need to
manage or extend that network segment in the physical data center. VXLANs encapsulate these
logical network segments and enable routing across data center network boundaries. The
entities that exchange VXLAN encapsulated packets are the VTEP VMkernel adapters in NSX for
vSphere. vSphere Distributed Switch supports both VLAN- and VXLAN-backed port groups.
Although one host can have one or more vSphere Distributed Switches, you can assign a
physical NIC only to one vSphere Distributed Switch.

N-VDS in NSX-T Data Center is a functional equivalent of vSphere Distributed Switch. Like the vSphere Distributed Switch, it provides logical networking segmentation across clusters without the need to manage or extend a segment in the physical data center network. N-VDS uses Geneve instead of VXLAN, and TEPs as the VMkernel interfaces instead of VTEPs. Because NSX-T is designed to work with non-vSphere clusters, NSX-T itself is responsible for N-VDS management instead of VMware vCenter Server.

Like the vSphere Distributed Switch, the N-VDS supports both VLAN-backed and overlay-backed port groups using Geneve encapsulation. Although a host typically has only a single N-VDS, you can map traffic types to individual physical NICs by leveraging N-VDS uplink teaming policies. The host might also be connected to a vSphere Distributed Switch, but the vSphere Distributed Switch must use a dedicated physical NIC. You cannot share a physical NIC between a vSphere Distributed Switch and an N-VDS.

2 Use Cases for Multi-NIC Hosts
When considering the need to deploy an SDDC with more than two physical NICs, evaluate the reasons for such a configuration. Usually, the standard architecture with two physical NICs per host meets the requirements of your customers, data center networks, storage types, and use cases, without the added complexity of operating more physical NICs.

Legacy Practices
When you deploy a virtualized environment, you might need to follow older operational or environment practices without additional evaluation. In the past, some of the following common practices existed:

- Use hosts with 8 or more physical NICs with a full separation of management, vSphere vMotion, storage, and virtual machine traffic. Today, these traffic types can be safely integrated on the same fabric using the safeguards noted in Chapter 1 Network Architecture of VMware Cloud Foundation.

- Physically separate virtual machine traffic from management traffic because a physical firewall was required to ensure traffic security. Today, hypervisor-level firewalls can provide even better security without the added complexity and traffic flow of a physical firewall.

Following legacy practices might carry forward additional complexity and risk. Consider modern network performance, VMware congestion mitigation processes with Network I/O Control, and availability when making design decisions about your data center network.

Aggregate Bandwidth Limitation


Modern data centers are often built around 25-Gbps host connectivity. For environments where 2 x 10 GbE does not provide sufficient bandwidth for contention-free networking, moving to 2 x 25 GbE NICs is the simplest and best choice. In this way, you can provide more potential bandwidth to each traffic flow from a host. Verify that your hardware manufacturer supports the NIC you have selected in your specific server. Network cards that support VXLAN or Geneve offload are also recommended if they are listed in the VMware Compatibility Guide.


Usually, fast physical data link speeds provide the best increase in the overall aggregate
bandwidth available to individual traffic flows. In some cases, 2 x 25 GbE might not be sufficient,
or a 25 GbE physical NIC might not be available or certified for your hardware. You can also
move to 2 x 40 GbE or 2 x 100 GbE but the physical infrastructure costs to support 40 GbE might
be a limitation.

Traffic Separation for Security


Some organizations have a business or security requirement for physical separation of traffic
onto separate fabrics. For instance, you must completely isolate management traffic from virtual
machine traffic or ESXi management access from other traffic to support specific data center
deployment needs. Often, such a requirement exists because of legacy data center designs with
physical firewalls or switch ACLs. With the use of VLANs and the NSX distributed firewall, traffic
can be securely isolated without the complexity of additional pNICs. Virtual machine traffic on
NSX logical switches is further encapsulated in VXLAN or Geneve packets.

However, for environments that require physical separation, VMware Cloud Foundation can
support mapping of management, storage, or virtual machine traffic to a logical switch that is
connected to dedicated physical NICs. Such a configuration can provide separation of traffic on
to separate physical links from the host into the data center network – be it to different network
ports on the same fabric, or to a distinct network fabric.

In this case, your management VMkernel adapters can be on one pair of physical NICs, with all other traffic (vSphere vMotion, vSAN, and so on) on another pair. Another goal might be to isolate virtual machine traffic so that one pair of physical NICs handles all management VMkernel adapters, vSphere vMotion, and vSAN, and a second pair handles the overlay and virtual machine traffic.

Traffic Separation for Operational Concerns


In the past, some data center networks were physically separate from each other because of data center operations concerns.

- Such a configuration might come from the time when a physical throughput of 100 Mbps or 1 Gbps was a concern or congestion mitigation methods were not as advanced as they are today with Network I/O Control.

- Such a configuration can also be related to legacy operational practices for storage area networking (SAN) where storage traffic was always isolated to a separate physical fabric not just for technical reasons, but to ensure the SAN fabric was treated with an extra high degree of care by data center operations personnel.

Today, communication between components is critical regardless of type, source, or destination. All networks need to be treated with the same care because east-west workload communication is just as critical as storage communication. As discussed in Legacy Practices, you can resolve concerns about latency and congestion by sizing physical NICs appropriately and using Network I/O Control.


Even with modern data center design and appropriately sized physical NICs, some organizations
require that you separate traffic with high bandwidth potential or low latency, most commonly
storage traffic such as vSAN or NFS, or virtual machine backup traffic. In other cases, it is simply
for physical separation without any performance concerns such as utilizing a dedicated
management fabric or dedicated fabric for virtual machine traffic.

In these cases, VMkernel network adapters or virtual machine network adapters can be assigned
to dedicated VLAN-backed port groups or overlay segments on a second vSphere Distributed
Switch or N-VDS with dedicated physical NICs.

Traffic Separation Because of Bandwidth Limitations


Today, data centers are often built with 25 GbE, 40 GbE, and even 100 GbE physical NICs per host. With high-throughput physical NICs, data centers are able to provide sufficient bandwidth for integrating vSAN, NFS, and virtual machine traffic (VLAN or overlay) on the same physical links. In some cases, customers might not be able to use physical NICs of 25 GbE or higher. Then, you add more physical NICs to a virtual switch.

In certain cases, after the VMware Professional Services team evaluates your environment and workload traffic patterns, total separation between storage traffic and virtual machine traffic (VLAN and overlay) might prove beneficial.

In these cases, you can connect VMkernel adapters or virtual machine port groups to a specific
logical switch with dedicated physical NICs. Usually, you isolate virtual machine traffic including
overlay on a separate physical link.

Traffic Separation for NSX-T


Because NSX-T uses its own type of virtual switch, that is, N-VDS, during migration to NSX-T you also migrate all traffic types and VMkernel adapters to N-VDS.

On some server network interface cards (NICs), the Geneve encapsulation might not be fully supported. You can continue using vSphere Distributed Switch for VMkernel traffic (for example, vSAN and vSphere vMotion). In such limited use cases, for example, you can use a pair of additional physical NICs for the N-VDS instance while keeping VLAN-backed virtual machine traffic on the vSphere Distributed Switch.

3 SDDC Deployment on Multi-NIC Hosts
VMware Cloud Foundation supports specific networking configuration for traffic separation in the
management domain and workload domains. Consider also the options that are available to you
in the public API compared to the user interface of SDDC Manager.

Cloud Foundation Automation


You interact with VMware Cloud Foundation by using the Web user interface for Day-0
(imaging), Day-1 (deployment of the management domain and workload domains), and Day-2
(workload domain expansion and upgrade) operations. In the cases where the user interface
lacks the functionality that you need, you can use the public API of VMware Cloud Foundation.

The user interface of VMware Cloud Foundation does not currently support dedicating a specific
physical NIC to a system traffic type. For Day-1 or Day-2 operations that require assigning system
traffic to physical NICs, you use the public REST API of VMware Cloud Foundation.
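If you script these operations, you can reuse a small Python helper for the API calls shown in Chapter 4. The following sketch is illustrative only: the SDDC Manager FQDN and credentials are placeholders for your environment, the session uses the basic authentication shown in the request examples later in this technical note, and certificate verification is disabled only because lab deployments commonly use self-signed certificates.

import requests

# Placeholder connection details; replace with the values for your environment.
SDDC_MANAGER = "https://sddc-manager.vrack.vsphere.local"
API_USER = "admin"
API_PASSWORD = "rest_api_admin_password"

# Reusable session that sends basic authentication on every request.
# verify=False skips certificate validation and is acceptable only in a lab.
session = requests.Session()
session.auth = (API_USER, API_PASSWORD)
session.verify = False
session.headers.update({"Accept": "application/json"})

# Example call: list the ESXi hosts known to SDDC Manager.
response = session.get(SDDC_MANAGER + "/v1/hosts")
response.raise_for_status()
print(response.json())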

Deployment of hardware in VMware Cloud Foundation follows the high-level process of host
discovery, validation, and installation. This high-level process is used for both initial bring-up and
expansion of workload domains.

Note Deployments based on Dell EMC VxRail have a different bring-up workflow because of the
automation in VxRail Manager. The process described in this technical note is not applicable to
VxRail-based deployments.

Management Domain Workflow


During the initial bring-up of the management domain, you configure the bring-up process with a
list of hosts which are pre-imaged and configured according to the requirements of VMware
Cloud Foundation. By using the Cloud Builder, you establish the first domain in VMware Cloud
Foundation, that is, the management domain.

Workload Domain Workflow


During the initial bring-up of the workload domain, you configure the bring-up process with a list
of hosts which are pre-imaged and configured according to the requirements of VMware Cloud
Foundation.


Discovery is performed by using the standard host commissioning workflow in the user interface of SDDC Manager. As a result, SDDC Manager becomes aware of the host, assigns the host a specific GUID, and places it in the SDDC Manager inventory.

Important For a workload domain with NFS storage, storage must be accessible through
vmnic0.

4 API Examples for Extending Your SDDC with Multi-NIC Hosts
The user interface of SDDC Manager lacks the functionality to directly integrate hosts with
multiple physical NICs in workload domains in VMware Cloud Foundation. Use the public API of
VMware Cloud Foundation to extend the SDDC with workload domains, clusters, and individual
multi-NIC hosts.

In VMware Cloud Foundation 3.10, you can use a public API to automate deployments by using SDDC Manager on hosts with multiple physical NICs. For information on the API, see VMware Cloud Foundation 3.10 API on VMware {code}.

This chapter includes the following topics:

- API Examples for NSX for vSphere

- API Examples for NSX-T Data Center

API Examples for NSX for vSphere


You can use the API examples when using NSX for vSphere and multiple physical NICs.

In VMware Cloud Foundation 3.10, you can use the SDDC Manager public API to automate
multiple physical NIC configurations when using NSX for vSphere.

Deploy a Workload Domain with NSX for vSphere


By using the VMware Cloud Foundation API, you can deploy an example workload domain with four physical NICs per host, distributing the NICs between multiple vSphere Distributed Switch instances.

For traffic separation, two vSphere Distributed Switches handle the traffic in the initial cluster of
the example workload domain - one for system traffic and one for application traffic. You assign
a pair of physical NICs to each switch.

Table 4-1. Example Workload Domain Specification

SDDC Manager FQDN: sddc-manager.vrack.vsphere.local
vCenter Server FQDN: vcenter-2.vrack.vsphere.local
Number of hosts: 3
Cluster name: Cluster1
Number of physical NICs per host: 4
vSphere Distributed Switch instances: SDDC-Dswitch-Private1, SDDC-Dswitch-Private2
vmnic per vSphere Distributed Switch configuration: vmnic0, vmnic1 on SDDC-Dswitch-Private1; vmnic2, vmnic3 on SDDC-Dswitch-Private2
Distributed port groups: SDDC-DPortGroup-Mgmt, SDDC-DPortGroup-vMotion, and SDDC-DPortGroup-VSAN on SDDC-Dswitch-Private1; SDDC-DPortGroup-Public on SDDC-Dswitch-Private2
NIC teaming policy: Route based on physical NIC load (default)
vSphere Distributed Switch for VXLAN: SDDC-Dswitch-Private1
NSX Manager FQDN: nsx-2.vrack.vsphere.local
NSX Controller IP addresses: 10.0.0.45, 10.0.0.46, 10.0.0.47
Storage type: vSAN

Procedure

1 Send a query for unassigned hosts to SDDC Manager by using this request.

GET https://sddc-manager.vrack.vsphere.local/v1/hosts?status=UNASSIGNED_USEABLE HTTP/1.1


Authorization:Basic admin REST API admin password

2 In the response, locate the fqdn parameter for the target hosts.

3 Write down the GUID of each host object for the domain from the id field.
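Steps 1 to 3 can be scripted. The following Python sketch is a non-authoritative example: it queries the unassigned hosts endpoint shown in Step 1 and assumes that the response wraps the hosts in an elements list, as described in the VMware Cloud Foundation API reference; adjust the field names if your response differs.

import requests

SDDC_MANAGER = "https://sddc-manager.vrack.vsphere.local"
session = requests.Session()
session.auth = ("admin", "rest_api_admin_password")  # placeholder credentials
session.verify = False  # lab environments with self-signed certificates only

# Step 1: query hosts that are commissioned but not yet assigned to a workload domain.
response = session.get(SDDC_MANAGER + "/v1/hosts", params={"status": "UNASSIGNED_USEABLE"})
response.raise_for_status()

# Steps 2 and 3: map each host FQDN to the GUID that the domain specification needs.
unassigned = response.json().get("elements", [])
host_ids = {host["fqdn"]: host["id"] for host in unassigned}
for fqdn, guid in host_ids.items():
    print(fqdn, guid)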

4 Prepare a domain specification in JSON format according to the requirements of your environment and send it for validation to SDDC Manager.

Most of the parameters are self-explanatory.

computeSpec.clusterSpecs.hostSpecs.id: GUIDs from Step 3.

computeSpec.clusterSpecs.hostSpecs.hostNetworkSpec.vmNics.id: Physical NIC to use, for example, vmnic0.

POST https://sddc-manager.vrack.vsphere.local/v1/domains/validations/creations HTTP/1.1


Authorization:Basic admin REST API admin password
Content-Type:application/json


{
"domainName": "myWLD01",
"vcenterSpec": {
"name": "vcenter-2",
"networkDetailsSpec": {
"ipAddress": "10.0.0.43",
"dnsName": "vcenter-2.vrack.vsphere.local",
"gateway": "10.0.0.250",
"subnetMask": "255.255.255.0"
},
"rootPassword": "Random0$",
"datacenterName": "new-vi-1"
},
"computeSpec": {
"clusterSpecs": [ {
"name": "Cluster1",
"hostSpecs": [ {
"id": "97fea1a1-107e-4845-bdd1-6a14ab18010a",
"license":"AAAAA-AAAAA-AAAAA-AAAAA-AAAAA",
"hostNetworkSpec": {
"vmNics": [ {
"id": "vmnic0",
"vdsName": "SDDC-Dswitch-Private1"
}, {
"id": "vmnic1",
"vdsName": "SDDC-Dswitch-Private1"
}, {
"id": "vmnic2",
"vdsName": "SDDC-Dswitch-Private2"
}, {
"id": "vmnic3",
"vdsName": "SDDC-Dswitch-Private2"
} ]
}
}, {
"id": "b858eb07-4f07-4e34-8053-d502bf9cfeb0",
"license":"AAAAA-AAAAA-AAAAA-AAAAA-AAAAA",
"hostNetworkSpec": {
"vmNics": [ {
"id": "vmnic0",
"vdsName": "SDDC-Dswitch-Private1"
}, {
"id": "vmnic1",
"vdsName": "SDDC-Dswitch-Private1"
}, {
"id": "vmnic2",
"vdsName": "SDDC-Dswitch-Private2"
}, {
"id": "vmnic3",
"vdsName": "SDDC-Dswitch-Private2"
} ]
}
}, {
"id": "45c3c5e6-6c49-46d2-a027-216d2a20c8d1",
"license":"AAAAA-AAAAA-AAAAA-AAAAA-AAAAA",


"hostNetworkSpec": {
"vmNics": [ {
"id": "vmnic0",
"vdsName": "SDDC-Dswitch-Private1"
}, {
"id": "vmnic1",
"vdsName": "SDDC-Dswitch-Private1"
}, {
"id": "vmnic2",
"vdsName": "SDDC-Dswitch-Private2"
}, {
"id": "vmnic3",
"vdsName": "SDDC-Dswitch-Private2"
} ]
}
} ],
"datastoreSpec": {
"vsanDatastoreSpec": {
"failuresToTolerate": 1,
"licenseKey": "BBBBB-BBBBB-BBBBB-BBBBB-BBBBB",
"datastoreName": "vSanDatastore"
}
},
"networkSpec": {
"vdsSpecs": [ {
"name": "SDDC-Dswitch-Private1",
"portGroupSpecs": [ {
"name": "SDDC-DPortGroup-Mgmt",
"transportType": "MANAGEMENT"
}, {
"name": "SDDC-DPortGroup-VSAN",
"transportType": "VSAN"
}, {
"name": "SDDC-DPortGroup-vMotion",
"transportType": "VMOTION"
} ]
},
{
"name": "SDDC-Dswitch-Private2",
"portGroupSpecs": [ {
"name": "SDDC-DPortGroup-Public",
"transportType": "PUBLIC" } ]
}
],
"nsxClusterSpec": {
"nsxVClusterSpec": {
"vlanId": 0,
"vdsNameForVxlanConfig": "SDDC-Dswitch-Private1"
}
}
}
} ]
},
"nsxVSpec": {
"nsxManagerSpec": {


"name": "nsx-2",
"networkDetailsSpec": {
"ipAddress": "10.0.0.44",
"dnsName": "nsx-2.vrack.vsphere.local",
"gateway": "10.0.0.250",
"subnetMask": "255.255.255.0"
}
},
"nsxVControllerSpec": {
"nsxControllerIps": [ "10.0.0.45", "10.0.0.46", "10.0.0.47"],
"nsxControllerPassword": "Test123456$%",
"nsxControllerGateway": "10.0.0.250",
"nsxControllerSubnetMask": "255.255.255.0"
},
"licenseKey": "CCCCC-CCCCC-CCCCC-CCCCC-CCCCC",
"nsxManagerAdminPassword": "Random0$",
"nsxManagerEnablePassword": "Random0$"
}
}

5 By using the JSON specification, add the workload domain to VMware Cloud Foundation by
sending this request.

POST https://sddc-manager.vrack.vsphere.local/v1/domains HTTP/1.1


Authorization: Basic admin REST API admin password
Content-Type: application/json
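If you prefer to script Step 4 and Step 5, the following Python sketch sends a saved copy of the JSON specification first to the validation endpoint and then to the domain creation endpoint shown above. The file name and credentials are placeholders, and the sketch only prints the raw responses instead of interpreting them.

import json
import requests

SDDC_MANAGER = "https://sddc-manager.vrack.vsphere.local"
session = requests.Session()
session.auth = ("admin", "rest_api_admin_password")  # placeholder credentials
session.verify = False  # lab environments with self-signed certificates only

# Load the domain specification prepared in Step 4 from a local file (placeholder name).
with open("wld01-domain-spec.json") as spec_file:
    domain_spec = json.load(spec_file)

# Step 4: validate the specification.
validation = session.post(SDDC_MANAGER + "/v1/domains/validations/creations", json=domain_spec)
validation.raise_for_status()
print("Validation response:", validation.json())

# Step 5: create the workload domain with the same specification.
creation = session.post(SDDC_MANAGER + "/v1/domains", json=domain_spec)
creation.raise_for_status()
print("Creation response:", creation.json())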

6 If the task fails, retry running it from the user interface of SDDC Manager.

Add a Cluster to a Workload Domain with NSX for vSphere


By using the VMware Cloud Foundation API, you can create an example cluster of hosts, each
having four physical NICs. You can distribute these NICs to multiple vSphere Distributed Switch
instances.

For traffic separation, two vSphere Distributed Switches handle the traffic in the example cluster
- one for system traffic and overlay traffic and one for external traffic. You assign a pair of
physical NICs to each switch.

Table 4-2. Example Workload Domain Cluster Specification

SDDC Manager FQDN: sddc-manager.vrack.vsphere.local
Number of hosts: 3
Number of physical NICs per host: 4
vSphere Distributed Switch instances: w01-c02-vds01, w01-c02-vds02
vmnic per vSphere Distributed Switch configuration: vmnic0, vmnic1 on w01-c02-vds01; vmnic2, vmnic3 on w01-c02-vds02
Distributed port groups: w01-c02-vds01-management, w01-c02-vds01-vmotion, and w01-c02-vds01-vsan on w01-c02-vds01; w01-c02-vds02-ext on w01-c02-vds02
NIC teaming policy: Route based on physical NIC load (default)
vSphere Distributed Switch for VXLAN: w01-c02-vds01
Storage type: vSAN

Procedure

1 Send a query for unassigned hosts to SDDC Manager by using this request.

GET https://sddc-manager.vrack.vsphere.local/v1/hosts?status=UNASSIGNED_USEABLE HTTP/1.1


Authorization:Basic admin REST API admin password

2 In the response, locate the fqdn parameter for the target hosts.

3 Write down the GUID of each host object for the cluster from the id field.

4 Send a query for workload domains to SDDC Manager by using this request.

GET https://sddc-manager.vrack.vsphere.local/inventory/domains HTTP/1.1


Authorization: Basic admin REST API admin password

5 In the response, locate the name parameter for the target workload domain.

6 Write down the GUID of the workload domain from the id property.
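Steps 4 to 6 can also be scripted. The sketch below calls the inventory endpoint shown in Step 4 and looks up the domain GUID by name; it is a non-authoritative example that tolerates both a bare list and an elements wrapper in the response, and the domain name is the example value from the previous section.

import requests

SDDC_MANAGER = "https://sddc-manager.vrack.vsphere.local"
session = requests.Session()
session.auth = ("admin", "rest_api_admin_password")  # placeholder credentials
session.verify = False  # lab environments with self-signed certificates only

# Step 4: list the workload domains known to SDDC Manager.
response = session.get(SDDC_MANAGER + "/inventory/domains")
response.raise_for_status()

# Steps 5 and 6: find the target domain by name and keep its GUID for the cluster specification.
payload = response.json()
domains = payload.get("elements", payload) if isinstance(payload, dict) else payload
target = next(d for d in domains if d.get("name") == "myWLD01")  # example domain name
print("domainId:", target["id"])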

7 Prepare a cluster specification in JSON format according to the requirements of your environment and send it for validation to SDDC Manager.

Most of the parameters are self-explanatory.

domainId: GUID from Step 6.

computeSpec.clusterSpecs.hostSpecs.id: GUIDs from Step 3.

computeSpec.clusterSpecs.hostSpecs.hostNetworkSpec.vmNics.id: Physical NIC to use, for example, vmnic0.

POST https://sddc-manager.vrack.vsphere.local/v1/domains/validations/creations HTTP/1.1


Authorization:Basic admin REST API admin password
Content-Type:application/json
{
"nsxVClusterSpec": {
"vdsNameForVxlanConfig": "w01-c02-vds02",


"vlanId": 0
},
"hostSpecs": {
"hostSystemSpec": [
{
"license": "AAAAA-AAAAA-AAAAA-AAAAA-AAAAA",
"id": "c2398611-23cd-4b94-b2e3-9d84848b73cb",
"vmnicToVdsNameMap": {
"vmnic0": "w01-c02-vds01",
"vmnic1": "w01-c02-vds01",
"vmnic2": "w01-c02-vds02",
"vmnic3": "w01-c02-vds02"
}
},
{
"license": "AAAAA-AAAAA-AAAAA-AAAAA-AAAAA",
"id": "8dbe7dcb-f409-4ccd-984b-711e70e9e767",
"vmnicToVdsNameMap": {
"vmnic0": "w01-c02-vds01",
"vmnic1": "w01-c02-vds01",
"vmnic2": "w01-c02-vds02",
"vmnic3": "w01-c02-vds02"
}
},
{
"license": "AAAAA-AAAAA-AAAAA-AAAAA-AAAAA",
"id": "e9ba66e0-4670-4973-bdb1-bc05702ca91a",
"vmnicToVdsNameMap": {
"vmnic0": "w01-c02-vds01",
"vmnic1": "w01-c02-vds01",
"vmnic2": "w01-c02-vds02",
"vmnic3": "w01-c02-vds02"
}
}
]
},
"clusterName": "c02",
"highAvailabilitySpec": {
"enabled": true
},
"domainId": "983840c1-fa13-4edd-b3cb-907a95c29652",
"datastoreSpec": {
"vsanDatastoreSpec": {
"license": "BBBBB-BBBBB-BBBBB-BBBBB-BBBBB",
"ftt": 1,
"name": "w01-c02-vsan01"
}
},
"vdsSpec": [
{
"name": "w01-c02-vds01",
"portGroupSpec": [
{
"name": "w01-c02-vds01-management",
"transportType": "MANAGEMENT"


},
{
"name": "w01-c02-vds01-vmotion",
"transportType": "VMOTION"
},
{
"name": "w01-c02-vds01-vsan",
"transportType": "VSAN"
}
]
},
{
"name": "w01-c02-vds02",
"portGroupSpec": [
{
"name": "w01-c02-vds02-ext",
"transportType": "PUBLIC"
}
]
}
]
}

8 By using the JSON specification, add the cluster to the workload domain in VMware Cloud
Foundation by sending this request.

POST https://sddc-manager.vrack.vsphere.local/v1/clusters HTTP/1.1


Authorization: Basic admin REST API admin password
Content-Type: application/json

9 If the task fails, retry running it from the user interface of SDDC Manager.

Add a Host to a Cluster in a Workload Domain with NSX for vSphere


By using the VMware Cloud Foundation API, you can add an example host that has four physical
NICs. You can distribute the NICs to multiple vSphere Distributed Switch instances.

For traffic separation, two vSphere Distributed Switches handle the traffic in and out of the
example host - one for system traffic and overlay traffic and one for external traffic. You assign a
pair of physical NICs to each switch.

Table 4-3. Example Workload Domain Host Specification

SDDC Manager FQDN: sddc-manager.vrack.vsphere.local
Number of physical NICs per host: 4
vSphere Distributed Switch instances: SDDC2-Dswitch-Private1, SDDC2-Dswitch-Private2
vmnic per vSphere Distributed Switch configuration: vmnic0, vmnic1 on SDDC2-Dswitch-Private1; vmnic2, vmnic3 on SDDC2-Dswitch-Private2


Procedure

1 Send a query for unassigned hosts to SDDC Manager by using this request.

GET https://sddc-manager.vrack.vsphere.local/v1/hosts?status=UNASSIGNED_USEABLE HTTP/1.1


Authorization:Basic admin REST API admin password

2 In the response, locate the fqdn parameter for the target hosts.

3 Write down the GUID of the host object for the cluster from the id field.

4 Send a query for clusters to SDDC Manager by sending this request.

GET https://sddc-manager.vrack.vsphere.local/v1/clusters HTTP/1.1


Authorization:Basic admin REST API admin password

5 In the response, locate the name parameter for the target cluster.

6 Write down the GUID from the id parameter.

7 Prepare a host specification in JSON format according to the requirements of your environment and send it for validation to SDDC Manager.

The URL contains the cluster GUID from Step 6.

Most of the parameters are self-explanatory.

clusterExpansionSpec.hostSpecs.id: GUIDs from Step 3.

clusterExpansionSpec.hostSpecs.license: ESXi license key.

clusterExpansionSpec.hostSpecs.hostNetworkSpec.vmNics.id: Physical NIC to use, for example, vmnic0.

clusterExpansionSpec.hostSpecs.hostNetworkSpec.vmNics.vdsName: Name of the vSphere Distributed Switch to connect a physical NIC to.

POST https://sddc-manager.vrack.vsphere.local/v1/clusters/cluster_id1/validations/updates HTTP/1.1


Authorization: Basic admin REST API admin password
Content-Type: application/json

{
"clusterExpansionSpec" : {
"hostSpecs" : [ {
"id" : "1d539f62-d85a-48a3-b5db-a165e06ae6ba",
"license": "AAAAA-AAAAA-AAAAA-AAAAA-AAAAA",
"hostNetworkSpec" : {
"vmNics" : [ {
"id" : "vmnic0",
"vdsName" : "SDDC2-Dswitch-Private1"
}, {
"id" : "vmnic1",
"vdsName" : "SDDC2-Dswitch-Private1"
}, {
"id" : "vmnic2",


"vdsName" : "SDDC2-Dswitch-Private2"
}, {
"id" : "vmnic3",
"vdsName" : "SDDC2-Dswitch-Private2"
} ]
}
} ]
}
}

8 By using the JSON specification, add the host to the cluster by sending this request.

PATCH https://sddc-manager.vrack.vsphere.local/v1/clusters/cluster_id HTTP/1.1


Authorization: Basic admin REST API admin password
Content-Type: application/json
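A scripted version of Step 8 sends the same PATCH request. The sketch below assumes the cluster GUID from Step 6 and a saved copy of the clusterExpansionSpec JSON from Step 7; the GUID and file name are placeholders.

import json
import requests

SDDC_MANAGER = "https://sddc-manager.vrack.vsphere.local"
CLUSTER_ID = "cluster-guid-from-step-6"  # placeholder GUID
session = requests.Session()
session.auth = ("admin", "rest_api_admin_password")  # placeholder credentials
session.verify = False  # lab environments with self-signed certificates only

# Load the host expansion specification prepared in Step 7 (placeholder file name).
with open("cluster-expansion-spec.json") as spec_file:
    expansion_spec = json.load(spec_file)

# Step 8: add the host to the cluster by patching the cluster resource.
response = session.patch(SDDC_MANAGER + "/v1/clusters/" + CLUSTER_ID, json=expansion_spec)
response.raise_for_status()
print(response.json())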

9 If the task fails, retry running it from the user interface of SDDC Manager.

API Examples for NSX-T Data Center


You can use the API examples when using NSX-T Data Center and multiple physical NICs.

In VMware Cloud Foundation 3.10, you can use the SDDC Manager public API to automate
multiple physical NIC configurations when using NSX-T in a workload domain.

Deploy a Workload Domain with NSX-T Data Center


By using the VMware Cloud Foundation API, you can deploy an example workload domain with four physical NICs per host.

In VMware Cloud Foundation 3.10, you can use the SDDC Manager public API to automate multiple physical NIC configurations when using NSX-T.

In this example workload domain, for traffic separation, one vSphere Distributed Switch handles
the system traffic for vSphere management, vSphere vMotion, and vSAN in the initial cluster of
the example workload domain and one N-VDS handles the workload traffic, for example, NSX-T
overlay. You assign a pair of physical NICs to each switch.

Table 4-4. Example Workload Domain Specification

SDDC Manager FQDN: sfo01m01sddc01.sfo01.rainpole.local
Workload domain name: sfo01-w01
vCenter Server FQDN: sfo01w01vc01.sfo01.rainpole.local
Number of hosts: 4
Cluster name: sfo01-w01-cl01
Number of physical NICs per host: 4
vSphere Distributed Switch configuration:
    vSphere Distributed Switch instances: sfo01-w01-c01-vds01
    vmnic configuration for vSphere Distributed Switch sfo01-w01-c01-vds01: vmnic0, vmnic1
    NIC teaming policy for vSphere Distributed Switch sfo01-w01-c01-vds01: Route based on physical NIC load (default)
    Distributed port groups for vSphere Distributed Switch sfo01-w01-c01-vds01: sfo01-w01-vds01-mgmt, sfo01-w01-vds01-vsan, sfo01-w01-vds01-vmotion
NSX-T Manager configuration:
    NSX-T Manager VIP FQDN: sfo01wnsx01.sfo01.rainpole.local
    NSX-T Manager member FQDNs: sfo01wnsx01a.sfo01.rainpole.local, sfo01wnsx01b.sfo01.rainpole.local, sfo01wnsx01c.sfo01.rainpole.local
N-VDS configuration:
    N-VDS instance: Auto-generated name
    vmnic configuration for N-VDS: vmnic2, vmnic3
    Uplink Profile Teaming Policy: Load Balance Source, that is, load balancing between uplink-1 and uplink-2 based on the source port ID
    Traffic Segments: Overlay
Storage type: vSAN

Procedure

1 Send a query to SDDC Manager for unassigned hosts by using this request.

GET https://sfo01m01sddc01.sfo01.rainpole.local/v1/hosts?status=UNASSIGNED_USEABLE HTTP/1.1


Authorization:Basic admin rest_api_admin_password

2 In the response, locate the fqdn parameter for the target hosts.

3 Write down the GUID of each host object for the domain from the id field.

4 Prepare a domain specification in JSON format according to the requirements of your environment and send it for validation to SDDC Manager.

Most of the parameters are self-explanatory.

computeSpec.clusterSpecs.hostSpecs.id: GUIDs from Step 3.

computeSpec.clusterSpecs.hostSpecs.hostNetworkSpec.vmNics.id: Physical NIC to use, for example, vmnic0.

Caution Provide a valid licenseKey value. Otherwise, a failure in the workflow execution
occurs. To resolve the issue, you must delete the partially created workload domain, and
decommission and recommission the hosts.

POST https://sfo01m01sddc01.sfo01.rainpole.local/v1/domains/validations/creations HTTP/1.1


Authorization:Basic admin rest_api_admin_password
Content-Type:application/json

{
"domainName": "sfo01-w01",
"orgName": "rainpole",
"vcenterSpec": {
"name": "sfo01w01vc01",
"networkDetailsSpec": {
"ipAddress": "172.16.11.64",
"dnsName": "sfo01w01vc01.sfo01.rainpole.local",
"gateway": "172.16.11.253",
"subnetMask": "255.255.255.0"
},
"licenseKey": "AAAAA-AAAAA-AAAAA-AAAAA-AAAAA",
"rootPassword": "vcenter_server_appliance_root_password",
"datacenterName": "sfo01-dc01"
},
"computeSpec": {
"clusterSpecs": [
{
"name": "sfo01-w01-c01",
"hostSpecs": [
{
"id": "80b1397e-97a5-4cac-a64a-6902d12adaf3",
"licenseKey": "BBBBB-BBBBB-BBBBB-BBBBB-BBBBB",
"hostNetworkSpec": {
"vmNics": [
{
"id": "vmnic0",
"vdsName": "sfo01-w01-c01-vds01"
},
{
"id": "vmnic1",
"vdsName": "sfo01-w01-c01-vds01"
},
{
"id": "vmnic2",
"moveToNvds": true
},
{
"id": "vmnic3",


"moveToNvds": true
}
]
}
},
{
"id": "58dcfb1c-ee55-40cb-888d-e458903fb102",
"licenseKey": "BBBBB-BBBBB-BBBBB-BBBBB-BBBBB",
"hostNetworkSpec": {
"vmNics": [
{
"id": "vmnic0",
"vdsName": "sfo01-w01-c01-vds01"
},
{
"id": "vmnic1",
"vdsName": "sfo01-w01-c01-vds01"
},
{
"id": "vmnic2",
"moveToNvds": true
},
{
"id": "vmnic3",
"moveToNvds": true
}
]
}
},
{
"id": "fe848d6e-7156-41fc-aed3-24dbd0ee3a7b",
"licenseKey": "BBBBB-BBBBB-BBBBB-BBBBB-BBBBB",
"hostNetworkSpec": {
"vmNics": [
{
"id": "vmnic0",
"vdsName": "sfo01-w01-c01-vds01"
},
{
"id": "vmnic1",
"vdsName": "sfo01-w01-c01-vds01"
},
{
"id": "vmnic2",
"moveToNvds": true
},
{
"id": "vmnic3",
"moveToNvds": true
}
]
}
},
{
"id": "29b5b562-4f45-46ea-98a9-512cdd111ab7",


"licenseKey": "BBBBB-BBBBB-BBBBB-BBBBB-BBBBB",
"hostNetworkSpec": {
"vmNics": [
{
"id": "vmnic0",
"vdsName": "sfo01-w01-c01-vds01"
},
{
"id": "vmnic1",
"vdsName": "sfo01-w01-c01-vds01"
},
{
"id": "vmnic2",
"moveToNvds": true
},
{
"id": "vmnic3",
"moveToNvds": true
}
]
}
}
],
"datastoreSpec": {
"vsanDatastoreSpec": {
"failuresToTolerate": 1,
"licenseKey": "CCCCC-CCCCC-CCCCC-CCCCC-CCCCC",
"datastoreName": "sfo01-w01-c01-vsan01"
}
},
"networkSpec": {
"vdsSpecs": [
{
"name": "sfo01-w01-c01-vds01",
"portGroupSpecs": [
{
"name": "sfo01-w01-c01-vds01-mgmt",
"transportType": "MANAGEMENT"
},
{
"name": "sfo01-w01-c01-vds01-vsan",
"transportType": "VSAN"
},
{
"name": "sfo01-w01-c01-vds01-vmotion",
"transportType": "VMOTION"
}
]
}
],
"nsxClusterSpec": {
"nsxTClusterSpec": {
"geneveVlanId": 1634
}
}


}
}
]
},
"nsxTSpec": {
"nsxManagerSpecs": [
{
"name": "sfo01wnsx01a",
"networkDetailsSpec": {
"ipAddress": "172.16.11.82",
"dnsName": "sfo01wnsx01a.sfo01.rainpole.local",
"gateway": "172.16.11.253",
"subnetMask": "255.255.255.0"
}
},
{
"name": "sfo01wnsx01b",
"networkDetailsSpec": {
"ipAddress": "172.16.11.83",
"dnsName": "sfo01wnsx01b.sfo01.rainpole.local",
"gateway": "172.16.11.253",
"subnetMask": "255.255.255.0"
}
},
{
"name": "sfo01wnsx01c",
"networkDetailsSpec": {
"ipAddress": "172.16.11.84",
"dnsName": "sfo01wnsx01c.sfo01.rainpole.local",
"gateway": "172.16.11.253",
"subnetMask": "255.255.255.0"
}
}
],
"vip": "172.16.11.81",
"vipFqdn": "sfo01wnsx01.sfo01.rainpole.local",
"licenseKey": "DDDDD-DDDDD-DDDDD-DDDDD-DDDDD",
"nsxManagerAdminPassword": "nsxt_manager_admin_password"
}
}
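The four hostSpecs entries in the specification above differ only in the host GUID, so they can be generated instead of written by hand. The following Python sketch builds the same structure, mapping vmnic0 and vmnic1 to the vSphere Distributed Switch and marking vmnic2 and vmnic3 with moveToNvds, exactly as in the example; the GUIDs, license key, and switch name are the example values from this section.

# Build the hostSpecs list for the NSX-T workload domain specification.
HOST_GUIDS = [
    "80b1397e-97a5-4cac-a64a-6902d12adaf3",
    "58dcfb1c-ee55-40cb-888d-e458903fb102",
    "fe848d6e-7156-41fc-aed3-24dbd0ee3a7b",
    "29b5b562-4f45-46ea-98a9-512cdd111ab7",
]
ESXI_LICENSE = "BBBBB-BBBBB-BBBBB-BBBBB-BBBBB"
VDS_NAME = "sfo01-w01-c01-vds01"

host_specs = []
for guid in HOST_GUIDS:
    host_specs.append({
        "id": guid,
        "licenseKey": ESXI_LICENSE,
        "hostNetworkSpec": {
            "vmNics": [
                # The first two physical NICs stay on the vSphere Distributed Switch.
                {"id": "vmnic0", "vdsName": VDS_NAME},
                {"id": "vmnic1", "vdsName": VDS_NAME},
                # The remaining NICs are handed to the N-VDS for overlay traffic.
                {"id": "vmnic2", "moveToNvds": True},
                {"id": "vmnic3", "moveToNvds": True},
            ]
        },
    })

# host_specs can now be placed under computeSpec.clusterSpecs[0].hostSpecs in the JSON specification.
print(host_specs)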

5 By using the JSON specification, add the workload domain to VMware Cloud Foundation by
sending this request.

POST https://sfo01m01sddc01.sfo01.rainpole.local/v1/domains HTTP/1.1


Authorization: Basic admin rest_api_admin_password
Content-Type: application/json

6 If the task fails, retry running it from the user interface of SDDC Manager.

Add a Cluster to a Workload Domain with NSX-T Data Center


By using the VMware Cloud Foundation API, you can create an example cluster of hosts, each having four physical NICs, and assign these NICs to a vSphere Distributed Switch and an N-VDS.


In this cluster example, for traffic separation, in a second cluster of the example workload
domain, one vSphere Distributed Switch handles the system traffic for vSphere management,
vSphere vMotion, and vSAN and one N-VDS handles the workload traffic, for example, NSX-T
overlay. You assign a pair of physical NICs to each switch.

Table 4-5. Example Workload Domain Cluster Specification

SDDC Manager FQDN: sfo01m01sddc01.sfo01.rainpole.local
Number of hosts: 4
Number of physical NICs per host: 4
Workload domain name: sfo01-w01
vSphere Distributed Switch configuration:
    vSphere Distributed Switch instances: sfo01-w01-c02-vds01
    vmnic configuration for vSphere Distributed Switch sfo01-w01-c02-vds01: vmnic0, vmnic1
    NIC teaming policy for vSphere Distributed Switch sfo01-w01-c02-vds01: Route based on physical NIC load (default)
    Distributed port groups for vSphere Distributed Switch sfo01-w01-c02-vds01: sfo01-w01-c02-vds01-mgmt, sfo01-w01-c02-vds01-vsan, sfo01-w01-c02-vds01-vmotion
N-VDS configuration:
    N-VDS instance: Auto-generated name
    vmnic configuration for N-VDS: vmnic2, vmnic3
    Uplink Profile Teaming Policy: Load Balance Source, that is, load balancing between uplink-1 and uplink-2 based on the source port ID
    Traffic Segments: Overlay
Storage type: vSAN

Procedure

1 Send a query for unassigned hosts to SDDC Manager by using this request.

GET https://sfo01m01sddc01.sfo01.rainpole.local/v1/hosts?status=UNASSIGNED_USEABLE HTTP/1.1


Authorization:Basic admin rest_api_admin_password

2 In the response, locate the fqdn parameter for the target hosts.

3 Write down the GUID of each host object for the cluster from the id field.

4 Send a query for workload domains to SDDC Manager by using this request.

GET https://sfo01m01sddc01.sfo01.rainpole.local/inventory/domains HTTP/1.1


Authorization: Basic admin rest_api_admin_password


5 In the response, locate the name parameter for the target workload domain.

6 Write down the GUID of the workload domain from the id property.

7 Prepare a cluster specification in JSON format according to the requirements of your environment and send it for validation to SDDC Manager.

Most of the parameters are self-explanatory.

domainId: GUID from Step 6.

computeSpec.clusterSpecs.hostSpecs.id: GUIDs from Step 3.

computeSpec.clusterSpecs.hostSpecs.hostNetworkSpec.vmNics.id: Physical NIC to use, for example, vmnic0.

POST https://sfo01m01sddc01.sfo01.rainpole.local/v1/domains/validations/creations HTTP/1.1


Authorization:Basic admin rest_api_admin_password
Content-Type:application/json

{
"computeSpec": {
"clusterSpecs": [
{
"advancedOptions": {
"evcMode": "",
"highAvailability": {
"enabled": true
}
},
"datastoreSpec": {
"vsanDatastoreSpec": {
"datastoreName": "sfo01-w01-c02-vsan01",
"dedupAndCompressionEnabled": false,
"failuresToTolerate": 1,
"licenseKey": "CCCCC-CCCCC-CCCCC-CCCCC-CCCCC"
}
},
"hostSpecs": [
{
"id": "45f035db-03ab-4b86-9406-545dda930541",
"licenseKey": "BBBBB-BBBBB-BBBBB-BBBBB-BBBBB",
"hostNetworkSpec": {
"vmNics": [
{
"id": "vmnic0",
"vdsName": "sfo01-w01-c02-vds01"
},
{
"id": "vmnic1",
"vdsName": "sfo01-w01-c02-vds01"
},
{
"id": "vmnic2",


"moveToNvds": true
},
{
"id": "vmnic3",
"moveToNvds": true
}
]
}
},
{
"id": "d2da8319-b222-43c0-8eb0-bb30c163e1e6",
"licenseKey": "BBBBB-BBBBB-BBBBB-BBBBB-BBBBB",
"hostNetworkSpec": {
"vmNics": [
{
"id": "vmnic0",
"vdsName": "sfo01-w01-c02-vds01"
},
{
"id": "vmnic1",
"vdsName": "sfo01-w01-c02-vds01"
},
{
"id": "vmnic2",
"moveToNvds": true
},
{
"id": "vmnic3",
"moveToNvds": true
}
]
}
},
{
"id": "0db2cbd1-679c-47e0-9759-b2651d1c370f",
"licenseKey": "BBBBB-BBBBB-BBBBB-BBBBB-BBBBB",
"hostNetworkSpec": {
"vmNics": [
{
"id": "vmnic0",
"vdsName": "sfo01-w01-c02-vds01"
},
{
"id": "vmnic1",
"vdsName": "sfo01-w01-c02-vds01"
},
{
"id": "vmnic2",
"moveToNvds": true
},
{
"id": "vmnic3",
"moveToNvds": true
}
]


}
},
{
"id": "5d643de7-57d5-4a21-ac34-31f0537fba3d",
"licenseKey": "BBBBB-BBBBB-BBBBB-BBBBB-BBBBB",
"hostNetworkSpec": {
"vmNics": [
{
"id": "vmnic0",
"vdsName": "sfo01-w01-c02-vds01"
},
{
"id": "vmnic1",
"vdsName": "sfo01-w01-c02-vds01"
},
{
"id": "vmnic2",
"moveToNvds": true
},
{
"id": "vmnic3",
"moveToNvds": true
}
]
}
}
],
"name": "sfo01-w01-c02",
"networkSpec": {
"nsxClusterSpec": {
"nsxTClusterSpec": {
"geneveVlanId": 1644
}
},
"vdsSpecs": [
{
"name": "sfo01-w01-c02-vds01",
"portGroupSpecs": [
{
"name": "sfo01-w01-c02-vds01-mgmt",
"transportType": "MANAGEMENT"
},
{
"name": "sfo01-w01-c02-vds01-vsan",
"transportType": "VSAN"
},
{
"name": "sfo01-w01-c02-vds01-vmotion",
"transportType": "VMOTION"
}
]
}
]
}
}


]
},
"domainId": "sfo01-w01"
}

8 By using the JSON specification, add the cluster to the workload domain in VMware Cloud
Foundation by sending this request.

POST https://sfo01m01sddc01.sfo01.rainpole.local/v1/clusters HTTP/1.1


Authorization: Basic admin rest_api_admin_password
Content-Type: application/json
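The POST in Step 8 starts a long-running workflow. If you want to monitor it from a script rather than from the SDDC Manager user interface, the following Python sketch is an assumption-laden example: it assumes the creation response contains a task id and that the GET /v1/tasks/{id} endpoint and the IN_PROGRESS status value from the VMware Cloud Foundation API reference are available in your release, so verify both against the API documentation before relying on it.

import time
import requests

SDDC_MANAGER = "https://sfo01m01sddc01.sfo01.rainpole.local"
session = requests.Session()
session.auth = ("admin", "rest_api_admin_password")  # placeholder credentials
session.verify = False  # lab environments with self-signed certificates only

# Assumption: the cluster creation POST returned a JSON body that includes a task "id".
TASK_ID = "task-guid-from-the-creation-response"  # placeholder

# Poll the task until it reports something other than an in-progress status.
while True:
    task = session.get(SDDC_MANAGER + "/v1/tasks/" + TASK_ID)
    task.raise_for_status()
    status = task.json().get("status")
    print("Task status:", status)
    if status not in ("IN_PROGRESS", "PENDING"):
        break
    time.sleep(60)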

9 If the task fails, retry running it from the user interface of SDDC Manager.

Add a Host to a Cluster in a Workload Domain with NSX-T Data Center

By using the VMware Cloud Foundation API, you can add an example host that has four physical NICs, distributing the NICs to a vSphere Distributed Switch and an N-VDS.

In this cluster example, a host is added to the second cluster of the example workload domain.

Table 4-6. Example Workload Domain Host Specification

SDDC Manager FQDN: sfo01m01sddc01.sfo01.rainpole.local
Number of physical NICs per host: 4
Workload domain name: sfo01-w01
vSphere Distributed Switch configuration:
    vSphere Distributed Switch instance: sfo01-w01-c02-vds01
    vmnic configuration for vSphere Distributed Switch sfo01-w01-c02-vds01: vmnic0, vmnic1
N-VDS configuration:
    N-VDS instance: Auto-generated name
    vmnic configuration for N-VDS: vmnic2, vmnic3

Procedure

1 Send a query for unassigned hosts to SDDC Manager by using this request.

GET https://sfo01m01sddc01.sfo01.rainpole.local/v1/hosts?status=UNASSIGNED_USEABLE HTTP/1.1


Authorization:Basic admin rest_api_admin_password

2 In the response, locate the fqdn parameter for the target hosts.

3 Write down the GUID of the host object for the cluster from the id field.

4 Send a query for clusters to SDDC Manager by sending this request.

GET https://sfo01m01sddc01.sfo01.rainpole.local/v1/clusters HTTP/1.1


Authorization:Basic admin rest_api_admin_password


5 In the response, locate the name parameter for the target cluster.

6 Write down the GUID from the id parameter.
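Steps 4 to 6 can be scripted in the same way as the domain lookup earlier in this chapter. The sketch below calls the /v1/clusters endpoint from Step 4 and picks the GUID by cluster name; it is a non-authoritative example that assumes the response wraps the clusters in an elements list, as described in the VMware Cloud Foundation API reference, and the cluster name is the example value from the previous section.

import requests

SDDC_MANAGER = "https://sfo01m01sddc01.sfo01.rainpole.local"
session = requests.Session()
session.auth = ("admin", "rest_api_admin_password")  # placeholder credentials
session.verify = False  # lab environments with self-signed certificates only

# Step 4: list the clusters known to SDDC Manager.
response = session.get(SDDC_MANAGER + "/v1/clusters")
response.raise_for_status()

# Steps 5 and 6: locate the target cluster by name and keep its GUID for the URL in Step 7.
clusters = response.json().get("elements", [])
target = next(c for c in clusters if c.get("name") == "sfo01-w01-c02")  # example cluster name
print("cluster id:", target["id"])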

7 Prepare a host specification in JSON format according to the requirements of your environment and send it for validation to SDDC Manager.

The URL contains the cluster GUID from Step 6.

Most of the parameters are self-explanatory.

clusterUpdateSpec.clusterExpansionSpec.hostSpecs.hostNetworkSpec.id: GUID from Step 3.

clusterUpdateSpec.clusterExpansionSpec.hostSpecs.hostNetworkSpec.license: ESXi license key.

clusterUpdateSpec.clusterExpansionSpec.hostSpecs.hostNetworkSpec.vmNics.id: Physical NIC to use, for example, vmnic0.

POST https://sfo01m01sddc01.sfo01.rainpole.local/v1/clusters/cluster_id1/validations/updates HTTP/1.1

Authorization: Basic admin rest_api_admin_password
Content-Type: application/json

{
"clusterUpdateSpec": {
"clusterExpansionSpec": {
"hostSpecs": [
{
"hostNetworkSpec": {
"vmNics": [
{
"id": "vmnic0",
"vdsName": "sfo01-w01-c02-vds01"
},
{
"id": "vmnic1",
"vdsName": "sfo01-w01-c02-vds01"
},
{
"id": "vmnic2",
"moveToNvds": true
},
{
"id": "vmnic3",
"moveToNvds": true
}
],
"id": "8fdb3f46-ef65-46d8-8be3-e9ab484f79ca",
"licensekey": "AAAAA-AAAAA-AAAAA-AAAAA-AAAAA"
}
}


]
}
}
}

8 By using the JSON specification, add the host to the cluster by sending this request.

PATCH https://sfo01m01sddc01.sfo01.rainpole.local/v1/clusters/cluster_id HTTP/1.1


Authorization: Basic admin rest_api_admin_password
Content-Type: application/json

9 If the task fails, retry running it from the user interface of SDDC Manager.
