Cisco DCUCI v4.0 Student Guide Volume 2
Student Guide
Text Part Number: 97-3021-0
DISCLAIMER WARRANTY: THIS CONTENT IS BEING PROVIDED AS IS. CISCO MAKES AND YOU RECEIVE NO WARRANTIES IN
CONNECTION WITH THE CONTENT PROVIDED HEREUNDER, EXPRESS, IMPLIED, STATUTORY OR IN ANY OTHER PROVISION OF
THIS CONTENT OR COMMUNICATION BETWEEN CISCO AND YOU. CISCO SPECIFICALLY DISCLAIMS ALL IMPLIED
WARRANTIES, INCLUDING WARRANTIES OF MERCHANTABILITY, NON-INFRINGEMENT AND FITNESS FOR A PARTICULAR
PURPOSE, OR ARISING FROM A COURSE OF DEALING, USAGE OR TRADE PRACTICE. This learning product may contain early release
content, and while Cisco believes it to be accurate, it falls subject to the disclaimer above.
Table of Contents
Volume 2
Server Resources Implementation
Overview
Module Objectives
Overview
Objectives
Service Profile Templates
Creating a Service Profile Template
Name the New Template
Template Types
Apply UUID Pool
Apply WWNN Pool
Create vHBA for Fabric A
Create vHBA for Fabric B
vHBA Templates
Create vNIC for Fabric A
Create vNIC for Fabric B
vNIC Templates
Boot Order and Boot Target
Template Server Assignment
IPMI and SoL Policy Assignment
BIOS and Scrub Policy Assignment
Modify Template
Creating Differentiated Service Profile Templates
Automating Creation of a Server Farm Using Service Profile Templates
Creating Service Profiles from Template
Select Prefix and the Number of Profiles
Describe the Hidden Pitfalls When Using Updating Templates
Updating Template Issues to Consider
Updating Template Warning
Unbind a Service Profile from Its Template
Bind a Service Profile to a Template
Cloning a Service Profile
Service Profile Cloning
Clone Destination
Summary
Overview
Objectives
Associating and Disassociating a Service Profile to a Server Blade
Associate a Service Profile with a Compute Node
Associate a Service Profile with a Server Pool
Observe FSM Status During Service Profile Association
What Happens During Service Profile Association?
Cisco UCS Utility Operating System
Disassociate a Service Profile from a Compute Node
FSM Status During Service Profile Disassociation
Changes to a Service Profile that Trigger a Cisco UCS Utility Operating System Update
Planning the Organization Where a Service Profile Is Created
Creating Service Profiles in the Correct Organization
Creating Service Profiles in the Wrong Organization
Moving a Service Profile to a New Server Blade in the Event of Hardware Failure
A Compute Node Hardware Has Failed
Automatic Service Profile Reassociation
Summary
Module Summary
References
Module Self-Check
Module Self-Check Answer Key
Overview
Module Objectives
Overview
Objectives
Cisco Virtual Switching Overview
Evolution of Virtual Networking: Before Virtualization
Evolution of Virtual Networking: Virtual Switches
vNetwork Distributed Switch
VMware vNetwork Evolution
Distributed Virtual Networking
Virtual Switch Options with vSphere 4
Cisco Nexus 1000V Virtual Switching Feature Overview
Cisco Nexus 1000V Features
VM View of Resources
VM Transparency
Scaling Server Virtualization
VN-Link Brings VM-Level Granularity
VN-Link with the Cisco Nexus 1000V
Summary
Overview
Objectives
VMware vDS
vDS Configuration
Distributed Virtual Switching
Virtual Network Configuration
vDS Enhancements
vSwitch and vDS
Cisco Nexus 1000V DVS
Cisco Nexus 1000V Components
Cisco Nexus 1000V: Single Chassis Management
Cisco Nexus 1000V: VSM Deployment Options
Cisco Nexus 1000V: VSM High-Availability Options
Cisco Nexus 1000V Communication: Extending the Backplane
VSM and VEM Communication: Layer 2 Connectivity
VSM and VEM Communication: Layer 3 Connectivity
VSM and VEM Communication: Important Considerations for Layer 3 Control
Cisco Nexus 1000V Component: vCenter Communication
Cisco Nexus 1000V: Domain ID
Cisco Nexus 1000V: Opaque Data
Cisco Nexus 1000V Administrator Roles
Standard VMware Administrator Roles
Cisco Nexus 1000V Administrator Roles
Comparing VN-Link in Software and Hardware
VN-Link Packet Flow: Cisco Nexus 1000V and a Generic Adapter
VN-Link Products: Cisco UCS 6100 and VIC
VN-Link Deployment: VIC and Cisco UCS 6100 Series with VMware VMDirectPath
Overview
Objectives
Cisco Nexus 1000V Overview
Cisco Nexus 1000V Series DVS
Cisco Nexus 1000V
Managing Network Policies with the Cisco Nexus 1000V
Cisco Nexus 1000V Architectural Overview
Cisco Nexus 1000V Architecture
Cisco Nexus 1000V VLANs
Cisco Nexus 1000V Management VLAN
Cisco Nexus 1000V Control and Packet VLANs
Cisco Nexus 1000V Configuration Example
VEM-to-VSM Communication
Cisco Nexus 1000V Opaque Data
Policy-Based VM Connectivity
Mobility of Security and Network Properties
Summary
Overview
Objectives
Configure VSM vSwitch Networks
Preparing the ESX Servers
VSM Port Group Requirements
VSM Port Group Creation
VSM vSwitch Configuration Showing Port Groups
Install the VSM on a VM
Cisco Nexus 1000V VSM Installation Methods
Creating a VSM VM: Choose VM Configuration
Creating a VM: Name the VM and Inventory Location
Creating a VM: Choose Shared Storage for VSM VM Files
Creating a VM: Choose VSM Guest Operating System
Creating a VM: Create a VSM Local Disk
Creating a VM: Review VSM Options
Creating a VM: Adjust the VSM Memory Size
Creating a VM: Add the VSM Port Groups
Creating a VM: Add the Port Group Adapters
Creating a VM: Choose Adapter Driver
Creating a VM: Review Adapter Options
Verify Port Group Configuration
Creating a VM: Choose .iso Boot File
Initial VSM Configuration
Access the VSM Console
Initial Setup
Configure the VSM-to-vCenter Connection
Install and Register the Plug-In for the New VSM
VSM Plug-In
Install the Plug-In for the New VSM
Verify Connectivity
Configure VSM Connectivity
Verify VSM Connectivity
Cisco Nexus 1000V High-Availability Configuration
Deploy the Secondary VSM
Cisco Nexus 1000V High Availability
Supervisor Modes
Module 6
Server Resources Implementation
Overview
Stateless computing and unified fabric are two of the cornerstone value propositions of Cisco
Unified Computing System (UCS). Now that physical connectivity and administrative and
operational procedures are in place, the focus of this module is logical connectivity. A blade
server in the Cisco UCS B-Series is merely a compute node. To establish LAN and SAN
connectivity, a universally unique identifier (UUID), MAC addresses, BIOS settings, and various
other policies, a service profile must be created to contain all of these elements. The great
benefit of abstracting policy settings and identities is that they are portable. If a blade server
fails in such a way that the operating system or hypervisor can no longer operate, the service
profile can simply be associated with the replacement blade. All of the elements of that server
are transferred.
Module Objectives
Upon completing this module, you will be able to implement a stateless computing architecture.
This ability includes being able to meet these objectives:
Lesson 1
Objectives
Upon completing this lesson, you will be able to configure identity and resource pools used by
service profiles in service profile templates. This ability includes being able to meet these
objectives:
(Figure: WWN, UUID, and MAC identity pools)
Stateless computing requires unique identity resources for universally unique identifiers
(UUIDs), MAC addresses, and world wide names (WWNs) for Fibre Channel. Using pooled
resources ensures consistent application of policy and reasonable assurances that identities are
unique within the Cisco UCS Manager.
Logical resource pools provide abstracted identities that are used by service profiles in service
profile templates to facilitate stateless computing.
Physical resource pools are used to create groupings of blade servers that are based on arbitrary
administrative criteria. These pools can be used with service profile templates to provide rapid
provisioning of compute resources.
UUID Pools
This topic discusses the configuration of UUID pools.
UUID Use
UUIDs are essentially standardized serial numbers that
identify a particular server.
Traditional servers have a hardware UUID stored in the
system BIOS.
Operating systems and software licensing schemes may use
the UUID to detect if they have been moved between
physical servers.
Cisco UCS allows for the manual or automatic assignment of
UUIDs to enhance mobility of operating systems and
applications.
UUIDs are designed as globally unique identifiers for each compute node on a network. UUIDs
are used in a number of different ways. In the context of Cisco UCS, the UUID refers to a 128-bit identifier coded into the compute node BIOS.
Operating systems, hypervisors, and applications can leverage the UUID for processes like
activation, internal disk labels, and so on. Some applications may use the UUID as an internal
root value propagated very tightly within data structures. Therefore, UUIDs should be locally
administered in the service profile instead of derived from the BIOS. UUIDs within a service
profile are mobile. If the underlying compute node fails, the service profile carries the UUID to
the replacement compute node, eliminating the need for potentially time-consuming search-and-replace operations.
UUID Format
UUIDs are globally unique 128-bit numbers.
Many schemes exist to define or generate the UUID.
Cisco UCS Manager uses a configurable 64-bit prefix and
allows you to specify a range of suffixes for use by compute
nodes.
It is recommended that prefixes be set to the same 24-bit
OUI as used in WWN pools, and pad as necessary.
There are many schemas for deploying and formatting UUIDs. It is the responsibility of the Cisco UCS administrator to determine what values to encode in the UUID prefix and suffix.
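The following minimal Python sketch (illustrative only, not UCS Manager code) shows the prefix/suffix construction just described; the prefix value is a made-up example, and the starting suffix mirrors the 0x0718 example later in this topic:

PREFIX = "1b4e28ba-2fa1-11d2"            # hypothetical fixed 64-bit (8-byte) prefix

def make_uuid_pool(first_suffix: int, size: int) -> list[str]:
    """Combine the fixed prefix with sequential 64-bit suffixes."""
    pool = []
    for n in range(first_suffix, first_suffix + size):
        s = f"{n:016x}"                  # 16 hex digits = 8 bytes
        pool.append(f"{PREFIX}-{s[:4]}-{s[4:]}")
    return pool

# Preallocate 320 suffixes, the current maximum number of compute nodes.
pool = make_uuid_pool(0x0718_0000_0000_0000, 320)
print(pool[0], pool[-1])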
To create a UUID pool, navigate to the Servers tab in the navigation pane. Navigate to Pools
and the organization in which the pool should be created. Right-click UUID Suffix Pools and
choose Create UUID Suffix Pool.
Assign a name and optional description for the pool. There are two choices for creating the
UUID prefix. The prefix represents the first 8 bytes of the 16-byte value. If you select Derived,
Cisco UCS Manager supplies the prefix. If you select Other, Cisco UCS Manager will prompt you to supply the first 64 bits (8 bytes) of the UUID.
Click Add to create a starting point for the 64-bit UUID suffix. In the example, the first two bytes of the suffix have been changed to 0x0718. The current maximum number of compute
nodes in Cisco UCS is 320. The designer has decided to preallocate all of the UUIDs that can
be used.
Note
It is a best practice to only preallocate the number of identities in a given pool that are based
on current and near-term forecast. Every identity resource that is allocated is a managed
object in the Cisco UCS Manager database.
UUID Pool
UUIDs are now available for assignment.
A pool of 320 UUIDs was created using the derived prefix that is combined with the 320
defined suffixes. They are immediately available for consumption by service profiles or service
profile templates.
MAC Pools
This topic discusses the configuration of MAC address pools.
A MAC pool consists of a range of MAC addresses. Create the MAC pool and assign a name. MAC pools make Cisco UCS administration easier when scaling service profile deployments, because they prompt stakeholders to define a set of MAC addresses before actual deployment.
To create a MAC pool, navigate to the LAN tab in the navigation pane. Click the organization
that the pool should be created beneath. Click the Create MAC Pool link to start the wizard.
Provide the MAC pool with a unique name and, optionally, a description. Click Next and
decide how many MAC addresses should be created in the pool. Cisco provides a three-byte
Organizationally Unique Identifier (OUI) assigned by the IEEE. It is recommended that you do
not modify the prefix.
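As an illustration of how a pool expands a prefix into a block of addresses, here is a small Python sketch (not UCS Manager code) that combines a three-byte Cisco OUI with sequential three-byte suffixes; the 00:25:B5 OUI is the one shown in the WWN examples in this module:

CISCO_OUI = (0x00, 0x25, 0xB5)           # three-byte IEEE OUI prefix

def make_mac_pool(first_suffix: int, size: int) -> list[str]:
    pool = []
    for n in range(first_suffix, first_suffix + size):
        suffix = ((n >> 16) & 0xFF, (n >> 8) & 0xFF, n & 0xFF)
        pool.append(":".join(f"{b:02X}" for b in CISCO_OUI + suffix))
    return pool

# 128 addresses starting at 00:25:B5:00:00:00
for mac in make_mac_pool(0x000000, 128)[:3]:
    print(mac)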
When the MAC pool has been created, there is an opportunity to verify the addresses and go
back to the previous window if a mistake has been noticed. Otherwise, click Finish to complete
the wizard.
A MAC address pool was added beneath the Americas organization. These addresses are
immediately available for assignment.
WWNN Pools
This topic discusses the creation and configuration of world wide node names (WWNNs).
WWN Format
WWNs are 64-bit addresses
Extended format
2X:XX:YY:YY:YY:ZZ:ZZ:ZZ
Example: 20:00:00:25:B5:20:20:00
X = organizationally assigned
YY:YY:YY = OUI
ZZ:ZZ:ZZ = organizationally assigned
WWNs are 64-bit addresses that have many possible formats. The example that is shown is for
reference only, as Cisco UCS Manager enforces a specific format in all WWN pools.
A WWN pool can include only WWNNs or WWPNs in the ranges from
20:00:00:00:00:00:00:00 to 20:FF:FF:FF:FF:FF:FF:FF or from 50:00:00:00:00:00:00:00 to
5F:FF:FF:FF:FF:FF:FF:FF. All other WWN ranges are reserved.
To ensure the uniqueness of the Cisco UCS WWNNs and WWPNs in the SAN fabric, you
should use the following WWN prefix for all blocks in a pool:
20:00:00:25:B5:XX:XX:XX
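A Python sketch (illustrative, not UCS Manager code) that generates WWPNs under this recommended prefix and checks each value against the allowed pool ranges listed above:

def in_allowed_range(wwn: int) -> bool:
    # 20:00... through 20:FF..., or 50:00... through 5F:FF...; all else reserved
    return (0x2000_0000_0000_0000 <= wwn <= 0x20FF_FFFF_FFFF_FFFF or
            0x5000_0000_0000_0000 <= wwn <= 0x5FFF_FFFF_FFFF_FFFF)

def make_wwn_pool(first_suffix: int, size: int) -> list[str]:
    base = 0x2000_0025_B500_0000         # 20:00:00:25:B5:00:00:00
    pool = []
    for n in range(first_suffix, first_suffix + size):
        wwn = base + n
        assert in_allowed_range(wwn)
        raw = f"{wwn:016X}"
        pool.append(":".join(raw[i:i + 2] for i in range(0, 16, 2)))
    return pool

# 640 WWPNs covers two vHBAs on each of 320 service profiles
print(make_wwn_pool(0, 640)[0])          # -> 20:00:00:25:B5:00:00:00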
Cisco UCS Manager enforces the use of WWNs that begin with 20. All WWN pools must begin with that value; you define the remaining octets. In keeping with the global
standards set for WWNs, it is recommended that a locally administered OUI be selected and
used as the third through fifth octets. Additionally, it is useful if the WWNN and world wide
port name (WWPN) values are easily distinguishable. Because the second octet of a WWN can
be organizationally assigned, that octet might be used to encode meaning for the WWNN or
WWPN. This convention and address block should be agreed upon by all stakeholders in the
initial implementation phase of a Cisco UCS deployment.
To create a WWNN pool, navigate to the SAN tab in the navigation pane. Click the
organization that the pool should be created beneath. Click Create WWNN Pool to start the
wizard.
Cisco supplies the first five bytes of the WWNN prefix by combining 20:00 with one of the three-byte OUIs provided by Cisco. One WWNN is used by each service profile. In the example that is shown, the administrator has decided to preallocate 320 addresses.
After the WWNN pool has been created, there is an opportunity to verify the addresses and go
back to the previous window if you notice a mistake. Otherwise, click Finish to complete the
wizard.
A WWNN pool was added beneath the Americas organization. These addresses are
immediately available for assignment.
WWPN Pools
This topic discusses the creation of WWPN pools.
To create a WWPN pool, navigate to the SAN tab in the navigation pane. Click the
organization that the pool should be created beneath. Click Create WWPN Pool to start the
wizard.
Cisco supplies the first five bytes of the WWPN prefix by combining 20:00 with one of the three-byte OUIs provided by Cisco. One WWPN is required for each virtual host bus adapter (HBA). In this example, the administrator decided to preallocate 640 addresses: two vHBAs for each of 320 service profiles.
After the WWPN pool has been created, there is an opportunity to verify the addresses and go
back to the previous window if you notice a mistake. Otherwise, click Finish to complete the
wizard.
A WWPN pool was added beneath the Americas organization. These addresses are
immediately available for assignment.
Server Pools
This topic discusses the creation of server pools.
Server Pools
Server pools can be manually populated or auto-populated.
A blade server can be in multiple pools at the same time.
Associate a service profile with a pool:
A compute node is selected automatically from the pool.
Cisco UCS Manager will only select a blade server not yet
associated with another logical server and not in the
process of being disassociated.
(Figure: server pools Dev and QA)
Server pools are configured under the Servers tab in the navigation pane. To create a new server
pool, right-click Server Pools and select Create Server Pool, or click the plus sign.
Enter a unique name and, optionally, a description for the new server pool and click Next.
Use the mouse to populate the new server pool. Hold down the shift key to select a range of
servers. Click the >> button to move the selected servers into the pool.
Verify that the desired servers are members of the new pool. Click Finish to complete the
wizard.
The content pane displays the IDs of the servers added to the pool. It includes information
about whether the server has been assigned. If assigned, a link to the service profile of the
server will display in the Assigned To column.
Specify qualifications that will be used for matching specific blade servers.
Specify server pool policies, which will put every blade server that matches a particular
qualification into a particular server pool.
Note
Server pool auto-population only happens as individual blade servers are discovered. If you want auto-population to apply to existing compute nodes, they must be reacknowledged.
Create an empty server pool where servers matching a qualification criteria will be placed.
In the Servers tab of the navigation pane, expand the Policies member of the navigation tree.
Right-click Server Pool Policy Qualifications and select Create Server Pool Policy Qualification.
In the Actions section of the policy wizard, there is a list of six categories that can be used for
selection criteria. The name SAP_CRM_POOL is assigned to this policy. In the example, requirements for CPU, memory, and mezzanine card are specified.
The next step is to create a server pool policy. The purpose of this policy is to map a
qualification policy to the empty pool that was created earlier.
The example shows that two servers were automatically added to the pool SAP_Pool. As was
previously discussed, servers that have already been discovered will not automatically be
matched against a qualification policy. Both servers in the example were reacknowledged.
After discovery was completed, they were added to the SAP_Pool server pool. A new blade
server that is inserted into a chassis will be evaluated by all qualification policies that have been
configured.
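The matching logic is conceptually simple. The following Python sketch uses a hypothetical data model (the Blade and Qualification classes and their fields are inventions for illustration, not the UCS object model) to show how a qualification policy places matching blades into a pool at discovery time:

from dataclasses import dataclass

@dataclass
class Blade:
    slot: int
    cpus: int
    memory_gb: int
    adapter: str

@dataclass
class Qualification:
    min_cpus: int
    min_memory_gb: int
    adapter: str

    def matches(self, blade: Blade) -> bool:
        return (blade.cpus >= self.min_cpus and
                blade.memory_gb >= self.min_memory_gb and
                blade.adapter == self.adapter)

sap_crm = Qualification(min_cpus=2, min_memory_gb=96, adapter="M81KR")
sap_pool: list[Blade] = []

for blade in [Blade(1, 2, 192, "M81KR"), Blade(2, 1, 48, "M71KR-Q")]:
    if sap_crm.matches(blade):           # evaluated at discovery/reacknowledgment
        sap_pool.append(blade)

print([b.slot for b in sap_pool])        # -> [1]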
Important lessons in IT are sometimes learned in the most difficult way possible. One such
example would be creating many service profiles, templates, policies, pools, and thresholds
under the root organization. If it is later decided to create suborganizations, it is not possible to
move any of the objects that are created under root to another organization. It is also not
possible to move an object from one nonroot organization to another. When you right-click on
a policy object or template, notice that there is no option for cut. There is the tantalizing
option of copy, but if one right-clicks on a different organization, it is apparent that there is
no option for paste.
The only way to remedy the situation is to delete and re-create every object in its appropriate
organization.
Summary
This topic summarizes the key points that were discussed in this lesson.
Identity and resource pools simplify the creation of mobile
profiles and help to ensure that policies are consistently applied.
UUID pools are created in the Server tab and are used to
uniquely identify each blade server.
MAC address pools are created in the LAN tab and are
consumed in the service profile by vNICs.
WWNN pools are created in the SAN tab and are consumed in
the service profile by virtual HBAs.
WWPN pools are created in the SAN tab and are consumed in
the service profile by virtual HBAs.
Server pools are created in the Servers tab and are consumed by
service profiles and service profile templates.
Servers can be automatically added to server pools during
discovery based on a set of qualification criteria.
It is extremely important to create policies, pools, and thresholds
in the correct organization, as they cannot be moved after they
are created.
Lesson 2
Objectives
Upon completing this lesson, you will be able to configure service profiles and service profile
templates. This ability includes being able to meet these objectives:
Configure an adapter policy to enable RSS and set the failback timer for fabric failover
Create a QoS system class and allow all Ethernet traffic to use jumbo frames up to an MTU
of 9216
Differentiate between the features available in the simple service profile wizard and the
expert wizard
Configure a vHBA for two fabrics and have the service profile take its assignment of
WWNNs and WWPNs from a pool
Configure a vNIC for two fabrics and have the service profile take its assignment of MAC
addresses from a pool
Differentiate between required components and optional components of the service profile
definition
(Figure: two service profiles, SAP_SJC and SAP_DFW)
The service profile represents a logical view of a server without any ties to a specific physical
device. The profile object contains all the elements of server function. This identity contains the
unique information for that server, including MAC, world wide port name (WWPN), UUID,
boot order, and so on. Each profile can only be associated with a single blade server at any
given time, and every blade server requires a unique service profile.
Service profiles facilitate server mobility. Mobility is the ability to transfer server identity
seamlessly between compute nodes in such a way that the underlying operating system or
hypervisor does not detect any change in server hardware.
In environments where blades are managed as traditional individual servers, service profiles are
still required. Service profiles provide LAN and SAN connectivity configuration. Configuring
service profiles in this way is similar to the need to configure individual LAN and SAN ports
for traditional rack servers.
In the Server tab of the navigation pane, expand the policies to locate BIOS Policies. Right-click on the organization in the content pane where the new policy is to be created.
In earlier Cisco UCS Manager versions, the only BIOS setting that could be modified was boot
order. Beginning in Cisco UCS Manager version 1.3, control of nearly every BIOS setting is
available as a policy that can be assigned to a service profile.
The BIOS policy requires a name. There are three options for each BIOS setting: disabled, enabled, and platform default (leave the hardware default in place).
The settings in the example shown in the figure enable Intel performance features of the CPU. A server associated with a service profile that references this BIOS policy can take complete and direct advantage of these CPU features.
The BIOS features shown include:
Turbo Boost: Allows the CPU to raise core frequencies above the base clock when power and thermal headroom permit.
Hyper Threading: Presents two logical processors per physical core to the operating system.
Virtualization Technology: Provides Intel VT-x hardware assists for hypervisors.
Intel Virtualization Technology for Directed I/O (VT-d) options can accelerate I/O operations
in virtualized environments where a physical interface is mapped directly to a virtual machine,
bypassing the hypervisor. VMware vSphere can benefit from enabling these options. Refer to
operating system or hypervisor documentation for guidance on the appropriate settings for
these options.
Click Finish in the wizard to save the new BIOS policy. The new policy is now available for
assignment to a service profile. When a service profile that references this policy is associated
to a blade server, there is no need for manual configuration of the BIOS at power-on self-test
(POST) time. This new capability can greatly simplify and accelerate the pace of server
provisioning.
In the Server tab of the navigation pane, expand Policies to locate Adapter Policies. Right-click
on the organization in the content pane where the new policy is to be created.
Receive-side scaling (RSS) relieves the single-server bottleneck that occurs in multicore CPU
systems. TCP packets that are received without RSS being enabled are only processed by a
single core. By enabling RSS, received packets are processed on all cores. RSS should
generally be enabled on any server with multicore CPUs. This adapter feature is disabled by
default.
The failback timer determines how long an adapter should wait to fail back to its original fabric
if a fabric failover event has occurred. As an example, if fabric interconnect A became
unavailable, servers would fail over to their backup connections on fabric interconnect B. The 2-second timer that is employed here would apply as soon as fabric interconnect A
becomes available.
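Conceptually, RSS spreads receive processing by hashing each flow and using the hash to select a core. A toy Python sketch of the idea (real adapters compute a Toeplitz hash in hardware, not CRC32; this only illustrates the flow-to-core mapping):

import zlib

NUM_CORES = 8

def rss_core(src_ip: str, src_port: int, dst_ip: str, dst_port: int) -> int:
    # Hash the flow 4-tuple so packets of one flow always land on one core,
    # while different flows spread across all cores.
    key = f"{src_ip}:{src_port}->{dst_ip}:{dst_port}".encode()
    return zlib.crc32(key) % NUM_CORES

print(rss_core("10.1.1.5", 49152, "10.1.1.9", 80))
print(rss_core("10.1.1.6", 49152, "10.1.1.9", 80))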
Click Finish in the wizard to save the new adapter policy. The new policy is now available for
assignment to a service profile.
You can create a quality of service (QoS) policy in two steps. To access a QoS system class,
the LAN Uplinks Manager must be opened.
From the Equipment tab of the navigation pane, select one of the fabric interconnects. In the
content pane, click the LAN Uplinks Manager link.
Click the QoS tab in the content pane to open the dialog box to access QoS and modify system
classes.
In the example, the Cisco UCS administrator has enabled the Gold QoS system class. In
addition, the administrator is configuring this as a member of the drop class and setting the
relative weighting to best-effort. These two parameters ensure that, from the perspective of
priority flow control and enhanced transmission selection, this traffic receives no special
handling. The goal of this policy is limited to enabling jumbo frame support for every virtual
network interface card (vNIC) that the policy is applied to.
Note
Disabling drop class and setting a weighting percentage other than best-effort will affect the
performance of service profiles that do not include the adapter policy that references this
system class.
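The relationship being configured can be summarized in a few lines of Python (a conceptual sketch with hypothetical names, not the UCS object model): a vNIC's QoS policy points at a system class, and jumbo frames only work end to end if the class MTU accommodates the vNIC MTU.

system_classes = {"gold": {"drop": True, "weight": "best-effort", "mtu": 9216}}

def vnic_mtu_ok(vnic_mtu: int, qos_class: str) -> bool:
    # A vNIC can use jumbo frames only up to the MTU of its system class.
    return vnic_mtu <= system_classes[qos_class]["mtu"]

print(vnic_mtu_ok(9000, "gold"))   # True: a typical host jumbo MTU fits in 9216
print(vnic_mtu_ok(9216, "gold"))   # True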
Select the LAN tab of the navigation pane and expand Policies to locate the QoS Policies item
in the tree. Select the organization where the policy will reside and click the Create QoS
Policy link.
Name the new policy and select the Gold QoS system class from the Priority drop-down list.
Because the goal of this policy is simply to enable jumbo frame support, leave the burst and
rate options at their defaults.
The Host Control option allows you to determine whether this QoS policy can be modified or
overridden by the administrator of the hypervisor or operating system. The default is None,
which acts as a lockout. Only a Cisco UCS administrator with sufficient privileges in this
organization can modify or delete a policy.
The QoS Policy Jumbo_9216 is now available to be assigned to a vNIC in a service profile.
Click the Server tab and select the organization where the new IPMI policy will be created.
Click the Create IPMI Profile link to start the policy wizard.
Name the new policy and create at least one user. The role can be either admin or read-only.
Read-only users may query the IPMI system that is provided by the Cisco Integrated
Management Controller for the status of any IPMI sensor. Admin users can access sensors and
additionally perform power control operations of the server.
The Oracle_RAC_IPMI policy is now available for assignment to a service profile. User
jcasalleto is the sole administrative user.
Note
The users that are created in IPMI policies do not count against the 40-user limit on the local
user authentication database.
Expand the Server tab in the navigation pane and locate Serial Over LAN policies. In the
content pane, right-click on the organization where the policy should be created.
Name the new SoL policy, set the administrative state to Enabled, and set the serial baud rate
so that the connection will communicate. SoL connections use UDP port 623.
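Once IPMI and SoL policies are associated, a management host can exercise them with the standard ipmitool utility. The sketch below wraps ipmitool in Python; the address and credentials are placeholders, and ipmitool must be installed on the management host:

import subprocess

CIMC = ["-I", "lanplus", "-H", "192.0.2.10", "-U", "jcasalleto", "-P", "secret"]

def ipmi(*args: str) -> None:
    # Runs: ipmitool -I lanplus -H <addr> -U <user> -P <pass> <args...>
    subprocess.run(["ipmitool", *CIMC, *args], check=True)

ipmi("sensor", "list")               # sensor query: the read-only role suffices
ipmi("chassis", "power", "status")   # power state query
ipmi("chassis", "power", "cycle")    # power control: requires the admin role
# ipmi("sol", "activate")            # opens the Serial over LAN console (UDP 623)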
Expand Policies in the Server tab and select the organization where the policy will be created.
Click Create Scrub Policy to begin the wizard.
Name the policy and, optionally, provide a description. Previous versions of Cisco UCS
Manager did not include the BIOS scrubbing capability of the new scrub policies. Cisco UCS
Manager version 1.3 introduces this new combined scrub policy.
When disk scrub is set to Yes, local disk drives will be completely erased upon disassociation
of a service profile.
When the BIOS settings scrub policy is set to Yes, all BIOS settings will revert to factory
defaults upon disassociation of a service profile.
The primary difference between the simple service profile wizard and the expert service profile
wizard is the scope of tools available to manipulate within the wizard. The simple wizard
provides a fast, single-page form for the rapid provisioning of a blade server using all derived
identity values. The expert service profile wizard allows for the granular configuration of
policies, identities, and thresholds.
The table summarizes the most important differences between the simple and expert service
profile wizards.
From the Server tab in the navigation pane, select the organization for which a new service
profile will be created. In the content pane, click the link for Create Service Profile (expert).
Next, name the service profile and, optionally, provide a description. The name must not
exceed 16 characters and may not include special characters or spaces.
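A quick sketch of that naming rule as a Python check (the exact allowed character set is an assumption based on the description; letters, digits, hyphen, and underscore are used here):

import re

def valid_profile_name(name: str) -> bool:
    # At most 16 characters; no spaces or other special characters.
    return re.fullmatch(r"[A-Za-z0-9_-]{1,16}", name) is not None

print(valid_profile_name("SAP_SJC_01"))      # True
print(valid_profile_name("web server #1"))   # False: space and '#'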
If you begin the service profile wizard and realize that you have forgotten to first create a
UUID pool, you can create the new pool from within the wizard. Some very useful capabilities
are available throughout the expert wizard for pooled values.
You can select the UUID pool directly from the drop-down list. After the assignment is made,
click Next to continue the wizard.
6-73
Configuration of vHBAs
This topic discusses the creation of virtual host bus adapters (vHBAs) that derive their identity
from pools.
Because the Americas_WWNN pool was defined and populated with values, it can be directly
referenced from the drop-down list. The WWNN refers to the mezzanine card and there is only
one WWNN per service profile unless there are two mezzanine cards that are populated in a
full-slot blade. Click Next to continue the wizard.
Click the plus sign to open a dialog box to create the vHBA for fabric A.
The vHBA requires a name, WWPN assignment, VSAN assignment, and fabric assignment.
The example creates a new vHBA named vHBA-A. It pulls its WWPN assignment from the Americas_WWPN pool, is a member of VSAN 11, and is associated with fabric A.
Repeat the steps for creating the vHBA on fabric A to create the vHBA on fabric B. Be certain
to select a different VSAN for fabric B. Click Next to continue the wizard.
Configuration of vNICs
This topic discusses the configuration of virtual network interface cards (vNICs) and the assignment of their identity from pools.
The expert wizard defaults to simple view on the opening page of networking configuration.
This view limits you to selecting derived MAC addresses and a single VLAN.
Note
The MK81-KR virtualization adapter does not have burned-in MAC addresses. Pooled or
manual address assignment is required on this mezzanine adapter.
Click the Expert radio button to reveal the complete suite of networking configuration tools
available within the expert wizard. Click the +Add button to open the dialog box and define a
new virtual network card.
This vNIC, named vNIC-A, will have its MAC address assigned from the Americas_MAC identity pool. Because the mezzanine card that this vNIC will be associated with is an MK81-KR, it is configured for hardware-based fabric failover.
A VLAN trunk with two tagged VLANs will be provided to the hypervisor.
The adapter and QoS policies enable RSS, reduced failback window, and jumbo frame support.
Repeat the process to create the vNIC for fabric B. It requires a unique name and it will be
assigned to fabric B. All other parameters will be identical to the vNIC for fabric A.
The vNIC summary window is used to validate the configuration of the newly created virtual
interface cards. Click Next to continue the wizard.
Note
The MAC address assignment indicates "derived" because the actual assignment of the
address will not occur until the service profile wizard has completed.
The Cisco UCS B250 and B440 full-slot blade servers include two slots for mezzanine cards.
Because a vNIC is a virtual definition of a network interface card, it could be placed on the
appropriate fabric on either of the mezzanine cards present in the full-slot server.
In a half-slot blade with a single mezzanine card, simply allow the system to select the only
mezzanine card. If manual control is desired, select Specify Manually from the Select
Placement drop-down list. vCon1 maps vNICs to the left mezzanine slot, and vCon2 maps a
vNIC to the right mezzanine slot (as viewed from the front panel of the blade server). Click
Next to continue in the wizard.
To select vHBAs to boot from a logical unit number (LUN), click and drag the first vHBA into
the boot order whitespace.
A pop-up window will appear with the name of the vHBA and the choice to make this the
primary or secondary boot device. Select Primary for the vHBA on fabric A, then click OK.
A pop-up window will appear with the name of the vHBA and the choice to make this the
primary or secondary boot device. Select Secondary for the vHBA on fabric B, then click OK.
Click Add SAN Boot Target below the vHBAs and select Add SAN Boot Target to SAN
Primary.
In the pop-up window, enter the boot LUN (always LUN 0 on Cisco UCS systems), the
WWPN of the boot target, and set the type to Primary.
6-89
Course acronym
DCUCIvx.x#-64
v4.06-64
Repeat the steps that are required to set the primary, but set the type to Secondary. In the event
that the primary boot device fails, the secondary device will attempt to boot the system from the
other vHBA.
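The resulting boot order can be pictured as a small data structure. This hypothetical Python sketch (the target WWPNs are made-up values) shows the two vHBAs, each pointing at LUN 0 on its own target, tried in order:

boot_policy = [
    {"device": "vHBA-A", "role": "primary",
     "target_wwpn": "50:06:01:60:44:60:28:CA", "lun": 0},
    {"device": "vHBA-B", "role": "secondary",
     "target_wwpn": "50:06:01:61:44:60:28:CA", "lun": 0},
]

for entry in boot_policy:   # the BIOS walks the list until one path boots
    print(f'{entry["role"]}: boot {entry["device"]} '
          f'from {entry["target_wwpn"]} LUN {entry["lun"]}')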
After boot devices are configured, the boot order summary window allows you to verify and
make modifications to the boot order before committing the configuration.
There are two checkboxes:
Reboot on Boot Order Change: Requires that the blade associated with the service profile
reboot immediately.
Enforce vNIC/vHBA Name: Means that the system uses any vNICs or vHBAs in the
order that is shown in the Boot Order table. If not checked, the system uses the priority that
is specified in the vNIC or vHBA.
Note
If the configuration of a vHBA is changed (other than the boot order), the system will
immediately reboot.
Server Assignment
Server assignment is one of the final configuration decisions within the expert service profile
wizard. The Cisco UCS administrator can simply finish the wizard and manually assign the
service profile at a later time, or manually select a server from the list of unassociated servers.
Service profiles can only be associated with a server that does not have a profile that is actively
associated with it.
From the Server Assignment drop-down list, select an available pool. Cisco UCS Manager will select an available server from that pool, remove it from all other pools of which it is currently a member, and associate the service profile with that blade. Click Next to continue the wizard.
Choose Select Existing Server from the Server Assignment drop-down list. Click the radio
button of an unassociated server.
The power state radio button allows you to choose the initial power state after the service
profile has been successfully associated with the blade server. If the SAN team has not
provisioned the boot LUN in time for the service profile, you should leave the power state set to Down.
Operational Policies is the last page of the wizard. Use the drop-down lists to select the IPMI
and SoL policies.
While still on the Operational Policies page of the wizard, expand the BIOS Configuration and
Scrub Policy subwindows. Use the drop-down lists to select the BIOS and scrub policies.
(Table: required and optional service profile elements — the UUID and MAC address are required; additional vNICs are optional)
The table summarizes required and optional elements of all service profiles.
Summary
This topic summarizes the key points that were discussed in this lesson.
A blade server requires a service profile to achieve external communication through the mezzanine card.
Cisco UCS Manager version 1.3 supports manipulating BIOS settings within a BIOS policy in the service profile.
Adapter policies allow configuration of RSS, checksum
offloading, failback timer, and transmit and receive buffers.
The LAN Uplinks Manager allows the modification of QoS
system classes to tune bandwidth priority, lossless fabric,
multicast, and MTU.
IPMI and SoL policies are applied to service profiles to allow
external access to the Cisco Integrated Management Controller
and serial console.
Scrub policy for local disks and BIOS can be applied to a service
profile that allows local disks and BIOS settings to be erased
upon disassociation.
The expert service profile wizard allows complete control over
the assignment of identity, policy, and thresholds.
Summary (Cont.)
The expert service profile wizard is initiated from the Server tab in the
navigation pane.
UUID can be assigned from a pool, manually assigned, or derived from
the server BIOS.
WWNN and WWPN assignment can be performed from a pool,
manually assigned, or (depending on the mezzanine model) derived
from hardware.
MAC address assignment can be performed from a pool, manually
assigned, or derived from hardware.
Full-slot blade servers include two mezzanine slots in the service profile,
and offer manual or automatic selection binding vNICs and vHBAs to a
slot.
You must configure the binding of a vHBA to a Fibre Channel boot
target.
Server assignment can be directly selected from a list of unassociated servers, assigned at a later time, or assigned from a pool.
UUID and MAC address assignments are the only required elements in
a service profile.
Lesson 3
Objectives
Upon completing this lesson, you will be able to configure service profile templates and
automate the creation of service profiles that are based on the template. This ability includes
being able to meet these objectives:
Create a service profile template and describe the need for pooled resources and identities
Describe the reasons to create differentiated service profile templates to allow variations of
policy
The process of creating a service profile template is nearly identical to creating a service profile
manually. The principal difference is that service profile templates cannot be directly applied to
a compute node and no hardware elements can use a derived value.
Service profile templates require a name, just as manually created service profiles. Templates
can also contain an optional description.
Template Types
Initial templates
Updates to template are not propagated to service profiles
created using the initial template.
Updating templates
Changes to template are propagated to service profiles
created using the updating template.
There are two types of templates. For both types, profiles that are created from a template
cannot be modified. The ability to bind or unbind a service profile from its template will be
discussed later in this lesson.
Initial templates: This type of template maintains no connection to service profiles created
from this definition. Changes to the template do not propagate to service profiles created
from the template.
To facilitate stateless computing, the universally unique identifier (UUID) must be assigned
from a pool. The UUID is unique in that it is the only identity resource that has the option of
using the hardware default in the BIOS. Click Next to continue the wizard.
To enable Fibre Channel over Ethernet (FCoE) support to service profiles generated from this
template, enter the name of the world wide node name (WWNN) pool.
Click the Expert radio button to provide complete control over creating vHBAs. Click the Add
(+) button to create the virtual host bus adapter (vHBA) for fabric A. As in the service profile
creation wizard, enter a name, fabric affiliation, VSAN, and world wide port name (WWPN)
pool.
Click the Expert radio button to provide complete control over creating vHBAs. Click the Add
(+) button to create the vHBA for fabric B. As in the service profile creation wizard, enter a
name, fabric affiliation, VSAN, and WWPN pool.
vHBA Templates
All the elements of a vHBA can be stored in a template.
Templates are created under the SAN tab in Policies.
vHBA templates allow Cisco UCS administrators to speed the repetitive task of entering vHBA
parameters in many templates or service profiles. Enter the information once in the template
and never set it again.
The definition of a virtual network interface card (vNIC) in the service profile template wizard
is identical to the way that the definition is created in the manual service profile wizard. Enter a
name for the vNIC, MAC address pool, access VLAN or VLANs associated with an 802.1Q
trunk, and adapter performance profiles. In this example, the receive-side scaling (RSS) and
jumbo frame policies will be bound to every service profile generated from this template.
The definition of the vNIC for fabric B is identical to the creation of the vNIC for fabric A,
except for the name.
vNIC Templates
All the elements of a vNIC can be stored in a template. Templates are created under the LAN tab in Policies.
vNIC templates allow Cisco UCS administrators to speed the repetitive task of entering vNIC
parameters in many templates or service profiles. It is especially useful when many VLANs
must be selected for a trunk interface.
The Cisco UCS administrator has two options regarding the boot order. The example illustrates
a SAN boot policy, where every initiator is mapped to the same target logical unit number
(LUN). This is only possible if the storage system maps the source WWPN with a unique LUN.
This method is very useful if the SAN administrators can provide premapped WWPNs to
LUNs.
The second option is to define the vHBAs as bootable, but leave the boot target definition for a
later time.
It is clear from the drop-down list for server assignment that manual assignment is not an
option. Servers must be assigned from a pool.
The Intelligent Platform Management Interface (IPMI) and Serial over LAN (SoL) policies that
were created earlier can be applied to the template. All service profiles that are generated from
this template will inherit both policies.
The BIOS and Scrub policies are applied to the template, and will be assigned automatically to
every service profile generated by the template.
Modify Template
Pool assignments and policies can be changed after template
creation.
After a service profile template has been generated, it can be modified in a very similar manner
to a manually created service profile. Certain changes made to an updating template will be
propagated to every service profile generated by the template.
(Table: three differentiated service profile templates)
SAP: B250-M2, 256 GB RAM, no local HD, M81-KR VIC x 2, Xeon 5660 x 2, RSS support, jumbo frames, hyperthreading off
Web Server: B230-M1, 128 GB RAM, local HD, M71-KR-Q, Xeon 6560 x 2, RSS support, jumbo frames, hyperthreading on
Third template: B200-M2, 8 GB RAM, M71-KR-E, Xeon 5620 x 1, RSS support, standard MTU, hyperthreading on
The figure shows the power and flexibility of using templates for differentiated policy. Because
groups of applications share similar or identical requirements, service profile templates can be
created to seamlessly provide the identity of server resources that are needed to serve the
application. One of the important operational benefits to this approach is consistency of policy
across the entire class of applications.
Another benefit of using service profile templates is that the Cisco UCS administrator can
automate the provisioning of one to hundreds of compute nodes into simple operations. After a
service profile template is built and points to identity and resource pools with sufficient
resources, automation can begin.
Select the service profile template and the organization where the new service profiles are to be
created in the navigation pane. In the content pane, click the link Create Service Profiles
From Template.
The dialog box prompts you for a naming prefix and the number of service profiles to be
generated with that prefix. Immediately after you click OK, service profiles will appear under
the organization. If the server assignment in the template points to a server pool, a new service
profile will immediately begin to associate with the next available server in the pool.
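Conceptually, the operation stamps out profiles named prefix plus index, each consuming the next free identity from the template's pools. A hypothetical Python sketch of that loop (not the UCS implementation):

def create_profiles(prefix: str, count: int, uuid_pool: list[str]) -> list[dict]:
    profiles = []
    for i in range(1, count + 1):
        profiles.append({
            "name": f"{prefix}{i}",
            "uuid": uuid_pool.pop(0),    # consume the next free identity
        })
    return profiles

pool = [f"uuid-{n:04d}" for n in range(320)]      # stand-in identity pool
for p in create_profiles("SAP_", 3, pool):
    print(p["name"], p["uuid"])                   # SAP_1, SAP_2, SAP_3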
(Table: updating-template changes — UUID pool, WWNN pool, WWPN pool, MAC pool, boot order, vNIC/vHBA placement, and BIOS policy — and whether each change triggers a reboot of linked compute nodes)
Changes to updating templates are immediately propagated to any service profiles that were
generated from that template. If none of the generated service profiles are associated to a
compute node, there is no risk to an update. However, if certain changes are made to the
updating template, it will cause all linked compute nodes to reboot. A summary of template
modifications and their associated reactions are shown in the table.
Beginning with Cisco UCS Manager version 1.2, the system warns you that if the modification
to the updating template is executed, all impacted compute nodes will reboot immediately.
The best practice in this case is to perform the update in a scheduled and approved maintenance
window that provides for the graceful shutdown of all compute nodes that the change will
affect.
One of the consequences of generating service profiles from a template is that the resulting
service profiles are created in a read-only state. The example displays a conspicuous warning
message alerting the Cisco UCS administrator that no changes can be made to the service
profile unless it is unbound from the template. By clicking the unbind link, a small dialog box
appears asking the administrator to confirm the operation. When the operation is confirmed, the
service profile no longer displays the warning or its link to its parent template.
It is also possible to bind a manually created service profile to a template. If the previously
created service profile is bound to an initial or updating template, it will retain the identity
information for UUID, WWNN, WWPN, and MAC address unless the template uses different
pools. If the template uses a different named pool, identity information will be replaced with
data pulled from the pools of the template.
Service profiles and service profile templates can both be cloned. Simply right-click on the
name of the service profile or service profile template and select Create a Clone. The result of
this operation is that all pooled identities in the clone will be refreshed with unique values. The
boot order is cloned verbatim.
Clone Destination
Unique values for MAC and WWN will be immediately
assigned to the profile from the appropriate pool.
Select the destination organization where the clone should be
created.
When you commit to creating a clone, a pop-up dialog box prompts you for the name of the
clone and destination organization where the clone will be created.
Summary
This topic summarizes the key points that were discussed in this lesson.
Service profile templates require all identity and resources to be
derived from a pool.
Differentiated service profile templates allow for the consistent
application of policy in heterogeneous computing environments.
Service profile templates can be leveraged to create an arbitrary
number of service profiles associated to compute nodes.
Updating templates are useful for maintaining consistent policy
across a large population of compute nodes, but modifying
certain parameters will cause all linked compute nodes to
reboot.
Service profiles created from templates cannot be changed
unless they are unbound from the template.
Clones made from existing service profiles maintain the boot order verbatim, but select new identities from identity pools.
Lesson 4
Objectives
Upon completing this lesson, you will be able to disassociate and associate service profiles
from compute nodes. You will recognize which parameters of a service profile trigger a
disruptive change. This ability includes being able to meet these objectives:
Use Cisco UCS Manager to associate and disassociate a service profile to a server blade
Describe which changes to a service profile trigger a Cisco UCS Utility Operating System update and an outage to a server
Describe the importance of planning the organization where a service profile is created
Use Cisco UCS Manager to move a service profile to a new server blade in the event of
hardware failure
If you decided not to immediately assign a service profile to a compute node, you can select the
desired service profile in the navigation pane. In the General tab of the content pane, click the
link labeled Change Service Profile Association.
A pop-up dialog box prompts you to select either an existing unassociated server or a server pool. Unlike service profile templates, service profiles that are not bound to a template are not
required to select a server from a pool only. Click OK to begin the association process.
You can follow the complete process of association by clicking the FSM tab in the content
pane. Recall from "Monitoring System Events" that service profile association and
disassociation are complex processes that are assigned to a finite state machine (FSM).
If the service profile is unable to associate with the compute node that is selected, the FSM will
provide information on which step of the process a failure occurred. This is very useful for
troubleshooting service profile association issues.
Note
Be aware that the FSM status indicator may appear to stop and lock up. Some stages of the association process can take one minute or longer to complete. This is normal.
The processes that occur during service profile association and disassociation are very
interesting. The first step in associating a service profile to a compute node begins by powering
up the server. Next, the server Preboot Execution Environment (PXE) boots a small Linux
distribution over a private network connection to the fabric interconnect.
The screenshot in the figure highlights the term "pnuosimg". Before Cisco UCS was released to
the public, this Linux operating system was referred to as Processor Node Utility Operating
System, or PNuOS. The official name for this is Cisco UCS Utility Operating System. The old
terminology still appears in some contexts.
Note
The black text on white background was reversed from the standard keyboard, video,
mouse (KVM) output of white text on black background in a graphics program, for
readability. The KVM does not have a choice of text or background colors.
During PXE boot, the server obtains a DHCP address over the private network. This network is
completely isolated from the in-band and out-of-band connections that are processed by the
fabric interconnect and servers.
The purpose of booting this Linux operating system is to program the compute node. You will
see identity information such as the universally unique identifier (UUID), MAC address, world wide node name (WWNN), world wide port name (WWPN), BIOS configuration, adapter
policies, and so on.
To disassociate a service profile from its compute node, select the service profile in the navigation pane. In the content pane, click the link Disassociate Service Profile. A pop-up warning dialog asks you to confirm the operation. Note also the small comment about observing the process in the FSM tab.
Both the association and disassociation processes are monitored by the FSM. Click the FSM tab in the content pane to observe the progress of disassociation.
Table: service profile changes and whether each triggers a Cisco UCS Utility Operating System update (UCSuOS = UCS Utility Operating System).
It is important to understand which types of service profile modifications can be made outside of a change control maintenance window. The table summarizes the changes that trigger the Cisco UCS Utility Operating System to run. As of version 1.2 of Cisco UCS Manager, the system alerts you to the changes that will result in the compute node being immediately rebooted.
If a given Cisco UCS deployment creates all identity, resource, pool, and policy objects in the
root organization, no special planning or considerations are required.
If the Cisco UCS administrators have created a significant number of objects under the root
organization and decide to apply administrative hierarchy at a later date, significant work will
be required. Every object that needs to move into a nonroot organization will need to be deleted
and re-created in the new organization.
After an object has been created in a given organization, it cannot be renamed or moved.
In the example in the figure, a service profile was created in the Americas organization. If this
company employs role-based access control (RBAC) and limits administrative scope based on
locale, administrators in the Boston organization may have no control over the service profile
BOS_Oracle_RAC1. The service profile must be deleted and re-created in the appropriate
organization.
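Creating the objects in the correct organization from the outset is straightforward. A sketch in the Cisco UCS Manager CLI, reusing the Americas and Boston organizations from this example (the pool name is hypothetical):

    UCS-A# scope org /
    UCS-A /org # create org Americas
    UCS-A /org/org* # create org Boston
    UCS-A /org/org/org* # create mac-pool BOS-MAC-Pool
    UCS-A /org/org/org/mac-pool* # commit-buffer

An object created this way is rooted under Americas/Boston and, as noted above, cannot later be renamed or relocated.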
The blade server in slot 8 has experienced a severe failure. An administrator used Cisco UCS
Manager to decommission the blade in slot 8. The service profile that is associated with this
compute node is automatically de-linked.
Because the service profile received its server assignment from a server pool, it automatically selected a new server from the same pool without any further action from the administrator. After the service profile reassociated with the new compute node, the operating system or hypervisor booted on the new compute node.
Summary
This topic summarizes the key points that were discussed in this lesson.
Summary
Cisco UCS Manager is used to associate and disassociate a
service profile with a compute node.
Certain modifications to a service profile that is already associated with a compute node can trigger the Cisco UCS Utility Operating System to run and reboot the compute node.
Service profiles, service profile templates, policies, and pools
cannot be moved or renamed after they are created under a
given organization.
Cisco UCS Manager allows the administrator to move a
service profile from a failed compute node to a replacement.
Module Summary
This topic summarizes the key points that were discussed in this module.
Module Summary
Policies, identity pools, and resource pools are used by
service profiles and service profile templates.
Service profiles allow for stateless computing by abstracting
identity values normally tied to hardware.
Service profile templates allow for consistent application of
policy, yet are flexible enough for differentiation of policy in
heterogeneous computing environments.
Service profiles, service profile templates, pools, and policies
cannot be renamed or moved once they are created in an
organization.
Each compute node requires a unique service profile. There is a one-to-one mapping of service
profiles to compute nodes. When the Cisco UCS administrator begins the association process,
the Preboot Execution Environment (PXE) boots a special-purpose Linux distribution on a
private network connection to the fabric interconnect. The Cisco UCS Utility Operating System
is responsible for programming all elements of identity and policy to the compute node.
Because some identities and policies are integral to the operation of the compute node, caution should be observed when modifying certain elements of the service profile. Changing many elements of the service profile causes the node to reboot and run the Cisco UCS Utility Operating System to apply the requested change.
References
For additional information, refer to these resources:
Gai, S., Salli, T., et al. (2009). Project California: a Data Center Virtualization Server. Raleigh, NC: Lulu.com.
Cisco Systems, Inc. Cisco UCS Manager GUI Configuration Guide, Release 1.3(1): Configuring Server-Related Pools. http://www.cisco.com/en/US/docs/unified_computing/ucs/sw/gui/config/guide/1.3.1/UCSM_GUI_Configuration_Guide_1_3_1_chapter24.html
Cisco Systems, Inc. Cisco UCS Manager GUI Configuration Guide, Release 1.3(1): Creating Server Pool Policy Qualifications. http://www.cisco.com/en/US/docs/unified_computing/ucs/sw/gui/config/guide/1.3.1/UCSM_GUI_Configuration_Guide_1_3_1_chapter25.html#task_43C18874A0D245D6987C3530BD4F06C7
Cisco Systems, Inc. Cisco UCS Manager GUI Configuration Guide, Release 1.3(1): Configuring Service Profiles. http://www.cisco.com/en/US/docs/unified_computing/ucs/sw/gui/config/guide/1.3.1/UCSM_GUI_Configuration_Guide_1_3_1_chapter26.html
Cisco Systems, Inc. Cisco UCS Manager GUI Configuration Guide, Release 1.3(1): Ethernet and Fibre Channel Adapter Policies. http://www.cisco.com/en/US/docs/unified_computing/ucs/sw/gui/config/guide/1.3.1/UCSM_GUI_Configuration_Guide_1_3_1_chapter23.html#concept_C113E10277D74C5FB66C92A87293B696
Cisco Systems, Inc. Cisco UCS Manager GUI Configuration Guide, Release 1.3(1): Configuring QoS System Classes. http://www.cisco.com/en/US/docs/unified_computing/ucs/sw/gui/config/guide/1.3.1/UCSM_GUI_Configuration_Guide_1_3_1_chapter18.html#task_2F15A8D4D2B34ED79B7917877D749B47
Cisco Systems, Inc. Cisco UCS Manager GUI Configuration Guide, Release 1.3(1): Setting the vNIC/vHBA Placement. http://www.cisco.com/en/US/docs/unified_computing/ucs/sw/gui/config/guide/1.3.1/UCSM_GUI_Configuration_Guide_1_3_1_chapter26.html#task_9E40E74DB2EE43EDA8A74C44AFC9323A
Cisco Systems, Inc. Cisco UCS Manager GUI Configuration Guide, Release 1.3(1): Creating a Service Profile Template. http://www.cisco.com/en/US/docs/unified_computing/ucs/sw/gui/config/guide/1.3.1/UCSM_GUI_Configuration_Guide_1_3_1_chapter26.html#task_23DCA7911736413A9D03179A23523A0A
Cisco Systems, Inc. Cisco UCS Manager GUI Configuration Guide, Release 1.3(1): Cloning a Service Profile. http://www.cisco.com/en/US/docs/unified_computing/ucs/sw/gui/config/guide/1.3.1/UCSM_GUI_Configuration_Guide_1_3_1_chapter26.html#task_8593C436D54F4C0BA2467C01FFBC0F81
Module Self-Check
Use the questions here to review what you learned in this module. The correct answers and
solutions are found in the Module Self-Check Answer Key.
Q1)
Q2) Which statement is true regarding the creation of identity pools? (Source: Creating Identity and Resource Pools)
A) Service profiles, policies, templates, and pools can be moved from organization to organization.
B) Service profiles, policies, templates, and pools can be renamed.
C) Service profiles, policies, templates, and pools are the foundation of stateful computing.
D) Service profiles, policies, templates, and pools cannot be moved from organization to organization.
E) Service profiles, policies, templates, and pools are used to enable syslog.
Q3) How many bits of the UUID are represented in the UUID suffix blocks? (Source: Creating Identity and Resource Pools)
A) 32
B) 48
C) 64
D) 128
E) 256
Q4) How many bits are represented in the MAC address prefix (OUI)? (Source: Creating Identity and Resource Pools)
A) 32
B) 48
C) 64
D) 128
E) 256
Q5) Which three items are reasons for adopting pooled identity resources? (Choose three.) (Source: Creating Identity and Resource Pools)
A)
B)
C)
D)
E)
F)
Q6)
Q7) Which item accurately describes where to configure a QoS system class? (Source: Creating Service Profiles)
A)
B)
C)
D)
Q8) Which two of these actions can be configured in the expert service profile wizard but not in the simple service profile wizard? (Choose two.) (Source: Creating Service Profile Templates and Cloning Service Profiles)
A)
B)
C)
D)
E)
Q9) Which port does SoL use to communicate? (Source: Creating Service Profiles)
A) TCP port 22
B) TCP port 23
C) TCP port 623
D) UDP port 22
E) UDP port 23
F) UDP port 623
Q10) What are three functions of the Cisco Integrated Management Controller? (Choose three.) (Source: Creating Service Profiles)
A) KVM over IP
B) SoL
C) BMC
D) IPMI
E) QoS
F) FCoE
Q11) What are two functions of a scrub policy? (Choose two.) (Source: Creating Service Profiles)
A)
B)
C)
D)
E)
Q12) Which part of a cloned service profile is copied verbatim? (Source: Creating Service Profile Templates and Cloning Service Profiles)
A) MAC pool
B) WWNN pool
C) WWPN pool
D) VLAN
E) template
Q13) What must an administrator configure to modify the service profile created from a template? (Source: Creating Service Profile Templates and Cloning Service Profiles)
A)
B)
C)
D)
Q14) Which three configuration changes to a service profile will trigger a Cisco UCS Utility Operating System reboot? (Choose three.) (Source: Managing Service Profiles)
A) UUID
B) MAC address
C) WWNN
D) WWPN
E) boot order
F)
Q15) Which three configuration changes to a service profile are safe to perform during production (no automatic reboot)? (Choose three.) (Source: Managing Service Profiles)
A)
B)
C)
D)
E)
F)
Q16)
Q17)
Q18) Which item correctly describes where Cisco UCS Utility Operating System updates its IP address? (Source: Managing Service Profiles)
A)
B)
C)
D)
E)
Q19) What are two properties of service profiles after they have been created in an organization? (Choose two.) (Source: Managing Service Profiles)
A)
B)
C)
D)
E)
Q20) What is the correct method to move a service profile from a failed compute node to a replacement? (Source: Managing Service Profiles)
A) Unbind the service profile from its template and it will automatically seek a new compute node.
B) Select Change Service Profile Association from the Equipment tab.
C) The service profile will automatically reassociate without any administrator intervention.
D) Select Change Service Profile Association from the Servers tab.
Module Self-Check Answer Key
Q2)
Q3)
B, C, F
Q4)
Q5)
Q6)
Q7)
Q8)
A, B, D
Q9)
Q10)
B, E
Q11)
C, D
Q12)
Q13)
Q14)
Q15)
Q16)
A, B, C
Q17)
D, E, F
Q18)
Q19)
B, E
Q20)
Module 7
Module Objectives
Upon completing this module, you will be able to describe the capabilities, features, and
benefits of the Cisco Nexus 1000V switch; the installation method and capabilities of the Cisco
Nexus 1000V VSM; and the configuration of VN-Link in hardware with VMware PTS. This
ability includes being able to meet these objectives:
Describe the Cisco Nexus 1000V switch and its role in a virtual server networking
environment
Lesson 1
Objectives
Upon completing this lesson, you will be able to compare VMware virtual networking options
and the Cisco Nexus 1000V DVS. This ability includes being able to meet these objectives:
Describe the Cisco virtual switching solution for the VMware vDS
Before virtualization, each server ran its own operating system, usually with a single
application running in addition to the operating system. The network interface cards (NICs)
were connected to access layer switches to provide redundancy. Network security, quality of
service (QoS), and management policies were created on these access layer switches and
applied to the access ports that corresponded to the appropriate server.
If a server needed maintenance or service, it was disconnected from the network. During that
time, any critical applications needed to be manually offloaded to another physical server.
Connectivity and policy enforcement were very static and seldom required any modifications.
Server virtualization has made networking, connectivity, and policy enforcement much more challenging. With VMware vMotion, the virtual machines that run the applications can move from one physical host to another, which leads to several challenges:
Providing network visibility from the virtual machine (VM) virtual NIC (vNIC) to the physical access switch
Providing consistent mobility of the policies that are applied to the VMs during a vMotion event
These challenges arise because the objects to which policies attach are now virtualized and can move.
The VMware server-virtualization solution extends the access layer into the VMware ESX server by using the VM networking layer. Several components are used to implement server-virtualization networking:
Physical networks: Physical devices connect ESX hosts for resource sharing. Physical
Ethernet switches are used to manage traffic between ESX hosts, like in a regular LAN
environment.
Virtual networks: Virtual devices run on the same system for resource sharing.
Virtual Ethernet switch (vSwitch): Like a physical switch, the vSwitch maintains a table
of connected devices. This table is used for frame forwarding. The vSwitch can be
connected, via uplink, to a physical switch by using a physical VM NIC (VMNIC). The
vSwitch does not provide the advanced features of a physical switch.
Physical NIC (VMNIC): The VMNIC is used to uplink the ESX host to the external
network.
VMware vSphere 4 introduces the vDS, a distributed virtual switch (DVS). With the vDS, multiple vSwitches within an ESX cluster can be configured from a central point. The vDS automatically applies changes to the individual vSwitches on each ESX host.
The feature is licensed and relies on the VMware vCenter server. The vDS cannot be used for individually managed hosts.
The VMware vDS and vSwitch are not mutually exclusive. Both devices can run in tandem on the same ESX host. This type of configuration is necessary when running the Cisco Nexus 1000V Virtual Supervisor Module (VSM) on a controlling host. In this scenario, the VSM runs on a vSwitch that is configured for VSM connectivity, and the VSM controls a DVS that runs a Cisco Nexus 1000V Virtual Ethernet Module (VEM) on the same host.
The Cisco server virtualization solution uses technology that was jointly developed by Cisco
and VMware. The network access layer is moved into the virtual environment to provide
enhanced network functionality at the VM level.
This solution can be deployed as a hardware- or software-based solution, depending on the data
center design and demands. Both deployment scenarios offer VM visibility, policy-based VM
connectivity, policy mobility, and a nondisruptive operational model.
VN-Link
Cisco Virtual Network Link (VN-Link) technology was jointly developed by Cisco and VMware and has been proposed to the IEEE for standardization. VN-Link moves the network access layer into the virtual environment to provide enhanced network functionality at the VM level.
The vDS adds functionality and simplified management to the VMware network: the ability to use private VLANs (PVLANs), inbound rate limiting, and tracking of VM port state across migrations. Additionally, the vDS is a single point of network management for VMware networks. The vDS is a requirement for the Cisco Nexus 1000V DVS.
Model and details:
vSwitch: host based; 1 or more per ESX host.
vDS: distributed; 1 or more per data center.
Cisco Nexus 1000V: distributed; 1 or more per data center.
Virtual networking concepts are similar with all virtual switch alternatives.
With the introduction of vSphere 4, VMware customers can enjoy the benefits of three virtual
networking solutions: vSwitch, vDS, and the Cisco Nexus 1000V.
The Cisco Nexus 1000V bypasses the VMware vSwitch with a Cisco software switch. This
model provides a single point of configuration for the networking environment of multiple ESX
hosts. Additional functionality includes policy-based connectivity for the VMs, network
security mobility, and a nondisruptive software model.
VM connection policies are defined in the network and applied to individual VMs from within
vCenter. These policies are linked to the Universally Unique ID (UUID) of the VM and are not
based on physical or virtual ports.
For operation and management, this model simplifies management and troubleshooting with VM-level visibility. For organizational structure, it enables flexible collaboration with individual team autonomy while maintaining the existing VM management model.
Cisco Nexus 1000V has the same Cisco IOS command-line interface (CLI) and remote
management capabilities as Cisco Catalyst and Cisco Nexus physical switches, but it provides a
feature set that has been optimized for VMware virtualized environments. The Cisco Nexus
1000V solution maintains compatibility with VMware advanced services such as vMotion,
Distributed Resource Scheduler (DRS), Fault Tolerance (FT), and High Availability (HA). The
solution extends VM visibility to the physical access switch, while maintaining the rich,
advanced feature set offered by physical switches.
From an operations perspective, the network administrator may manage the Cisco Nexus
1000V solution from a console connection or remotely by using Telnet or Secure Shell (SSH).
The solution preserves the network administrator function and enables administrators to
effectively create advanced security, QoS, and VLAN policies. Because the Cisco Nexus
1000V switch communicates directly with vCenter, configuration changes are reflected directly
within the vCenter inventory. Therefore, the server administrator can consume the policies by
applying them to either virtual uplinks or VM vNICs.
Server virtualization has created many networking challenges, especially at the VM level. VMs
have a transparent view of their CPU, memory, storage, and networking resources because the
hypervisor services all resource requests that the VMs make.
As a result, creating and assigning advanced networking policies (such as security, QoS, port
channels, and so on) to virtual networks at the VM level has been difficult.
VMware vSphere enables CPU and memory to be reserved for VMs that run critical
applications. However, this resource-reservation mechanism applies to neither storage nor
networking resources.
Because a single VMNIC uplink may be shared, it becomes difficult to identify multiple virtual
I/O streams from a single, physical server uplink.
VMs are likely to move from one assigned ESX host to another. The ability to identify unique,
individual VMs and treat them differently from a security and network-performance
perspective would also be valuable.
To deliver operational consistency, several important capabilities must be supported:
Maintaining the rich, advanced feature set that is offered by high-performance access switches
Applying the advanced feature policies at the VM level and assuring that they move with the VM in a vMotion event
These challenges span operations and management as well as organizational structure:
Policies are applied at the physical server, not at the individual VM.
There is a lack of VM visibility, accountability, and consistency.
It is impossible to enforce policy for VMs in motion.
The management model is inefficient, and effective troubleshooting is not possible.
Ownership is muddled because the server administrator must configure the virtual network.
Organizational redundancy creates compliance challenges.
VMware vMotion is a feature that is used within ESX environments. This feature permits you to move VMs, either automatically or manually (via notifications), from one physical machine to another as resources become overutilized, in response to a physical server failure, or during a VMware fault-tolerance event. This process can involve several issues:
The policy that is associated with the VM must be able to follow the VM to the new
physical location.
Viewing or applying policies to locally switched traffic when using the vSwitch that is
internal to the ESX host can be difficult.
Determining with which VLAN the VMs should be associated can be difficult.
Cisco VN-Link uses the vDS framework to deliver a portfolio of networking solutions that can
operate directly within the distributed hypervisor layer, offering a feature set and operational
model that is familiar and consistent with other Cisco networking products. Cisco VN-Link
specifically enables individual VMs to be identified, configured, monitored, migrated, and
diagnosed in a way that is consistent with current network operational models.
VN-Link indicates the creation of a logical link between a vNIC on a VM and a Cisco switch
that is enabled for VN-Link. This logical creation is the equivalent of using a cable to connect a
NIC to a port on an access layer switch.
A switch that is enabled for VN-Link uses the concept of virtual Ethernet (vEthernet)
interfaces, which are dynamically provisioned based on network policies that are stored on the
switch. These policies are the result of VM provisioning operations by the hypervisor
management layer (vCenter). The vEthernet interface maintains network configuration
attributes, security, and statistics for a given virtual interface across mobility events.
With the introduction of the vDS framework, VMware permits third-party networking vendors
to provide their own implementation of distributed virtual switches. When deploying the Cisco
Nexus 1000V solution, the vSwitch and port group configuration is offloaded to the network
administrator. This process helps to ensure a consistent network policy throughout the data
center.
In the Cisco Nexus 1000V, traffic between VMs is switched locally at each instance of a VEM.
Each VEM is responsible for interconnecting the local VMs with the rest of the network,
through upstream access layer network switches. The Cisco Nexus 1000V Virtual Supervisor
Module (VSM) manages the control plane protocols and configuration of the VEMs. The VSM
never takes part in the actual forwarding of packets.
Summary
This topic summarizes the key points that were discussed in this lesson.
Summary
Cisco Nexus 1000V is a third-party DVS co-developed with
VMware.
Cisco Nexus 1000V preserves existing operating
environments while delivering extended VM policies.
Lesson 2
Objectives
Upon completing this lesson, you will be able to describe the unique features of the Cisco
Nexus 1000V solution and compare it to the Cisco Unified Computing System virtualization
adapter. This ability includes being able to meet these objectives:
Describe how the Cisco Nexus 1000V integrates into VMware vDS
Describe the unique features that the Cisco Nexus 1000V brings to VMware vDS
VMware vDS
This topic discusses the use of the distributed virtual switch (DVS) in VMware environments.
vDS Configuration
The VMware vNetwork Distributed Switch (vDS) extends the features and capabilities of
virtual networks. At the same time, the vDS simplifies provisioning and the ongoing process of
configuration, monitoring, and management.
With VMware ESX 3.5 and prior releases, virtual networks were constructed by using VMware
vNetwork Standard Switches (vSwitches). Each ESX host would use one or more vSwitches to
connect the virtual machines (VMs) with the server network interface cards (NICs) and the
outside physical network.
In addition to continuing support for the vSwitch, VMware vSphere introduces an additional
choice for VMware virtual networking: the vDS. The vDS eases the management burden of
per-host virtual-switch configuration management by treating the network as an aggregated
resource. Individual, host-level virtual switches are abstracted into one large vDS that spans
multiple hosts at the datacenter level. Port groups become distributed virtual port groups that
span multiple hosts and ensure configuration consistency for the VMs and virtual ports that are
necessary for such functions as VMware vMotion.
Figure: With VI3, vSwitches are configured manually on each host, whereas a single DVSwitch0 with distributed virtual uplinks spans hosts vc1.ucs.local through vc4.ucs.local.
The figure shows the conceptual difference between managing a standard vSwitch environment and a vDS environment. Each vSwitch requires its own configuration from a separate management panel, while the vDS requires just one management panel for the single switch that spans multiple hosts.
vDS Enhancements
The VMware vDS offers several enhancements to VMware switching:
Port state migration (statistics and port state follow the VM)
Rx rate limiting
PVLANs
Private VLAN (PVLAN) support enables broader compatibility with existing networking
environments that use PVLAN technology. PVLANs enable users to restrict communication
between VMs on the same VLAN or network segment. This ability significantly reduces the
number of subnets that are needed for certain network configurations.
PVLANs are configured on a DVS, with allocations made to the promiscuous PVLAN, the
community PVLAN, and the isolated PVLAN. Distributed virtual port groups can then use one
of these PVLANs, and VMs are assigned to a distributed virtual port group. Within the subnet,
VMs on the promiscuous PVLAN can communicate with all VMs. VMs on the community
PVLAN can communicate among themselves and with VMs on the promiscuous PVLAN. VMs
on the isolated PVLAN can communicate only with VMs on the promiscuous PVLAN.
Note
Adjacent physical switches must support PVLANs and must be configured to support the
PVLANs that are allocated on the DVS.
Network vMotion is the tracking of VM networking state (for example, counters or port
statistics) as the VM moves from host to host on a vDS. This tracking provides a consistent
view of a virtual network interface, regardless of the VM location or vMotion migration
history. This view greatly simplifies network monitoring and troubleshooting activities when
vMotion is used to migrate VMs between hosts.
DVS expands upon the egress-only traffic-shaping feature of standard switches by providing
bidirectional traffic-shaping capabilities. Egress (from VM to network) and ingress, or receive
(Rx) rate limiting, (from network into VM) traffic-shaping policies can now be applied on
distributed virtual port group definitions.
Traffic shaping is useful when you want to limit the traffic to or from a VM or group of VMs,
to protect a VM or other traffic in an oversubscribed network. Policies are defined by three
characteristics: average bandwidth, peak bandwidth, and burst size.
The VMware vSwitch and vDS are not mutually exclusive and can coexist within the same
VMware vCenter management environment. Physical VM NICs (VMNICs) may be assigned to
either the vSwitch or the vDS on the same VMware ESX or ESXi host.
You can also migrate the ESX service console and VMware VMkernel ports from the vSwitch,
where they are assigned by default during ESX installation, to the vDS. This migration
facilitates a single point of management for all virtual networking within the vCenter datacenter object.
The Cisco Nexus 1000V provides Layer 2 switching functions in a virtualized server
environment. Cisco Nexus 1000V DVS replaces virtual switches within the ESX servers. This
replacement allows users to configure and monitor the virtual switch by using the Cisco Nexus
Operating System (NX-OS) command-line interface (CLI). The Cisco Nexus 1000V also
provides visibility into the networking components of the ESX servers and access to the virtual
switches within the network.
The vCenter server defines the data center that the Cisco Nexus 1000V will manage. Each
server is represented as a line card and is managed as if it were a line card in a physical Cisco
switch.
Two components are part of the Cisco Nexus 1000V implementation:
Virtual Supervisor Module (VSM): The Cisco Nexus 1000V VSM is the control software
of the Cisco Nexus 1000V DVS. The VSM runs either on a VM or as an appliance and is
based on Cisco NX-OS.
Virtual Ethernet Module (VEM): The Cisco Nexus 1000V VEM actually switches the data
traffic and runs on a VMware ESX 4.0 host. VSM can control several VEMs, with the
VEMs forming a switch domain that should be in the same virtual data center that is
defined by VMware vCenter.
Cisco Nexus 1000V is effectively a virtual chassis: It is modular, and ports can be physical or
virtual. The servers are modules on the switch, with each physical NIC port on a module being
a physical Ethernet port. Modules 1 and 2 are reserved for the VSM, and the first server or host
is assigned automatically to the next available module number. The ports to which the vNIC
interfaces connect are virtual ports on the Cisco Nexus 1000V and are assigned a global
number.
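On the VSM, the virtual chassis can be inspected with standard Cisco NX-OS commands. An illustrative, abbreviated show module output, assuming a standalone VSM and a single VEM that has joined as module 3:

    n1000v# show module
    Mod  Ports  Module-Type                      Model          Status
    ---  -----  -------------------------------  -------------  ---------
    1    0      Virtual Supervisor Module        Nexus1000V     active *
    3    248    Virtual Ethernet Module          NA             ok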
The following show cdp neighbors output was captured on an upstream physical switch:

    Device ID      Local Intrfce   Holdtme   Capability   Platform      Port ID
    N1KV-Rack10    Eth 1/8         136       S            Nexus 1000V   Eth2/2
    N1KV-Rack10    Eth 2/10        136       S            Nexus 1000V   Eth3/2
To the upstream switches, the Cisco Nexus 1000V appears as a single switch from the control and management plane. Protocols such as Cisco Discovery Protocol and Simple Network Management Protocol (SNMP) behave as though there is a single switch.
The show cdp neighbors output that is shown above indicates that two VEMs are associated with the virtual switch. Therefore, two ports connect the Cisco Nexus 1000V to the upstream switch.
The VSM can be deployed in two ways:
VSM virtual appliance: The VSM runs on an ESX host as an ESX virtual appliance, with
support for 64 VEMs. Installation of the VSM virtual appliance is provided though an ISO
or Open Virtual Appliance (OVA) file.
VSM physical appliance: A Cisco Nexus 1010 Physical Server can host four VSM virtual
appliances. The VSM physical appliance typically is deployed in pairs, for redundancy
purposes.
The Cisco Nexus 1000V provides high availability of control plane functionality by using
active (primary) and standby (secondary) VSMs.
Both the primary and secondary VSMs install as VMs on ESX or ESXi hosts. To provide
maximum redundancy and high availability, each VSM should be installed on separate hosts.
The Cisco Nexus 1010 architecture requires the deployment of two devices in a high-availability configuration.
Maintaining communication between the VSM (control plane) and the individual VEMs (data
forwarding) is of paramount importance for successful operation. The control protocol that is
used by the VSM to communicate with the VEM is borrowed from the Cisco Nexus data center
switch and Cisco Multilayer Director Switch (MDS) 9000 Series Fibre Channel switches.
The VSM-to-VEM communication may be implemented by using a Layer 2 model that uses
control and packet VLANs or by using a Layer 3 cloud control capability.
Communication between the VSM and VEM is provided through two distinct virtual interfaces:
the control and packet interfaces.
The control interface carries low-level messages to each VEM, to ensure proper configuration
of the VEM. A 2-second heartbeat is sent between the VSM and the VEM, with a 6-second
timeout. The control interface maintains synchronization between primary and secondary
VSMs. The control interface is like the Ethernet out-of-band channel (EOBC) in switches such
as Cisco Nexus 7000 Series Switches.
The packet interface carries network packets, such as Cisco Discovery Protocol or Internet
Group Management Protocol (IGMP) control messages from the VEM to the VSM.
Customers may choose to implement separate VLANs for the control, management, and packet
VLANs, or they can share the same VLAN.
Being VLAN interfaces, the control and packet interfaces require Layer 2 connectivity.
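A minimal Layer 2 control configuration on the VSM might look like the following sketch; the domain ID and VLAN numbers are illustrative:

    n1000v(config)# svs-domain
    n1000v(config-svs-domain)# domain id 100
    n1000v(config-svs-domain)# control vlan 260
    n1000v(config-svs-domain)# packet vlan 261
    n1000v(config-svs-domain)# svs mode L2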
With the release of the Cisco Nexus 1000V Release 4.0(4)SV1(2) image, you can now manage
VSM-to-VEM communication by using a Layer 3 network. The administrator specifies a
separate IP subnet and sufficient IP addresses to span all participating ESX or ESXi hosts,
which requires IP connectivity between the VSM and the ESX or ESXi hosts.
The Cisco Nexus 1000V uses Advanced Interprocess Communications (AIPC), a backplane protocol that the Cisco Nexus 7000 Series Switches also use. AIPC encapsulates the control and packet interface data in an Ethernet over IP (EoIP) tunnel, which reduces the number of broadcasts that are used.
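In Layer 3 mode, the control and packet VLAN assignments are replaced by an interface that sources the IP traffic. A sketch, assuming the management interface carries the control traffic:

    n1000v(config)# svs-domain
    n1000v(config-svs-domain)# no control vlan
    n1000v(config-svs-domain)# no packet vlan
    n1000v(config-svs-domain)# svs mode L3 interface mgmt0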
Effective Layer 3 control has important considerations and requirements. The round-trip time between the VSM and the VEM must not exceed 100 ms. The network administrator must also ensure that duplicate IP addresses do not appear.
Layer 2 adjacency must be maintained between active and standby VSMs that share the same
control and packet VLANs. The VLAN that is configured to carry the Layer 3 traffic must be
specified as the system VLAN in both the virtual Ethernet (vEthernet) and uplink port profiles.
Communication between the VSM and vCenter is provided through the VMware VIM
application programming interface (API) over Secure Sockets Layer (SSL). The connection is
set up on the VSM and requires installation of a vCenter plug-in, which is downloaded from the
VSM.
After communication between the two devices is established, the Cisco Nexus 1000V is created
in vCenter.
This interface is known as the out-of-band (OOB) management interface and should be in the
same VLAN as your vCenter and host-management VLAN, although that is not a requirement.
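The connection itself is defined on the VSM. A sketch, with a hypothetical vCenter IP address and datacenter name:

    n1000v(config)# svs connection vcenter
    n1000v(config-svs-conn)# protocol vmware-vim
    n1000v(config-svs-conn)# remote ip address 10.1.100.20
    n1000v(config-svs-conn)# vmware dvs datacenter-name MyDatacenter
    n1000v(config-svs-conn)# connect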
The domain ID configuration setting within the Cisco Nexus 1000V CLI provides a means to uniquely identify and separate multiple instances of the Cisco Nexus 1000V DVS within a vCenter management environment. The domain ID may be any number between 1 and 4095. The same VLANs can be used between Cisco Nexus 1000V instances that have different domain IDs.
The domain ID is a parameter of the Cisco Nexus 1000V and is used to identify a VSM and
VEM that relate to one another. The domain ID of the Cisco Nexus 1000V is defined when the
VSM is first installed and becomes part of the opaque data that is transmitted to vCenter. Each
command that the VSM sends to any associated VEM is tagged with this domain ID. When a
VSM and VEM share a domain ID, the VEM accepts and responds to requests and commands
from the VSM. If the VEM receives a command or configuration request that is not tagged with
the proper domain ID, then the VEM ignores that request. Similarly, if the VSM receives a
packet that is tagged with the wrong domain ID from a VEM, the VSM ignores that packet.
The figure shows the challenges that server administrators face when using the vSwitch or vDS
to manage network connectivity and policies. The network administrator has little or no
participation, despite the fact that these issues are part of their area of expertise and skill set.
Shifting the burden of network connectivity, policy creation, and configuration from the server
to the network administrator vastly improves operations, management, and continuity. Server
administrators are tasked only with consuming or assigning the port profile (port groups within
the vCenter), which is well within their comfort zone.
When a new VM is provisioned, the server administrator selects the appropriate port profile.
The Cisco Nexus 1000V creates a new switch port that is based on the policies that are defined
by the port profile. The server administrator can reuse the port profile to provision similar VMs,
as needed. Port profiles are also used to configure the physical NICs in a server. These port
profiles, which are known as uplink port profiles, are assigned to the physical NICs as part of
the installation of the VEM on an ESX host.
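An uplink port profile is defined with type ethernet and typically trunks the system and data VLANs. A sketch; the profile name and VLAN numbers are illustrative:

    n1000v(config)# port-profile type ethernet system-uplink
    n1000v(config-port-prof)# vmware port-group
    n1000v(config-port-prof)# switchport mode trunk
    n1000v(config-port-prof)# switchport trunk allowed vlan 20,260-261
    n1000v(config-port-prof)# system vlan 260-261
    n1000v(config-port-prof)# no shutdown
    n1000v(config-port-prof)# state enabled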
The Cisco Nexus 1000V was developed with VMware to deliver transparency to various server hardware platforms. The Cisco Nexus 1000V may be used with generic NICs on generic x86-based servers. In addition, the upstream physical access layer switch may also be generic.
This generic support enables the Cisco Nexus 1000V to be installed and configured within
existing architectures, minimizing disruption and maximizing functionality.
The Cisco Nexus 1000V is similar to physical Ethernet switches. For packet forwarding, the
Cisco Nexus 1000V uses the same techniques that other Ethernet switches apply, keeping a
MAC address-to-port mapping table that is used to determine where packets should be
forwarded. The Cisco Nexus 1000V maintains forwarding tables in a slightly different manner
than other modular switches. Unlike physical switches with a centralized forwarding engine,
each VEM maintains a separate forwarding table. No synchronization exists between
forwarding tables on different VEMs. In addition, there is no concept of forwarding from a port
on one VEM to a port on another VEM. Packets that are destined for a device that is not local
to a VEM are forwarded to the external network, which in turn may forward the packets to a
different VEM.
Figure: The Cisco VIC presents up to 128 user-definable vNICs (Ethernet or Fibre Channel) over a 10GbE/FCoE connection.
The Cisco M81KR/P81E is a VIC that operates within the Cisco Unified Computing System.
The VIC adapter is a hardware-based VN-Link solution, which uses a virtual network tag
(VNTag) to deliver VM visibility at the physical access switch. The VIC adapter assigns a
VNTag to each vNIC that is created as part of the service profile within Cisco UCS Manager.
The assigned VNTag is locally significant and has visibility between the VIC and Cisco UCS
6100 Fabric Interconnects.
The VNTag provides traffic separation between VMs that share the same VIC adapter and is a
reliable method for virtualizing each VM vNIC onto the physical access layer switchport.
Figure: Hypervisor bypass with the Cisco VIC, a complete bypass that falls back to the previous deployment model when vMotion is needed.
The VIC adapter offers the Pass-Through Switch (PTS) feature. When configured, this feature
enables I/O processing to be offloaded from and bypass the local hypervisor to the hardware
VIC. Therefore, the VM vNIC communicates directly with the VIC adapter. The M81KR VIC
mezzanine adapter option works within the Cisco UCS B-Series platform, and the P81E
Peripheral Component Interconnect (PCI) adapter works within the Cisco UCS C-Series
platform.
Packet flow within a Cisco Unified Computing System cluster that includes the VIC begins with the operating system constructing frames in the traditional fashion. The VIC adds the VNTag, a new 6-byte field that is inserted directly behind the source MAC address within the Ethernet frame format. The VNTag also has a new EtherType that is assigned for this service.
The virtual interface switch provides both ingress and egress processing, and the VIC
forwarding is based upon the VNTag value. VNTag assignment and removal remain local to
the Cisco Unified Computing System cluster. Finally, the operating system stack receives the
frame transparently, as though the frame is directly connected to the physical access layer
switch.
The main solution that VN-Link offers is the concept of a VM adapter with a virtual patch cord
that is connected to a physical access layer switch. This solution essentially translates to
providing VM vNIC adapter visibility and virtualizing this interface onto the physical switch
port.
This solution may be implemented by using the Cisco Nexus 1000V with a generic Ethernet
adapter, in which the VEM assigns the vEthernet interface, which survives vMotion events.
The Cisco Nexus 1000V may also be combined with the VIC adapter within a Cisco Unified
Computing System cluster. In this configuration example, the VEM would assign the vEthernet
interface to deliver the VN-Link implementation.
Within a Cisco Unified Computing System cluster, the VIC adapter offers the hardware-based
implementation of VN-Link, by using VNTag assignment on the VIC. This option offers two
unique choices that are based on performance: the PTS feature or the use of VMDirectPath.
Cisco has created a flexible set of networking options to support various existing and new
virtual environments. For large, diverse environments that require advanced networking feature
sets and flexible deployments, the use of the Cisco Nexus 1000V and Cisco Nexus 1010,
combined with either generic or VIC adapters, is a solid choice.
For environments that may require higher performance and more predictable I/O management,
the use of a Cisco UCS B- or C-Series server and the VIC adapter, with or without the
configured PTS feature, is another deployment option.
The Cisco VN-Link solution caters to a wide continuum of networking requirements within
virtualized server environments.
Summary
This topic summarizes the key points that were discussed in this lesson.
Summary
VMware vDS allows a centralized configuration point for vSwitches
within a VMware ESX cluster.
Cisco Nexus 1000V has separate control plane (VSM) and data
plane (VEM) functionality that ensures policy mobility and preserves
network and server administration functionality.
Cisco Nexus 1000V represents a VN-Link software-based solution.
Cisco Virtual Interface Card (VIC) represents a VN-Link hardware-based solution.
Lesson 3
Objectives
Upon completing this lesson, you will be able to characterize the architecture of the Cisco
Nexus 1000V and its capabilities in delivering VN-Link. This ability includes being able to
meet these objectives:
Figure: The Cisco Nexus 1000V brings NetFlow collection, SPAN, and mobility of network policy and security to the virtual network.
The Cisco Nexus 1000V Series Switch is a software solution that is used in place of the
VMware Standard Switch (vSwitch) to provide advanced functionality. The Cisco Nexus
1000V allows all physical network best practices to be applied at the virtual machine (VM)
level. Achieving this level of service with the VMware vSwitch is not possible.
The distributed architecture of the Cisco Nexus 1000V deployment allows a single Cisco Nexus Operating System (Cisco NX-OS) software-based supervisor module to manage the switching capabilities of as many as 64 VMware ESX servers.
Each ESX server Virtual Ethernet Module (VEM) acts as a remote line card of the Virtual Supervisor Module (VSM) and is configured as such. The VSM can run as a VM on a VMware ESX or ESXi host, or on a physical appliance, the Cisco Nexus 1010.
The Cisco Nexus 1000V model delivers benefits to both server and network teams.
Server benefits:
Maintains the existing VM management model
Reduces deployment time
Improves scalability
Reduces operational workload
Enables VM-level visibility
Network benefits:
Unifies network management and operations
Improves operational security
Enhances VM network features
Ensures policy persistence
Enables VM-level visibility
A new type of port has been created to identify individual VMs: the virtual Ethernet (vEthernet) port. A vEthernet port represents the vNIC of a virtual server and is not tied to a specific physical server or physical VM NIC (VMNIC). The vEthernet port for a VM remains the same even if the VM migrates: if a VM obtains vEthernet 1 on host 1 and is moved to host 2, the VM retains vEthernet 1 as its port. Keeping the same vEthernet port allows for network configuration and policy mobility.
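Because the vEthernet number follows the VM, it is a stable handle for monitoring. An illustrative, abbreviated view from the VSM (the VM and port profile names are hypothetical):

    n1000v# show interface vethernet 1
    Vethernet1 is up
      Hardware is Virtual, address is 0050.5687.1a2b
      Owner is VM "WebServer01", adapter is Network Adapter 1
      Active on module 3
      Port-Profile is WebServers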
Figure: property mobility, or vMotion for the network: as a VM moves, its policies ensure VM security and maintain connection state.
This figure demonstrates the policy mobility that the Cisco Nexus 1000V Distributed Virtual
Switch (DVS) offers. A VMware vMotion event (either manual or automatic) has occurred,
causing the VMs on the first ESX server to be moved to the second ESX server, which is now
running all eight VMs.
The policies have been assigned to the VM vEthernet interfaces. Because these interfaces
remain with the VM, then by inheritance, the policies that are assigned to the vEthernet
interfaces remain with the VM.
Figure: the port profile workflow: 1. Produce (network administrator, on the VSM); 2. Push (to vCenter); 3. Consume (server administrator, on the ESX hosts).
The Cisco Nexus 1000V provides an ideal model in which network administrators can define a
network policy that virtualization or server administrators can use as new VMs are created.
Policies that are defined on the Cisco Nexus 1000V are exported to vCenter. The server
administrator then assigns specific network policies as new VMs require access. This concept is
implemented on the Cisco Nexus 1000V by using a feature called port profiles. With the port
profile feature, the Cisco Nexus 1000V eliminates the requirement for the server administrator
to create or maintain vSwitch and port group configurations on ESX hosts.
Port profiles separate network and server administration. For network administrators, the Cisco
Nexus 1000V feature set and the capability to define a port profile by using the same syntax
that is used for existing physical Cisco switches helps ensure consistent policy enforcement,
without the burden of managing individual switch ports. The Cisco Nexus 1000V solution also
provides a consistent network management, diagnostic, and troubleshooting interface to the
network operations team, allowing the virtual network infrastructure to be managed like the
physical infrastructure.
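A VM-facing port profile is defined with type vethernet; the vmware port-group and state enabled commands publish it to vCenter as a port group. A sketch with hypothetical names and VLAN number:

    n1000v(config)# port-profile type vethernet WebServers
    n1000v(config-port-prof)# vmware port-group
    n1000v(config-port-prof)# switchport mode access
    n1000v(config-port-prof)# switchport access vlan 110
    n1000v(config-port-prof)# no shutdown
    n1000v(config-port-prof)# state enabled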
The figure depicts the basic architecture of the Cisco Nexus 1000V. The Cisco Nexus 1000V
uses a control path application programming interface (API) to communicate with the data
plane. This API simulates out-of-band (OOB) management and does not interfere with the data
plane. In the figure, the Cisco NX-OS VEM replaces the vSwitch for assigned VMNICs and
vNICs. The VEM is controlled by the VSM on an OOB channel (control VLAN). VMware
vCenter provides the API that the Cisco NX-OS VSM uses to control the VEM. The VSM
resides logically outside of the host, although the VSM can be a VM that resides on the host.
In Cisco Nexus 1000V deployments, VMware provides the vNIC and drivers, whereas the
Cisco Nexus 1000V provides switching and management of switching.
Note
A NIC in VMware is represented by the VMNIC interface. The VMNIC number is allocated during VMware installation.
Table: Cisco Nexus 1000V VLAN types and functions. The system VLANs are the control VLAN (VEM-to-VSM communications), the management VLAN, and the packet VLAN; separate VLANs carry vMotion and VM traffic.
The management VLAN is one of three mandatory VLANs for Cisco Nexus 1000V communications services. The management VLAN has two primary purposes:
Providing access to the VSM command-line interface (CLI) console via Secure Shell (SSH) or Telnet
Providing connectivity between the VSM and the vCenter server
In a traditional modular switch, the supervisor has physical connections to its line cards. In the Cisco Nexus 1000V, the VSM uses IP packets to communicate with the VEMs as remote line cards. Two separate channels are required:
Control VLAN: Extends AIPC, such as within a physical chassis (6k, 7k, MDS); carries low-level messages to ensure proper configuration of the VEM; maintains a 2-second heartbeat between the VSM and the VEM (6-second timeout); and maintains synchronization between primary and secondary VSMs.
Packet VLAN: Carries network packets, such as Cisco Discovery Protocol or IGMP control messages, from the VEM to the VSM. The packet VLAN allows Cisco Discovery Protocol packets to be exchanged between the VSM and VEM, port channel setup on VEM ports via Link Aggregation Control Protocol (LACP), and Internet Group Management Protocol (IGMP) join and leave messaging for multicast communications.
In this configuration example, three ESX servers are managed by one vCenter server. A standalone VSM has been installed and configured on host ESX 1. Hosts ESX 2 and ESX 3 have VEMs installed to provide data traffic forwarding for those hosts.
Four VLANs have been configured to support VSM-to-VEM, VSM-to-vCenter, and VSM-to-management network connectivity. VLAN 260 is used to support the control, packet, and management functions, and VLAN 20 is used for data traffic forwarding.
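After a topology like this is configured, both legs can be verified from the VSM. A sketch of the relevant commands (output omitted):

    n1000v# show svs domain          (domain ID, control and packet VLANs, mode)
    n1000v# show svs connections     (vCenter connection and operational status)
    n1000v# show module              (the VSM and any VEMs that have joined)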
VEM-to-VSM Communication
Like the VSM, each VEM has control and packet interfaces. The end user cannot manage or
directly configure these interfaces.
1. The VEM uses the opaque data that vCenter provides to configure the control and packet
interfaces with the correct VLANs. The VEM then applies the correct uplink port profile to
the control and packet interfaces, to establish communication with the VSM. After the
VSM recognizes the VEM, a new module is virtually inserted into the Cisco Nexus 1000V
virtual chassis. The VSM CLI notifies the network administrator that a new module has
powered on, much as happens with a physical chassis. The module assignment is
sequential, meaning that the VEM is assigned the lowest available module number between
3 and 66. When a VEM comes online for the first time, the VSM assigns the module
number and tracks that module by using the Universally Unique Identifier (UUID) of the
ESX server. This process helps ensure that if the ESX host loses connectivity or is powered
down for any reason, the VEM will retain its module number when the host comes back
online.
2. Cisco Nexus 1000V implements a solution called domain IDs. A domain ID is a parameter
of the Cisco Nexus 1000V and is used to identify a VSM and VEM that relate to each
other. The domain ID of the Cisco Nexus 1000V is defined when the VSM is first installed
and becomes part of the opaque data that is transmitted to vCenter. Each command the
VSM sends to any associated VEMs is tagged with this domain ID. When a VSM and
VEM share a domain ID, the VEM accepts and responds to requests and commands from
the VSM. If the VEM receives a command or configuration request that is not tagged with
the proper domain ID, then the VEM ignores that request. Similarly, if the VSM receives a
packet that is tagged with the wrong domain ID from a VEM, the VSM ignores that packet.
7-57
3. The VSM maintains a heartbeat with its associated VEMs. This heartbeat is transmitted at
2-second intervals. If the VSM does not receive a response within 8 seconds, the VSM
considers the VEM to be removed from the virtual chassis. If the VEM is not responding
because of a connectivity problem, then the VEM continues to switch packets in its last
known good state. When communication is restored between a running VEM and the VSM,
the VEM is reprogrammed, causing a slight (1 to 15 second) pause in network traffic. All
communication between the VSM and the VEM is encrypted with a 128-bit algorithm.
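The UUID-based module tracking described in step 1 is visible on the VSM. An illustrative, abbreviated sketch (the UUID shown is hypothetical):

    n1000v# show module vem mapping
    Mod  Status       UUID                                  License Status
    ---  -----------  ------------------------------------  --------------
    3    powered-up   93312881-309e-11db-afa1-0015170f51a8  licensed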
The term opaque data is applied to information that is encrypted with a Master Key that the
VSM and vCenter hold. The vCenter server imports the VSM digital certificate, to establish a
confidential channel to the VSM. As each VEM is enabled, vCenter exchanges a protected
element that includes configuration information that is authenticated by the encrypted domain
ID. This process ensures that only trusted vCenter servers push port profiles to the VEMs and
that vCenter allows communications only to trusted VSMs.
Policy-Based VM Connectivity
Port profiles are created and pushed to vCenter as port groups. In the example in the figure, a Web apps port profile specifies PVLAN 108 (isolated), a security policy that permits ports 80 and 443, a rate limit of 100 Mb/s, a QoS priority of medium, and a remote port mirror.
Network administrators configure VM port profiles that contain the policies for specific server
types. These port profiles are passed to vCenter as port groups and are assigned to individual
vNICs by the VMware administrator.
(Figure: policy options that can be applied to port profiles for VMs on an ESX server.)
The figure summarizes the various security and traffic flow options that can be applied to port profiles:
NetFlow collection
Rate limiting
Quality of service (QoS) marking (class of service [CoS]/differentiated services code point [DSCP])
Remote port mirror (ERSPAN)
(Figure: during a vMotion event between two ESX servers, the VSM and vCenter preserve network persistence for the migrating VM: the port profile, the VM port configuration state, flow statistics, the remote port mirror session, and VM monitoring statistics.)
As VMs migrate between hosts through manual or automatic processes, vCenter and the Cisco
Nexus 1000V VSM work together to maintain the state and port policies of the VMs. This
cooperation helps to ensure network security and consistency in a rapidly adapting VMware
environment.
(Figure: after the vMotion event, the VSM sends a network update to vCenter, enabling the security settings, network policy, and connectivity state to move with vMotion.)
Summary
This topic summarizes the key points that were discussed in this lesson.
The Cisco Nexus 1000V architecture provides control plane and data plane separation.
The control plane and management architecture is provided by the VSM, and the data traffic forwarding is implemented by using the VEM.
Lesson 4
Objectives
Upon completing this lesson, you will be able to describe the installation, connection, and
configuration of the Cisco Nexus 1000V VSM and compare the Cisco Nexus 1000V and the
Cisco Nexus 1010 Virtual Services Appliance. This ability includes being able to meet these
objectives:
Describe the configuration of the certificate exchange and connection from the VSM to
vCenter
Differentiate between the capabilities of the Cisco Nexus 1010 Virtual Services Appliance
and the Cisco Nexus 1000V software VSM
You must install the VMware Enterprise Plus license on each VMware ESX server, to run a
VEM on that host.
You can run the VSM in a virtual machine (VM) on a host that is running VMware ESX 3.5 or 4.0 or ESXi 4.0. You must run ESX or ESXi 4.0 to support the VEMs.
The VSM VM requires at least 2 GB of memory, two physical network interface cards (NICs) on the ESX host (one NIC for management and one for the rest of the traffic), and a 64-bit processor.
The upstream switches must be configured to trunk the VLANs that you intend to use for
control, management, packet, VMware vMotion, and VM traffic networks.
Last, you must configure a data-center object to contain your ESX servers in vCenter.
Port Group  VLAN        Adapter
First       Control     e1000
Second      Management  e1000
Third       Packet      e1000
The Cisco Nexus 1000V VSM requires at least three types of port groups to be created on an
existing or new vSwitch. These port groups correspond to the VSM communication VLANs, as
described previously.
To create a VMware port group that the VSM can use, follow these steps:
1. Choose the host on which the VSM will reside.
2. Choose the Configuration tab.
3. Choose Networking.
4. Choose Properties.
After you choose Next, verify the configuration of the control VLAN for the VSM. Click
Finish. Repeat for the management and packet port groups.
The control, management, and packet port groups must be created prior to Cisco Nexus 1000V installation.
In the traditional Cisco Nexus 1000V installation, you create different port groups and VLANs
for the control, management, and packet interfaces, as shown in the figure. These port groups
must exist before you install the Cisco Nexus 1000V.
There are several ways to install and deploy the VSM. The preferred method is to use an Open Virtual Appliance (OVA) file. This method provides the highest degree of guidance and error checking for the user.
Open Virtualization Format (OVF) files are standardized file structures that are used to deploy
VMs. You can create and manage OVF files by using the VMware OVF Tool.
OVA files are like OVF files; an OVA file packages the OVF descriptor and its related files into a single archive for easier distribution.
The VSM VM requires a 3 GB disk and 2 GB of memory.
To create the VM, right-click the host on which you want to install the VSM, then choose New Virtual Machine.
Choose Typical, then click Next.
In the dialog box, name the VM and choose a data center. Click Next.
Ensure that you have configured the correct virtual disk size, per VSM requirements. Click
Next.
So that you can modify the remainder of the settings for the VSM, check Edit the Virtual
Machine Settings before completing the configuration. Click Continue.
Change the memory size to 2048 MB, because the VSM requires a minimum of 2 GB of memory. Then
choose New NIC.
Verify the configuration and click Finish. Repeat this process to add two additional NIC
adapters, as required by the VSM.
The NIC adapters should be assigned in ascending order:
Control: NIC0
Management: NIC1
Packet: NIC2
After the VM has been created for the VSM, choose the VSM. Click Edit Settings. Verify that
the control, management, and packet adapters are assigned in ascending order. The VSM
operating system will ensure that the appropriate traffic is assigned to each adapter, based on
lowest-to-highest numbering.
To boot the VSM from the Cisco Nexus 1000V installation media, attach the .iso boot file to the VM. Follow these steps:
1. Choose CD/DVD Drive.
2. Choose Datastore ISO File.
3. Browse to the Cisco Nexus 1000V VSM .iso file.
4. Click OK.
To access the VSM console and to power up the VM, right-click the VSM VM. Choose Open
Console. When the console screen opens, click the Power On icon (the green triangle).
After the virtual machine is powered up, a boot menu appears. Choose Install Nexus 1000V.
This choice will bring up the new image.
Initial Setup
When prompted, enter and confirm the administrator password. Enter yes to enter basic configuration.

---- System Admin Account Setup ----
Enter the password for "admin": Qwer12345
Confirm the password for "admin": Qwer12345
[#########################################] 100%

---- Basic System Configuration Dialog VDC: 1 ----
This setup utility will guide you through the basic configuration of the system. Setup configures only enough connectivity for management of the system.
Press Enter at anytime to skip a dialog. Use ctrl-c at anytime to skip the remaining dialogs.
Would you like to enter the basic configuration dialog (yes/no): yes
The Cisco Nexus 1000V runs the Cisco Nexus Operating System (NX-OS). As with most Cisco devices, a basic setup script is provided when booting a new switch, after issuing a write erase and reload command sequence, or after issuing the setup command at the command-line interface (CLI).
The basic setup script leads the administrator through basic connectivity options and switch
parameters.
Assign an IP address, netmask, and default gateway for the management interface. By default,
Telnet is enabled and Secure Shell (SSH) is disabled. If you enable SSH, then you must
configure the security settings according to your needs.
You can configure Network Time Protocol (NTP) to ensure that the VSM clock is
synchronized with an external time source.
You will also assign VLAN IDs for the control and packet interfaces. Be careful to use the
control and packet VLANs that you configured in vCenter before the Cisco Nexus 1000V
installation. This simplistic example uses VLAN 1 for all the traffic (control, management, and
packet); however, in a production environment, you should use individual VLANs for each
type of traffic (control, management, and packet).
After the dialog is complete, you are presented with a summary of all the configurations that you entered. Review the configuration, then enter n to save it and exit the wizard, or enter y to make changes.
(Figure: browsing to the VSM management IP address, for example http://10.1.1.10, to download the plug-in.)
For the VSM to connect to the vCenter server, a plug-in must be installed. You can download
the plug-in for the vCenter server from the Cisco Nexus 1000V. After installing the Cisco
Nexus 1000V, open an Internet browser. Navigate to the IP address of the Cisco Nexus 1000V
VSM and download the file cisco_nexus1000v_extension.xml.
VSM Plug-In
The VSM plug-in is an authentication method for the VSM. Without the plug-in installed, the VSM is not authorized to communicate with vCenter.
Step 2: Right-click the white space and click New Plug-in.
Step 3: Choose the plug-in downloaded from the VSM, and click Open.
Choose Register Plug-in to complete the process of registering the Cisco Nexus 1000V as a plug-in of vCenter.
Verify Connectivity
Ping the default gateway to verify connectivity. The first ping is typically lost because the switch uses Address Resolution Protocol (ARP) to learn the gateway MAC address.
After you complete the initial VSM setup and vCenter plug-in registration, you should verify
Layer 3 connectivity to the default gateway of the management network, by issuing the ping
command.
You can use additional pings to validate connectivity to the vCenter server and the ESX hosts
that will be added to the Cisco Nexus 1000V switch instance.
To connect from the VSM to the vCenter server, an SVS connection must be created. The SVS connection requires these parameters: a connection name, the management protocol (vmware-vim), the IP address of the vCenter server, and the name of the vCenter datacenter object.
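A minimal sketch of the SVS connection configuration follows; the connection name, vCenter IP address, and datacenter name are hypothetical values:
VSM-1(config)# svs connection vcenter
VSM-1(config-svs-conn)# protocol vmware-vim
VSM-1(config-svs-conn)# remote ip address 10.1.1.10
VSM-1(config-svs-conn)# vmware dvs datacenter-name Pod1DC
VSM-1(config-svs-conn)# connect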
After the VSM is connected to vCenter, the DVS appears in the networking inventory panel of
vCenter. You should see the port groups that you configured for control, management, and packet
traffic. Some other port groups are also created by default: One is the Unused_Or_Quarantined
DVUplinks port group (which connects to physical NICs) and the other is the
Unused_Or_Quarantined VMData port group (which is VM facing).
If you run your VSM in a high-availability configuration, then you can configure the secondary
VSM as you did the primary VSM. There are two exceptions: You need a unique name for the
secondary VSM, and you must define its role as secondary to the primary VSM.
(Figure: primary and standby VSMs in state synchronization, with a VEM on each of two ESX hosts, all coordinated through vCenter.)
The Nexus 1000V supports high availability by using primary and standby supervisors. Only
one active running configuration is managed by connecting to the primary VSM. The primary
and standby VSMs are state synchronized. In the event of a manual or automatic switchover,
the standby supervisor assumes the role of the primary, whereas the former primary (now the
standby) attempts to complete the state synchronization process after it reboots.
During initial setup of the Cisco Nexus 1000V, you need to configure only one management IP address for remote management. The primary supervisor owns the management IP address because that supervisor is responsible for the active running configuration.
Supervisor Modes
In dual supervisor mode, the active (primary) supervisor is the supervisor in slot 1; the standby (secondary) supervisor resides in slot 2.
All services run on the active supervisor and exist in standby mode on the second supervisor. These services are maintained in sync for failover.
All management and configuration is performed on the active supervisor.
Configuration of the primary and secondary roles can be performed during initial switch setup.
After the initial setup, the primary supervisor appears in slot 1 (or module 1, if the show
module command is issued from the CLI). The standby, if one is created, appears in slot 2 (or
module 2, if the show module command is issued from the CLI).
If only a standalone supervisor has been configured, then module 2 does not appear when the
show module command is issued from the CLI.
The output of the show system redundancy status command shows the primary VSM as the supervisor that is associated with the running configuration. The standby supervisor appears in the HA standby state. This output confirms the state-synchronized mode of operation when dual VSMs are installed and configured.
The output of the show module command shows the primary supervisor in slot 1 and the high-availability standby in slot 2. The first ESX host that is added to the Cisco Nexus 1000V switch instance appears in slot 3.
(Figure: Cisco Nexus 1010 virtual service blades, including the NAM VSB and the Virtual Security Gateway [VSG].)
The Cisco Nexus 1010 Virtual Services Appliance is a member of the Cisco Nexus 1000V
Series Switches. This appliance hosts the Cisco Nexus 1000V VSM and supports the Cisco
Nexus 1000V Network Analysis Module (NAM) Virtual Service Blade (VSB), to provide a
comprehensive solution for virtual access switching. Because the Cisco Nexus 1010 provides
dedicated hardware for the VSM, the platform makes virtual access switch deployment much
easier for the network administrator. With its support for additional VSBs, such as the Cisco
Nexus 1000V NAM VSB, the Cisco Nexus 1010 is a crucial component of a virtual access
switch solution.
With the introduction of the Cisco Nexus 1010, the VSM now has dedicated hardware.
Therefore, network administrators can install and configure virtual access switches like they
install physical switches. The dedicated VSM hardware is especially helpful in a data-center
power-up, because there is no dependency on finding server resources for the VSM. Thus, the
Cisco Nexus 1010 enables network administrators to manage the Cisco Nexus 1000V virtual
access switch just like other physical switches and to scale server-virtualization deployments.
The figure summarizes the hardware characteristics, based on the Cisco UCS C200 Series
Rack-Mount Server architecture.
The Cisco Nexus 1010 runs Cisco NX-OS. The appliance supports as many as four instances of
the VSM, running in high-availability mode (active-standby pairs). The appliance also supports
automatic restart of the VSM, as well as automatic placement of the active VSM on the high-availability pair.
The Cisco Nexus 1010 also supports the VSB and NAM, both of which are licensed separately.
(Figure: Cisco Nexus 1010 internal architecture with the NAM VSB.)
This figure shows the internal architecture of the Cisco Nexus 1010. The Cisco Nexus 1010
contains Cisco Nexus 1010 Manager, which is based on Cisco NX-OS. The appliance can host
as many as four VSMs and supports the Cisco Nexus 1000V NAM VSB. Therefore, in addition
to hosting Cisco Nexus 1000V VSMs, the Cisco Nexus 1010 becomes a platform for other
networking services. The appliance will also support other VSBs in the future.
Because the Cisco Nexus 1010 uses the same VSM as the Cisco Nexus 1000V Series, the Cisco
Nexus 1000V Series solution with Cisco Nexus 1010 has all the features of the Cisco Nexus
1000V Series. Because Cisco Nexus 1010 Manager is based on Cisco NX-OS, the network
administrator has a familiar interface for installing and configuring Cisco Nexus 1010. Cisco
Nexus 1010 Manager also supports Cisco NX-OS high availability, allowing a standby Cisco
Nexus 1010 appliance to become active if the primary Cisco Nexus 1010 appliance fails.
Cisco Nexus 1000V Series Switches are intelligent VM access switches, running Cisco NX-OS, that are designed for VMware vSphere environments. Operating inside the VMware ESX hypervisor, the Cisco Nexus 1000V Series supports Cisco Virtual Network Link (VN-Link) server-virtualization technology. This technology provides the following:
Policy-based VM connectivity
When server virtualization is deployed in the data center, virtual servers typically are not
managed like physical servers. Server virtualization is treated as a special deployment. This
treatment leads to longer deployment times and requires a greater degree of coordination
among server, network, storage, and security administrators. With the Cisco Nexus 1000V
Series, you can have a consistent networking feature set and provisioning process, all the way
from the VM access layer to the core of the data-center network infrastructure. Virtual servers
can now use the same network configuration, security policy, diagnostic tools, and operational
models as their physical-server counterparts that are attached to dedicated physical network
ports. Virtualization administrators can access predefined network policy that follows mobile
VMs to ensure proper connectivity. This approach saves valuable time that administrators can
use to focus on VM administration.
The rear panel image of the Cisco Nexus 1010 appliance illustrates the management
connectivity that is provided by the serial port that is used for serial over LAN (SoL), as well as
the management port, which provides access to the Cisco Integrated Management
Controller (IMC).
Network connectivity may be provided by using the two Gigabit Ethernet LAN on
Motherboard (LOM) ports or the four Gigabit Ethernet ports on the Broadcom PCI NIC.
Within the Cisco Nexus 1010 CLI, these options are called network options and determine on which interfaces the different traffic types are carried. There are four types of traffic on the system:
Management
Control
Packet
Data
The figure shows the rear panel of the Cisco Nexus 1010, with the physical connectivity
provided by the four-port, Broadcom Gigabit Ethernet NIC. The administrator has flexibility in
configuring connectivity of the control, management, packet, and data traffic types, by using
the network options command within the Cisco Nexus 1010 CLI.
The VSMs on the two Cisco Nexus 1010 appliances should back up each other: the primary VSM should be created on one Cisco Nexus 1010 appliance, and the secondary VSM on the second appliance. Cisco Nexus 1010 Manager load-balances the active and standby VSMs across the two appliances in the high-availability pair.
The figure shows the high availability that is built into Cisco Nexus 1010 Manager. Because
Cisco Nexus 1010 Manager is built on Cisco NX-OS, it offers active-standby redundancy. If
one Cisco Nexus 1010 Manager instance fails, the other Cisco Nexus 1010 Manager instance
automatically takes over and continues operation. In addition, Cisco Nexus 1010 Manager
automatically places the active VSM to balance the distribution and reduce the potential fault
domain.
The VSB provides expansion capabilities so that new features can be added in the future. Cisco
Nexus 1010 Manager enables customers to install, configure, and manage VSBs. If the VSB
stops, Cisco Nexus 1010 Manager automatically restarts it. The Cisco Nexus 1000V NAM
VSB takes advantage of these capabilities to provide a robust, complete solution for the
virtualized data center.
<Output Cut>
virtual-service-blade:
  HA Oper role: ACTIVE
  Status:       VSB POWERED ON
  Location:     PRIMARY
  SW version:   4.0(4)SV1(3)
The output of the show virtual-service-blade command that is issued from the Cisco Nexus
1010 primary VSM CLI shows parameters that are associated with the VSB. The hardware
version, hostname, and management IP address of the primary VSM are listed, as are the control,
management, and packet VLANs.
The operational status is powered on and the high-availability operational role is active.
Architectural Comparison
(Figure: the Cisco Nexus 1000V VSM deployed as a VM on a vSphere server [one VSM per switch instance], compared with the Cisco Nexus 1010, which hosts as many as four VSMs and connects directly to the physical switches.)
The Cisco Nexus 1000V VSM installs as a VM on an ESX or ESXi host and supports high
availability. The VSM communicates directly with the VEM that is installed on every ESX or
ESXi host that is a part of the Cisco Nexus 1000V instance. A single instance of the Cisco
Nexus 1000V VSM can support as many as 64 ESX hosts that are managed within a single
data-center object within vCenter. Additional Cisco Nexus 1000V instances may be added to
expand beyond 64 hosts.
The Cisco Nexus 1010 VSM is supported in hardware as a virtual appliance. The VSM
connects directly to the physical access switch and requires the proper configuration of control,
management, and packet VLANs. The Cisco Nexus 1010 VSMs communicate directly with the
VEMs, which are installed on every ESX or ESXi host that is a part of the Cisco Nexus 1010
instance.
The Cisco Nexus 1010 can support as many as four high-availability instances (primary and
standby VSMs), scaling to manage as many as 256 ESX or ESXi hosts.
Feature Comparison
(Figure: feature comparison between the VSM deployed as a VM [a software-only deployment] and the VSM hosted on the Cisco Nexus 1010.)
Summary
This topic summarizes the key points that were discussed in this lesson.
The vSwitch on which the VSM resides must be configured with three port groups: control, management, and packet.
A VSM VM must be configured as a 64-bit Linux VM with 2 GB of memory and a 3 GB disk drive.
Initial configuration of the VSM is performed via VMware console access;
this access is like console cable access for physical switches.
A plug-in XML file must be downloaded from each VSM and added to
vCenter before VSM attachment.
VSMs can be run in a redundant mode as an active/standby configuration.
The Cisco Nexus 1010 Virtual Services Appliance is a hardware device
that offers expanded capabilities over the Cisco Nexus 1000V, for larger
virtualized environments.
Lesson 5
Objectives
Upon completing this lesson, you will be able to describe the networking capabilities of the
Cisco Nexus 1000V, including the use of port profiles, VEM installation, and VSM
configuration backup. This ability includes being able to meet these objectives:
Describe the vMotion process and Cisco Nexus 1000V port profile mobility
The Cisco Nexus 1000V uses a new concept called port profiles. Port profiles are used to
provide port configuration for the ports that are assigned to the profile. There are two types of
port profiles: virtual machine (VM) and uplink. Uplink profiles provide all outbound
communication for VMs, as well as all VSM-to-VEM communication on the control and
packet VLANs. VM port profiles provide the configuration for VM virtual network interface
card (vNIC) ports.
Port profiles are used as a central configuration point for multiple ports. By using port profiles,
groups of ports that require the same network configuration can be configured rapidly. When a
port profile has been created and configured, ports can be assigned to the profile and receive the
configuration that is contained in that profile.
Port profiles tie directly to VMware port groups on the distributed virtual switch (DVS) that the
Cisco Nexus 1000V controls. After a profile is created, a VMware port group name can be
assigned to the profile. By default, the port group name is the same as the port profile name, but
this setting is configurable. When a port profile is enabled, the corresponding port group is
created on the DVS within VMware vCenter Server.
VMware administrators assign ports to a port group from the VMware vSphere client. When a
port is made a member of a port group, the corresponding port profile configuration is applied
to the port on the VEM. Any configuration changes to the port profile are immediately assigned
to member ports and are stored in the Cisco Nexus 1000V running configuration file.
When a port profile is created and enabled, a corresponding port group is created in VMware.
By default, this port group has the same name as the profile, but this name is configurable.
VMware administrators use the port profile to assign network settings to VMs and uplink ports.
When a VMware ESX host port (a physical VM NIC, or VMNIC) is added to a DVS that the
Cisco Nexus 1000V switch controls, an available uplink port group is assigned and those
settings are applied. When a NIC is added to a VM, an available VM port group is assigned and
the network settings that are associated with that profile are inherited.
A NIC in VMware is represented by a VMNIC interface. The VMNIC number is allocated
during VMware installation.
Uplink profiles are used to provide outbound connections from the VEM to the VSM, as well
as to carry VM data traffic to the network. Uplink profiles are used to configure VMNICs or
physical host ports. These profiles typically contain information such as trunking and port
channel behavior.
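A minimal sketch of an uplink port profile follows; the profile name, trunked VLAN range, and system VLANs are hypothetical values:
VSM-1(config)# port-profile type ethernet SystemUplink
VSM-1(config-port-prof)# vmware port-group
VSM-1(config-port-prof)# switchport mode trunk
VSM-1(config-port-prof)# switchport trunk allowed vlan 100-105
VSM-1(config-port-prof)# system vlan 100,101
VSM-1(config-port-prof)# no shutdown
VSM-1(config-port-prof)# state enabled
The system vlan entries keep the control and packet VLANs forwarding even before the VEM has established full communication with the VSM.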
VM Profiles: Type vEthernet
VM profiles provide the configuration for VM ports.
VM vNIC ports are assigned to a VM port profile.
VM port profiles require an uplink port profile to access the physical network.
VM profiles can be configured in a VLAN with no corresponding uplink profile, to create internal VM networks.
VM profiles are used to provide configuration for VMs. These profiles require an uplink profile
to access the physical network.
VM profiles can be configured without a corresponding uplink profile, to create internal VM
networks. If a VM profile is created by accessing a VLAN that is not trunked to the physical
network, then the assigned VMs can communicate only with other VMs that are assigned to the
profile in the same host. This configuration is like creating internal-only VMware vNetwork
Standard Switches (vSwitches) or port groups within a standard VMware networking
environment.
Port profiles can be enabled or disabled as a whole. The configuration of a port profile is
applied to assigned ports only if the profile is in an enabled state. Additionally, no VMware
port group is created for disabled port profiles.
The enabled and disabled state of a port profile is separate from the shut and no shut parameters within the profile. The enabled and disabled state applies to the profile itself, whereas shut and no shut apply to the port configuration of member ports.
(Figure: two VM port profiles, DB and WebApp, each applied to multiple VMs.)
You should use port profiles for all port configurations on the Cisco Nexus 1000V. Using port
profiles for port configurations helps to ensure consistency among ports that have similar
characteristics and speeds the deployment process. Any configuration that is performed on an
individual port overrides the port profile configuration. Configuring individual ports is typically
necessary only for testing and troubleshooting purposes.
Port profiles are a key element of port configuration on the Cisco Nexus 1000V. VM ports and
uplink ports are configured as port profiles. Port profiles are then assigned to virtual machines
or virtual switches.
Preventing VM Sprawl
A default state of shutdown within a port profile can assist with the prevention of VM sprawl. In this state, when a new VM is assigned to a VMware port group, its port defaults to a shut state until enabled by a network administrator. This state can prevent new VMs from gaining network access without network administrator approval.
VSM-1(config)# port-profile type vEthernet DataProfile
VSM-1(config-port-prof)# description VM Traffic
VSM-1(config-port-prof)# vmware port-group DataProfile
VSM-1(config-port-prof)# switchport mode access
VSM-1(config-port-prof)# switchport access vlan 102
VSM-1(config-port-prof)# shutdown
VSM-1(config-port-prof)# state enabled
VSM-1(config-port-prof)# exit
VM sprawl is a new data-center challenge that has come about with virtualization. Virtual
servers typically make new server deployment very easy. As such, servers may be deployed
more than would be typical in a standard physical architecture. By using the port profiles on the
Cisco Nexus 1000V, VM sprawl can be minimized by preventing VMs from accessing physical
network resources without network administrator intervention.
By dictating a port profile default port state of down, a network administrator can ensure that
new VMs have access to the network only with approval from network teams. After a new VM
is brought online, the VMware administrator needs to contact the network team to enable the
port. Until the port is enabled, the virtual server exists but has no outbound network
connectivity.
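Once approval is granted, the network administrator brings the port up from the VSM CLI; a minimal sketch follows (vethernet 5 is a hypothetical interface number):
VSM-1# configure terminal
VSM-1(config)# interface vethernet 5
VSM-1(config-if)# no shutdown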
VLAN Configuration
This topic discusses configuring VLANs.
VLANs are used to separate traffic types on a single physical interface. The Cisco Nexus 1000V supports VLAN options similar to physical switches.
VSM-1(config)# vlan 100
VSM-1(config-vlan)# ?
  exit
  ip              Configure IP features
  media
  name
  no
  private-vlan
  service-policy
  shutdown
  state
The VLAN command is used to define a VLAN on the Cisco Nexus 1000V Virtual Supervisor
Module in a manner similar to VLAN creation on a physical switch. When in VLAN
configuration mode, additional options such as private VLANs, an IP address for a switched
virtual interface (SVI), or operational state can be configured.
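A minimal sketch, assuming a hypothetical VLAN 150 named WebTraffic:
VSM-1(config)# vlan 150
VSM-1(config-vlan)# name WebTraffic
VSM-1(config-vlan)# state active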
After you configure VLANs on the switch, you can assign port profiles to use them. Ports
inherit VLAN settings from the port profile. Two port modes are typically used for VLANs:
Trunk ports: Trunk ports carry traffic for multiple VLANs. By default, these ports carry
data for all VLANs, but the list of allowed VLANs can be pruned back to minimize traffic
or increase security.
Access ports: Access ports are members of a single VLAN. Access ports are typically
configured for the VLAN of which they are members. In most cases, single server instances
(one operating system and one application) running on bare metal or virtual hardware are
assigned to access ports.
Because the Cisco Nexus 1000V acts as a fully functional switch within the virtual
environment, any VLANs that the attached ports use need to be created on the Cisco Nexus
1000V, to facilitate switching for those ports. VLAN creation is done from the configuration
level of the Cisco Nexus Operating System (Cisco NX-OS) command-line interface (CLI).
In the configuration example that the figure shows, the secondary VLANs 300, 301, and 302
are associated with the primary VLAN 3. In addition, the port profile specifies a promiscuous
port, which will manage traffic flow into and out of the private VLAN association.
The configuration has been applied to the port profile named pvlanprof and may be assigned to
vNIC interfaces.
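A minimal sketch of this configuration follows; it is a reconstruction based on the description above, not a copy of the figure, and it assumes that each secondary VLAN (300 through 302) is also defined as an isolated or community private VLAN:
VSM-1(config)# vlan 3
VSM-1(config-vlan)# private-vlan primary
VSM-1(config-vlan)# private-vlan association 300-302
VSM-1(config-vlan)# exit
VSM-1(config)# port-profile pvlanprof
VSM-1(config-port-prof)# switchport mode private-vlan promiscuous
VSM-1(config-port-prof)# switchport private-vlan mapping 3 300-302
VSM-1(config-port-prof)# vmware port-group
VSM-1(config-port-prof)# no shutdown
VSM-1(config-port-prof)# state enabled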
The output of the show port-profile name pvlanprof command can be used to validate the configuration and to ensure the correct secondary-to-primary VLAN association and promiscuous port assignment.
The output indicates that the pvlanprof port profile uses VLAN 3 as the primary (promiscuous) VLAN. VLANs 300 and 301 are secondary VLANs. Depending on the port profile that is created for the secondary VLANs, they can be isolated or community ports.
The figure shows how to create a port profile for uplink ports.
The figure shows the next steps in creating a port profile for uplink ports.
The figure shows how to verify the port profile configuration within vSphere.
The figure describes the process of creating a VMware data port profile.
The figure shows the process of verifying VMware data port profile configuration. From within
vSphere, choose Inventory > Networking within the navigation pane. The network inventory
objects appear, including the newly created port profile named pod1VMdata. The vSphere
Recent Tasks window shows that the creation of the new port profile has been completed
successfully.
Port channels are used to bundle links, to increase availability. The Cisco Nexus 1000V uses
VMNICs to create port channels to upstream switches. These port channels allow VM data that
is destined for the physical network to be load-balanced actively across available links. Without
port channels, NIC teaming from the Cisco Nexus 1000V is performed in an active-passive
fashion.
Port channels create a single logical link that comprises multiple physical links. When a port
channel has been selected as an egress interface, the switch must choose a physical link, within
the port channel, on which to transmit data. The process of selecting a physical link is based on
load-balancing algorithms that provide fairness among the links. The Cisco Nexus 1000V
supports several load-balancing algorithms, the most granular of which is source and
destination TCP/User Datagram Protocol (UDP) port number. This support means that using
the source and destination port number will provide the greatest fairness when distributing
traffic load across physical links.
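As a minimal sketch, the hash algorithm is selected globally on the VSM; the keyword shown here selects source and destination IP address and Layer 4 port and is an assumption that may vary slightly by release:
VSM-1(config)# port-channel load-balance ethernet source-dest-ip-port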
(Figure: four Cisco VEMs, each building its own port channel to the upstream switches.)
The Cisco Nexus 1000V does not support port channels across multiple hosts. Each VEM on
each host requires its own port channel. These port channels can exist between the VEM and a
single upstream switch or between the VEM and two upstream switches. Multiple port channels
are supported on a single VEM or VSM.
(Figure: two VEMs on VMware ESX hosts connecting VMs to the upstream switches through port channels.)
The Cisco Nexus 1000V supports two types of port channels: standard and virtual port channel host mode (vPC-HM).
Standard is the most common type of port channel and requires all members of the port channel
to belong to the same device on each side of the port channel. In this configuration, all
VMNICs that are members of the port channel must be cabled to the same upstream switch.
In vPC-HM, VMNIC port channel members can be cabled to two upstream switches, for
redundancy. This mode has the following requirements:
The uplinks from the host go to multiple upstream switches, but the load-balancing
algorithm is source MAC-based.
Cisco Discovery Protocol is enabled on the upstream switch.
The Cisco Nexus 1000V port channel is configured as enabled for Cisco Discovery
Protocol.
Cisco Discovery Protocol information allows the Cisco Nexus 1000V to address issues that
typically occur in asymmetric configurations:
MAC address flapping: The Cisco Nexus 1000V prevents MAC address flapping, by
using MAC-based load balancing.
Duplicate packets that are received during floods, broadcasts, and multicasts: The
Cisco Nexus 1000V checks for a match between the subgroup ID, where a packet was
received, and the destination subgroup ID, and accepts only matching packets.
Packets that are sent back to the host during floods and broadcasts: The Cisco Nexus
1000V verifies that ingress packets are not the same packets that it sent out.
Before beginning this procedure, you must confirm or complete these tasks:
In vPC-HM, the port channel member ports connect to multiple upstream switches, and the
traffic must be managed in separate subgroups.
When you create a port channel, an associated channel group is automatically created.
vPC-HM is supported only in port channels that are configured in the on mode. vPC-HM is
not supported for Link Aggregation Control Protocol (LACP) channels that use the active
and passive modes.
You need to know whether Cisco Discovery Protocol is configured in the upstream
switches. If configured, then Cisco Discovery Protocol creates a subgroup for each
upstream switch to manage its traffic separately. If Cisco Discovery Protocol is not
configured, then you must manually configure subgroups to manage the traffic flow on the
separate switches.
If you are using Cisco Discovery Protocol with the default Cisco Discovery Protocol timer
(60 seconds), then links that advertise that they are in service and then out of service in
quick succession can take as much as 60 seconds to be returned to service.
If any subgroup has more than one member port, then you must configure a port channel
for the member ports of each subgroup on the upstream switch.
If vPC-HM is not configured when port channels connect to multiple upstream switches,
then the VMs behind the Cisco Nexus 1000V receive duplicate packets from the network
for unknown unicast floods, multicast floods, and broadcasts.
The subgroup command that is used in this procedure overrides any subgroup configuration
that is specified in the port profile that the port channel interface inherits.
Port channels should be configured by using port profiles rather than at the individual port
level. This method ensures consistent configuration across hosts and compatibility for advanced
functionality, such as VMware vMotion, high availability, and Distributed Resource Scheduler
(DRS).
For standard port channels, enter the channel-group auto mode on command under the port
profile configuration. When VMNICs (physical NICs on ESX hosts) are added to this profile,
port channels are automatically created on each host, and additional VMNICs are added to the
channel. A port channel cannot span multiple hosts, so an individual port channel is created
from the VEM of each ESX host.
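A minimal sketch of an uplink port profile that builds standard port channels automatically follows; the profile name and VLAN range are hypothetical values:
VSM-1(config)# port-profile type ethernet UplinkPC
VSM-1(config-port-prof)# switchport mode trunk
VSM-1(config-port-prof)# switchport trunk allowed vlan 100-105
VSM-1(config-port-prof)# channel-group auto mode on
VSM-1(config-port-prof)# no shutdown
VSM-1(config-port-prof)# state enabled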
The figure illustrates the verification of the uplink port profile standard port channel
configuration.
vPC-HM port channels rely on source MAC address load balancing and subgroups, to prevent
MAC address instability and multiple frame copies on upstream switches. Loops are prevented
automatically because frames that are received on uplink ports are not forwarded to other
uplink ports.
MAC table instability: In asymmetric mode, the Cisco Nexus 1000V relies on source
MAC hashing to avoid MAC address instability. Source MAC hashing ensures that traffic
from a source VM always uses the same uplink port in asymmetric configurations. This
method prevents upstream switches from having issues that can be caused when a source
MAC is seen on multiple switches and from having to make constant MAC table changes.
Subgroups: Using Cisco Discovery Protocol, the Cisco Nexus 1000V can analyze traffic
and create subgroups for the separate upstream switches in a vPC-HM port channel. This
method allows the Cisco Nexus 1000V to prevent multiple copies of the same frame that
are received on separate ports from being forwarded to a VM.
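To enable vPC-HM as described above, the channel-group command takes the sub-group cdp keyword so that subgroups are created automatically from Cisco Discovery Protocol information; a minimal sketch with a hypothetical profile name follows:
VSM-1(config)# port-profile type ethernet VpcHmUplink
VSM-1(config-port-prof)# switchport mode trunk
VSM-1(config-port-prof)# channel-group auto mode on sub-group cdp
VSM-1(config-port-prof)# no shutdown
VSM-1(config-port-prof)# state enabled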
Add a Host
This procedure assumes that VMware Update Manager is being used or that the VEM has previously been installed on the host. To manually install a VEM on a VMware ESX host, refer to the Cisco Nexus 1000V VEM Software Installation and Upgrade Guide.
The VMware administrator adds hosts to a VSM. The administrator uses the data-center
Networking view to assign hosts and their VMNICs to a Cisco Nexus 1000V DVS.
The procedure assumes that VMware vCenter Update Manager is being used or that the VEM
has been installed on the host.
If you need to manually install a VEM on an ESX host, refer to the Cisco Nexus 1000V VEM
Software Installation and Upgrade Guide.
On the Add Host to Distributed Virtual Switch screen, follow the steps shown in the figure, then click Next.
After you have added the host, verify the configuration, then click Finish. Repeat these steps
for all the hosts that need to be added.
Mod  Ports  Module-Type                Model       Status
---  -----  -------------------------  ----------  --------
1    0      Virtual Supervisor Module  Nexus1000V  active *
3    248    Virtual Ethernet Module    NA          ok
4    248    Virtual Ethernet Module    NA          ok
...
Mod  MAC-Address(es)                         Serial-Num
---  --------------------------------------  ----------
1    00-19-07-6c-5a-a8 to 00-19-07-6c-62-a8  NA
3    02-00-0c-00-03-00 to 02-00-0c-00-03-80  NA
4    02-00-0c-00-04-00 to 02-00-0c-00-04-80  NA
...
After a host has been added and the VEM has been installed successfully, the VEM appears as
a module on the VSM CLI, like modules that are added to a physical chassis.
Note: Slots 1 and 2 are reserved for the VSMs. New host VEMs start from slot 3.
Status      UUID
----------  ------------------------------------
powered-up  34343937-3638-3355-5630-393037415833
powered-up  34343937-3638-3355-5630-393037415834
The show module vem map command shows the status of all VEMs, as well as the
Universally Unique Identifier (UUID) of the host on which the VEM runs. This command can
be used for verification purposes.
To add a VM to a VSM port group, right-click the VM, then choose Edit Settings.
The VMware administrator adds VMs to the Cisco Nexus 1000V DVSs. The port group
becomes available for selection as a network label within the VM configuration, after the port
profile has been created and the corresponding port group has been pushed to vCenter.
The figure shows the various copy commands that can be used within the Cisco Nexus 1000V
CLI to manage configuration file images and Cisco NX-OS kickstart and system images. These
commands can be issued from EXEC mode.
The first command illustrates the command structure to copy a file from a source to a
destination location. The second command saves a copy of the active running configuration to a
remote switch. The third command copies a file in the bootflash of the active supervisor to the
bootflash of the standby supervisor.
The first command in this figure can be used to copy the active running configuration to the
bootflash. The saved file will survive a system reset (write erase and reload command sequence).
The second command downloads a copy of the system image file from a Secure Copy Protocol
(SCP) server to the bootflash. The third command copies a script file from a Secure File Transfer
Protocol (SFTP) server to the volatile file system.
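Minimal sketches of these three operations follow; the server address, user name, and file names are hypothetical values:
VSM-1# copy running-config bootflash:backup-config
VSM-1# copy scp://admin@10.1.1.20/nexus-1000v-image.bin bootflash:
VSM-1# copy sftp://admin@10.1.1.20/myscript.txt volatile: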
Copy the file samplefile from the root directory of the bootflash file system to the mystorage directory:
VSM-1# copy bootflash:samplefile bootflash:mystorage/samplefile
The first command in this figure places a backup copy of the active running configuration on
the bootflash. The second command saves a file on the bootflash root directory to a different
bootflash directory. The third command copies the source file to the active running
configuration and configures the switch, line by line, as the file commands are compiled.
Policy-Based VM Connectivity
Policy definition supports:
VLAN and PVLAN settings
Rate limiting
QoS marking (CoS/DSCP)
Remote port mirror (ERSPAN)
(Figure: port profiles defined on the VSM are pushed through vCenter and applied to VM vNICs on the ESX server.)
Port profiles are created within the Cisco Nexus 1000V CLI by the network administrator and
are pushed out to vCenter. On vCenter, the profiles appear in the vCenter inventory as a port
group.
These port profiles represent policies that include VLANs and PVLANs, ACLs, Layer 2 port
security, NetFlow collection, rate limiting, quality of service (QoS) marking that uses either the
differentiated services code point (DSCP) or class of service (CoS) values, and Encapsulated
Remote Shared Port Analyzer (ERSPAN).
(Figure: the WEB Apps, HR, DB, and Compliance port profiles appear as available port groups in vCenter; WEB Apps is assigned to a vNIC on VM 1 of server 1.)
In this configuration, a port profile named Web Apps has been created and assigns a security
policy for TCP ports 80 and 443 to the isolated secondary PVLAN 108. The profile rate-limits
the port to 100 Mb/s with a QoS medium priority level and enables remote port mirroring.
This policy has been state enabled and pushed out to vCenter, where it appears within the vCenter
inventory. The policy has been assigned to a vNIC within VM 1 on server 1.
(Figure: during vMotion between two ESX hosts, the port policy and flow statistics move with the VM.)
If a manual or automatic vMotion event occurs, the Cisco Nexus 1000V is notified of the
process. During VM replication on the secondary host, the primary VSM copies the port state
to the new host.
The mobility properties that are retained and copied to the new host include the port policy,
interface state and counters, flow statistics, and remote Switched Port Analyzer (SPAN)
session.
(Figure: after vMotion completes, the VM runs on server 2 and its port state has moved with it.)
After the server administrator, operating within vSphere, assigns these port profiles to either
uplink or vNIC ports, these policies survive both manual and automatic vMotion events.
When the vMotion process is completed, the port on the new ESX host is made active and the
VM MAC address is announced to the network.
Summary
This topic summarizes the key points that were discussed in this lesson.
Port profiles are used to configure multiple ports globally.
Port profiles reduce administrative time and minimize
opportunities for error.
After a port profile has been created, it is passed to vCenter; new
port groups will be available for VMs.
VLANs and PVLANs can be configured within the CLI and
associated with a port profile.
Port channels are used for link aggregation, redundancy, and load
balancing.
The Cisco Nexus 1000V supports two types of port channels: standard (symmetric), from the Cisco Nexus 1000V to a single upstream switch, and vPC-HM (asymmetric), from the Cisco Nexus 1000V to two upstream switches.
Summary (Cont.)
All port configuration for VM and uplink ports should be performed
by using port profiles.
Cisco Nexus 1000V image and configuration files can be managed
from the VSM CLI.
Port profile assignment to a VM assures policy mobility during a
manual or automatic vMotion event.
Lesson 6
Objectives
Upon completing this lesson, you will be able to describe the installation and configuration of
the Cisco VIC and the creation of vNICs, vHBAs, and port profiles. This ability includes being
able to meet these objectives:
Describe how to configure vMotion hosts and port profile mobility with the Cisco M81KR VIC
Before integrating Cisco UCS Manager with vCenter, you should verify that a few
requirements are in place. VMware vCenter requires an Enterprise Plus license to create a
distributed virtual switch (DVS), which is how Cisco Virtual Network Link (VN-Link) in
hardware appears in vCenter.
Support for DVSs in vCenter starts in VMware ESX 4.0, and support for the Pass-Through
Switch (PTS) DVS starts with ESX 4.0 Update 1. Cisco UCS Manager supports the VIC PTS,
starting in Cisco UCS Manager Release 1.2(1d).
Other requirements include configuring your upstream network switches with the proper
VLANs that you want to expose to your virtual machines (VMs). You must also configure a
datacenter object in vCenter to contain the PTS DVS.
You can use two methods to integrate vCenter with Cisco UCS Manager: using a wizard or
manual integration. The wizard guides you through all the required steps. Manual integration
requires that you know which steps to perform and in which order.
DCUCI v4.07-6
The figure depicts the vCenter integration wizard. The left-hand side of the window shows the
four steps that the wizard guides you through:
1. Install the plug-in.
2. Configure the DVS.
3. Produce port profiles in Cisco UCS Manager.
4. Consume port profiles in vCenter.
The security plug-in is a Secure Sockets Layer (SSL) certificate that Cisco UCS Manager
generates. You can use the default key that Cisco UCS Manager creates (as shown in the
figure), or you can manually modify the key. Either way, you must download the certificate in a
file to your client machine so that you can then upload it into vCenter.
In a lab environment, set the lifecycle times to a range of 1 to 5 minutes. If you need to remove
all port profiles, Cisco UCS Manager locks the configurations for a default period of 15
minutes.
(Figure: the Plug-in Manager before and after the installation; right-click the white space for the menu.)
To install the certificate into vCenter, navigate to the plug-in manager. Right-click the available
plug-ins area, and choose Install Plug-in from the menu.
You are prompted to navigate your client machine file system to the location to which you
downloaded the plug-in from Cisco UCS Manager.
Using Folders
Folders in vCenter organize objects (servers, VMs, switches, datacenters, and so on) and assign privileges to users to act on folder objects. When integrating with Cisco UCS Manager, folders contain DVS objects and datacenter objects; you can create a folder to contain datacenter objects.
Folders are container objects in vCenter. Folders are used to organize other vCenter objects,
such as servers, VMs, switches, and data centers. You can also use folders to assign privileges
to users so that they can act on the objects in the folder.
When you integrate vCenter with Cisco UCS Manager, the folders are used to contain DVS and
datacenter objects only. You must configure a folder to contain the DVS objects, whereas you
may decide whether or not to create folders to contain datacenter objects.
Note: Although your folder names do not need to match those used in vCenter, you should use the same names to simplify correlation.
QoS Policy
To associate a quality of service (QoS) policy with a port profile, you first must enable the
corresponding QoS system class and create a QoS policy with that class. By default, all
Ethernet traffic is delivered with best effort priority. You can enable and use bronze, silver,
gold, and platinum classes, as shown in the figure.
For each of these classes, you can define the class of service (CoS) value, whether to tolerate
packet drop, associated weight (for platinum class only, relative to best effort and Fibre
Channel traffic), maximum transmission unit (MTU), and whether to optimize multicast traffic.
If you enable jumbo frames for any interface, you further limit the total number of PTS
interfaces that you can configure, as described earlier in this lesson.
The network control policy allows you to enable or disable Cisco Discovery Protocol
membership, define which action Cisco UCS Manager takes on link failures, and decide
whether to allow MAC forging.
Static pinning:
Manually assign vNICs to uplinks.
The administrator is responsible for load distribution.
There is no repinning on uplink failure.
The default behavior for Cisco UCS Manager is to dynamically assign (or pin) all vNICs to
available uplinks, by using a round-robin algorithm. The benefit of this approach is that Cisco
UCS Manager automatically repins a vNIC when the uplink to which it is pinned has failed.
The disadvantage of this approach is that the round-robin algorithm might not yield optimal
overall traffic distribution across uplinks.
Alternatively, you can statically pin vNICs to uplinks. The benefit of this approach is that you
can define the overall traffic distribution for all the vNICs that you statically pin. The
disadvantage of this approach is that Cisco UCS Manager does not automatically repin such a
vNIC upon uplink failure.
A port profile client defines the consumer of a port profile. The consumer is defined as a DVS
in a folder that is associated with a datacenter object in vCenter. You can associate the same
port profile with multiple port profile clients.
There are several limits to consider when you are trying to figure out how many static and
dynamic interfaces you can configure on an M81KR VIC. The M81KR can support a
maximum of 128 interfaces, but this number is further reduced by the limits that are defined in
the figure.
The maximum number of interfaces that you can configure is based on a physical limit that is
defined in the Cisco UCS 6100 Series Fabric Interconnects. The formula is defined as (15 * n) - 2, where n is the number of server links that connect the I/O module (IOM) to the fabric interconnect. For example, with a single link, you can configure (15 * 1) - 2, or 13, total interfaces. These interfaces include the sum of vHBAs, static vNICs, and dynamic (or PTS) vNICs.
The maximum number of PTS interfaces that you can configure is given as [26 - (#vHBA + #vNIC)] * 4. For example, if you configure two vHBAs and four vNICs, then you can configure no more than [26 - (2 + 4)] * 4, or 80, PTS interfaces. Recall, however, that you are also bound by the first formula as a function of the number of server links.
A dynamic vNIC connection policy defines the number of PTS interfaces that you want to
instantiate on an M81KR VIC. Recall the limitations that were defined earlier in this lesson.
Because you can configure varying numbers of server links to connect IOMs from different
chassis to the fabric interconnects, you can feasibly have some M81KR VICs with more PTS
interfaces than others. However, note that when you migrate a service profile with an
associated connection policy from one chassis to another, you might run into problems.
Specifically, if the number of IOM links on the target chassis is fewer than the number of IOM
links on the source chassis, then fewer PTS interfaces will be available on those blades.
When you associate a dynamic vNIC connection policy to a service profile, Cisco UCS
Manager calculates the total number of PTS interfaces that you can configure. If that number is
fewer than the number that you specify in your connection policy, Cisco UCS Manager gives
you a configuration failure.
Note: The server must reboot when you associate a connection policy to a service profile.
After successfully associating a connection policy to a service profile, you can view the
instantiated PTS interfaces in Cisco UCS Manager, as shown in the figure. These interfaces are
consumed dynamically as you configure network interfaces on your VMs.
The dynamic vNICs that have been configured on the ESX server can be viewed from the ESX
CLI, as shown in the figure. These dynamic vNICs are named vf_vmnicX.
To add a host to the DVS, navigate to the DVS in the networking inventory view. Right-click the DVS and choose Add Host to Distributed Virtual Switch from the menu.
When adding a host to the DVS, you must select static interfaces that have already been
configured in Cisco UCS Manager to use as uplinks to the DVS. Unlike with Cisco Nexus
1000V, you do not manually configure the uplink port groups. These ports are configured
automatically for you.
Note: If you use VMware Update Manager to install the VEMs, then when adding hosts to the DVS, VMware Update Manager remediates the ESX servers to ensure that they have the correct version of the VEM installed.
After adding all hosts to the DVS, you can view their status from the networking inventory
view, as shown in the figure.
Port profiles that are defined in Cisco UCS Manager appear as port groups in vCenter. You can
view all available port groups in your port profile client by navigating to the appropriate
datacenter, folder, and DVS in your VMware vSphere client. In the figure, the port group
named rhvmprofile is available for consumption.
You can associate any port group available in the port profile client to any VM network
interface. The figure shows the association of the port group named rhprofile to the RH1 VM
network adapter.
(Figure: the virtual/vEthernet interface number in the Cisco Unified Computing System corresponds to an interface number on the DVS in vCenter.)
Returning to the Cisco UCS Manager interface, you can now see, on the VM tab, which
port profiles have been consumed by which port profile clients, as shown in the figure. In this
example, interface vNIC 1692 was instantiated on the DVS in the port profile client. This
interface is associated with virtual interface 2448 in Cisco Unified Computing System.
Now that you know the vEthernet interface number that is associated with your VM network
interface, you can view packet counters and status for that interface from the NX-OS shell of
Cisco UCS Manager (connect nxos).
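For example, with the vEthernet number that was identified in the previous figure (2448), a
session similar to the following displays the interface status and counters; the hostname
UCS-A is a placeholder:

    UCS-A# connect nxos
    UCS-A(nxos)# show interface vethernet 2448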
Summary
This topic summarizes the key points that were discussed in this lesson.
Summary
The PTS feature requires configuration on both Cisco UCS
Manager and vCenter, to provide two-way communication.
The Cisco UCS Manager extension is installed as a plug-in to
vCenter.
Datacenter, DVS, and folder objects are used to organize and
assign parameters to hosts and VMs within Cisco UCS Manager.
Uplink and vEthernet port profiles are created within Cisco UCS
Manager and pushed out to vCenter.
Dynamic vNICs are created by using a policy wizard within Cisco
UCS Manager, are assigned within the service profile, and appear
within both the Cisco UCS Manager and vCenter inventories.
Module Summary
This topic summarizes the key points that were discussed in this module.
Module Summary
Cisco VN-Link enables policy-based VM connectivity and
automated virtual switch port provisioning.
VMware vDS allows a centralized configuration point for vSwitches
within a VMware ESX cluster.
The Cisco Nexus 1000V architecture consists of a virtual switch
chassis with a VSM and one VEM per ESX host.
Port profiles should be used for all interface configurations, to
ensure consistency across configuration of like devices.
The Cisco Nexus 1010 hardware appliance supports as many as
four domain instances of software-based virtual switching.
The Cisco M81KR/P81E VIC uses VNTag to deliver a hardware-based VN-Link solution.
This module introduced the Cisco Nexus 1000V Distributed Virtual Switch (DVS), which is a
third-party plug-in to VMware vCenter. The features and benefits of the Cisco Nexus 1000V
DVS were discussed, as were detailed installation and configuration methods. In addition, a
comparison of the Cisco Nexus 1000V, the Cisco Nexus 1010 Virtual Services Appliance, and
the Cisco M81KR/P81E Virtual Interface Card (VIC) was presented.
Module Self-Check
Use the questions here to review what you learned in this module. The correct answers and
solutions are found in the Module Self-Check Answer Key.
Q1) What is a feature of the Cisco Nexus 1000V Series Switch? (Source: Evaluating the
Cisco Nexus 1000V Switch)
A) port profiles
B) Fibre Channel
C) hypervisor
D) vMotion
Q2) Which VMware feature enables automated migration of VMs between physical hosts,
in response to defined thresholds? (Source: Evaluating the Cisco Nexus 1000V
Switch)
A) fault tolerance
B) DRS
C) PTS
D) clustering
Q3) Which feature does the VMware vDS introduce that does not exist in traditional virtual
switches? (Source: Evaluating the Cisco Nexus 1000V Switch)
A) hypervisor
B) service console
C) port groups that span the data center
D) VMkernel
Q4) Which VN-Link implementation does the Cisco M81KR VIC use? (Source:
Configuring Cisco UCS Manager for VMware PTS)
A) software
B) virtual
C) hardware
D) integrated
Q5)
A) network
B) development
C) storage
D) server
Q6) Which property can a Cisco Nexus 1000V port profile define? (Source: Configuring
Basic Cisco Nexus 1000V Networking)
A) link speed
B) VLANs
C) VM guest operating system
D) switch high availability
Q7) Cisco Nexus 1000V VSMs can be configured for which type of high availability?
(Source: Characterizing Cisco Nexus 1000V Architecture)
A) standalone
B) high-availability standby
C) primary
D) secondary
Q8) What are the two types of Cisco Nexus 1000V port profiles? (Choose two.) (Source:
Configuring Basic Cisco Nexus 1000V Networking)
A) uplink
B) physical
C) vEthernet
D) server based
Module Self-Check Answer Key
Q1) A
Q2) B
Q3) C
Q4) C
Q5)
Q6) B
Q7)
Q8) A, C