
DCUCI
Data Center Unified Computing Implementation
Volume 2
Version 4.0

Student Guide
Text Part Number: 97-3021-0

DISCLAIMER WARRANTY: THIS CONTENT IS BEING PROVIDED AS IS. CISCO MAKES AND YOU RECEIVE NO WARRANTIES IN
CONNECTION WITH THE CONTENT PROVIDED HEREUNDER, EXPRESS, IMPLIED, STATUTORY OR IN ANY OTHER PROVISION OF
THIS CONTENT OR COMMUNICATION BETWEEN CISCO AND YOU. CISCO SPECIFICALLY DISCLAIMS ALL IMPLIED
WARRANTIES, INCLUDING WARRANTIES OF MERCHANTABILITY, NON-INFRINGEMENT AND FITNESS FOR A PARTICULAR
PURPOSE, OR ARISING FROM A COURSE OF DEALING, USAGE OR TRADE PRACTICE. This learning product may contain early release
content, and while Cisco believes it to be accurate, it falls subject to the disclaimer above.

© 2011 Cisco and/or its affiliates. All rights reserved.

Table of Contents
Volume 2
Server Resources Implementation
Overview
Module Objectives

Creating Identity and Resource Pools


Overview
Objectives
Rationale for Creating Identity and Resource Pools
Pools for Mobile Service Profiles
Logical Resource Pools
Physical Resource Pools
UUID Pools
UUID Use
UUID Format
Start the UUID Suffix Pool Wizard
Name the UUID Suffix Pool
Create a Block of UUID Suffixes
UUID Pool
MAC Pools
Start the MAC Pool Wizard
Name the MAC Pool and Create a MAC Block
Verify the MAC Block and Finish the Wizard
New Americas MAC Block
WWNN Pools
WWN Format
WWN Pool Considerations
Start the WWNN Pool Wizard
Name the Pool and Create a WWNN Block
Verify the WWNN Block and Finish the Wizard
New Americas WWNN Block
WWPN Pools
Start the WWPN Pool Wizard
Name the Pool and Create a WWPN Block
Verify the WWPN Block and Finish the Wizard
New Americas WWPN Block
Server Pools
Server Pools
Start the Server Pool Wizard
Name the Server Pool
Select Server Pool Members
New Server Pool Members
Server Pool Created
Automating Server Pool Membership Based on a Qualification Policy
Creating an Auto-Populating Pool
Create an Empty Server Pool for SAP-Qualified Servers
Create a New Qualification Policy
Server Selection Criteria
Create Server Pool Policy
Servers Auto-Populated After Discovery
Importance of Creating Pools in the Correct Organization
Objects Created in an Organization Cannot Move
Summary


Creating Service Profiles


Overview
Objectives
Benefits of Service Profiles
Service Profile Basics
Service Profile Contents
Configuration of a BIOS Policy
Locate BIOS Policies
Name New BIOS Policy
Enable CPU Performance Features
Enable Intel Direct I/O Support
BIOS Policy vSphereDirectIO Available
Configuration of an Adapter Policy
Locate Adapter Policies
Create a New Adapter Policy
Adapter Policy RSS_Enabled Available
Create a QoS System Class
Open LAN Uplinks Manager
Select QoS Tab in Content Pane
Modify a System QoS Class
Locate QoS Policies
Create a New QoS Policy
QoS Policy Enabled
Configuration of IPMI and SoL Policies
Locate IPMI Policies
Create a New IPMI Policy
IPMI Policy Oracle_RAC_IPMI Available
Locate SoL Policies
Create a New SoL Policy
SoL Policy Oracle_RAC_SoL Available
Configuration of a Scrub Policy for Local Disks and BIOS
Locate Scrub Policies
Create a Disk Scrub Policy
Scrub Policy Oracle_RAC_Scrub Available
Simple vs. Expert Service Profile Wizards
Service Profile Wizards
Comparison of Service Profile Wizards
Service Profile Expert Wizard
Launch Expert Service Profile Wizards
Name the Service Profile
Configure the Service Profile to Take Its UUID from a Pool
Create UUID Pool in Service Profile Wizard
Assign UUID from a Pool
Configuration of vHBAs
Create WWNN Pool Within the Service Profile
Assign WWNN from a Pool
Begin Creation of vHBAs
Create vHBA for Fabric A
Create vHBA for Fabric B
Configuration of vNICs
Simple View for Networking
Switch to Expert View for Networking
Define Properties of vNIC for Fabric A
Define Properties of vNIC for Fabric B
vNIC Configuration Summary
vNIC and vHBA Placement on Full-Slot Blades
Binding of a vHBA to a Fibre Channel Boot Target
Boot Order Creation
Drag and Drop Pop-Up for Fabric A
Drag and Drop Pop-Up for Fabric B

Set Boot Target Menu


Set Boot Primary A
Set Boot Primary B
Boot Order Summary
Server Assignment
Assign Service Profile to Server from a Pool
Assign Service Profile Directly
Assign IPMI and SoL Policies to the Service Profile
Assign BIOS and Scrub Policies to the Service Profile
Required vs. Optional Components of the Service Profile Definition
Summary


Creating Service Profile Templates and Cloning Service Profiles


Overview
Objectives
Service Profile Templates
Creating a Service Profile Template
Name the New Template
Template Types
Apply UUID Pool
Apply WWNN Pool
Create vHBA for Fabric A
Create vHBA for Fabric B
vHBA Templates
Create vNIC for Fabric A
Create vNIC for Fabric B
vNIC Templates
Boot Order and Boot Target
Template Server Assignment
IPMI and SoL Policy Assignment
BIOS and Scrub Policy Assignment
Modify Template
Creating Differentiated Service Profile Templates
Automating Creation of a Server Farm Using Service Profile Templates
Creating Service Profiles from Template
Select Prefix and the Number of Profiles
Describe the Hidden Pitfalls When Using Updating Templates
Updating Template Issues to Consider
Updating Template Warning
Unbind a Service Profile from Its Template
Bind a Service Profile to a Template
Cloning a Service Profile
Service Profile Cloning
Clone Destination
Summary


Managing Service Profiles


Overview
Objectives
Associating and Disassociating a Service Profile to a Server Blade
Associate a Service Profile with a Compute Node
Associate a Service Profile with a Server Pool
Observe FSM Status During Service Profile Association
What Happens During Service Profile Association?
Cisco UCS Utility Operating System
Disassociate a Service Profile from a Compute Node
FSM Status During Service Profile Disassociation
Changes to a Service Profile that Trigger a Cisco UCS Utility Operating System Update
Planning the Organization Where a Service Profile Is Created
Creating Service Profiles in the Correct Organization
Creating Service Profiles in the Wrong Organization

Moving a Service Profile to a New Server Blade in the Event of Hardware Failure
A Compute Node Hardware Has Failed
Automatic Service Profile Reassociation
Summary
Module Summary
References
Module Self-Check
Module Self-Check Answer Key

Virtual Server Networking


Overview
Module Objectives


Evaluating the Cisco Nexus 1000V Switch


Overview
Objectives
Cisco Virtual Switching Overview
Evolution of Virtual Networking: Before Virtualization
Evolution of Virtual Networking: Virtual Switches
vNetwork Distributed Switch
VMware vNetwork Evolution
Distributed Virtual Networking
Virtual Switch Options with vSphere 4
Cisco Nexus 1000V Virtual Switching Feature Overview
Cisco Nexus 1000V Features
VM View of Resources
VM Transparency
Scaling Server Virtualization
VN-Link Brings VM-Level Granularity
VN-Link with the Cisco Nexus 1000V
Summary


Working with VMware Ethernet Networking


Overview
Objectives
VMware vDS
vDS Configuration
Distributed Virtual Switching
Virtual Network Configuration
vDS Enhancements
vSwitch and vDS
Cisco Nexus 1000V DVS
Cisco Nexus 1000V Components
Cisco Nexus 1000V: Single Chassis Management
Cisco Nexus 1000V: VSM Deployment Options
Cisco Nexus 1000V: VSM High-Availability Options
Cisco Nexus 1000V Communication: Extending the Backplane
VSM and VEM Communication: Layer 2 Connectivity
VSM and VEM Communication: Layer 3 Connectivity
VSM and VEM Communication: Important Considerations for Layer 3 Control
Cisco Nexus 1000V Component: vCenter Communication
Cisco Nexus 1000V: Domain ID
Cisco Nexus 1000V: Opaque Data
Cisco Nexus 1000V Administrator Roles
Standard VMware Administrator Roles
Cisco Nexus 1000V Administrator Roles
Comparing VN-Link in Software and Hardware
VN-Link Packet Flow: Cisco Nexus 1000V and a Generic Adapter
VN-Link Products: Cisco UCS 6100 and VIC
VN-Link Deployment: VIC and Cisco UCS 6100 Series with VMware VMDirectPath

VN-Link Packet Flow: VIC and Cisco UCS 6100 Series


Summary of All VN-Link Offerings
Why All the Different Models?
Summary

Characterizing Cisco Nexus 1000V Architecture


Overview
Objectives
Cisco Nexus 1000V Overview
Cisco Nexus 1000V Series DVS
Cisco Nexus 1000V
Managing Network Policies with the Cisco Nexus 1000V
Cisco Nexus 1000V Architectural Overview
Cisco Nexus 1000V Architecture
Cisco Nexus 1000V VLANs
Cisco Nexus 1000V Management VLAN
Cisco Nexus 1000V Control and Packet VLANs
Cisco Nexus 1000V Configuration Example
VEM-to-VSM Communication
Cisco Nexus 1000V Opaque Data
Policy-Based VM Connectivity
Mobility of Security and Network Properties
Summary


Installing and Configuring the Cisco Nexus 1000V Switch


Overview
Objectives
Configure VSM vSwitch Networks
Preparing the ESX Servers
VSM Port Group Requirements
VSM Port Group Creation
VSM vSwitch Configuration Showing Port Groups
Install the VSM on a VM
Cisco Nexus 1000V VSM Installation Methods
Creating a VSM VM: Choose VM Configuration
Creating a VM: Name the VM and Inventory Location
Creating a VM: Choose Shared Storage for VSM VM Files
Creating a VM: Choose VSM Guest Operating System
Creating a VM: Create a VSM Local Disk
Creating a VM: Review VSM Options
Creating a VM: Adjust the VSM Memory Size
Creating a VM: Add the VSM Port Groups
Creating a VM: Add the Port Group Adapters
Creating a VM: Choose Adapter Driver
Creating a VM: Review Adapter Options
Verify Port Group Configuration
Creating a VM: Choose .iso Boot File
Initial VSM Configuration
Access the VSM Console
Initial Setup
Configure the VSM-to-vCenter Connection
Install and Register the Plug-In for the New VSM
VSM Plug-In
Install the Plug-In for the New VSM
Verify Connectivity
Configure VSM Connectivity
Verify VSM Connectivity
Cisco Nexus 1000V High-Availability Configuration
Deploy the Secondary VSM
Cisco Nexus 1000V High Availability
Supervisor Modes


Verifying High Availability


Cisco Nexus 1010 Virtual Services Appliance
Cisco Nexus 1010 Virtual Services Appliance
Cisco Nexus 1010 Hardware Configuration
Cisco Nexus 1010 Virtual Service Appliance (Cont.)
Cisco Nexus 1010 Internal Architecture
Cisco Nexus 1010 Benefits
Cisco Nexus 1010 Connectivity
Cisco Nexus 1010 High Availability
Cisco Nexus 1010 VSB
Comparing the Cisco Nexus 1000V and Cisco Nexus 1010 Virtual Services Appliance
Architectural Comparison
Feature Comparison
Summary

Configuring Basic Cisco Nexus 1000V Networking


Overview
Objectives
Port Profiles Within the VSM
Cisco Nexus 1000V Port Profiles
Port Profiles and Port Groups
Uplink Port Profiles
VM Profiles: Type vEthernet
Port Profile States
Port Profile Usage
Port Profile Properties
Preventing VM Sprawl
VLAN Configuration
VLAN Configuration
VLAN Port Profile Configuration
Create VLANs on the VSM
Private VLAN Configuration
Create Private VLANs and Assign Them to a Port Profile
Validate the Private VLAN Port Profile
Creating Uplink Profiles
Create a System Uplink
Verify the System Uplink
Create Uplink Profiles for VM Data
Creating vEthernet Port Profiles
Create a VM Data Port Profile
Verify the New Port Group
Configuring Cisco Nexus 1000V Port Channels
Understanding Port Channels
Port Channel Load Balancing
Port Channel Guidelines
Port Channel Types
vPC Host Mode
Standard Port Channel Configuration
Standard Port Channel Verification
vPC-HM Port Channel Verification
Adding VEMs to the VSM
Add a Host
Verify the New VEMs from the VSM
Add a VM to a VSM Port Group
Backing up a VSM Configuration
Managing VSM Files and Configurations
vMotion and Port Profile Mobility
Policy-Based VM Connectivity
Mobility of Security and Network Properties
Summary

Configuring Cisco UCS Manager for VMware PTS


Overview
Objectives
Install Cisco UCS Manager Extension in vCenter
Preparing the Environment
vCenter Integration Methods
vCenter Integration Wizard
Downloading the Security Plug-In in Cisco UCS Manager
Installing the Security Plug-In in vCenter
Configure Cisco UCS Manager to Connect to vCenter
Using Folders
Configure Uplink and vEthernet Profiles
QoS Policy
Creating a Port Profile
Network Control Policy
LAN Pin Group
Creating a Port Profile Client
Configure Service Profiles with Dynamic NICs
Calculating the Deployment Limits with the M81KR VIC
Configuring a Dynamic vNIC Connection Policy
Associating a Dynamic vNIC Connection Policy to a Profile
Viewing Dynamic vNICS in Cisco UCS Manager
Viewing Dynamic vNICS in ESX
Configure vMotion and M81KR Port Profile Mobility
Add the Hosts to the DVS in vCenter
View Hosts in vCenter
Viewing Available Port Profiles in vCenter
Assigning Port Profile to the VM Interface
Viewing Port Profile Consumption in Cisco UCS Manager
Displaying Virtual Interface/vEthernet Packet Counters in the Cisco UCS Manager CLI
Summary
Module Summary
Module Self-Check
Module Self-Check Answer Key


Module 6

Server Resources Implementation
Overview
Stateless computing and unified fabric are two of the cornerstone value propositions of the Cisco Unified Computing System (UCS). Now that physical connectivity and administrative and operational procedures are in place, the focus of this module is logical connectivity. A blade server in the Cisco UCS B-Series is merely a compute node. To establish LAN and SAN connectivity and to supply the universally unique identifier (UUID), MAC addresses, BIOS settings, and various other policies, a service profile must be created to contain all of these elements. The great benefit of abstracting policy settings and identities is that they are portable. If a blade server fails in such a way that the operating system or hypervisor can no longer operate, the service profile can simply be associated with the replacement blade. All of the elements of that server are transferred.

Module Objectives
Upon completing this module, you will be able to implement a stateless computing architecture.
This ability includes being able to meet these objectives:

Create identity and resource pools

Create service profiles

Create service profile templates and clone service profiles

Manage service profiles


Lesson 1

Creating Identity and Resource Pools
Overview
Identity and resource pools are containers that facilitate consistent application of the abstracted identities that are used by service profiles and service profile templates. While administrators are free to track abstracted identities in a spreadsheet or text file, that approach is difficult to scale in a large system.

Objectives
Upon completing this lesson, you will be able to configure the identity and resource pools that are used by service profiles and service profile templates. This ability includes being able to meet these objectives:

Explain the rationale for creating identity and resource pools

Configure UUID pools

Configure MAC pools

Configure WWNN pools

Configure WWPN pools

Configure server pools

Configure the processes to automate server pool membership based on a qualification policy

Demonstrate an understanding of the importance of creating pools in the correct organization

Rationale for Creating Identity and Resource Pools
This topic discusses the rationale for identity and resource pools.

Pools for Mobile Service Profiles

To simplify mobile service profiles, use pools of identifiers.
The use of pools promotes consistent application of policy.
Pools ensure uniqueness of identities within Cisco UCS Manager.

Stateless computing requires unique identity resources for universally unique identifiers
(UUIDs), MAC addresses, and world wide names (WWNs) for Fibre Channel. Using pooled
resources ensures consistent application of policy and reasonable assurances that identities are
unique within the Cisco UCS Manager.


Logical Resource Pools

Identity pools supply abstracted identities to service profiles and service profile templates.

Logical resource pools provide abstracted identities that are used by service profiles and service profile templates to facilitate stateless computing.


Physical Resource Pools

Physical resource pools provide blade server resources to service profiles to facilitate rapid provisioning.

Physical resource pools are used to create groupings of blade servers that are based on arbitrary
administrative criteria. These pools can be used with service profile templates to provide rapid
provisioning of compute resources.


UUID Pools
This topic discusses the configuration of UUID pools.

UUID Use

UUIDs are essentially standardized serial numbers that identify a particular server.
Traditional servers have a hardware UUID stored in the system BIOS.
Operating systems and software licensing schemes may use the UUID to detect if they have been moved between physical servers.
Cisco UCS allows for the manual or automatic assignment of UUIDs to enhance mobility of operating systems and applications.

UUIDs are designed as globally unique identifiers for each compute node on a network. UUIDs are used in a number of different ways. In the context of Cisco UCS, the UUID refers to a 128-bit identifier coded into the compute node BIOS.
Operating systems, hypervisors, and applications can leverage the UUID for processes like activation, internal disk labels, and so on. Some applications may use the UUID as an internal root value propagated very tightly within data structures. Therefore, UUIDs should be locally administered in the service profile instead of derived from the BIOS. UUIDs within a service profile are mobile. If the underlying compute node fails, the service profile carries the UUID to the replacement compute node, eliminating the need for potentially time-consuming search-and-replace operations.


UUID Format

UUIDs are globally unique 128-bit numbers.
Many schemes exist to define or generate the UUID.
Cisco UCS Manager uses a configurable 64-bit prefix and allows you to specify a range of suffixes for use by compute nodes.
It is recommended that prefixes be set to the same 24-bit OUI as used in WWN pools, and pad as necessary.

There are many schemas for deploying and formatting UUIDs. It is the responsibility of the Cisco UCS administrator to decide which values to encode in the UUID prefix and suffix.


Start the UUID Suffix Pool Wizard

To create a new UUID pool, right-click UUID Suffix Pools and select Create UUID Suffix Pool, or click +.

To create a UUID pool, navigate to the Servers tab in the navigation pane. Navigate to Pools
and the organization in which the pool should be created. Right-click UUID Suffix Pools and
choose Create UUID Suffix Pool.


Name the UUID Suffix Pool

Select a unique name for the UUID pool and, optionally, provide a description.

Assign a name and optional description for the pool. There are two choices for creating the UUID prefix, which represents the first 8 bytes of the 16-byte value. If you select Derived, Cisco UCS Manager supplies the prefix. If you select Other, Cisco UCS Manager prompts you to supply the 64-bit prefix of the UUID yourself.


Create a Block of UUID Suffixes

Click Add, enter a suffix starting point, and choose how many UUIDs will populate the pool.

Click Add to define the starting point for the 64-bit UUID suffix. In the example, the first two bytes of the suffix have been changed to 0x0718. The current maximum number of compute nodes in Cisco UCS is 320, and the designer has decided to preallocate all of the UUIDs that can be used.
Note: It is a best practice to preallocate only the number of identities in a given pool that current and near-term forecasts require. Every identity resource that is allocated is a managed object in the Cisco UCS Manager database.
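To make the prefix-plus-suffix arithmetic concrete, the following Python sketch composes full 128-bit UUIDs from a 64-bit prefix and a block of 64-bit suffixes, the way the pool combines them. The prefix shown is an arbitrary example value; the starting suffix and block size mirror the 0x0718/320 example above.

```python
# Illustrative sketch: compose 128-bit UUIDs the way a UUID suffix pool does.
# The prefix is an arbitrary example value, not a Cisco-mandated setting.

PREFIX = 0x1234567890ABCDEF          # 64-bit pool prefix (derived or user-supplied)
START_SUFFIX = 0x0718000000000001    # 64-bit starting suffix (first two bytes 0x0718)
BLOCK_SIZE = 320                     # one suffix per possible compute node

def format_uuid(prefix: int, suffix: int) -> str:
    """Render prefix + suffix as the familiar 8-4-4-4-12 hex UUID string."""
    value = (prefix << 64) | suffix              # full 128-bit UUID
    h = f"{value:032x}"
    return f"{h[0:8]}-{h[8:12]}-{h[12:16]}-{h[16:20]}-{h[20:32]}"

block = [format_uuid(PREFIX, START_SUFFIX + i) for i in range(BLOCK_SIZE)]
print(block[0])    # first UUID in the pool
print(block[-1])   # last UUID in the pool
```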


UUID Pool

UUIDs are now available for assignment.

A pool of 320 UUIDs was created using the derived prefix combined with the 320 defined suffixes. They are immediately available for consumption by service profiles or service profile templates.


MAC Pools
This topic discusses the configuration of MAC address pools.

Start the MAC Pool Wizard

Click the organization where the new pool will be created, then click Create MAC Pool.

A MAC pool consists of a range of MAC addresses. MAC pools ease Cisco UCS administration when scaling server deployments because they prompt stakeholders to define a set of MAC addresses before actual deployment.
To create a MAC pool, navigate to the LAN tab in the navigation pane. Click the organization that the pool should be created beneath, and then click the Create MAC Pool link to start the wizard.


Name the MAC Pool and Create a MAC Block

Select a unique name for the MAC pool and, optionally, provide a description.

Provide the MAC pool with a unique name and, optionally, a description. Click Next and
decide how many MAC addresses should be created in the pool. Cisco provides a three-byte
Organizationally Unique Identifier (OUI) assigned by the IEEE. It is recommended that you do
not modify the prefix.


Verify the MAC Block and Finish the Wizard

Verify that the MAC pool is correct, or click <Prev to go back to creating addresses.
Click Finish to complete the wizard.

When the MAC pool has been created, there is an opportunity to verify the addresses and go
back to the previous window if a mistake has been noticed. Otherwise, click Finish to complete
the wizard.


New Americas MAC Block

The Americas MAC pool is now available for assignment.

A MAC address pool was added beneath the Americas organization. These addresses are
immediately available for assignment.


WWNN Pools
This topic discusses the creation and configuration of world wide node names (WWNNs).

WWN Format

WWNs are 64-bit addresses.
Extended format: 2X:XX:YY:YY:YY:ZZ:ZZ:ZZ
Example: 20:00:00:25:B5:20:20:00
X = organizationally assigned
YY:YY:YY = OUI
ZZ:ZZ:ZZ = organizationally assigned

WWNs are 64-bit addresses that have many possible formats. The example that is shown is for
reference only, as Cisco UCS Manager enforces a specific format in all WWN pools.
A WWN pool can include only WWNNs or WWPNs in the ranges from
20:00:00:00:00:00:00:00 to 20:FF:FF:FF:FF:FF:FF:FF or from 50:00:00:00:00:00:00:00 to
5F:FF:FF:FF:FF:FF:FF:FF. All other WWN ranges are reserved.
To ensure the uniqueness of the Cisco UCS WWNNs and WWPNs in the SAN fabric, you
should use the following WWN prefix for all blocks in a pool:
20:00:00:25:B5:XX:XX:XX
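A small illustrative helper (not a Cisco tool) makes the allowed ranges easy to check in Python:

```python
# Illustrative sketch: check that a WWN falls in a range usable by a UCS WWN pool
# (20:00:... through 20:FF:... or 50:00:... through 5F:FF:...). Not a Cisco tool.

def wwn_to_int(wwn: str) -> int:
    return int(wwn.replace(":", ""), 16)

RANGES = [
    (wwn_to_int("20:00:00:00:00:00:00:00"), wwn_to_int("20:FF:FF:FF:FF:FF:FF:FF")),
    (wwn_to_int("50:00:00:00:00:00:00:00"), wwn_to_int("5F:FF:FF:FF:FF:FF:FF:FF")),
]

def is_valid_pool_wwn(wwn: str) -> bool:
    value = wwn_to_int(wwn)
    return any(low <= value <= high for low, high in RANGES)

print(is_valid_pool_wwn("20:00:00:25:B5:20:20:00"))  # True  - recommended prefix
print(is_valid_pool_wwn("10:00:00:25:B5:00:00:01"))  # False - reserved range
```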


WWN Pool Considerations

WWNs should be globally unique.
It is recommended to create a locally administered OUI.
Meaning can be encoded into the first octet of the organizationally assigned field, as with MAC addresses.
The WWNN should be distinguishable from the WWPN:
A WWNN might use 20:01:02:00:00:XX:XX:XX
A WWPN might use 20:00:02:00:00:XX:XX:XX
Pools may only use the 20:XX:XX:XX:XX:XX:XX:XX format.

Cisco UCS Manager enforces the use of WWNNs that begin with 20. All WWN pools must begin with that value; you define the remaining values. In keeping with the global standards set for WWNs, it is recommended that a locally administered OUI be selected and used as the third through fifth octets. Additionally, it is useful if the WWNN and world wide port name (WWPN) values are easily distinguishable. Because the second octet of a WWN can be organizationally assigned, that octet might be used to encode meaning for the WWNN or WWPN. This convention and address block should be agreed upon by all stakeholders in the initial implementation phase of a Cisco UCS deployment.
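The second-octet convention can be sketched as follows; the octet values (01 for a WWNN, 00 for a WWPN) and the locally administered OUI 02:00:00 are simply the example scheme shown on the slide, not a requirement.

```python
# Illustrative sketch of the convention on the slide: a locally administered OUI
# (02:00:00 here), with the second octet distinguishing WWNN (01) from WWPN (00).

LOCAL_OUI = "02:00:00"

def make_wwn(kind: str, org_suffix: int) -> str:
    second_octet = {"wwnn": 0x01, "wwpn": 0x00}[kind]   # example scheme only
    tail = ":".join(f"{(org_suffix >> s) & 0xFF:02X}" for s in (16, 8, 0))
    return f"20:{second_octet:02X}:{LOCAL_OUI}:{tail}"

print(make_wwn("wwnn", 0x000001))   # 20:01:02:00:00:00:00:01
print(make_wwn("wwpn", 0x000001))   # 20:00:02:00:00:00:00:01
```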


Start the WWNN Pool Wizard

Click the organization where the new pool will be created. Then click Create WWNN Pool.

To create a WWNN pool, navigate to the SAN tab in the navigation pane. Click the
organization that the pool should be created beneath. Click Create WWNN Pool to start the
wizard.


Name the Pool and Create a WWNN Block

Select a unique name for the WWNN pool and, optionally, provide a description.

Cisco supplies the first four bytes of the WWNN prefix by combining 20 with one of the
three-byte OUIs provided by Cisco. One WWNN is used by each service profile. In the
example that is shown, the administrator has decided to preallocate 320 addresses.


Verify the WWNN Block and Finish the Wizard

Verify that the WWNN pool is correct, or click <Prev to go back to the addresses.
Click Finish to complete the wizard.

After the WWNN pool has been created, there is an opportunity to verify the addresses and go
back to the previous window if you notice a mistake. Otherwise, click Finish to complete the
wizard.


New Americas WWNN Block

The Americas WWNN pool is now available for assignment.

A WWNN pool was added beneath the Americas organization. These addresses are
immediately available for assignment.


WWPN Pools
This topic discusses the creation of WWPN pools.

Start the WWPN Pool Wizard

Click the organization where the new pool will be created. Then click Create WWPN Pool.

To create a WWPN pool, navigate to the SAN tab in the navigation pane. Click the
organization that the pool should be created beneath. Click Create WWPN Pool to start the
wizard.


Name the Pool and Create a WWPN Block

Select a unique name for the WWPN pool and, optionally, provide a description.

Cisco supplies the first four bytes of the WWPN prefix by combining 20 with one of the three-byte OUIs provided by Cisco. Whereas one WWNN is used by each service profile, one WWPN is required for each virtual host bus adapter (HBA). In this example, the administrator decided to preallocate 640 addresses, enough for two vHBAs on each of 320 service profiles.


Verify the WWPN Block and Finish the Wizard

Verify that the WWPN pool is correct, or click <Prev to go back to create addresses.
Click Finish to complete the wizard.

After the WWPN pool has been created, there is an opportunity to verify the addresses and go
back to the previous window if you notice a mistake. Otherwise, click Finish to complete the
wizard.


New Americas WWPN Block

The Americas WWPN pool is now available for assignment.

A WWPN pool was added beneath the Americas organization. These addresses are
immediately available for assignment.


Server Pools
This topic discusses the creation of server pools.

Server Pools

Server pools can be manually populated or auto-populated.
A blade server can be in multiple pools at the same time.
When you associate a service profile with a pool:
A compute node is selected automatically from the pool.
Cisco UCS Manager will only select a blade server that is not yet associated with another logical server and not in the process of being disassociated.

Pools can be manually populated or auto-populated using a server pool policy.


A compute node can be in multiple pools at the same time. The profile that is associated with a
specific compute node owns the node, regardless of the number of pools in which the blade
server resides.
To use a server pool, associate the service profile with the pool. Cisco UCS Manager
automatically selects an available compute node from the pool. An available blade server is one
that is currently discovered, but not associated with any profile and not in the process of being
associated or disassociated.
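The selection rule can be modeled conceptually in a few lines of Python; this is a sketch of the behavior described above, not Cisco UCS Manager code, and the slot names are hypothetical.

```python
# Conceptual model (not Cisco UCS Manager code) of how a compute node is picked
# from a server pool: the first discovered blade that is not associated with a
# service profile and not in the middle of an association or disassociation.

from dataclasses import dataclass

@dataclass
class Blade:
    slot: str
    discovered: bool = True
    associated_profile: str | None = None
    in_transition: bool = False        # association/disassociation in progress

def pick_from_pool(pool: list[Blade]) -> Blade | None:
    for blade in pool:
        if blade.discovered and blade.associated_profile is None and not blade.in_transition:
            return blade
    return None                         # pool exhausted

pool = [
    Blade("1/1", associated_profile="SAP_SJC"),
    Blade("1/2", in_transition=True),
    Blade("1/3"),
]
print(pick_from_pool(pool).slot)        # 1/3
```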


Start the Server Pool Wizard

To create a new server pool, right-click Server Pools and select Create Server Pool, or click +.

Server pools are configured under the Servers tab in the navigation pane. To create a new server pool, right-click Server Pools and select Create Server Pool, or click the plus sign.


Name the Server Pool

Select a unique name for the server pool and, optionally, provide a description.

Enter a unique name and, optionally, a description for the new server pool and click Next.


Select Server Pool Members

Select servers and click the >> button to add them to the pool.

Use the mouse to populate the new server pool. Hold down the shift key to select a range of
servers. Click the >> button to move the selected servers into the pool.


New Server Pool Members

Click Finish to close the wizard and save the pool.

Verify that the desired servers are members of the new pool. Click Finish to complete the
wizard.


Server Pool Created

The content pane displays the IDs of the servers added to the pool. It includes information
about whether the server has been assigned. If assigned, a link to the service profile of the
server will display in the Assigned To column.


Automating Server Pool Membership Based on a Qualification Policy
This topic discusses how to configure a qualification policy to automatically add new blade servers to a given pool.

Creating an Auto-Populating Pool

Create an empty server pool.
Create server pool qualifications: mix and match various criteria, such as chassis slots, CPU/RAM, and so on.
Create a server pool policy, which associates the qualification policy with a specific pool.
Auto-placement of a compute node in a pool happens at discovery time.
Previously discovered compute nodes can be reacknowledged to be placed in a pool.

The auto-population feature lets you do the following:

Specify qualifications that will be used for matching specific blade servers.

Specify server pool policies, which will put every blade server that matches a particular
qualification into a particular server pool.

Note: Server pool auto-population only happens as individual blade servers are discovered. If you want auto-population to apply to existing compute nodes, they need to be reacknowledged.
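Conceptually, qualification-based placement amounts to testing each newly discovered blade against the qualification criteria and dropping matches into the pool named by the server pool policy. The sketch below models that idea; the criteria values are hypothetical examples, not Cisco defaults, and this is not UCS Manager code.

```python
# Conceptual sketch of qualification-based pool placement (not UCS Manager code):
# each discovered blade is tested against the qualification criteria, and matching
# blades are placed into the pool named by the server pool policy.

from dataclasses import dataclass

@dataclass
class Blade:
    slot: str
    cpu_cores: int
    memory_gb: int
    adapter: str

@dataclass
class Qualification:
    min_cores: int
    min_memory_gb: int
    adapter: str

def qualifies(blade: Blade, q: Qualification) -> bool:
    return (blade.cpu_cores >= q.min_cores
            and blade.memory_gb >= q.min_memory_gb
            and blade.adapter == q.adapter)

# Hypothetical policy: SAP-class blades need 8+ cores, 96+ GB RAM, an M81KR adapter.
sap_qual = Qualification(min_cores=8, min_memory_gb=96, adapter="M81KR")
sap_pool: list[str] = []

for blade in [Blade("1/3", 8, 96, "M81KR"), Blade("1/4", 4, 48, "M71KR")]:
    if qualifies(blade, sap_qual):
        sap_pool.append(blade.slot)     # placement happens at discovery time

print(sap_pool)                          # ['1/3']
```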


Create an Empty Server Pool for SAP-Qualified Servers

Create an empty server pool where servers matching the qualification criteria will be placed.


Create a New Qualification Policy

In the Servers tab of the navigation pane, expand the Policies member of the navigation tree. Right-click Server Pool Policy Qualifications and select Create Server Pool Policy Qualification.


Server Selection Criteria

Choose server selection criteria from the Actions box.

In the Actions section of the policy wizard, there is a list of six categories that can be used for selection criteria. The name SAP_CRM_POOL is assigned to this policy. In the example, requirements for CPU, memory, and mezzanine card are specified.


Create a Server Pool Policy

Map the qualifications to a server pool.

The next step is to create a server pool policy. The purpose of this policy is to map a
qualification policy to the empty pool that was created earlier.


Servers Auto-Populated After Discovery

The example shows that two servers were automatically added to the pool SAP_Pool. As was
previously discussed, servers that have already been discovered will not automatically be
matched against a qualification policy. Both servers in the example were reacknowledged.
After discovery was completed, they were added to the SAP_Pool server pool. A new blade
server that is inserted into a chassis will be evaluated by all qualification policies that have been
configured.


Importance of Creating Pools in the Correct Organization
This topic discusses restrictions that are imposed on objects that are created in an organization.

Objects Created in an Organization Cannot Move

It is possible to copy a pool, policy, or service profile, but not to paste it.

Important lessons in IT are sometimes learned in the most difficult way possible. One such example would be creating many service profiles, templates, policies, pools, and thresholds under the root organization. If it is later decided to create suborganizations, it is not possible to move any of the objects that were created under root to another organization. It is also not possible to move an object from one nonroot organization to another. When you right-click a policy object or template, notice that there is no option for "cut." There is the tantalizing option of "copy," but if you right-click a different organization, it is apparent that there is no option for "paste."
The only way to remedy the situation is to delete and re-create every object in its appropriate organization.


Summary
This topic summarizes the key points that were discussed in this lesson.

Identity and resource pools simplify the creation of mobile profiles and help to ensure that policies are consistently applied.
UUID pools are created in the Servers tab and are used to uniquely identify each blade server.
MAC address pools are created in the LAN tab and are consumed in the service profile by vNICs.
WWNN pools are created in the SAN tab and are consumed in the service profile by virtual HBAs.
WWPN pools are created in the SAN tab and are consumed in the service profile by virtual HBAs.
Server pools are created in the Servers tab and are consumed by service profiles and service profile templates.
Servers can be automatically added to server pools during discovery based on a set of qualification criteria.
It is extremely important to create policies, pools, and thresholds in the correct organization, as they cannot be moved after they are created.


Lesson 2

Creating Service Profiles


Overview
Service profiles are the structural elements of the compute node. Service profiles contain all of
the identity resources and policies that are required to operate a blade server. The service
profile makes stateless computing possible by abstracting identity and policy information from
the physical server. If the compute node that a service profile is associated with becomes
unavailable, the service profile is simply reassociated with another compatible compute node.
When the operating system or hypervisor reboots on the replacement compute node, it believes
that it is running on the same hardware.

Objectives
Upon completing this lesson, you will be able to configure service profiles and service profile templates. This ability includes being able to meet these objectives:

Describe the rationale for and benefits of service profiles

Configure a BIOS policy to enable virtualization features

Configure an adapter policy to enable RSS and set the failback timer for fabric failover

Create a QoS system class and allow all Ethernet traffic to use jumbo frames up to an MTU of 9216

Configure IPMI and SoL policies

Configure a scrub policy for local disks and BIOS

Differentiate between the features available in the simple service profile wizard and the expert wizard

Start the service profile expert wizard

Configure the service profile to take its UUID from a pool

Configure a vHBA for two fabrics and have the service profile take its assignment of WWNNs and WWPNs from a pool

Configure a vNIC for two fabrics and have the service profile take its assignment of MAC addresses from a pool

Configure vNIC and vHBA placement on full-slot blades

Configure the binding of a vHBA to a Fibre Channel boot target

Configure server assignment

Differentiate between required components and optional components of the service profile definition

Benefits of Service Profiles


This topic discusses the benefits of and rationale for creating service profiles.

Service Profile Basics

Service profiles contain identity and state information for a logical server.
LAN and SAN connectivity to a compute node is unavailable without a service profile associated with it.
Every compute node needs its own unique service profile.

Stateless computing requires unique identity resources for universally unique identifiers
(UUIDs), MAC addresses, and world wide names (WWNs) for Fibre Channel. Using pooled
resources ensures consistent application of policy and reasonable assurances that identities are
unique within the Cisco UCS Manager.


Service Profile Contents

State information for the server:
UUID
WWPN
Boot order
vNIC configuration: VLAN membership, MAC address
vHBA configuration: VSAN assignment, WWNs
Policies

The service profile represents a logical view of a server without any ties to a specific physical
device. The profile object contains all the elements of server function. This identity contains the
unique information for that server, including MAC, world wide port name (WWPN), UUID,
boot order, and so on. Each profile can only be associated with a single blade server at any
given time, and every blade server requires a unique service profile.
Service profiles facilitate server mobility. Mobility is the ability to transfer server identity
seamlessly between compute nodes in such a way that the underlying operating system or
hypervisor does not detect any change in server hardware.
In environments where blades are managed as traditional individual servers, service profiles are
still required. Service profiles provide LAN and SAN connectivity configuration. Configuring
service profiles in this way is similar to the need to configure individual LAN and SAN ports
for traditional rack servers.
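As a minimal sketch of the contents listed above, the following data structure shows what a service profile carries; the field names are illustrative only and are not the Cisco UCS Manager object model.

```python
# Minimal sketch of what a service profile carries (illustrative field names only,
# not the Cisco UCS Manager object model).

from dataclasses import dataclass, field

@dataclass
class VNic:
    name: str
    mac: str            # drawn from a MAC pool
    vlans: list[str]
    fabric: str         # "A" or "B"

@dataclass
class VHba:
    name: str
    wwpn: str           # drawn from a WWPN pool
    vsan: str
    fabric: str

@dataclass
class ServiceProfile:
    name: str
    uuid: str                           # drawn from a UUID pool
    wwnn: str                           # drawn from a WWNN pool
    boot_order: list[str]               # e.g. ["virtual-media", "san-primary"]
    vnics: list[VNic] = field(default_factory=list)
    vhbas: list[VHba] = field(default_factory=list)
    policies: dict[str, str] = field(default_factory=dict)   # BIOS, IPMI, SoL, scrub, ...

profile = ServiceProfile(
    name="SAP_SJC",
    uuid="12345678-90ab-cdef-0718-000000000001",
    wwnn="20:01:02:00:00:00:00:01",
    boot_order=["virtual-media", "san-primary"],
)
profile.vnics.append(VNic("eth0", "00:25:B5:00:01:00", ["VLAN100"], "A"))
profile.vhbas.append(VHba("fc0", "20:00:02:00:00:00:00:01", "VSAN10", "A"))
```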


Configuration of a BIOS Policy


This topic describes the configuration of BIOS policies that can be applied to service profiles in
order to enable virtualization features.

Locate BIOS Policies

Expand Policies in the Server tab and locate BIOS Policies.
Right-click on the organization where the new policy is to be created.

In the Server tab of the navigation pane, expand Policies to locate BIOS Policies. Right-click on the organization in the content pane where the new policy is to be created.


Name New BIOS Policy

In earlier Cisco UCS Manager versions, the only BIOS setting that could be modified was boot order. Beginning in Cisco UCS Manager version 1.3, control of nearly every BIOS setting is available as a policy that can be assigned to a service profile.
The BIOS policy requires a name. There are three options for each BIOS setting:

Disabled: The setting or attribute is disabled.

Enabled: The setting or attribute is enabled.

Platform-default: The setting or attribute will remain at the factory default.


Enable CPU Performance Features

Enable performance-enhancing CPU features and virtualization support.

The settings in the example shown in the figure enable Intel performance features of the CPU. The server associated with a service profile that carries this BIOS policy can take complete and direct advantage of these CPU features.
Feature descriptions:
Turbo Boost: Allows dynamic overclocking of the CPU if the operating system requests it.
Enhanced Intel SpeedStep: Allows the CPU to lower its clock frequency to reduce power consumption during periods of low demand.
Hyper-Threading: An Intel proprietary technique that can improve instruction parallelization by presenting an operating system or hypervisor with two virtual CPUs for each physical core.
Virtualization Technology: Enables Intel virtualization-specific CPU instructions to improve hypervisor performance.
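One quick way to sanity-check these settings from a Linux operating system running on the blade (an assumption about the installed OS) is to inspect the CPU information the kernel reports. Note that CPUID may still advertise a capability that the BIOS has disabled, so treat this only as a first-pass check.

```python
# Quick Linux-side sanity check (assumes a Linux OS on the blade). 'vmx' in the
# CPU flags means the processor advertises Intel VT-x; siblings > cpu cores
# suggests Hyper-Threading is active. Treat as a first-pass check only.

def read_cpuinfo(path: str = "/proc/cpuinfo") -> dict[str, str]:
    info: dict[str, str] = {}
    with open(path) as f:
        for line in f:
            if ":" in line:
                key, value = line.split(":", 1)
                info.setdefault(key.strip(), value.strip())
    return info

info = read_cpuinfo()
flags = set(info.get("flags", "").split())
print("VT-x advertised:", "vmx" in flags)
print("Hyper-Threading active:",
      int(info.get("siblings", 0)) > int(info.get("cpu cores", 0)))
```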


Enable Intel Direct I/O Support

Enable CPU support for acceleration of VMware DirectPath I/O.

Intel Virtualization Technology for Directed I/O (VT-d) options can accelerate I/O operations
in virtualized environments where a physical interface is mapped directly to a virtual machine,
bypassing the hypervisor. VMware vSphere can benefit from enabling these options. Refer to
operating system or hypervisor documentation for guidance on the appropriate settings for
these options.


BIOS Policy vSphereDirectIO Available

The new BIOS policy is available for assignment.

Click Finish in the wizard to save the new BIOS policy. The new policy is now available for
assignment to a service profile. When a service profile that references this policy is associated
to a blade server, there is no need for manual configuration of the BIOS at power-on self-test
(POST) time. This new capability can greatly simplify and accelerate the pace of server
provisioning.


Configuration of an Adapter Policy


This topic describes the configuration of adapter policies that can be applied to service profiles
in order to enable RSS and set the failback timer for fabric failover.

Locate Adapter Policies

Expand Policies in the Server tab and locate Adapter Policies.
Right-click on the organization where the new policy is to be created.

In the Server tab of the navigation pane, expand Policies to locate Adapter Policies. Right-click
on the organization in the content pane where the new policy is to be created.


Create a New Adapter Policy

Enable Receive Side Scaling (RSS) and lower the Failback Timeout to 2 seconds.

Receive-side scaling (RSS) relieves the single-core receive-processing bottleneck that occurs in multicore CPU systems. TCP packets that are received without RSS enabled are processed by only a single core. By enabling RSS, received packets are processed on all cores. RSS should generally be enabled on any server with multicore CPUs. This adapter feature is disabled by default.
The failback timer determines how long an adapter should wait to fail back to its original fabric after a fabric failover event has occurred. As an example, if fabric interconnect A became unavailable, servers would fail over to their backup connections on fabric interconnect B. The 2-second timer that is employed here would apply as soon as fabric interconnect A becomes available again.


Adapter Policy RSS_Enabled Available

The new adapter policy is available for assignment.

Click Finish in the wizard to save the new adapter policy. The new policy is now available for
assignment to a service profile.


Create a QoS System Class


This topic describes the two-step process to modify QoS system classes and apply them to
adapter policies in order to allow all Ethernet traffic to use jumbo frames.

Open LAN Uplinks Manager

Click the LAN Uplinks Manager link.

You can create a quality of service (QoS) policy in two steps. To access a QoS system class,
the LAN Uplinks Manager must be opened.
From the Equipment tab of the navigation pane, select one of the fabric interconnects. In the
content pane, click the LAN Uplinks Manager link.


Select QoS Tab in Content Pane

Click the QoS tab to access QoS system classes.

Click the QoS tab in the content pane to open the dialog box where you can access and modify the QoS system classes.


Modify a System QoS Class

Enable the Gold class to allow Ethernet jumbo frames.

In the example, the Cisco UCS administrator has enabled the Gold QoS system class. In
addition, the administrator is configuring this as a member of the drop class and setting the
relative weighting to best-effort. These two parameters ensure that, from the perspective of
priority flow control and enhanced transmission selection, this traffic receives no special
handling. The goal of this policy is limited to enabling jumbo frame support for every virtual
network interface card (vNIC) that the policy is applied to.
Note: Disabling the drop class and setting a weighting percentage other than best-effort will affect the performance of service profiles that do not include the QoS policy that references this system class.
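Assuming the host operating system is Linux and its interface MTU has been raised (for example, to 9000) on a vNIC that uses this class, a do-not-fragment ping sized just under the MTU is a simple end-to-end check; the target address below is a placeholder.

```python
# Hedged example: verify jumbo frames from a Linux host after the vNIC QoS policy
# is applied and the interface MTU is raised (e.g. to 9000). The target IP is a
# placeholder. 8972 = 9000-byte MTU - 20 (IP header) - 8 (ICMP header).

import subprocess

TARGET = "192.0.2.10"   # placeholder address on the same jumbo-enabled VLAN

result = subprocess.run(
    ["ping", "-M", "do", "-s", "8972", "-c", "3", TARGET],
    capture_output=True, text=True,
)
print("jumbo frames OK" if result.returncode == 0 else "jumbo frames failed")
print(result.stdout)
```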


Locate QoS Policies

Expand Policies in the LAN tab and locate QoS Policies.
Select the organization where the new policy is to be created and click the Create QoS Policy link in the content pane.

Select the LAN tab of the navigation pane and expand Policies to locate the QoS Policies item
in the tree. Select the organization where the policy will reside and click the Create QoS
Policy link.


Create a New QoS Policy

Name the policy and select the modified QoS system class.

Name the new policy and select the QoS system class gold from the priority drop-down list.
Because the goal of this policy is simply to enable jumbo frame support, leave the burst and
rate options at their defaults.
The Host Control option allows you to determine whether this QoS policy can be modified or
overridden by the administrator of the hypervisor or operating system. The default is None,
which acts as a lockout. Only a Cisco UCS administrator with sufficient privileges in this
organization can modify or delete a policy.


QoS Policy Enabled

The new QoS policy is available for assignment.

The QoS Policy Jumbo_9216 is now available to be assigned to a vNIC in a service profile.


Configuration of IPMI and SoL Policies


This topic discusses the creation of Intelligent Platform Management Interface (IPMI) and
Serial over LAN (SoL) policies that can be consumed by service profiles.

Locate IPMI Policies

Select the organization where the new policy is to be created.
Click the Create IPMI Profile link.

Click the Server tab and select the organization where the new IPMI policy will be created.
Click the Create IPMI Profile link to start the policy wizard.


Create a New IPMI Policy


Name the policy and create at least one user.


Name the new policy and create at least one user. The role can be either admin or read-only.
Read-only users may query the IPMI system that is provided by the Cisco Integrated
Management Controller for the status of any IPMI sensor. Admin users can access sensors and
additionally perform power control operations on the server.
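
A similar IPMI access profile can be sketched with the Python SDK. As with the other sketches in
this lesson, this is a hypothetical example, not the procedure taught in this course: the
AaaEpAuthProfile and AaaEpUser class names, the priv values, and the user name and password are
assumptions to confirm against the SDK documentation before use.

# Hedged sketch: create an IPMI access profile with one admin user (ucsmsdk).
# Class and attribute names are assumptions to verify for your SDK release.
from ucsmsdk.ucshandle import UcsHandle
from ucsmsdk.mometa.aaa.AaaEpAuthProfile import AaaEpAuthProfile
from ucsmsdk.mometa.aaa.AaaEpUser import AaaEpUser

handle = UcsHandle("ucsm.example.com", "admin", "password")  # hypothetical
handle.login()

profile = AaaEpAuthProfile(parent_mo_or_dn="org-root", name="Oracle_RAC_IPMI")
AaaEpUser(parent_mo_or_dn=profile,
          name="jcasalleto",     # IPMI user from the example
          pwd="ipmi-password",   # hypothetical password
          priv="admin")          # "admin" or "readonly"

handle.add_mo(profile, modify_present=True)
handle.commit()
handle.logout()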


IPMI Policy Oracle_RAC_IPMI Available


The new IPMI policy is available for assignment.


The Oracle_RAC_IPMI policy is now available for assignment to a service profile. User
jcasalleto is the sole administrative user.
Note


The users that are created in IPMI policies do not count against the 40-user limit on the local
user authentication database.


Locate SoL Policies


Expand Policies in the Server tab and locate SoL Policies.
Right-click on the organization where the new policy is to be
created.


Expand the Server tab in the navigation pane and locate Serial Over LAN policies. In the
content pane, right-click on the organization where the policy should be created.


Create a New SoL Policy


Provide a name and optional description for the policy.
Set administrative state and serial baud rate.


Name the new SoL policy, set the administrative state to Enabled, and set the serial baud rate
to match the speed of the serial console in the operating system. SoL connections use UDP port 623.
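
A minimal SDK sketch of the same policy follows, again assuming ucsmsdk; the SolPolicy class name
and the admin_state and speed values are assumptions to verify for your release.

# Hedged sketch: create a Serial over LAN policy with ucsmsdk.
from ucsmsdk.ucshandle import UcsHandle
from ucsmsdk.mometa.sol.SolPolicy import SolPolicy

handle = UcsHandle("ucsm.example.com", "admin", "password")  # hypothetical
handle.login()

sol = SolPolicy(parent_mo_or_dn="org-root",
                name="Oracle_RAC_SoL",
                admin_state="enable",
                speed="115200")  # must match the OS serial console speed

handle.add_mo(sol, modify_present=True)
handle.commit()
handle.logout()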


SoL Policy Oracle_RAC_SoL Available


The new SoL policy is available for assignment.


The new SoL policy is available for assignment to a service profile.


Configuration of a Scrub Policy for Local Disks and BIOS

This topic describes the configuration of scrub policies for local hard disks and the server BIOS.

Locate Scrub Policies


Expand Policies in the Server tab and locate Scrub Policies.
Click Create Scrub Policy in the organization where the new
policy is to be created.


Expand Policies in the Server tab and select the organization where the policy will be created.
Click Create Scrub Policy to begin the wizard.


Create a Disk Scrub Policy


Provide a name and optional description for the policy.
Decide whether to scrub only disks, or disks and BIOS
settings.


Name the policy and, optionally, provide a description. Previous versions of Cisco UCS
Manager did not include the BIOS scrubbing capability of the new scrub policies. Cisco UCS
Manager version 1.3 introduces this new combined scrub policy.
When disk scrub is set to Yes, local disk drives will be completely erased upon disassociation
of a service profile.
When the BIOS settings scrub policy is set to Yes, all BIOS settings will revert to factory
defaults upon disassociation of a service profile.
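
The combined scrub policy can also be sketched with the Python SDK. The ComputeScrubPolicy class
name and its disk_scrub and bios_settings_scrub attributes are assumptions drawn from the SDK's
managed-object naming and should be verified before use.

# Hedged sketch: create a combined disk/BIOS scrub policy with ucsmsdk.
from ucsmsdk.ucshandle import UcsHandle
from ucsmsdk.mometa.compute.ComputeScrubPolicy import ComputeScrubPolicy

handle = UcsHandle("ucsm.example.com", "admin", "password")  # hypothetical
handle.login()

scrub = ComputeScrubPolicy(parent_mo_or_dn="org-root",
                           name="Oracle_RAC_Scrub",
                           disk_scrub="yes",          # erase local disks on disassociation
                           bios_settings_scrub="no")  # leave BIOS settings untouched

handle.add_mo(scrub, modify_present=True)
handle.commit()
handle.logout()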


Scrub Policy Oracle_RAC_Scrub Available


The new scrub policy is available for assignment.


The new scrub policy is available for assignment to service profiles.


Simple vs. Expert Service Profile Wizards


This topic discusses the differences between the simple and expert service profile wizards.

Service Profile Wizards


The basic wizard is a single-page form that allows the
creation of a service profile using all derived values.
Service profiles created with the basic wizard do not support
stateless computing and have limited options.
The expert service profile wizard provides the administrator
with a rich set of options for identity and policy assignment.
The expert wizard allows the creation of mobile service
profiles that can be moved from compute node to compute
node without the need to modify parameters in the operating
system or applications.


The primary difference between the simple service profile wizard and the expert service profile
wizard is the scope of tools available to manipulate within the wizard. The simple wizard
provides a fast, single-page form for the rapid provisioning of a blade server using all derived
identity values. The expert service profile wizard allows for the granular configuration of
policies, identities, and thresholds.


Comparison of Service Profile Wizards


Basic Wizard                        | Expert Wizard
Assign single-access VLAN           | Assign access VLAN or trunk
Use derived MAC address only        | Assign locally administered MAC address or use pool
Use derived WWNN only               | Assign locally administered WWNN or use pool
Use derived WWPN only               | Assign locally administered WWPN or use pool
Use derived UUID only               | Assign locally administered UUID or use pool
Only two devices in the boot order  | More than two devices in the boot order
No policy or threshold assignment   | Policy and threshold assignment for each element in the service profile

The table summarizes the most important differences between the simple and expert service
profile wizards.


Service Profile Expert Wizard


This topic discusses the configuration of the service profile using the expert service profile
wizard.

Launch Expert Service Profile Wizards


Select the organization where the new service profile should
be created.


From the Server tab in the navigation pane, select the organization for which a new service
profile will be created. In the content pane, click the link for Create Service Profile (expert).


Name the Service Profile




Next, name the service profile and, optionally, provide a description. The name must not
exceed 16 characters and may not include special characters or spaces.


Configure the Service Profile to Take Its UUID from a Pool

This topic discusses how to assign UUIDs from a pool.

Create UUID Pool in Service Profile Wizard




If you begin the service profile wizard and realize that you have forgotten to first create a
UUID pool, you can create the new pool from within the wizard. Similar create-on-the-fly
capabilities are available throughout the expert wizard for the other pooled values.


Assign UUID from a Pool




You can select the UUID pool directly from the drop-down list. After the assignment is made,
click Next to continue the wizard.
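
The same result, a UUID suffix pool and a service profile that draws from it, can be sketched with
the Python SDK. The pool name, suffix range, and service profile name below are illustrative, and
the UuidpoolPool, UuidpoolBlock, and LsServer classes with their attribute names are assumptions to
verify against the SDK metadata.

# Hedged sketch: create a UUID suffix pool and a service profile that uses it.
from ucsmsdk.ucshandle import UcsHandle
from ucsmsdk.mometa.uuidpool.UuidpoolPool import UuidpoolPool
from ucsmsdk.mometa.uuidpool.UuidpoolBlock import UuidpoolBlock
from ucsmsdk.mometa.ls.LsServer import LsServer

handle = UcsHandle("ucsm.example.com", "admin", "password")  # hypothetical
handle.login()

# UUID suffix pool with a single block of suffixes.
pool = UuidpoolPool(parent_mo_or_dn="org-root", name="Americas_UUID")
UuidpoolBlock(parent_mo_or_dn=pool,
              r_from="0000-000000000001",  # "from" is reserved in Python, hence r_from
              to="0000-000000000100")
handle.add_mo(pool, modify_present=True)

# Service profile that references the pool instead of a hardware-derived UUID.
sp = LsServer(parent_mo_or_dn="org-root",
              name="Oracle_RAC_1",
              ident_pool_name="Americas_UUID")
handle.add_mo(sp, modify_present=True)

handle.commit()
handle.logout()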


Configuration of vHBAs
This topic discusses the creation of virtual host bus adapters (vHBAs) that derive their identity
from pools.

Create WWNN Pool Within the Service Profile



2011 Cisco Systems, Inc. All rights reserved.

Course acronym
DCUCIvx.x#-45
v4.06-45

The figure shows an example of creating a pool from within a wizard.


Assign WWNN from a Pool




Because the Americas_WWNN pool was already defined and populated with values, it can be
referenced directly from the drop-down list. The WWNN identifies the mezzanine card, and there
is only one WWNN per service profile unless a full-slot blade is populated with two mezzanine
cards. Click Next to continue the wizard.


Begin Creation of vHBAs


Click the + Add button to create the first vHBA.


Click the plus sign to open a dialog box to create the vHBA for fabric A.


Create vHBA for Fabric A


Define the configuration parameters for fabric A.


The vHBA requires a name, WWPN assignment, VSAN assignment, and fabric assignment.
The example creates a new vHBA named vHBA-A. It will pull its WWPN assignment from
Americas_WWPN pool, is a member of VSAN 11, and is associated with fabric A.
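
A minimal SDK sketch of this vHBA configuration follows, assuming ucsmsdk and an already created
service profile. The DN, VSAN object name, and pool names are illustrative, and the VnicFcNode,
VnicFc, and VnicFcIf classes and attributes are assumptions to verify for your release.

# Hedged sketch: add a WWNN assignment and the fabric A vHBA to an existing service profile.
from ucsmsdk.ucshandle import UcsHandle
from ucsmsdk.mometa.vnic.VnicFcNode import VnicFcNode
from ucsmsdk.mometa.vnic.VnicFc import VnicFc
from ucsmsdk.mometa.vnic.VnicFcIf import VnicFcIf

handle = UcsHandle("ucsm.example.com", "admin", "password")  # hypothetical
handle.login()

sp_dn = "org-root/ls-Oracle_RAC_1"  # existing service profile (illustrative DN)

# WWNN for the node, drawn from the Americas_WWNN pool.
node = VnicFcNode(parent_mo_or_dn=sp_dn, ident_pool_name="Americas_WWNN")
handle.add_mo(node, modify_present=True)

# vHBA on fabric A, WWPN from the Americas_WWPN pool, member of VSAN 11.
vhba_a = VnicFc(parent_mo_or_dn=sp_dn,
                name="vHBA-A",
                ident_pool_name="Americas_WWPN",
                switch_id="A")
VnicFcIf(parent_mo_or_dn=vhba_a, name="VSAN11")  # VSAN object name as defined in UCSM
handle.add_mo(vhba_a, modify_present=True)

handle.commit()
handle.logout()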


Create vHBA for Fabric B


Define the configuration parameters for fabric B.


Repeat the steps for creating the vHBA on fabric A to create the vHBA on fabric B. Be certain
to select a different VSAN for fabric B. Click Next to continue the wizard.


Configuration of vNICs
This topic discusses the configuration of virtual network adapters and the assignment of their
identities from pools.

Simple View for Networking


The networking section of the wizard defaults to Simple view.


The expert wizard defaults to simple view on the opening page of networking configuration.
This view limits you to selecting derived MAC addresses and a single VLAN.
Note


The M81KR virtualization adapter does not have burned-in MAC addresses. Pooled or
manual address assignment is required on this mezzanine adapter.


Switch to Expert View for Networking


Click the Expert radio button for greatest control.


Click the Expert radio button to reveal the complete suite of networking configuration tools
available within the expert wizard. Click the +Add button to open the dialog box and define a
new virtual network card.


Define Properties of vNIC for Fabric A




The new vNIC requires the following configuration elements:

Name

MAC assignment

Fabric assignment and failover

Configuration of access to the LAN or trunk

Policies

This vNIC, named vNIC-A, will draw its MAC address assignment from the Americas_MAC
identity pool. Because the mezzanine card that this vNIC will be associated with is an M81KR,
it is configured for hardware-based fabric failover.
A VLAN trunk with two tagged VLANs will be provided to the hypervisor.
The adapter and QoS policies enable RSS, reduced failback window, and jumbo frame support.
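
A minimal SDK sketch of this vNIC follows, again assuming ucsmsdk and an existing service profile.
The VLAN names, adapter policy name, DN, and pool names are illustrative, and the VnicEther and
VnicEtherIf classes and attributes are assumptions to verify for your release.

# Hedged sketch: add the fabric A vNIC described above to an existing service profile.
from ucsmsdk.ucshandle import UcsHandle
from ucsmsdk.mometa.vnic.VnicEther import VnicEther
from ucsmsdk.mometa.vnic.VnicEtherIf import VnicEtherIf

handle = UcsHandle("ucsm.example.com", "admin", "password")  # hypothetical
handle.login()

sp_dn = "org-root/ls-Oracle_RAC_1"  # illustrative service profile DN

vnic_a = VnicEther(parent_mo_or_dn=sp_dn,
                   name="vNIC-A",
                   ident_pool_name="Americas_MAC",     # MAC addresses from the pool
                   switch_id="A-B",                    # fabric A with hardware failover to B
                   mtu="9000",
                   adaptor_profile_name="VMware_RSS",  # hypothetical adapter policy name
                   qos_policy_name="Jumbo_9216")
# Two tagged VLANs presented as a trunk to the hypervisor (names are illustrative).
VnicEtherIf(parent_mo_or_dn=vnic_a, name="VM_Data", default_net="no")
VnicEtherIf(parent_mo_or_dn=vnic_a, name="vMotion", default_net="no")

handle.add_mo(vnic_a, modify_present=True)
handle.commit()
handle.logout()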


Define Properties of vNIC for Fabric B




Repeat the process to create the vNIC for fabric B. It requires a unique name and it will be
assigned to fabric B. All other parameters will be identical to the vNIC for fabric A.


vNIC Configuration Summary




The vNIC summary window is used to validate the configuration of the newly created virtual
interface cards. Click Next to continue the wizard.
Note


The MAC address assignment indicates derived because the actual assignment of the
address will not occur until the service profile wizard has completed.


vNIC and vHBA Placement on Full-Slot Blades


This topic discusses vNIC placement on blade servers with dual mezzanine cards.

Dual Mezzanine vNIC Placement


Full-slot blades have two mezzanine cards.
vNIC placement can be automatic or manual.


The Cisco UCS B250 and B440 full-slot blade servers include two slots for mezzanine cards.
Because a vNIC is a virtual definition of a network interface card, it could be placed on the
appropriate fabric on either of the mezzanine cards present in the full-slot server.
In a half-slot blade with a single mezzanine card, simply allow the system to select the only
mezzanine card. If manual control is desired, select Specify Manually from the Select
Placement drop-down list. vCon1 maps vNICs to the left mezzanine slot, and vCon2 maps a
vNIC to the right mezzanine slot (as viewed from the front panel of the blade server). Click
Next to continue in the wizard.


Binding of a vHBA to a Fibre Channel Boot Target

This topic discusses binding boot targets to vHBAs.

Boot Order Creation


Drag and drop vHBAs into the boot order.


To select vHBAs to boot from a logical unit number (LUN), click and drag the first vHBA into
the boot order whitespace.


Drag and Drop Pop-Up for Fabric A


Select the vHBA for fabric A as the primary boot device.


A pop-up window will appear with the name of the vHBA and the choice to make this the
primary or secondary boot device. Select Primary for the vHBA on fabric A, then click OK.


Drag and Drop Pop-Up for Fabric B


Virtual HBA for fabric B will be the secondary.


A pop-up window will appear with the name of the vHBA and the choice to make this the
primary or secondary boot device. Select Secondary for the vHBA on fabric B, then click OK.


Set Boot Target Menu




Below the vHBAs, click Add SAN Boot Target and select Add SAN Boot Target to SAN
Primary.


Set Boot Primary A




In the pop-up window, enter the boot LUN (always LUN 0 on Cisco UCS systems), the
WWPN of the boot target, and set the type to Primary.


Set Boot Primary B




Repeat the steps that are required to set the primary, but set the type to Secondary. In the event
that the primary boot device fails, the secondary device will attempt to boot the system from the
other vHBA.


Boot Order Summary




After boot devices are configured, the boot order summary window allows you to verify and
make modifications to the boot order before committing the configuration.
There are two checkboxes:

Reboot on Boot Order Change: When checked, the blade that is associated with the service
profile reboots immediately after the boot order is changed.

Enforce vNIC/vHBA Name: Means that the system uses any vNICs or vHBAs in the
order that is shown in the Boot Order table. If not checked, the system uses the priority that
is specified in the vNIC or vHBA.

Note


If the configuration of a vHBA is changed (other than the boot order), the system will
immediately reboot.


Server Assignment
Server assignment is one of the final configuration decisions within the expert service profile
wizard. The Cisco UCS administrator can simply finish the wizard and manually assign the
service profile at a later time, or manually select a server from the list of unassociated servers.
Service profiles can only be associated with a server that does not have a profile that is actively
associated with it.

Assign Service Profile to Server from a Pool


Select a server pool from the drop-down list.

2011 Cisco Systems, Inc. All rights reserved.

Course acronym
DCUCIvx.x#-67
v4.06-67

From the Server Assignment drop-down list, select an available pool. Cisco UCS Manager will
select a server from that pool, remove it from all of the pools of which it is currently a
member, and associate the service profile with that blade. Click Next to continue the wizard.


Assign Service Profile Directly


Select a server from all unassociated server blades.


Choose Select Existing Server from the Server Assignment drop-down list. Click the radio
button of an unassociated server.
The power state radio button allows you to choose the initial power state after the service
profile has been successfully associated with the blade server. If the SAN team has not
provisioned the boot LUN in time for the service profile, you should leave the power state set to Down.
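
Direct assignment can also be sketched with the Python SDK by adding a binding child object to the
service profile. This is a hypothetical illustration: the LsBinding class, the pn_dn attribute,
and the blade DN shown are assumptions to confirm against the SDK documentation.

# Hedged sketch: directly associate a service profile with a specific unassociated blade.
from ucsmsdk.ucshandle import UcsHandle
from ucsmsdk.mometa.ls.LsBinding import LsBinding

handle = UcsHandle("ucsm.example.com", "admin", "password")  # hypothetical
handle.login()

binding = LsBinding(parent_mo_or_dn="org-root/ls-Oracle_RAC_1",
                    pn_dn="sys/chassis-1/blade-3")  # illustrative blade DN
handle.add_mo(binding, modify_present=True)
handle.commit()
handle.logout()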


Assign IPMI and SoL Policies to the Service Profile




Operational Policies is the last page of the wizard. Use the drop-down lists to select the IPMI
and SoL policies.


Assign BIOS and Scrub Policies to the Service Profile




While still on the Operational Policies page of the wizard, expand the BIOS Configuration and
Scrub Policy subwindows. Use the drop-down lists to select the BIOS and scrub policies.


Required vs. Optional Components of the Service Profile Definition

This topic differentiates between required elements and optional elements of the service profile.

Service Profile Requirements


Required     | Optional
UUID         | Additional vNICs
MAC address  | Provide 802.1Q trunk
             | FCoE (pooled or manually assigned WWNN and WWPN)
             | Policies
             | Thresholds

The table summarizes required and optional elements of all service profiles.


Summary
This topic summarizes the key points that were discussed in this lesson.

A blade server requires a service profile to achieve external
communication through the mezzanine card.
Cisco UCS Manager version 1.3 supports manipulating BIOS
settings through a BIOS policy in the service profile.
Adapter policies allow configuration of RSS, checksum
offloading, failback timer, and transmit and receive buffers.
The LAN Uplinks Manager allows the modification of QoS
system classes to tune bandwidth priority, lossless fabric,
multicast, and MTU.
IPMI and SoL policies are applied to service profiles to allow
external access to the Cisco Integrated Management Controller
and serial console.
Scrub policy for local disks and BIOS can be applied to a service
profile that allows local disks and BIOS settings to be erased
upon disassociation.
The expert service profile wizard allows complete control over
the assignment of identity, policy, and thresholds.


Summary (Cont.)
The expert service profile wizard is initiated from the Server tab in the
navigation pane.
UUID can be assigned from a pool, manually assigned, or derived from
the server BIOS.
WWNN and WWPN assignment can be performed from a pool,
manually assigned, or (depending on the mezzanine model) derived
from hardware.
MAC address assignment can be performed from a pool, manually
assigned, or derived from hardware.
Full-slot blade servers present two mezzanine slots in the service profile,
and offer manual or automatic placement of vNICs and vHBAs on a
slot.
You must configure the binding of a vHBA to a Fibre Channel boot
target.
Server assignment can be directly selected from a list of unassociated
servers, assigned at a later time, or assigned from a pool.
UUID and MAC address assignments are the only required elements in
a service profile.



Lesson 3

Creating Service Profile Templates and Cloning Service Profiles
Overview
Service profile templates build on the idea of manually created service profiles. With the
potential for a large population of blade servers in a given Cisco Unified Computing System
(UCS), manual creation of service profiles would be both slow and error-prone. The use of
templates allows the Cisco UCS administrator the ability to create server definitions in a
consistent and rapid process.
Unlike manually created service profiles, service profile templates must use pools for identity
and server assignment. Derived hardware values are never used.

Objectives
Upon completing this lesson, you will be able to configure service profile templates and
automate the creation of service profiles that are based on the template. This ability includes
being able to meet these objectives:

Create a service profile template and describe the need for pooled resources and identities

Describe the reasons to create differentiated service profile templates to allow variations of
policy

Automate the creation of a server farm using service profile templates

Describe the hidden pitfalls when using updating templates

Unbind a service profile from its template

Clone a service profile and demonstrate understanding of cloning requirements

Service Profile Templates


This topic discusses the configuration of service profile templates and the need for pooled
resources and identities.

Creating a Service Profile Template


Process is similar to service profile creation
Requires server pool (cannot select individual blade)
Requires pooled identities (UUID, MAC, WWNN, WWPN)


The process of creating a service profile template is nearly identical to creating a service profile
manually. The principal difference is that service profile templates cannot be directly applied to
a compute node and no hardware elements can use a derived value.


Name the New Template


Process is similar to service profile creation.


Service profile templates require a name, just as manually created service profiles. Templates
can also contain an optional description.


Template Types
Initial templates
Updates to template are not propagated to service profiles
created using the initial template.
Updating templates
Changes to template are propagated to service profiles
created using the updating template.


There are two types of templates. For both types, profiles that are created from a template
cannot be modified. The ability to bind or unbind a service profile from its template will be
discussed later in this lesson.

Initial templates: This type of template maintains no connection to service profiles created
from this definition. Changes to the template do not propagate to service profiles created
from the template.

Updating templates: This type of template maintains a semi-permanent link to service
profiles spawned from this definition. Any changes to an updating template will be
immediately propagated to all service profiles created from the template.
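
In the underlying object model, both template types map to the same service profile object, and
only the type value differs. The sketch below illustrates this with the Python SDK; the type
values, pool name, and template name are assumptions to verify for your SDK release.

# Hedged sketch: create an updating service profile template with ucsmsdk.
from ucsmsdk.ucshandle import UcsHandle
from ucsmsdk.mometa.ls.LsServer import LsServer

handle = UcsHandle("ucsm.example.com", "admin", "password")  # hypothetical
handle.login()

template = LsServer(parent_mo_or_dn="org-root",
                    name="Oracle_RAC_Template",
                    type="updating-template",         # or "initial-template"
                    ident_pool_name="Americas_UUID")  # templates must use pooled identities
handle.add_mo(template, modify_present=True)
handle.commit()
handle.logout()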


Apply UUID Pool


All service profiles created from this template will use pooled
UUIDs.


To facilitate stateless computing, the universally unique identifier (UUID) must be assigned
from a pool. The UUID is unique in that it is the only identity resource that has the option of
using the hardware default in the BIOS. Click Next to continue the wizard.


Apply WWNN Pool


All service profiles created from this template will use pooled
WWNNs.


To enable Fibre Channel over Ethernet (FCoE) support in service profiles generated from this
template, enter the name of the world wide node name (WWNN) pool.


Create vHBA for Fabric A


All service profiles created from this template will use pooled
WWPNs.


Click the Expert radio button to provide complete control over creating vHBAs. Click the Add
(+) button to create the virtual host bus adapter (vHBA) for fabric A. As in the service profile
creation wizard, enter a name, fabric affiliation, VSAN, and world wide port name (WWPN)
pool.


Create vHBA for Fabric B


All service profiles created from this template will use pooled
WWPNs.


Click the Expert radio button to provide complete control over creating vHBAs. Click the Add
(+) button to create the vHBA for fabric B. As in the service profile creation wizard, enter a
name, fabric affiliation, VSAN, and WWPN pool.


vHBA Templates
All the elements of a vHBA can be stored in a template.
Templates are created under the SAN tab in Policies.


vHBA templates allow Cisco UCS administrators to speed the repetitive task of entering vHBA
parameters in many templates or service profiles. Enter the information once in the template
and never set it again.


Create vNIC for Fabric A


All service profiles created from this template will use pooled
MACs.


The definition of a virtual network interface card (vNIC) in the service profile template wizard
is identical to the way that the definition is created in the manual service profile wizard. Enter a
name for the vNIC, MAC address pool, access VLAN or VLANs associated with an 802.1Q
trunk, and adapter performance profiles. In this example, the receive-side scaling (RSS) and
jumbo frame policies will be bound to every service profile generated from this template.


Create vNIC for Fabric B


All service profiles created from this template will use pooled
MACs.


The definition of the vNIC for fabric B is identical to the creation of the vNIC for fabric A,
except for the name.


vNIC Templates
All the elements of a vNIC can be stored in a template.
Templates are created under the LAN tab in Policies.


vNIC templates allow Cisco UCS administrators to speed the repetitive task of entering vNIC
parameters in many templates or service profiles. It is especially useful when many VLANs
must be selected for a trunk interface.


Boot Order and Boot Target


Create the boot order as if it were a service profile and not a
template.


The Cisco UCS administrator has two options regarding the boot order. The example illustrates
a SAN boot policy, where every initiator is mapped to the same target logical unit number
(LUN). This is only possible if the storage system maps the source WWPN with a unique LUN.
This method is very useful if the SAN administrators can provide premapped WWPNs to
LUNs.
The second option is to define the vHBAs as bootable, but leave the boot target definition for a
later time.


Template Server Assignment


Templates can only assign servers from a server pool.


It is clear from the drop-down list for server assignment that manual assignment is not an
option. Servers must be assigned from a pool.


IPMI and SoL Policy Assignment


Add IPMI and SoL policies created earlier.


The Intelligent Platform Management Interface (IPMI) and Serial over LAN (SoL) policies that
were created earlier can be applied to the template. All service profiles that are generated from
this template will inherit both policies.


BIOS and Scrub Policy Assignment


Add BIOS and scrub policies.


The BIOS and Scrub policies are applied to the template, and will be assigned automatically to
every service profile generated by the template.


Modify Template
Pool assignments and policies can be changed after template
creation.


After a service profile template has been generated, it can be modified in a very similar manner
to a manually created service profile. Certain changes made to an updating template will be
propagated to every service profile generated by the template.


Creating Differentiated Service Profile Templates


This topic discusses the need for differentiated service profile policy in heterogeneous
computing environments to allow variations of policy.

Using Templates for Differentiated Policy

The use of templates allows for the consistent application of
policy to meet application requirements in heterogeneous
computing environments.

Oracle RAC          | SAP                 | Web Server
B250-M2             | B230-M1             | B200-M2
256 GB RAM          | 128 GB RAM          | 8 GB RAM
Local HD for swap   | No local HD         | Local HD
M81-KR VIC x 2      | M71-KR-Q            | M71-KR-E
Xeon 5660 x 2       | Xeon 6560 x 2       | Xeon 5620 x 1
RSS support         | RSS support         | RSS support
Intel VT-d BIOS on  | Intel VT-d BIOS off | Intel VT-d BIOS off
Jumbo frames        | Jumbo frames        | Standard MTU
Hyperthreading off  | Hyperthreading on   | Hyperthreading on

The figure shows the power and flexibility of using templates for differentiated policy. Because
groups of applications share similar or identical requirements, service profile templates can be
created to seamlessly provide the identity of server resources that are needed to serve the
application. One of the important operational benefits to this approach is consistency of policy
across the entire class of applications.


Automating Creation of a Server Farm Using Service Profile Templates

This topic discusses automated compute node provisioning from templates.

Creating Service Profiles from Template


Creating from template allows simultaneous provisioning of
multiple profiles.


Another benefit of using service profile templates is that the Cisco UCS administrator can
automate the provisioning of one to hundreds of compute nodes with a few simple operations. After a
service profile template is built and points to identity and resource pools with sufficient
resources, automation can begin.
Select the service profile template and the organization where the new service profiles are to be
created in the navigation pane. In the content pane, click the link Create Service Profiles
From Template.


Select Prefix and the Number of Profiles


Provide naming prefix and number of service profiles based
on the service profile template.
In the example, the naming prefix would create service
profiles Oracle_RAC1 through Oracle_RAC20.


The dialog box prompts you for a naming prefix and the number of service profiles to be
generated with that prefix. Immediately after you click OK, service profiles will appear under
the organization. If the server assignment in the template points to a server pool, a new service
profile will immediately begin to associate with the next available server in the pool.
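
A rough Python SDK sketch of the same bulk operation follows. The GUI (and the XML API) expose a
dedicated instantiate-from-template operation; this loop only approximates that behavior by
creating profiles whose src_templ_name references the template, and the class, attribute, and
template names are assumptions to verify for your SDK release.

# Hedged sketch: generate Oracle_RAC1 through Oracle_RAC20 bound to a template.
from ucsmsdk.ucshandle import UcsHandle
from ucsmsdk.mometa.ls.LsServer import LsServer

handle = UcsHandle("ucsm.example.com", "admin", "password")  # hypothetical
handle.login()

for i in range(1, 21):  # Oracle_RAC1 .. Oracle_RAC20, as in the example
    sp = LsServer(parent_mo_or_dn="org-root",
                  name="Oracle_RAC{0}".format(i),
                  src_templ_name="Oracle_RAC_Template")
    handle.add_mo(sp, modify_present=True)

handle.commit()
handle.logout()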


Describe the Hidden Pitfalls When Using Updating Templates

This topic discusses issues related to the use of updating templates that can directly affect
production operations.

Updating Template Issues to Consider


Updating templates are a great way to update policy on a
large number of service profiles.
The danger is that changing some parameters in the template
will cause all linked compute nodes to reboot simultaneously.

Policy Change                  | Result of Change
UUID Pool                      | Reboot all linked compute nodes
WWNN Pool                      | Reboot all linked compute nodes
WWPN Pool                      | Reboot all linked compute nodes
MAC Pool                       | Reboot all linked compute nodes
Boot Order                     | Reboot can be avoided
vNIC/vHBA Placement            | Reboot all linked compute nodes
Local Disk Policy              | Reboot all linked compute nodes
BIOS Policy                    | Reboot all linked compute nodes
IPMI, SoL, and Scrub Policies  | No reboot

Changes to updating templates are immediately propagated to any service profiles that were
generated from that template. If none of the generated service profiles are associated to a
compute node, there is no risk to an update. However, if certain changes are made to the
updating template, it will cause all linked compute nodes to reboot. A summary of template
modifications and their associated reactions are shown in the table.


Updating Template Warning


When a change to an updating template will result in the
disruption of compute operations, a warning dialog box
appears.
Disruptive changes should only be performed in a planned
maintenance window that allows all operating systems and
hypervisors to be gracefully shut down.


Beginning with Cisco UCS Manager version 1.2, the system warns you that if the modification
to the updating template is executed, all impacted compute nodes will reboot immediately.
The best practice in this case is to perform the update in a scheduled and approved maintenance
window that provides for the graceful shutdown of all compute nodes that the change will
affect.


Unbind a Service Profile from Its Template


This topic discusses how to unbind a service profile from its template.

Service profiles bound to a template are in a read-only state.
To modify the service profile, unbind the profile from the
template.


One of the consequences of generating service profiles from a template is that the resulting
service profiles are created in a read-only state. The example displays a conspicuous warning
message alerting the Cisco UCS administrator that no changes can be made to the service
profile unless it is unbound from the template. When you click the unbind link, a small dialog box
appears asking the administrator to confirm the operation. When the operation is confirmed, the
service profile no longer displays the warning or its link to its parent template.
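
The unbind operation can also be sketched with the Python SDK by clearing the template reference
on the service profile. The src_templ_name attribute and the DN shown are assumptions to verify
against the SDK documentation.

# Hedged sketch: unbind a service profile from its parent template (ucsmsdk).
from ucsmsdk.ucshandle import UcsHandle

handle = UcsHandle("ucsm.example.com", "admin", "password")  # hypothetical
handle.login()

sp = handle.query_dn("org-root/ls-Oracle_RAC1")  # illustrative DN
if sp is not None:
    sp.src_templ_name = ""  # an empty value removes the link to the parent template
    handle.set_mo(sp)
    handle.commit()

handle.logout()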


Bind a Service Profile to a Template


Stand-alone service profiles can be bound to a template at a
later time.


It is also possible to bind a manually created service profile to a template. If the previously
created service profile is bound to an initial or updating template, it will retain the identity
information for UUID, WWNN, WWPN, and MAC address unless the template uses different
pools. If the template uses a different named pool, identity information will be replaced with
data pulled from the pools of the template.


Cloning a Service Profile


This topic discusses cloning operations on service profiles and service profile templates.

Service Profile Cloning


All pool associations are replicated to a cloned template.


Service profiles and service profile templates can both be cloned. Simply right-click on the
name of the service profile or service profile template and select Create a Clone. The result of
this operation is that all pooled identities in the clone will be refreshed with unique values. The
boot order is cloned verbatim.


Clone Destination
Unique values for MAC and WWN will be immediately
assigned to the profile from the appropriate pool.
Select the destination organization where the clone should be
created.


When you commit to creating a clone, a pop-up dialog box prompts you for the name of the
clone and destination organization where the clone will be created.
Note

Remember that once an object is created in a particular organization, it cannot be moved or
renamed. If an object is created in the wrong organization, it must be deleted and then
re-created in the correct organization.


Summary
This topic summarizes the key points that were discussed in this lesson.

Service profile templates require all identity and resources to be
derived from a pool.
Differentiated service profile templates allow for the consistent
application of policy in heterogeneous computing environments.
Service profile templates can be leveraged to create an arbitrary
number of service profiles associated to compute nodes.
Updating templates are useful for maintaining consistent policy
across a large population of compute nodes, but modifying
certain parameters will cause all linked compute nodes to
reboot.
Service profiles created from templates cannot be changed
unless they are unbound from the template.
Clones made from existing service profiles maintain the boot
order verbatim, but select new identities from identity pools.



Lesson 4

Managing Service Profiles


Overview
While a Cisco Unified Computing System (UCS) is operational, periodic maintenance to
service profiles may be necessary. Corporate policy and regulatory compliance requirements
can dictate changes to policy. Updates to Cisco UCS Manager software regularly introduce
support for new features, capabilities, and new blade servers. The pace of innovation within
Cisco UCS has been swift and dramatic. Cisco UCS has grown from a product with a single
blade in the fabric interconnect to a rich selection of components ready to meet the needs of the
most demanding applications in enterprise data centers and cloud computing. Service profile
maintenance will be an ongoing periodic process.

Objectives
Upon completing this lesson, you will be able to disassociate and associate service profiles
from compute nodes. You will recognize which parameters of a service profile trigger a
disruptive change. This ability includes being able to meet these objectives:

Use Cisco UCS Manager to associate and disassociate a service profile to a server blade

Describe what changes to a service profile trigger a Cisco UCS Utility Operating System
update and an outage to a server

Describe the importance of planning the organization where a service profile is created

Use Cisco UCS Manager to move a service profile to a new server blade in the event of
hardware failure

Associating and Disassociating a Service Profile to a Server Blade

This topic discusses associating and disassociating service profiles with compute nodes.

Associate a Service Profile with a Compute Node


Select the unassociated service profile and click the link
Change Service Profile Association.


If you decided not to immediately assign a service profile to a compute node, you can select the
desired service profile in the navigation pane. In the General tab of the content pane, click the
link labeled Change Service Profile Association.


Associate a Service Profile with a Server Pool


Select the appropriate server pool from the Server
Assignment drop-down list.


A pop-up dialog box prompts you to select either an existing unassociated server or a server
pool. Unlike service profile templates, service profiles that are not bound to a template are not
limited to selecting a server from a pool. Click OK to begin the association process.


Observe FSM Status During Service Profile Association


You can observe the association process in the FSM tab in
the content pane of the service profile.


You can follow the complete process of association by clicking the FSM tab in the content
pane. Recall from "Monitoring System Events" that service profile association and
disassociation are complex processes that are assigned to a finite state machine (FSM).
If the service profile is unable to associate with the compute node that is selected, the FSM will
indicate the step of the process at which the failure occurred. This is very useful for
troubleshooting service profile association issues.
Note


Be aware that the FSM status indicator may appear to stop and lock up. Some stages of the
association process can take one minute or longer to complete. This is normal.
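
Instead of watching the FSM tab, the association state can also be polled programmatically. The
sketch below assumes the Python SDK; the assoc_state and oper_state attribute names and the DN
shown are assumptions to verify for your release.

# Hedged sketch: poll the association state of a service profile (ucsmsdk).
from ucsmsdk.ucshandle import UcsHandle

handle = UcsHandle("ucsm.example.com", "admin", "password")  # hypothetical
handle.login()

sp = handle.query_dn("org-root/ls-Oracle_RAC_1")  # illustrative DN
if sp is not None:
    # assoc_state moves through associating/associated; oper_state reports health.
    print(sp.name, sp.assoc_state, sp.oper_state)

handle.logout()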


What Happens During Service Profile Association?


A small Linux kernel (the Cisco UCS Utility Operating System) is PXE booted using an
internally isolated network in the fabric interconnect.


The processes that occur during service profile association and disassociation are very
interesting. The first step in associating a service profile to a compute node begins by powering
up the server. Next, the server Preboot Execution Environment (PXE) boots a small Linux
distribution over a private network connection to the fabric interconnect.
The screenshot in the figure highlights the term pnuosimg. Before Cisco UCS was released to
the public, this Linux operating system was referred to as Processor Node Utility Operating
System, or PNuOS. The official name for this is Cisco UCS Utility Operating System. The old
terminology still appears in some contexts.
Note


The black text on white background was reversed from the standard keyboard, video,
mouse (KVM) output of white text on black background in a graphics program, for
readability. The KVM does not have a choice of text or background colors.


Cisco UCS Utility Operating System


The Cisco UCS Utility Operating System PXE booted from the
fabric interconnect applies all configuration and policy elements
of the service profile to a compute node, then exits and reboots
the compute node.


During PXE boot, the server obtains a DHCP address over the private network. This network is
completely isolated from the in-band and out-of-band connections that are processed by the
fabric interconnect and servers.
The purpose of booting this Linux operating system is to program the compute node. You will
see identity information such as the universally unique identifier (UUID), MAC address, world
wide node name (WWNN), world wide port name (WWPN), BIOS configuration, adapter
policies, and so on.


Disassociate a Service Profile from a Compute Node


Select the service profile, and click the link Disassociate Service Profile.
Click Yes in the warning dialog box to confirm the action.
Monitor the process in the FSM tab.

To disassociate a service profile from its compute node, select the service profile in the
navigation pane. In the content pane click the link Disassociate Service Profile. A pop-up
warning dialog asks you to verify the operation. Note also the small comment about observing
the process in the FSM tab.


FSM Status During Service Profile Disassociation


You can observe the disassociation process in the FSM tab
in the content pane of the service profile.


Both association and disassociation processes are monitored by FSM. Click the FSM tab in the
content pane to observe the process of disassociation.


Changes to a Service Profile that Trigger a Cisco UCS Utility Operating System Update

This topic discusses the types of service profile modifications that automatically trigger a Cisco
UCS Utility Operating System update and an outage to a server.

Changes Trigger Cisco UCS Utility Operating System

When a service profile is associated with a compute node,
certain changes made to the service profile trigger a Cisco UCS
Utility Operating System update (and instant server reboot).

Configuration Change               | Triggers UCSuOS to Run?
Change pooled identity             | Yes
Change non-pooled identity         | Yes
Change Fibre Channel boot target   | Yes
Change boot order                  | Maybe
Change BIOS config policy          | Yes
Change vNIC failover               | Yes
Change adapter policy              | No
Change VSAN on vHBA                | No
Change VLAN on vNIC                | No
Change from access VLAN to trunk   | No

UCSuOS = UCS Utility Operating System

It is important to understand what types of service profile modifications can be made outside of
a change control maintenance window. The table summarizes changes that will trigger a Cisco
UCS Utility Operating System to run. As of version 1.2 of Cisco UCS Manager, the system
alerts you to the changes that will result in the compute node being immediately rebooted.


Planning the Organization Where a Service Profile Is Created
This topic discusses the importance of creating service profiles and other policy objects in the
correct organization.

Creating Service Profiles in the Correct Organization


After a service profile has been created, it cannot be
renamed or moved to another organization.
If a service profile is created in the wrong organization, the
only way to correct the situation is to delete the service profile
and re-create it in the correct organization.
In a Cisco UCS deployment that includes role-based access
control and locales, the placement of the service profile has
ramifications for access control.
If only the root organization is employed, no special planning
or considerations are required.


If a given Cisco UCS deployment creates all identity, resource, pool, and policy objects in the
root organization, no special planning or considerations are required.
If the Cisco UCS administrators have created a significant number of objects under the root
organization and decide to apply administrative hierarchy at a later date, significant work will
be required. Every object that needs to move into a nonroot organization will need to be deleted
and re-created in the new organization.
When an object has been created in a given organization, it cannot be renamed or moved.


Creating Service Profiles in the Wrong Organization


Administrators restricted to the Boston locale cannot manage
service profile BOS_Oracle_RAC1 created in the Americas
parent organization.


In the example in the figure, a service profile was created in the Americas organization. If this
company employs role-based access control (RBAC) and limits administrative scope based on
locale, administrators in the Boston organization may have no control over the service profile
BOS_Oracle_RAC1. The service profile must be deleted and re-created in the appropriate
organization.


Moving a Service Profile to a New Server Blade in the Event of Hardware Failure

This topic discusses using Cisco UCS Manager to move a service profile to a new server blade
in case of hardware failure.

Compute Node Hardware Has Failed

The service profile must be relocated to a functioning
compute node.

2011 Cisco Systems, Inc. All rights reserved.

DCUCI v4.06-17

The blade server in slot 8 has experienced a severe failure. An administrator used Cisco UCS
Manager to decommission the blade in slot 8. The service profile that is associated with this
compute node is automatically de-linked.


Automatic Service Profile Reassociation


Because the compute node for service profile Oracle_RAC_1
was selected from a server pool, the service profile
automatically reassociated to an available server in that pool.


Because the service profile received its server assignment from a server pool, it automatically
selected a new server from the same named pool without any further action from the
administrator. After the service profile reassociated to the new compute node, the operating
system or hypervisor automatically rebooted on the new compute node.


Summary
This topic summarizes the key points that were discussed in this lesson.

Cisco UCS Manager is used to associate and disassociate a
service profile with a compute node.
Certain modifications to service profiles already associated
with the compute node can trigger Cisco UCS Utility
Operating System to reboot the compute node.
Service profiles, service profile templates, policies, and pools
cannot be moved or renamed after they are created under a
given organization.
Cisco UCS Manager allows the administrator to move a
service profile from a failed compute node to a replacement.


Module Summary
This topic summarizes the key points that were discussed in this module.

Policies, identity pools, and resource pools are used by
service profiles and service profile templates.
Service profiles allow for stateless computing by abstracting
identity values normally tied to hardware.
Service profile templates allow for consistent application of
policy, yet are flexible enough for differentiation of policy in
heterogeneous computing environments.
Service profiles, service profile templates, pools, and policies
cannot be renamed or moved once they are created in an
organization.

2011 Cisco Systems, Inc. All rights reserved.

DCUCI v4.06-1

This module examines the building blocks of stateless computing.


In traditional server environments, where a single operating system runs on the underlying
hardware, it is very challenging to replicate a consistent configuration across a large population
of servers. Adding to this challenge is the fact that unique network and storage addresses are
tied to the hardware. In Cisco UCS, these challenges are addressed by a flexible array of
reusable policy objects, identity pools, and resource pools. These reusable objects are
consumed by a construct that is called a service profile.
Service profiles act as a hardware abstraction layer. Instead of relying on identity locks to the
hardware, all elements of a server personality, or state, are encapsulated in this policy
construct. The benefit of this approach is that if the underlying hardware fails, the service
profile is mobile and can be reassociated with another compute node. The operating system or
hypervisor boots on the new compute node and is unaware that the hardware has changed. This
avoids time-consuming downtime that is associated with re-acclimating an operating system
and applications with all of the new hardware identifiers on the replacement server.
Cisco UCS extends the functionality of service profiles with profile templates. Templates offer
a way to automate the creation of many service profiles and ensure consistent policy. In
heterogeneous computing environments, differentiated policies and service profile templates
afford the Cisco UCS administrator a rich set of tools to manage large deployments.


Each compute node requires a unique service profile. There is a one-to-one mapping of service
profiles to compute nodes. When the Cisco UCS administrator begins the association process,
the Preboot Execution Environment (PXE) boots a special-purpose Linux distribution on a
private network connection to the fabric interconnect. The Cisco UCS Utility Operating System
is responsible for programming all elements of identity and policy to the compute node.
Because some identities and policies are integral to the operation of the compute node, caution
should be observed when modifying certain elements of the service profile. Changing many elements of
the service profile will cause the node to reboot and run the Cisco UCS Utility Operating System
to apply the requested change.

References
For additional information, refer to these resources:

Gai, S., Salli, T., et al (2009). Project California: a Data Center Virtualization Server. Raleigh, NC: Lulu.com.

Cisco Systems, Inc. Cisco UCS Manager GUI Configuration Guide, Release 1.3(1): Configuring Server-Related Pools.
http://www.cisco.com/en/US/docs/unified_computing/ucs/sw/gui/config/guide/1.3.1/UCSM_GUI_Configuration_Guide_1_3_1_chapter24.html

Cisco Systems, Inc. Cisco UCS Manager GUI Configuration Guide, Release 1.3(1): Creating Server Pool Policy Qualifications.
http://www.cisco.com/en/US/docs/unified_computing/ucs/sw/gui/config/guide/1.3.1/UCSM_GUI_Configuration_Guide_1_3_1_chapter25.html#task_43C18874A0D245D6987C3530BD4F06C7

Cisco Systems, Inc. Cisco UCS Manager GUI Configuration Guide, Release 1.3(1): Configuring Service Profiles.
http://www.cisco.com/en/US/docs/unified_computing/ucs/sw/gui/config/guide/1.3.1/UCSM_GUI_Configuration_Guide_1_3_1_chapter26.html

Cisco Systems, Inc. Cisco UCS Manager GUI Configuration Guide, Release 1.3(1): Ethernet and Fibre Channel Adapter Policies.
http://www.cisco.com/en/US/docs/unified_computing/ucs/sw/gui/config/guide/1.3.1/UCSM_GUI_Configuration_Guide_1_3_1_chapter23.html#concept_C113E10277D74C5FB66C92A87293B696

Cisco Systems, Inc. Cisco UCS Manager GUI Configuration Guide, Release 1.3(1): Configuring QoS System Classes.
http://www.cisco.com/en/US/docs/unified_computing/ucs/sw/gui/config/guide/1.3.1/UCSM_GUI_Configuration_Guide_1_3_1_chapter18.html#task_2F15A8D4D2B34ED79B7917877D749B47

Cisco Systems, Inc. Cisco UCS Manager GUI Configuration Guide, Release 1.3(1): Setting the vNIC/vHBA Placement.
http://www.cisco.com/en/US/docs/unified_computing/ucs/sw/gui/config/guide/1.3.1/UCSM_GUI_Configuration_Guide_1_3_1_chapter26.html#task_9E40E74DB2EE43EDA8A74C44AFC9323A

Cisco Systems, Inc. Cisco UCS Manager GUI Configuration Guide, Release 1.3(1): Creating a Service Profile Template.
http://www.cisco.com/en/US/docs/unified_computing/ucs/sw/gui/config/guide/1.3.1/UCSM_GUI_Configuration_Guide_1_3_1_chapter26.html#task_23DCA7911736413A9D03179A23523A0A

Cisco Systems, Inc. Cisco UCS Manager GUI Configuration Guide, Release 1.3(1): Cloning a Service Profile.
http://www.cisco.com/en/US/docs/unified_computing/ucs/sw/gui/config/guide/1.3.1/UCSM_GUI_Configuration_Guide_1_3_1_chapter26.html#task_8593C436D54F4C0BA2467C01FFBC0F81


Module Self-Check
Use the questions here to review what you learned in this module. The correct answers and
solutions are found in the Module Self-Check Answer Key.
Q1) Which statement is valid regarding service profile placement? (Source: Creating Identity and Resource Pools)
A) Service profiles, policies, templates, and pools can be moved from organization to organization.
B) Service profiles, policies, templates, and pools can be renamed.
C) Service profiles, policies, templates, and pools are the foundation of stateful computing.
D) Service profiles, policies, templates, and pools cannot be moved from organization to organization.
E) Service profiles, policies, templates, and pools are used to enable syslog.

Q2) Which statement is true regarding the creation of identity pools? (Source: Creating Identity and Resource Pools)
A) Each identity occupies no space in the database.
B) Pools cannot be extended after creation, but additional blocks of identities can be added later.
C) Pools can be extended after creation, but additional blocks of identities cannot be added later.
D) It is a best practice to always create the maximum number of addresses in a pool (9999).

Q3) Which three items are reasons for adopting pooled identity resources? (Choose three.) (Source: Creating Identity and Resource Pools)
A) ensures global uniqueness of identities in an entire network
B) leads to consistency of policy
C) enables scalable provisioning
D) ensures stateful computing
E) eliminates the need to license hardware-derived identities
F) allows for profile mobility

Q4) How many bits of the UUID are represented in the UUID suffix blocks? (Source: Creating Identity and Resource Pools)
A) 32
B) 48
C) 64
D) 128
E) 256

Q5) How many bits are represented in the MAC address prefix (OUI)? (Source: Creating Identity and Resource Pools)
A) 32
B) 48
C) 64
D) 128
E) 256

Q6) Which statement accurately describes RSS? (Source: Creating Service Profiles)
A) a method of flow control applied to FCoE
B) a method of load-sharing across multiple interfaces
C) a method of load-sharing across multiple TCP connections
D) a method of distributing TCP sessions across multiple CPU cores
E) a method of load-sharing TCP sessions across multiple FCoE interfaces

Q7) Which item accurately describes where to configure a QoS system class? (Source: Creating Service Profiles)
A) LAN Uplinks Manager
B) Internal Fabric Manager
C) LAN tab under QoS
D) SAN tab under QoS

Q8) What are three functions of the Cisco Integrated Management Controller? (Choose three.) (Source: Creating Service Profiles)
A) KVM over IP
B) SoL
C) BMC
D) IPMI
E) QoS
F) FCoE

Q9) Which port does SoL use to communicate? (Source: Creating Service Profiles)
A) TCP port 22
B) TCP port 23
C) TCP port 623
D) UDP port 22
E) UDP port 23
F) UDP port 623

Q10) What are two functions of a scrub policy? (Choose two.) (Source: Creating Service Profiles)
A) remove adapter policies when a service profile is disassociated
B) remove BIOS settings when a service profile is disassociated
C) remove threshold settings when the service profile is disassociated
D) remove disk settings when a service profile is disassociated
E) erase disks when the service profile is disassociated

Q11) Which two of these actions can be configured in the expert service profile wizard but not in the simple service profile wizard? (Choose two.) (Source: Creating Service Profile Templates and Cloning Service Profiles)
A) Select access VLAN
B) Configure VSAN trunking
C) Configure VLAN trunking
D) Select MAC address from a pool
E) Use hardware-derived WWNN

Q12) Which part of a cloned service profile is copied verbatim? (Source: Creating Service Profile Templates and Cloning Service Profiles)
A) UUID
B) MAC address
C) WWNN
D) WWPN
E) boot order

Q13) What must an administrator configure to modify the service profile created from a template? (Source: Creating Service Profile Templates and Cloning Service Profiles)
A) unbind the service profile from the linked clone
B) unbind the service profile from the linked template
C) unbind the service profile from the linked compute node
D) unbind the service profile from the linked master

Q14) What is a concern when employing updating templates? (Source: Creating Service Profile Templates and Cloning Service Profiles)
A) There is the potential to consume too many identity resources.
B) Modifying certain template parameters can cause a compute node to reboot.
C) Modifying certain template parameters can cause all linked nodes to reboot.
D) There is the potential to oversubscribe the server pool.

Q15) Which element allows the automated provisioning of service profiles? (Source: Creating Service Profile Templates and Cloning Service Profiles)
A) MAC pool
B) WWNN pool
C) WWPN pool
D) VLAN
E) template

Q16) Which three configuration changes to a service profile will trigger a Cisco UCS Utility Operating System reboot? (Choose three.) (Source: Managing Service Profiles)
A) modify MAC pool
B) modify WWNN pool
C) modify boot target
D) modify VLAN
E) change from access VLAN to trunk
F) change adapter policy

Q17) Which three configuration changes to a service profile are safe to perform during production (no automatic reboot)? (Choose three.) (Source: Managing Service Profiles)
A) modify MAC pool
B) modify WWNN pool
C) modify boot target
D) modify VLAN
E) change from access VLAN to trunk
F) change adapter policy

Q18) Which item correctly describes where Cisco UCS Utility Operating System updates its IP address? (Source: Managing Service Profiles)
A) dedicated private IP address in each compute node, supplied by the Cisco Integrated Management Controller
B) DHCP server running on the CMC in the IOM
C) DHCP server running on the fabric interconnect
D) external DHCP server
E) BOOTP server running on the CMC in the IOM

Q19) What are two properties of service profiles after they have been created in an organization? (Choose two.) (Source: Managing Service Profiles)
A) Service profiles can be renamed.
B) Service profiles cannot be renamed.
C) Service profile boot target cannot be changed.
D) Service profiles can be moved to a different organization.
E) Service profiles cannot be moved to a new organization.

Q20) What is the correct method to move a service profile from a failed compute node to a replacement? (Source: Managing Service Profiles)
A) Unbind the service profile from its template and it will automatically seek a new compute node.
B) Select Change Service Profile Association from the Equipment tab.
C) The service profile will automatically reassociate without any administrator intervention.
D) Select Change Service Profile Association from the Servers tab.

Module Self-Check Answer Key
Q1)
Q2)
Q3) B, C, F
Q4)
Q5)
Q6)
Q7)
Q8) A, B, D
Q9)
Q10) B, E
Q11) C, D
Q12)
Q13)
Q14)
Q15)
Q16) A, B, C
Q17) D, E, F
Q18)
Q19) B, E
Q20)

Module 7

Virtual Server Networking


Overview
Virtualization has become a critical requirement of data center networking. This module
introduces the Cisco Nexus 1000V Distributed Virtual Switch (DVS), a plug-in designed by
Cisco that operates within the VMware vCenter environment. Implementers will be expected to
successfully integrate Cisco Nexus 1000V in a VMware environment that runs on a Cisco
Unified Computing System (Cisco UCS) infrastructure.
The module contrasts the capabilities, features, and benefits of the Cisco Nexus 1000V DVS
with the VMware vNetwork Standard Switch (vSwitch) and vNetwork Distributed Switch
(vDS). The module presents the installation method for the Cisco Nexus 1000V Virtual
Supervisor Module (VSM), as well as the creation of port profiles and the application of
policies to virtual machines (VMs). The module also introduces the Cisco Nexus 1010 Virtual
Services Appliance and compares its capabilities to the Cisco Nexus 1000V VSM. This module
also covers the configuration of Cisco Virtual Network Link (VN-Link) in hardware with
VMware Pass-Through Switching (PTS).

Module Objectives
Upon completing this module, you will be able to describe the capabilities, features, and
benefits of the Cisco Nexus 1000V switch; the installation method and capabilities of the Cisco
Nexus 1000V VSM; and the configuration of VN-Link in hardware with VMware PTS. This
ability includes being able to meet these objectives:

Describe the Cisco Nexus 1000V switch and its role in a virtual server networking
environment

Describe the VMware networking environment

Describe the Cisco Nexus 1000V network architecture

Install and configure a Cisco Nexus 1000V switch

Configure basic Cisco Nexus 1000V networking

Configure Cisco UCS Manager for VMware PTS


Lesson 1

Evaluating the Cisco Nexus 1000V Switch
Overview
Implementers must be able to compare the components that comprise the Cisco Nexus 1000V
Distributed Virtual Switch (DVS). This lesson describes VMware virtual networking options,
including VMware vNetwork Standard Switch (vSwitch) and vNetwork Distributed Switch
(vDS). The lesson compares these options to the Cisco Nexus 1000V DVS and presents its
capabilities and switching components.

Objectives
Upon completing this lesson, you will be able to compare VMware virtual networking options
and the Cisco Nexus 1000V DVS. This ability includes being able to meet these objectives:

Describe the Cisco virtual switching solution for the VMware vDS

Describe the switching components of the Cisco Nexus 1000V DVS

Cisco Virtual Switching Overview


This topic provides an overview of Cisco virtual switching.

Evolution of Virtual Networking: Before Virtualization

One-to-one ratio between servers, operating system, or application and network port
Physical servers connected into the network via access ports (single VLAN)
Each host, operating system, or application with its own network policy controlled by the Network Admin
Clear demarcation between Server Admin and Network Admin roles and responsibilities

Before virtualization, each server ran its own operating system, usually with a single
application running in addition to the operating system. The network interface cards (NICs)
were connected to access layer switches to provide redundancy. Network security, quality of
service (QoS), and management policies were created on these access layer switches and
applied to the access ports that corresponded to the appropriate server.
If a server needed maintenance or service, it was disconnected from the network. During that
time, any critical applications needed to be manually offloaded to another physical server.
Connectivity and policy enforcement were very static and seldom required any modifications.
Server virtualization has made networking, connectivity, and policy enforcement much more
challenging. By using VMware vMotion, the devices that run the applications can move from
one physical host to another, which leads to several challenges:

Providing network visibility from the virtual machine (VM) virtual NIC (vNIC) to the
physical access switch

Creating and then applying policies to the vNICs

Providing consistent mobility of the policies that are applied to the VMs during a vMotion
event

Further complications exist because objects have been virtualized and can move around.


Evolution of Virtual Networking: Virtual Switches

L2 switches become embedded within the ESX hypervisor, to switch packets between virtual servers (VMs) and the outside world.
Multiple VMs are required to share the same physical uplinks (VMNICs) to the network, as well as the same network policy.
No longer a one-to-one relationship between server and network port.
Network visibility ends at the physical access port.
Segmentation between VMs is provided by 802.1Q VLANs.
Requires VLAN trunks for server connectivity into the network.
Server or virtualization admin owns the virtual network configuration and manages it through vCenter server.

The VMware server-virtualization solution extends the access layer into the VMware ESX
server by using the VM networking layer. Several components are used to implement server-virtualization networking:

Physical networks: Physical devices connect ESX hosts for resource sharing. Physical
Ethernet switches are used to manage traffic between ESX hosts, like in a regular LAN
environment.

Virtual networks: Virtual devices run on the same system for resource sharing.

Virtual Ethernet switch (vSwitch): Like a physical switch, the vSwitch maintains a table
of connected devices. This table is used for frame forwarding. The vSwitch can be
connected, via uplink, to a physical switch by using a physical VM NIC (VMNIC). The
vSwitch does not provide the advanced features of a physical switch.

Port group: A port group is a subset of ports on a vSwitch for VM connectivity.

Physical NIC (VMNIC): The VMNIC is used to uplink the ESX host to the external
network.


vNetwork Distributed Switch

Simplified and unified network virtualization management
vCenter provides an abstracted, resource-centric view of networking
Simplifies network management
Moves away from host-level network configuration (cluster level)
Statistics and policies follow the VM, simplifying debugging and troubleshooting and enabling enhanced security
Builds the foundation for networking resource pools (view the network as a clustered resource)

VMware vSphere 4 introduces the vDS, a DVS. With the vDS, multiple vSwitches within an ESX
cluster can be configured from a central point. The vDS automatically applies changes to the
individual vSwitches on each ESX host.
The feature is licensed and relies on VMware vCenter server. The vDS cannot be used for
individually managed hosts.
The VMware vDS and vSwitch are not mutually exclusive. Both devices can run in tandem on
the same ESX host. This type of configuration is necessary when running the Cisco Nexus
1000V Virtual Supervisor Module (VSM) on a controlling host. In this scenario, the VSM runs
on a vSwitch that is configured for VSM connectivity, and the VSM controls a DVS that runs a
Cisco Nexus 1000V Virtual Ethernet Module (VEM) on the same host.


VMware vNetwork Evolution

vNetwork switch API provides an interface for third-party virtual switch implementations.
Support for third-party capabilities and features, including monitoring and management of the virtual network.
Cisco Nexus 1000V is the first third-party vDS.

The Cisco server virtualization solution uses technology that was jointly developed by Cisco
and VMware. The network access layer is moved into the virtual environment to provide
enhanced network functionality at the VM level.
This solution can be deployed as a hardware- or software-based solution, depending on the data
center design and demands. Both deployment scenarios offer VM visibility, policy-based VM
connectivity, policy mobility, and a nondisruptive operational model.

Cisco Nexus 1000V


The Cisco Nexus 1000V is a software-based solution that provides VM-level network
configurability and management. The Cisco Nexus 1000V solution works with any upstream
switching system to provide standard networking controls to the virtual environment.

VN-Link
Cisco Virtual Network Link (VN-Link) technology was jointly developed by Cisco and
VMware and has been proposed to the IEEE for standardization. The technology is designed to
move the network access layer into the virtual environment to provide enhanced network
functionality at the VM level.

Cisco UCS 6100


With the Cisco Unified Computing System (UCS) 6100 Series Fabric Interconnects, VN-Link
can be deployed as a hardware-based solution that offers VM visibility, policy-based VM
connectivity, policy mobility, and a nondisruptive operational model.


Distributed Virtual Networking

Configure everything only once and in only one place.
Platform applies settings to the right ports on the right hosts.
Manage roles and permissions per vDS and distributed virtual port group.

The vDS adds functionality and simplified management to the VMware network. The vDS adds
the ability to use private VLANs (PVLANs), perform inbound rate limiting, and track VM port
state with migrations. Additionally, the vDS is a single point of network management for
VMware networks. The vDS is a requirement for the Cisco Nexus 1000V DVS.


Virtual Switch Options with vSphere 4

Virtual Switch       Model                                     Details
vSwitch              Host based: 1 or more per ESX host        Same as vSwitch in ESX 3.5
vDS                  Distributed: 1 or more per data center    Expanded feature set: PVLANs, bidirectional traffic shaping, Network vMotion, simplified management
Cisco Nexus 1000V    Distributed: 1 or more per data center    CLI similar to Cisco IOS CLI; same remote management as Cisco Nexus physical switches; different feature set compared to Cisco Nexus physical switches

Virtual networking concepts are similar with all virtual switch alternatives.

With the introduction of vSphere 4, VMware customers can enjoy the benefits of three virtual
networking solutions: vSwitch, vDS, and the Cisco Nexus 1000V.
The Cisco Nexus 1000V bypasses the VMware vSwitch with a Cisco software switch. This
model provides a single point of configuration for the networking environment of multiple ESX
hosts. Additional functionality includes policy-based connectivity for the VMs, network
security mobility, and a nondisruptive software model.
VM connection policies are defined in the network and applied to individual VMs from within
vCenter. These policies are linked to the Universally Unique ID (UUID) of the VM and are not
based on physical or virtual ports.


Cisco Nexus 1000V Virtual Switching Feature Overview
This topic provides an overview of the virtual switching feature of the Cisco Nexus 1000V solution.

Cisco Nexus 1000V Features

Security and Policy Enforcement: Enables VM-level security and policy; scales the use of vMotion and DRS.
Operation and Management: Simplifies management and troubleshooting with VM-level visibility; scales with automated server and network provisioning.
Organizational Structure: Enables flexible collaboration with individual team autonomy; simplifies and maintains the existing VM management model.

Cisco Nexus 1000V has the same Cisco IOS command-line interface (CLI) and remote
management capabilities as Cisco Catalyst and Cisco Nexus physical switches, but it provides a
feature set that has been optimized for VMware virtualized environments. The Cisco Nexus
1000V solution maintains compatibility with VMware advanced services such as vMotion,
Distributed Resource Scheduler (DRS), Fault Tolerance (FT), and High Availability (HA). The
solution extends VM visibility to the physical access switch, while maintaining the rich,
advanced feature set offered by physical switches.
From an operations perspective, the network administrator may manage the Cisco Nexus
1000V solution from a console connection or remotely by using Telnet or Secure Shell (SSH).
The solution preserves the network administrator function and enables administrators to
effectively create advanced security, QoS, and VLAN policies. Because the Cisco Nexus
1000V switch communicates directly with vCenter, configuration changes are reflected directly
within the vCenter inventory. Therefore, the server administrator can consume the policies by
applying them to either virtual uplinks or VM vNICs.
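For example, a network administrator might define a connectivity policy on the VSM with a minimal sketch like the following. The switch prompt, port-profile name, and VLAN number are illustrative placeholders rather than values from this course; once the profile is enabled, it appears in vCenter as a port group that the server administrator can assign to a VM vNIC.

n1000v# configure terminal
n1000v(config)# port-profile type vethernet WebVMs
n1000v(config-port-prof)# switchport mode access
n1000v(config-port-prof)# switchport access vlan 101
n1000v(config-port-prof)# vmware port-group
n1000v(config-port-prof)# no shutdown
n1000v(config-port-prof)# state enabled

The vmware port-group command publishes the profile to vCenter, and state enabled activates it; the network policy itself stays under the control of the network administrator on the VSM.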


VM View of Resources
With virtualization, VMs have a transparent view of their resources...

Server virtualization has created many networking challenges, especially at the VM level. VMs
have a transparent view of their CPU, memory, storage, and networking resources because the
hypervisor services all resource requests that the VMs make.
As a result, creating and assigning advanced networking policies (such as security, QoS, port
channels, and so on) to virtual networks at the VM level has been difficult.


VM View of Resources (Cont.)
...but correlating network and storage back to VMs is difficult.

VMware vSphere enables CPU and memory to be reserved for VMs that run critical
applications. However, this resource-reservation mechanism applies to neither storage nor
networking resources.
Because a single VMNIC uplink may be shared, it becomes difficult to identify multiple virtual
I/O streams from a single, physical server uplink.


VM Transparency
Scaling globally depends on maintaining transparency while providing operational consistency.

VMs are likely to move from one assigned ESX host to another. The ability to identify unique,
individual VMs and treat them differently from a security and network-performance
perspective would also be valuable.
To deliver operational consistency, three important parameters must be supported:

Providing VM visibility to the physical access switch

Having the ability to maintain the rich, advanced feature set that is offered by high-performance access switches

Enabling the application of the advanced feature policies at the VM level and assuring that
they move with the VM in a vMotion event


Scaling Server Virtualization: Networking Challenges

Security and Policy Enforcement: Applied at the physical server, not the individual VM; impossible to enforce policy for VMs in motion.
Operations and Management: Lack of VM visibility, accountability, and consistency; inefficient management model and inability to effectively troubleshoot.
Organizational Structure: Muddled ownership because the server administrator must configure the virtual network; organizational redundancy creates compliance challenges.

Modern virtualized environments introduce problems for the network. As in a physical
environment, the server connects to the network switch via one or more NICs. However, in a
virtual environment, each NIC carries traffic for multiple VMs. Therefore, the network switch
has no way of knowing from which VM the traffic is being sent.
In addition, from an operations and management perspective, server administrators prefer to
perform server tasks, whereas network administrators prefer to create, apply, and manage
policies that govern connectivity and network performance.
So, harmony between the skill sets of server and network administrators requires a clear
delineation of responsibility.
Server virtualized networking must be able to extend and mirror the robust, advanced feature
sets that are offered by high-performance access layer switches.


VN-Link Brings VM-Level Granularity

Challenges with vSwitches:
vMotion may move VMs across physical ports; policy must follow
Impossible to view or apply policy to locally switched traffic
Cannot correlate traffic on physical links from multiple VMs

VN-Link technology:
Extends the network to the VM
Consistent services
Coordinated, coherent management

VMware vMotion is a feature that is used within ESX environments. This feature permits you
to move VMs, either automatically or manually (via notifications), from one physical machine
to another as resources become overutilized, in response to a physical server failure, or during a
VMware fault-tolerance event. This process can involve several issues:

The policy that is associated with the VM must be able to follow the VM to the new
physical location.

Viewing or applying policies to locally switched traffic when using the vSwitch that is
internal to the ESX host can be difficult.

Differentiating traffic on physical lines from multiple VMs can be difficult.

Determining with which VLAN the VMs should be associated can be difficult.

Cisco VN-Link uses the vDS framework to deliver a portfolio of networking solutions that can
operate directly within the distributed hypervisor layer, offering a feature set and operational
model that is familiar and consistent with other Cisco networking products. Cisco VN-Link
specifically enables individual VMs to be identified, configured, monitored, migrated, and
diagnosed in a way that is consistent with current network operational models.
VN-Link indicates the creation of a logical link between a vNIC on a VM and a Cisco switch
that is enabled for VN-Link. This logical creation is the equivalent of using a cable to connect a
NIC to a port on an access layer switch.
A switch that is enabled for VN-Link uses the concept of virtual Ethernet (vEthernet)
interfaces, which are dynamically provisioned based on network policies that are stored on the
switch. These policies are the result of VM provisioning operations by the hypervisor
management layer (vCenter). The vEthernet interface maintains network configuration
attributes, security, and statistics for a given virtual interface across mobility events.


VN-Link with the Cisco Nexus 1000V

Software based:
First third-party vDS for VMware vSphere
Built on Cisco NX-OS
Compatible with all switching platforms
Maintains the vCenter provisioning model unmodified for server administration; allows network administration of the virtual network via the familiar Cisco NX-OS CLI

Benefits: policy-based VM connectivity, mobility of network and security properties, and a nondisruptive operational model.

With the introduction of the vDS framework, VMware permits third-party networking vendors
to provide their own implementation of distributed virtual switches. When deploying the Cisco
Nexus 1000V solution, the vSwitch and port group configuration is offloaded to the network
administrator. This process helps to ensure a consistent network policy throughout the data
center.
In the Cisco Nexus 1000V, traffic between VMs is switched locally at each instance of a VEM.
Each VEM is responsible for interconnecting the local VMs with the rest of the network,
through upstream access layer network switches. The Cisco Nexus 1000V Virtual Supervisor
Module (VSM) manages the control plane protocols and configuration of the VEMs. The VSM
never takes part in the actual forwarding of packets.


Summary
This topic summarizes the key points that were discussed in this lesson.

Cisco Nexus 1000V is a third-party DVS co-developed with VMware.
Cisco Nexus 1000V preserves existing operating environments while delivering extended VM policies.

Lesson 2

Working with VMware Ethernet Networking
Overview
Implementers must be able to compare the VMware virtual switching offerings (vSwitch and
Distributed Virtual Switch [DVS]) and articulate how deployment of Cisco Nexus 1000V DVS
offers greater security, scalability, manageability, and configuration consistency.
This lesson presents the integration of the Cisco Nexus 1000V solution within the VMware
distributed switching environment. The lesson also contrasts the unique features, benefits, and
capabilities of the Cisco Nexus 1000V solution with the Cisco Unified Computing System
virtualization adapter and discusses the differences in how these solutions implement Cisco
Virtual Network Link (VN-Link).

Objectives
Upon completing this lesson, you will be able to describe the unique features of the Cisco
Nexus 1000V solution and compare it to the Cisco Unified Computing System virtualization
adapter. This ability includes being able to meet these objectives:

Describe how the Cisco Nexus 1000V integrates into VMware vDS

Describe the Cisco DVS solution

Describe the unique features that the Cisco Nexus 1000V brings to VMware vDS

Differentiate between the capabilities of the Cisco Unified Computing System virtualization adapter and the Cisco Nexus 1000V

VMware vDS
This topic discusses the use of the distributed virtual switch (DVS) in VMware environments.

vDS Configuration

(Figure: ESX 1 through ESX 4 hosting VMs, with per-host vSwitches and a vDS that spans the hosts, managed through vCenter.)

The VMware vNetwork Distributed Switch (vDS) extends the features and capabilities of
virtual networks. At the same time, the vDS simplifies provisioning and the ongoing process of
configuration, monitoring, and management.
With VMware ESX 3.5 and prior releases, virtual networks were constructed by using VMware
vNetwork Standard Switches (vSwitches). Each ESX host would use one or more vSwitches to
connect the virtual machines (VMs) with the server network interface cards (NICs) and the
outside physical network.


Distributed Virtual Switching

Single point of control
Simplified network management
Removes the requirement for host-level network configuration
Statistics and policies follow the VM
Enables network resource pools (view the network as a clustered resource)
Available with the release of vSphere 4.

In addition to continuing support for the vSwitch, VMware vSphere introduces an additional
choice for VMware virtual networking: the vDS. The vDS eases the management burden of
per-host virtual-switch configuration management by treating the network as an aggregated
resource. Individual, host-level virtual switches are abstracted into one large vDS that spans
multiple hosts at the datacenter level. Port groups become distributed virtual port groups that
span multiple hosts and ensure configuration consistency for the VMs and virtual ports that are
necessary for such functions as VMware vMotion.


Virtual Network Configuration

With VI3, vSwitches are configured manually on each host; with the vDS, multiple vSwitches are configured from one place.
(Figure: vSphere screen captures comparing per-host vSwitch0 configuration panels with a single DVSwitch0 panel that spans multiple hosts.)

The figure shows the conceptual difference in management of a standard vSwitch environment
versus a vDS environment. The vSwitch requires a separate configuration from a separate
management panel. The vDS requires just one management panel for the single switch that
spans multiple hosts.


vDS Enhancements
The VMware vDS offers several enhancements to VMware switching:
Port state migration (statistics and port state follow the VM)
Rx rate limiting
PVLANs

Private VLAN (PVLAN) support enables broader compatibility with existing networking
environments that use PVLAN technology. PVLANs enable users to restrict communication
between VMs on the same VLAN or network segment. This ability significantly reduces the
number of subnets that are needed for certain network configurations.
PVLANs are configured on a DVS, with allocations made to the promiscuous PVLAN, the
community PVLAN, and the isolated PVLAN. Distributed virtual port groups can then use one
of these PVLANs, and VMs are assigned to a distributed virtual port group. Within the subnet,
VMs on the promiscuous PVLAN can communicate with all VMs. VMs on the community
PVLAN can communicate among themselves and with VMs on the promiscuous PVLAN. VMs
on the isolated PVLAN can communicate only with VMs on the promiscuous PVLAN.
Note

Adjacent physical switches must support PVLANs and must be configured to support the
PVLANs that are allocated on the DVS.

Network vMotion is the tracking of VM networking state (for example, counters or port
statistics) as the VM moves from host to host on a vDS. This tracking provides a consistent
view of a virtual network interface, regardless of the VM location or vMotion migration
history. This view greatly simplifies network monitoring and troubleshooting activities when
vMotion is used to migrate VMs between hosts.
DVS expands upon the egress-only traffic-shaping feature of standard switches by providing
bidirectional traffic-shaping capabilities. Egress (from VM to network) and ingress, or receive
(Rx) rate limiting, (from network into VM) traffic-shaping policies can now be applied on
distributed virtual port group definitions.
Traffic shaping is useful when you want to limit the traffic to or from a VM or group of VMs,
to protect a VM or other traffic in an oversubscribed network. Policies are defined by three
characteristics: average bandwidth, peak bandwidth, and burst size.


vSwitch and vDS

VMware vDS and vSwitch are not mutually exclusive.
Physical NIC ports are assigned to either a vSwitch or vDS.
Separate ports can be assigned to a vSwitch and vDS on the same VMware ESX host.

The VMware vSwitch and vDS are not mutually exclusive and can coexist within the same
VMware vCenter management environment. Physical VM NICs (VMNICs) may be assigned to
either the vSwitch or the vDS on the same VMware ESX or ESXi host.
You can also migrate the ESX service console and VMware VMkernel ports from the vSwitch,
where they are assigned by default during ESX installation, to the vDS. This migration
facilitates a single point of management for all virtual networking within the vCenter datacenter object.


Cisco Nexus 1000V DVS


This topic discusses Cisco Nexus 1000V DVS.

Cisco Nexus 1000V Components

VSM:
CLI interface into the Cisco Nexus 1000V
Control plane for the Cisco Nexus 1000V
Leverages Cisco NX-OS
Controls multiple VEMs as a single network device

VEM:
Replaces the VMware virtual switch
Data plane for the Cisco Nexus 1000V
Enables advanced switching capability on the hypervisor
Provides each VM with dedicated switch ports

The Cisco Nexus 1000V provides Layer 2 switching functions in a virtualized server
environment. Cisco Nexus 1000V DVS replaces virtual switches within the ESX servers. This
replacement allows users to configure and monitor the virtual switch by using the Cisco Nexus
Operating System (NX-OS) command-line interface (CLI). The Cisco Nexus 1000V also
provides visibility into the networking components of the ESX servers and access to the virtual
switches within the network.
The vCenter server defines the data center that the Cisco Nexus 1000V will manage. Each
server is represented as a line card and is managed as if it were a line card in a physical Cisco
switch.
Two components are part of the Cisco Nexus 1000V implementation:

Virtual Supervisor Module (VSM): The Cisco Nexus 1000V VSM is the control software
of the Cisco Nexus 1000V DVS. The VSM runs either on a VM or as an appliance and is
based on Cisco NX-OS.

Virtual Ethernet Module (VEM): The Cisco Nexus 1000V VEM actually switches the data
traffic and runs on a VMware ESX 4.0 host. VSM can control several VEMs, with the
VEMs forming a switch domain that should be in the same virtual data center that is
defined by VMware vCenter.


Cisco Nexus 1000V: Single Chassis Management
All components in a single vSphere cluster operate logically as a single virtualized access switch.

pod5-vsm# show module
Mod  Ports  Module-Type                Model       Status
---  -----  -------------------------  ----------  -----------
1    0      Virtual Supervisor Module  Nexus1000V  active *
2    0      Virtual Supervisor Module  Nexus1000V  ha-standby
3    248    Virtual Ethernet Module    NA          ok
4    248    Virtual Ethernet Module    NA          ok

Cisco Nexus 1000V is effectively a virtual chassis: It is modular, and ports can be physical or
virtual. The servers are modules on the switch, with each physical NIC port on a module being
a physical Ethernet port. Modules 1 and 2 are reserved for the VSM, and the first server or host
is assigned automatically to the next available module number. The ports to which the vNIC
interfaces connect are virtual ports on the Cisco Nexus 1000V and are assigned a global
number.


Cisco Nexus 1000V: Single Chassis Management (Cont.)

Upstream-Switch# show cdp neighbor
Capability Codes: R - Router, T - Trans Bridge, B - Source Route Bridge
                  S - Switch, H - Host, I - IGMP, r - Repeater, P - Phone
Device ID      Local Intrfce   Holdtme   Capability   Platform      Port ID
N1KV-Rack10    Eth 1/8         136       S            Nexus 1000V   Eth2/2
N1KV-Rack10    Eth 2/10        136       S            Nexus 1000V   Eth3/2

VSMs and VEMs appear as one switch from a management and control plane perspective.
Protocols such as Cisco Discovery Protocol and SNMP operate as a single switch. These control protocols are run on the VSM and carried over the packet VLAN.

To the upstream switches, the Cisco Nexus 1000V appears as a single switch from the control
and management plane. Protocols such as Cisco Discovery Protocol and Simple Network
Management Protocol (SNMP) operate as a single switch.
The show cdp neighbor command that is shown in the figure indicates that two VEMs are
associated with the virtual switch. Therefore, two ports connect the Cisco Nexus 1000V to the
upstream switch.


Cisco Nexus 1000V: VSM Deployment Options

Physical appliance in the network admin domain (Cisco Nexus 1010):
Houses as many as four VSMs
Deployed in pairs for redundancy
Built on Cisco UCS
Is the future platform for all virtual services (vNAM, VSG, vWAAS, etc.)

VM in the server admin domain (VSM virtual appliance):
ESX virtual appliance
Supports 64 VEMs (64 hosts)
Installable via ISO or OVA file

There are two VSM options.


VSM virtual appliance: The VSM runs on an ESX host as an ESX virtual appliance, with
support for 64 VEMs. Installation of the VSM virtual appliance is provided through an ISO
or Open Virtual Appliance (OVA) file.

VSM physical appliance: A Cisco Nexus 1010 Physical Server can host four VSM virtual
appliances. The VSM physical appliance typically is deployed in pairs, for redundancy
purposes.


Cisco Nexus 1000V: VSM High-Availability Options

VSMs should always be deployed redundantly for high availability.
Virtual VSM high availability: VSM VMs should not reside on the same physical host. This can be accomplished by using the VMware anti-affinity rule.
Cisco Nexus 1010 high availability: Two Cisco Nexus 1010 systems must be deployed for high availability. The high-availability pair is formed between the two Cisco Nexus 1010 systems, based on control VLAN and domain ID information. Cisco Nexus 1010 software takes control of load balancing active/standby VSMs between the high-availability pair.

The Cisco Nexus 1000V provides high availability of control plane functionality by using
active (primary) and standby (secondary) VSMs.
Both the primary and secondary VSMs install as VMs on ESX or ESXi hosts. To provide
maximum redundancy and high availability, each VSM should be installed on separate hosts.
The Cisco Nexus 1010 architecture requires the deployment of two devices in a high-availability configuration.
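On a redundant VSM pair, the standard NX-OS redundancy command can be used to confirm the high-availability state. The output below is abridged and illustrative only; the exact fields vary by release.

n1000v# show system redundancy status
Redundancy role
---------------
      administrative:   primary
         operational:   primary
Redundancy mode
---------------
      administrative:   HA
         operational:   HA
This supervisor (sup-1)
-----------------------
    Redundancy state:   Active
Other supervisor (sup-2)
------------------------
    Redundancy state:   Standby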


Cisco Nexus 1000V Communication: Extending the Backplane

Because the VSM and VEM are not physically connected, the VSM (supervisor) must program the VEM (line cards) over a network.
The VSM-to-VEM communication uses the same backplane protocol (AIPC) found in the Cisco Nexus 7000 and MDS platforms.
There are two ways to extend communication between the VSM and VEM:
Over a Layer 2 cloud, using control and packet VLANs
Over a Layer 3 cloud, using Layer 3 control capability

Maintaining communication between the VSM (control plane) and the individual VEMs (data
forwarding) is of paramount importance for successful operation. The control protocol that is
used by the VSM to communicate with the VEM is borrowed from the Cisco Nexus data center
switch and Cisco Multilayer Director Switch (MDS) 9000 Series Fibre Channel switches.
The VSM-to-VEM communication may be implemented by using a Layer 2 model that uses
control and packet VLANs or by using a Layer 3 cloud control capability.


VSM and VEM Communication: Layer 2 Connectivity

Two distinct virtual interfaces are used to communicate between the VSM and VEM.

Control (uses the control VLAN):
Extends AIPC between the supervisor (SUP) and line card
Carries low-level messages to ensure proper configuration of the VEM
Maintains a 2-second heartbeat from the VSM to the VEM (timeout 6 seconds)
Maintains synchronization between primary and secondary VSMs

Packet (uses the packet VLAN):
Carries any network packets, such as Cisco Discovery Protocol or IGMP control, from the VEM to the VSM

Control and packet VLANs can share a VLAN or can be separate VLANs, based on customer requirements. Separate VLANs offer traffic segregation; sharing offers simplicity.

Communication between the VSM and VEM is provided through two distinct virtual interfaces:
the control and packet interfaces.
The control interface carries low-level messages to each VEM, to ensure proper configuration
of the VEM. A 2-second heartbeat is sent between the VSM and the VEM, with a 6-second
timeout. The control interface maintains synchronization between primary and secondary
VSMs. The control interface is like the Ethernet out-of-band channel (EOBC) in switches such
as Cisco Nexus 7000 Series Switches.
The packet interface carries network packets, such as Cisco Discovery Protocol or Internet
Group Management Protocol (IGMP) control messages from the VEM to the VSM.
Customers may choose to implement separate VLANs for the control, management, and packet
VLANs, or they can share the same VLAN.
Being VLAN interfaces, the control and packet interfaces require Layer 2 connectivity.
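A minimal Layer 2 control configuration on the VSM looks like the following sketch; the domain ID and VLAN numbers are placeholders chosen for illustration.

n1000v# configure terminal
n1000v(config)# svs-domain
n1000v(config-svs-domain)# domain id 10
n1000v(config-svs-domain)# control vlan 260
n1000v(config-svs-domain)# packet vlan 261
n1000v(config-svs-domain)# svs mode L2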


VSM and VEM Communication: Layer 3 Connectivity

Introduced in Cisco Nexus 1000V Release 4.0(4)SV1(2).
User specifies an IP address that belongs to a separate network for VSM-to-VEM communication:
Requires enough IP addresses to span all participating ESX hosts.
Requires IP connectivity between the ESX hosts and the VSM.
Layer 3 AIPC functionality is accomplished by encapsulating the control and packet interface data in an EoIP tunnel.
Layer 3 AIPC also reduces the number of broadcasts that are used.

With the release of the Cisco Nexus 1000V Release 4.0(4)SV1(2) image, you can now manage
VSM-to-VEM communication by using a Layer 3 network. The administrator specifies a
separate IP subnet and sufficient IP addresses to span all participating ESX or ESXi hosts,
which requires IP connectivity between the VSM and the ESX or ESXi hosts.
The Cisco Nexus 1000V uses Advanced Interprocess Communications (AIPC), a backplane
protocol that the Cisco Nexus 7000 Series Switches also uses. AIPC encapsulates the control
and packet interface data in an Ethernet over IP (EoIP) tunnel, which reduces the number of
broadcasts that are used.
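A sketch of the corresponding Layer 3 control configuration is shown below. The interface, port-profile name, and VLAN number are assumptions made for illustration; the essential pieces are the Layer 3 SVS mode on the VSM and the l3control capability on the vEthernet port profile that carries the VEM management or VMkernel interface.

n1000v(config)# svs-domain
n1000v(config-svs-domain)# svs mode L3 interface mgmt0
n1000v(config-svs-domain)# exit
n1000v(config)# port-profile type vethernet L3-Control
n1000v(config-port-prof)# capability l3control
n1000v(config-port-prof)# switchport mode access
n1000v(config-port-prof)# switchport access vlan 160
n1000v(config-port-prof)# vmware port-group
n1000v(config-port-prof)# system vlan 160
n1000v(config-port-prof)# no shutdown
n1000v(config-port-prof)# state enabled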


VSM and VEM Communication: Important Considerations for Layer 3 Control

Round-trip time between the VSM and the VEM must not exceed 100 ms.
The network administrator must be careful to ensure that duplicate IP addressing of the Layer 3 control interface does not occur.
Layer 2 adjacency (control and packet VLANs) is still required between both VSMs deployed in a high-availability pair, where both the active and standby VSM share the same control and packet VLANs.
The VLAN used to carry the Layer 3 traffic must be declared as a system VLAN in both the vEthernet and uplink port profiles.

Effective Layer 3 control imposes several requirements. The round-trip time between the VSM and VEM must not exceed 100 ms. The network administrator must also ensure that duplicate IP addresses do not appear.
Layer 2 adjacency must be maintained between active and standby VSMs that share the same
control and packet VLANs. The VLAN that is configured to carry the Layer 3 traffic must be
specified as the system VLAN in both the virtual Ethernet (vEthernet) and uplink port profiles.
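The system VLAN declaration is made in the port profiles themselves. The sketch below is illustrative only; the profile name and VLAN 160 are assumed values for the VLAN that carries the Layer 3 control traffic, and the same VLAN must also be listed as a system VLAN in the uplink (type ethernet) port profile:

VSM-1(config)# port-profile type vethernet L3-Control
VSM-1(config-port-prof)# capability l3control
VSM-1(config-port-prof)# vmware port-group
VSM-1(config-port-prof)# switchport mode access
VSM-1(config-port-prof)# switchport access vlan 160
VSM-1(config-port-prof)# system vlan 160
VSM-1(config-port-prof)# no shutdown
VSM-1(config-port-prof)# state enabled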

Cisco Nexus 1000V Component-to-vCenter Communication
Communication uses the VMware VIM API over SSL.
The connection is configured manually on the VSM or through the installer application.
Communication requires installation of the vCenter plug-in (downloaded from the VSM).
When communication is established, the Cisco Nexus 1000V is created in vCenter.

Communication between the VSM and vCenter is provided through the VMware VIM
application programming interface (API) over Secure Sockets Layer (SSL). The connection is
set up on the VSM and requires installation of a vCenter plug-in, which is downloaded from the
VSM.
After communication between the two devices is established, the Cisco Nexus 1000V is created
in vCenter.
This interface is known as the out-of-band (OOB) management interface and should be in the
same VLAN as your vCenter and host-management VLAN, although that is not a requirement.

Cisco Nexus 1000V: Domain ID
Each instance of the Cisco Nexus 1000V DVS must contain a unique domain ID.
The domain ID is used as a segregation mechanism between multiple Cisco Nexus 1000V instances within a data center.
Each packet between the VSM and VEM is tagged with the appropriate domain ID, to ensure that VEMs respond only to commands from their managing VSMs.
Domain IDs range from 1 to 4095.
Note: The same VLANs can be used between Cisco Nexus 1000V instances with different domain IDs.

The domain ID configuration setting within the Cisco Nexus 1000V CLI provides a means to
uniquely identify and separate multiple instances of the Cisco Nexus 1000V DVS within a
vCenter management environment.
The domain ID may be any number between 1 and 4095.
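The domain ID is assigned on the VSM in the svs-domain configuration context. A minimal sketch, using domain ID 15 from the figure as an illustrative value:

VSM-1# configure terminal
VSM-1(config)# svs-domain
VSM-1(config-svs-domain)# domain id 15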

Cisco Nexus 1000V: Opaque Data
Each Cisco Nexus 1000V requires a global setting, called opaque data, on the VSMs and VEMs.
Opaque data contains such data as the control and packet VLANs, the domain ID, and the system port profiles.
The VSM pushes the opaque data to the vCenter server.
The vCenter server pushes the opaque data to each VEM as the VEMs are added.

The domain ID is a parameter of the Cisco Nexus 1000V and is used to identify a VSM and
VEM that relate to one another. The domain ID of the Cisco Nexus 1000V is defined when the
VSM is first installed and becomes part of the opaque data that is transmitted to vCenter. Each
command that the VSM sends to any associated VEM is tagged with this domain ID. When a
VSM and VEM share a domain ID, the VEM accepts and responds to requests and commands
from the VSM. If the VEM receives a command or configuration request that is not tagged with
the proper domain ID, then the VEM ignores that request. Similarly, if the VSM receives a
packet that is tagged with the wrong domain ID from a VEM, the VSM ignores that packet.


Cisco Nexus 1000V Administrator Roles


This topic discusses administrator roles of the Cisco Nexus 1000V.
The Cisco Nexus 1000V also provides a single, redundant point for managing multiple ESX or
ESXi hosts. This management includes the ability to automate policy migration and mobility in
virtualized environments by using a feature called port profiles.
Network policies that are enforced by a port profile follow the VM throughout its lifecycle,
whether the VM is being migrated from one server to another or is being suspended or
restarted. In addition to migrating the policy, the Cisco Nexus 1000V moves the network state of the VM, such as the port counters and flow statistics. VMs that participate in traffic-monitoring activities, such as Cisco NetFlow or Encapsulated Remote Switched Port Analyzer (ERSPAN), can continue these activities, uninterrupted by vMotion operations.

Standard VMware Administrator Roles
The slide table compares how common tasks are handled with the standard VMware vSwitch: vSwitch configuration, port group configuration, adding an ESX host, NIC teaming configuration, VM creation, security, visibility, and management are all performed per ESX host or through vCenter by the VMware administrator, with no corresponding role for the network administrator.

The figure shows the challenges that server administrators face when using the vSwitch or vDS
to manage network connectivity and policies. The network administrator has little or no
participation, despite the fact that these issues are part of their area of expertise and skill set.
Shifting the burden of network connectivity, policy creation, and configuration from the server
to the network administrator vastly improves operations, management, and continuity. Server
administrators are tasked only with consuming or assigning the port profile (port groups within
the vCenter), which is well within their comfort zone.

Cisco Nexus 1000V Administrator Roles
The slide table shows how the same tasks change with the Cisco Nexus 1000V. For the VMware administrator, vSwitch configuration, port group configuration, and NIC teaming configuration become automated, while adding ESX hosts, creating VMs, and management remain unchanged and vCenter based; security becomes policy based and visibility is VM specific. For the network administrator, the virtual network is handled the same as the physical network: policy-based port groups, port channel-optimized NIC teaming, security through ACLs, PVLANs, port security, and TrustSec, VM-specific visibility, and management through the Cisco CLI, XML API, SNMP, and DCNM.

When a new VM is provisioned, the server administrator selects the appropriate port profile.
The Cisco Nexus 1000V creates a new switch port that is based on the policies that are defined
by the port profile. The server administrator can reuse the port profile to provision similar VMs,
as needed. Port profiles are also used to configure the physical NICs in a server. These port
profiles, which are known as uplink port profiles, are assigned to the physical NICs as part of
the installation of the VEM on an ESX host.
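On the VSM, an uplink port profile is defined as a port profile of type ethernet. The following is a minimal sketch only; the profile name is an assumed value, and the VLAN list (VLANs 20 and 260, taken from the configuration example later in this module) must match the VLANs trunked by the upstream switches:

VSM-1(config)# port-profile type ethernet SYSTEM-UPLINK
VSM-1(config-port-prof)# switchport mode trunk
VSM-1(config-port-prof)# switchport trunk allowed vlan 20,260
VSM-1(config-port-prof)# system vlan 260
VSM-1(config-port-prof)# vmware port-group
VSM-1(config-port-prof)# no shutdown
VSM-1(config-port-prof)# state enabled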


Comparing VN-Link in Software and Hardware


This topic compares the Cisco Nexus 1000V and the Cisco M81KR/P81E Virtual Interface
Card (VIC).

VN-Link Deployment: Cisco Nexus 1000V and Generic Ethernet Adapter
Supports more than 200 VMs.
The Cisco Nexus 1000V performs packet forwarding and applies advanced networking features.
It uses port security, VLANs, ACLs, and QoS policy maps for VM traffic, the console, and vMotion or VMkernel traffic.
It runs with a generic adapter on a generic x86 server and a generic upstream 802.1Q switch.

The Cisco Nexus 1000V was developed with VMware to deliver transparency to various server hardware platforms. The Cisco Nexus 1000V may be used with generic NICs on generic x86-based servers. In addition, the upstream physical access layer switch may also be generic. This generic support enables the Cisco Nexus 1000V to be installed and configured within existing architectures, minimizing disruption and maximizing functionality.

VN-Link Packet Flow: Cisco Nexus 1000V and a Generic Ethernet Adapter

The Cisco Nexus 1000V is similar to physical Ethernet switches. For packet forwarding, the
Cisco Nexus 1000V uses the same techniques that other Ethernet switches apply, keeping a
MAC address-to-port mapping table that is used to determine where packets should be
forwarded. The Cisco Nexus 1000V maintains forwarding tables in a slightly different manner
than other modular switches. Unlike physical switches with a centralized forwarding engine,
each VEM maintains a separate forwarding table. No synchronization exists between
forwarding tables on different VEMs. In addition, there is no concept of forwarding from a port
on one VEM to a port on another VEM. Packets that are destined for a device that is not local
to a VEM are forwarded to the external network, which in turn may forward the packets to a
different VEM.

VN-Link Products: Cisco UCS 6100 and VIC
The figure shows the Cisco VIC, installed in a Cisco Unified Computing System server over a PCIe x16 connection, presenting user-definable Ethernet and Fibre Channel vNICs (0 through 127) and connecting to the Cisco UCS 6100 through 10 Gigabit Ethernet/FCoE links.

The Cisco M81KR/P81E is a VIC that operates within the Cisco Unified Computing System.
The VIC adapter is a hardware-based VN-Link solution, which uses a virtual network tag
(VNTag) to deliver VM visibility at the physical access switch. The VIC adapter assigns a
VNTag to each vNIC that is created as part of the service profile within Cisco UCS Manager.
The assigned VNTag is locally significant and has visibility between the VIC and Cisco UCS
6100 Fabric Interconnects.
The VNTag provides traffic separation between VMs that share the same VIC adapter and is a
reliable method for virtualizing each VM vNIC onto the physical access layer switchport.

VN-Link Deployment: VIC and Cisco UCS 6100 Series with VMware VMDirectPath
Supports more than 50 VMs.
Complete bypass: the VM talks directly to the Cisco virtualized adapter.
Falls back to the previous deployment model when vMotion is needed.

The VIC adapter offers the Pass-Through Switch (PTS) feature. When configured, this feature
enables I/O processing to be offloaded from and bypass the local hypervisor to the hardware
VIC. Therefore, the VM vNIC communicates directly with the VIC adapter. The M81KR VIC
mezzanine adapter option works within the Cisco UCS B-Series platform, and the P81E
Peripheral Component Interconnect (PCI) adapter works within the Cisco UCS C-Series
platform.

VN-Link Packet Flow: VIC and Cisco UCS 6100 Series

Packet flow within a Cisco Unified Computing System cluster that includes the VIC begins with the operating system, constructing frames in a traditional fashion. The VIC adds the VNTag, which consists of a new, 6-byte field that is inserted directly behind the source MAC address within the Ethernet frame format. The VNTag also has a new EtherType that is assigned for this service.
The virtual interface switch provides both ingress and egress processing, and the VIC
forwarding is based upon the VNTag value. VNTag assignment and removal remain local to
the Cisco Unified Computing System cluster. Finally, the operating system stack receives the
frame transparently, as though the frame is directly connected to the physical access layer
switch.

Summary of All VN-Link Offerings
The figure summarizes the VN-Link offerings: a generic adapter with the Cisco Nexus 1000V, the VIC with the Cisco Nexus 1000V (the suggested deployment), the VIC with the Cisco UCS 6100, and the VIC with the Cisco UCS 6100 using VMDirectPath.

The main solution that VN-Link offers is the concept of a VM adapter with a virtual patch cord
that is connected to a physical access layer switch. This solution essentially translates to
providing VM vNIC adapter visibility and virtualizing this interface onto the physical switch
port.
This solution may be implemented by using the Cisco Nexus 1000V with a generic Ethernet
adapter, in which the VEM assigns the vEthernet interface, which survives vMotion events.
The Cisco Nexus 1000V may also be combined with the VIC adapter within a Cisco Unified
Computing System cluster. In this configuration example, the VEM would assign the vEthernet
interface to deliver the VN-Link implementation.
Within a Cisco Unified Computing System cluster, the VIC adapter offers the hardware-based
implementation of VN-Link, by using VNTag assignment on the VIC. This option offers two
unique choices that are based on performance: the PTS feature or the use of VMDirectPath.

Why All the Different Models?

The deployment models form a continuum: a generic adapter plus the Cisco Nexus 1000V and the VIC plus the Cisco Nexus 1000V offer greater flexibility and scalability, a rich feature set, and faster time to market, while the VIC plus the Cisco UCS 6100, with or without VMDirectPath, offers higher performance and better I/O management.

Cisco has created a flexible set of networking options to support various existing and new
virtual environments. For large, diverse environments that require advanced networking feature
sets and flexible deployments, the use of the Cisco Nexus 1000V and Cisco Nexus 1010,
combined with either generic or VIC adapters, is a solid choice.
For environments that may require higher performance and more predictable I/O management,
the use of a Cisco UCS B- or C-Series server and the VIC adapter, with or without the
configured PTS feature, is another deployment option.
The Cisco VN-Link solution caters to a wide continuum of networking requirements within
virtualized server environments.


Summary
This topic summarizes the key points that were discussed in this lesson.

VMware vDS allows a centralized configuration point for vSwitches within a VMware ESX cluster.
The Cisco Nexus 1000V has separate control plane (VSM) and data plane (VEM) functionality that ensures policy mobility and preserves network and server administration functionality.
The Cisco Nexus 1000V represents a VN-Link software-based solution.
The Cisco Virtual Interface Card (VIC) represents a VN-Link hardware-based solution.

Lesson 3

Characterizing Cisco Nexus 1000V Architecture

Overview
Implementers must be able to identify component features of the Cisco Nexus 1000V architecture and articulate the architectural solution at a high level. This lesson presents the architecture of the Cisco Nexus 1000V, as well as its capabilities for delivering a software-based Cisco Virtual Network Link (VN-Link) solution.

Objectives
Upon completing this lesson, you will be able to characterize the architecture of the Cisco
Nexus 1000V and its capabilities in delivering VN-Link. This ability includes being able to
meet these objectives:

Describe the Cisco Nexus 1000V solution

Describe the Cisco Nexus 1000V architecture

Cisco Nexus 1000V Overview


This topic provides an overview of Cisco Nexus 1000V.

Cisco Nexus 1000V Series DVS
The Cisco Nexus 1000V provides the following per VM:
VLAN and PVLAN settings
ACL, port security, and ACL redirect
NetFlow collection
QoS marking (CoS/DSCP)
SPAN
Mobility of network policy and security to the virtual network
Policies are maintained as VMs migrate using vMotion.

The Cisco Nexus 1000V Series Switch is a software solution that is used in place of the
VMware Standard Switch (vSwitch) to provide advanced functionality. The Cisco Nexus
1000V allows all physical network best practices to be applied at the virtual machine (VM)
level. Achieving this level of service with the VMware vSwitch is not possible.

Cisco Nexus 1000V
Virtualizing the Network Domain
The Cisco Nexus 1000V delivers policy-based VM connectivity, mobility of network and security properties, and a nondisruptive operational model. VM connection policies are defined in the network, applied in vCenter, and linked to the VM UUID.

The distributed architecture of the Cisco Nexus 1000V deployment allows a single Cisco Nexus
Operating System (Cisco NX-OS) software-based supervisor module to manage the switching
capabilities of as many as 64 VMware ESX servers.
Each ESX server Virtual Ethernet Module (VEM) acts as a remote line card of the Virtual Supervisor Module (VSM) and is configured as such. The VSM can run as a VM on a VMware ESX or ESXi host, or on a physical appliance such as the Cisco Nexus 1010 Virtual Services Appliance.

Cisco Nexus 1000V (Cont.)
VN-Link: Virtualizing the Network Domain
Server benefits:
Maintains existing VM management
Reduces deployment time
Improves scalability
Reduces operational workload
Enables VM-level visibility
Network benefits:
Unifies network management and operations
Improves operational security
Enhances VM network features
Ensures policy persistence
Enables VM-level visibility

A new type of port has been created to identify individual VMs: the virtual Ethernet (vEthernet) port. A vEthernet port represents a virtual network interface card (vNIC), regardless of the physical server on which the VM runs. vEthernet ports are not tied to a specific server or physical VM NIC (VMNIC); instead, each represents the vNIC of a virtual server and stays with that VM even when the VM migrates. For example, if a VM obtains vEthernet 1 on host 1 and is moved to host 2, the VM retains vEthernet 1 as its port. Keeping the same vEthernet port allows for network configuration and policy mobility.

Cisco Nexus 1000V (Cont.)
Virtualizing the Network Domain
Property mobility: vMotion for the network, ensuring VM security and maintaining connection state as VMs move between ESX hosts.

This figure demonstrates the policy mobility that the Cisco Nexus 1000V Distributed Virtual
Switch (DVS) offers. A VMware vMotion event (either manual or automatic) has occurred,
causing the VMs on the first ESX server to be moved to the second ESX server, which is now
running all eight VMs.
The policies have been assigned to the VM vEthernet interfaces. Because these interfaces remain with the VM, the policies that are assigned to the vEthernet interfaces also remain with the VM.

Managing Network Policies with the Cisco Nexus 1000V
The workflow has three steps: the network administrator produces port profiles on the VSM (1), the VSM pushes them to vCenter (2), and the server administrator consumes them when connecting VMs on the ESX hosts (3).

The Cisco Nexus 1000V provides an ideal model in which network administrators can define a
network policy that virtualization or server administrators can use as new VMs are created.
Policies that are defined on the Cisco Nexus 1000V are exported to vCenter. The server
administrator then assigns specific network policies as new VMs require access. This concept is
implemented on the Cisco Nexus 1000V by using a feature called port profiles. With the port
profile feature, the Cisco Nexus 1000V eliminates the requirement for the server administrator
to create or maintain vSwitch and port group configurations on ESX hosts.
Port profiles separate network and server administration. For network administrators, the Cisco
Nexus 1000V feature set and the capability to define a port profile by using the same syntax
that is used for existing physical Cisco switches helps ensure consistent policy enforcement,
without the burden of managing individual switch ports. The Cisco Nexus 1000V solution also
provides a consistent network management, diagnostic, and troubleshooting interface to the
network operations team, allowing the virtual network infrastructure to be managed like the
physical infrastructure.

Cisco Nexus 1000V Architectural Overview

This topic provides an overview of Cisco Nexus 1000V architecture.

Cisco Nexus 1000V Architecture

The figure depicts the basic architecture of the Cisco Nexus 1000V. The Cisco Nexus 1000V
uses a control path application programming interface (API) to communicate with the data
plane. This API simulates out-of-band (OOB) management and does not interfere with the data
plane. In the figure, the Cisco NX-OS VEM replaces the vSwitch for assigned VMNICs and
vNICs. The VEM is controlled by the VSM on an OOB channel (control VLAN). VMware
vCenter provides the API that the Cisco NX-OS VSM uses to control the VEM. The VSM
resides logically outside of the host, although the VSM can be a VM that resides on the host.
In Cisco Nexus 1000V deployments, VMware provides the vNIC and drivers, whereas the
Cisco Nexus 1000V provides switching and management of switching.
Note: A NIC in VMware is represented by the VMNIC interface. The VMNIC number is allocated during VMware installation.

Cisco Nexus 1000V VLANs
System: Bootstraps communication between the VEM and the VSM
Control: VEM-to-VSM communications
Management: Console access to ESX, vCenter, or the VSM
Packet: CDP, IGMP, or LACP data
vMotion: For vmknic interfaces
VM traffic: For guest application traffic

Classification of traffic types in any network is difficult to achieve. In a VMware environment, traffic varies based on the types of applications that are virtualized. However, some traffic types can be identified and general prioritization can be applied. The table shows the general classifications of traffic for a typical VMware deployment.

Cisco Nexus 1000V Management VLAN
The management VLAN is used to provide console connectivity to the VSM.
The VSM also uses the management VLAN to communicate with VMware vCenter.

The management VLAN is one of three mandatory VLANs for Cisco Nexus 1000V communications services. The management VLAN has two primary purposes:
Providing access to the VSM command-line interface (CLI) console via Secure Shell (SSH) or Telnet
Communicating with vCenter

Cisco Nexus 1000V Control and Packet VLANs
Two distinct virtual interfaces are used to communicate between the VSM and the VEM.
Control:
Extended AIPC, such as that within a physical chassis (6k, 7k, MDS).
Carries low-level messages to ensure proper configuration of the VEM.
Maintains a 2-second heartbeat from the VSM to the VEM (6-second timeout).
Maintains synchronization between primary and secondary VSMs.
Packet:
Carries network packets, such as CDP or IGMP control traffic, from the VEM to the VSM.

In a traditional modular switch, the supervisor has physical connections to its line cards. In the
Cisco Nexus 1000V, the VSM uses IP packets to communicate with the VEMs as remote line
cards. Two separate channels are required:

Control VLAN

Advanced Interprocess Communications (AIPC)

Communication sync between primary and secondary VSMs

Low-level configuration messaging from VSM to VEM

Heartbeat from VSM to VEM

Packet VLAN

Allowance for Cisco Discovery Protocol packets to be exchanged from the VSM
and VEM

Allowance for port channel setup on VEM ports, via Link Aggregation Control
Protocol (LACP)

Allowance for Internet Group Management Protocol (IGMP) join and leave
messaging for multicast communications

Cisco Nexus 1000V Configuration Example

In this configuration example, three ESX servers are managed by the vCenter server that is shown in the lower left corner of the figure. A standalone VSM has been installed and configured on host ESX 1. Hosts ESX 2 and ESX 3 have VEMs that are installed to provide the data traffic forwarding for the host.
Four VLANs have been configured to support VSM-to-VEM, VSM-to-vCenter, and VSM-to-management network connectivity. VLAN 260 is used to support control, packet, and management functions, and VLAN 20 is used for data traffic forwarding.

VEM-to-VSM Communication

In the figure, heartbeats travel over the control interface and CDP and IGMP traffic over the packet interface between the VSM and the VEMs, while the VSM communicates with vCenter by using the domain ID over SSL, SOAP, and WSDL.

Like the VSM, each VEM has control and packet interfaces. The end user cannot manage or
directly configure these interfaces.
1. The VEM uses the opaque data that vCenter provides to configure the control and packet
interfaces with the correct VLANs. The VEM then applies the correct uplink port profile to
the control and packet interfaces, to establish communication with the VSM. After the
VSM recognizes the VEM, a new module is virtually inserted into the Cisco Nexus 1000V
virtual chassis. The VSM CLI notifies the network administrator that a new module has
powered on, much as happens with a physical chassis. The module assignment is
sequential, meaning that the VEM is assigned the lowest available module number between
3 and 66. When a VEM comes online for the first time, the VSM assigns the module
number and tracks that module by using the Universally Unique Identifier (UUID) of the
ESX server. This process helps ensure that if the ESX host loses connectivity or is powered
down for any reason, the VEM will retain its module number when the host comes back
online.
2. Cisco Nexus 1000V implements a solution called domain IDs. A domain ID is a parameter
of the Cisco Nexus 1000V and is used to identify a VSM and VEM that relate to each
other. The domain ID of the Cisco Nexus 1000V is defined when the VSM is first installed
and becomes part of the opaque data that is transmitted to vCenter. Each command that the VSM sends to any associated VEMs is tagged with this domain ID. When a VSM and VEM share a domain ID, the VEM accepts and responds to requests and commands from the VSM. If the VEM receives a command or configuration request that is not tagged with the proper domain ID, then the VEM ignores that request. Similarly, if the VSM receives a packet that is tagged with the wrong domain ID from a VEM, the VSM ignores that packet.


3. The VSM maintains a heartbeat with its associated VEMs. This heartbeat is transmitted at
2-second intervals. If the VSM does not receive a response within 8 seconds, the VSM
considers the VEM to be removed from the virtual chassis. If the VEM is not responding
because of a connectivity problem, then the VEM continues to switch packets in its last
known good state. When communication is restored between a running VEM and the VSM,
the VEM is reprogrammed, causing a slight (1 to 15 second) pause in network traffic. All
communication between the VSM and the VEM is encrypted with a 128-bit algorithm.
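After the VEMs come online, the module inventory and the domain parameters can be checked from the VSM CLI. This is a sketch of the verification commands only; the exact output depends on the deployment:

VSM-1# show module
! lists the VSM (modules 1 and 2) and each VEM (modules 3 to 66) with its host
VSM-1# show svs domain
! displays the domain ID, the control and packet VLANs, and the SVS mode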

Cisco Nexus 1000V Opaque Data
Each Cisco Nexus 1000V requires a global setting, called opaque data, on the VSMs and VEMs.
Opaque data contains such data as the control and packet VLANs, the domain ID, and the system port profiles.
The VSM pushes the opaque data to the vCenter server.
The vCenter server pushes the opaque data to each VEM as the VEMs are added.

The term opaque data is applied to information that is encrypted with a Master Key that the
VSM and vCenter hold. The vCenter server imports the VSM digital certificate, to establish a
confidential channel to the VSM. As each VEM is enabled, vCenter exchanges a protected
element that includes configuration information that is authenticated by the encrypted domain
ID. This process ensures that only trusted vCenter servers push port profiles to the VEMs and
that vCenter allows communications only to trusted VSMs.

Policy-Based VM Connectivity
Port profiles are created and pushed to vCenter as port groups.
The VMware administrator uses vCenter to assign port groups to a virtual server vNIC.
The Cisco Nexus 1000V enables VM connectivity and enforces the policy defined in the port profile when the VM powers on.
The figure shows an example port profile for web application servers (PVLAN 108, isolated; security policy = ports 80 and 443; rate limit = 100 Mb/s; QoS priority = medium; remote port mirror = yes) published to vCenter alongside the other available port groups: WEB Apps, HR, DB, and Compliance.

Network administrators configure VM port profiles that contain the policies for specific server
types. These port profiles are passed to vCenter as port groups and are assigned to individual
vNICs by the VMware administrator.
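On the VSM, a vEthernet port profile is what gets published to vCenter as a port group. The sketch below is illustrative only; the profile name and VLAN follow the WEB Apps example from the figure, and the security, rate-limiting, QoS, and mirroring policies shown on the slide would be added with further feature-specific commands in the same profile:

VSM-1(config)# port-profile type vethernet WEB-Apps
VSM-1(config-port-prof)# vmware port-group
VSM-1(config-port-prof)# switchport mode access
VSM-1(config-port-prof)# switchport access vlan 108
VSM-1(config-port-prof)# no shutdown
VSM-1(config-port-prof)# state enabled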


Policy-Based VM Connectivity (Cont.)


Port profile options:
VLAN and PVLAN settings
ACL, port security, and ACL redirect
Cisco TrustSec (SGT)
NetFlow collection
Rate limiting
QoS marking (CoS/DSCP)
Remote port mirror (ERSPAN)

The figure summarizes the various security and traffic flow options that can be applied to port profiles (a configuration sketch follows the list):
VLAN and private VLAN (PVLAN) settings
Access control list (ACL), port security, and ACL redirect
Cisco TrustSec security group tag (SGT)
NetFlow collection
Rate limiting
Quality of service (QoS) marking (class of service [CoS]/differentiated services code point [DSCP])
Remote port mirror (Encapsulated Remote Switched Port Analyzer [ERSPAN])
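As a sketch of how such options attach to a port profile, the commands below add port security and an inbound port ACL to the WEB-Apps profile from the previous example. The ACL name is an assumed placeholder that must be defined separately, and each of the other features (NetFlow, ERSPAN, QoS marking, rate limiting) is applied with its own feature-specific commands:

VSM-1(config)# port-profile type vethernet WEB-Apps
VSM-1(config-port-prof)# switchport port-security
! depending on the release, port security may first need to be enabled globally
VSM-1(config-port-prof)# ip port access-group WEB-ACL in
! WEB-ACL is an assumed ACL name, created beforehand with ip access-list WEB-ACL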


Mobility of Security and Network Properties
1. vCenter initiates vMotion (either manually or automatically, through fault tolerance or DRS) and notifies the Cisco Nexus 1000V.
2. During VM replication, the Cisco Nexus 1000V copies the VM port state to the new host.
Mobile properties include the port profile, interface state and counters, flow statistics, and the remote port mirror session. Network persistence covers the VM port configuration state and VM monitoring statistics.

As VMs migrate between hosts through manual or automatic processes, vCenter and the Cisco
Nexus 1000V VSM work together to maintain the state and port policies of the VMs. This
cooperation helps to ensure network security and consistency in a rapidly adapting VMware
environment.

Mobility of Security and Network Properties (Cont.)
3. When vMotion completes, the port on the new VMware ESX host is brought up and the VM MAC address is announced to the network. As part of the network update, an ARP for VM1 is sent to the network and flows to the VM1 MAC address are redirected to server 2.

This function offers several benefits:
Prevents disparate vSwitch configurations from affecting Distributed Resource Scheduler (DRS) or live VM migration
Enables security settings, network policy, and connectivity state to move with vMotion
Offers continuous traffic mirroring as VMs move between servers


Summary
This topic summarizes the key points that were discussed in this lesson.

The Cisco Nexus 1000V architecture provides control plane and data plane separation.
The control plane and management architecture is provided by the VSM, and the data traffic forwarding is implemented by using the VEM.

Lesson 4

Installing and Configuring the Cisco Nexus 1000V Switch
Overview
The installation of Cisco Nexus 1000V components can be a challenging aspect to delivering a
Cisco Unified Computing System (UCS) solution. It is very important to understand the
installation methods that are available for the Cisco Nexus 1000V Virtual Supervisor Module
(VSM) and Virtual Ethernet Module (VEM).
This lesson describes the process to install the Cisco Nexus 1000V VSM, provide connectivity
to VMware vCenter, and perform initial configuration of the Cisco Nexus 1000V Distributed
Virtual Switch. In addition, the Cisco Nexus 1010 Virtual Services Appliance is introduced and
compared to the Cisco Nexus 1000V.

Objectives
Upon completing this lesson, you will be able to describe the installation, connection, and
configuration of the Cisco Nexus 1000V VSM and compare the Cisco Nexus 1000V and the
Cisco Nexus 1010 Virtual Services Appliance. This ability includes being able to meet these
objectives:

Describe how to configure the preinstallation vSwitch network

Describe how to install a VSM on a VM

Describe the initial configuration of the VSM

Describe the configuration of the certificate exchange and connection from the VSM to
vCenter

Describe the Cisco Nexus 1000V high-availability configuration

Describe the Cisco Nexus 1010 Virtual Services Appliance

Differentiate between the capabilities of the Cisco Nexus 1010 Virtual Services Appliance
and the Cisco Nexus 1000V software VSM

Configure VSM vSwitch Networks


This topic discusses how to configure VSM VMware vNetwork Standard Switch (vSwitch)
networks.

Preparing the ESX Servers
License required: VMware Enterprise Plus
ESX versions supported: ESX 4.x, ESXi 4.x
VSM VM requirements: 4 GB memory, 2 NICs (one for management and one for the rest), 64-bit CPU
Upstream switches configured with the proper VLANs
Data-center object configured in vCenter

You must install the VMware Enterprise Plus license on each VMware ESX server, to run a
VEM on that host.
You can run the VSM in a virtual machine (VM) on a host that is running either VMware ESX 3.5 or 4.0, or ESXi 4.0. You must run ESX or ESXi 4.0 to support the VEMs.
The VSM VMs require at least 4 GB of memory, two physical network interface cards (NICs)
on the ESX host (one NIC for management and one for the rest of the traffic), and a 64-bit
processor.
The upstream switches must be configured to trunk the VLANs that you intend to use for
control, management, packet, VMware vMotion, and VM traffic networks.
Last, you must configure a data-center object to contain your ESX servers in vCenter.

VSM Port Group Requirements

First port group: Control VLAN, adapter type e1000
Second port group: Management VLAN, adapter type e1000; used for management and connected to the management network
Third port group: Packet VLAN, adapter type e1000

The Cisco Nexus 1000V VSM requires at least three types of port groups to be created on an
existing or new vSwitch. These port groups correspond to the VSM communication VLANs, as
described previously.

VSM Port Group Creation

To create a VMware port group that the VSM can use, follow these steps:
1. Choose the host on which the VSM will reside.
2. Choose the Configuration tab.
3. Choose Networking.
4. Choose Properties.

VSM Port Group Creation (Cont.)

5. In the Properties pane, choose Add.


6. Choose Virtual Machine.
7. Click Next.

VSM Port Group Creation (Cont.)

After you choose Add, configure the following parameters:


8. For the Network Label, type Control.
9. For the VLAN ID, provide the VLAN number that was chosen for the VSM control traffic.
10. Click Next.

VSM Port Group Creation (Cont.)


Repeat this procedure for the management and packet port groups.


After you choose Next, verify the configuration of the control VLAN for the VSM. Click
Finish. Repeat for the management and packet port groups.

VSM vSwitch Configuration Showing Port Groups
The control, management, and packet port groups must be created prior to Cisco Nexus 1000V installation.

In the traditional Cisco Nexus 1000V installation, you create different port groups and VLANs for the control, management, and packet interfaces, as shown in the figure. These port groups must exist before you install the Cisco Nexus 1000V.


Install the VSM on a VM


This topic discusses how to install the VSM on a VM.

Cisco Nexus 1000V VSM Installation Methods
From the .iso file:
1. Create the VSM VM in vCenter.
2. Configure VSM networking (control, management, packet).
3. Perform the initial VSM setup in the VSM console.
4. Install the VSM plug-in in vCenter.
5. Configure the SVS connection in the VSM console.
6. Add hosts to the DVS in vCenter.
From the .ovf file: use the wizard for Steps 1 and 2; all other steps are identical and manual.
From the .ova file (preferred): use the wizard for Steps 1 through 4; the other steps are identical and manual.

There are several ways to install and deploy the VSM. The preferred method is to use an Open Virtual Appliance (OVA) file. This method provides the highest degree of guidance and error-checking for the user.
Open Virtualization Format (OVF) files are standardized file structures that are used to deploy VMs. You can create and manage OVF files by using the VMware OVF Tool.
An OVA file is an OVF package that is distributed as a single archive file.


Creating a VSM VM: Choose VM Configuration

The VSM VM requirements include these specifics:

Type: Other Linux (64 bit)

3 GB disk

2 GB memory

3 NICs, type E1000

To create the VM, right-click the host on which you want to install the VSM, then choose New
Virtual Machine.
Choose Typical, then click Next.


Creating a VM: Name the VM and Inventory Location

In the dialog box, name the VM and choose a data center. Click Next.


Creating a VM: Choose Shared Storage for VSM VM Files

At the next stage, choose a datastore. Click Next.


Creating a VM: Choose VSM Guest Operating System

Choose the correct operating system, per VSM requirements.


Creating a VM: Create a VSM Local Disk

Ensure that you have configured the correct virtual disk size, per VSM requirements. Click
Next.


Creating a VM: Review VSM Options

So that you can modify the remainder of the settings for the VSM, check Edit the Virtual
Machine Settings before completing the configuration. Click Continue.


Creating a VM: Adjust the VSM Memory Size

Change the memory size to 2048 MB, to meet the VSM memory requirement for this configuration. Then choose New NIC.


Creating a VM: Add the VSM Port Groups

Click Remove to remove the new NIC. Then choose Add.


Creating a VM: Add the Port Group Adapters

Choose Ethernet Adapter. Click Next.


Creating a VM: Choose Adapter Driver

Ensure that adapter type E1000 is specified. Click Next.


Creating a VM: Review Adapter Options
Add the remaining two NIC adapters for the management and packet port groups.

Verify the configuration and click Finish. Repeat this process to add two additional NIC
adapters, as required by the VSM.
The NIC adapters should be assigned in ascending order:

Control: NIC0

Management: NIC1

Packet: NIC2


Verify Port Group Configuration

After the VM has been created for the VSM, choose the VSM. Click Edit Settings. Verify that
the control, management, and packet adapters are assigned in ascending order. The VSM
operating system will ensure that the appropriate traffic is assigned to each adapter, based on
lowest-to-highest numbering.


Creating a VM: Choose .iso Boot File

To attach a bootable Cisco Nexus 1000V .iso file or disk to the VSM, follow these steps:
1. Choose CD/DVD Drive.
2. Choose Datastore ISO File.
3. Browse to the Cisco Nexus 1000V VSM .iso file.
4. Click OK.


Initial VSM Configuration


This topic discusses the initial VSM setup configuration.

Access the VSM Console
Right-click the VSM.
Choose Open Console.
Click the Power On icon.

To access the VSM console and to power up the VM, right-click the VSM VM. Choose Open
Console. When the console screen opens, click the Power On icon (the green triangle).


Access the VSM Console (Cont.)


When the boot menu appears, choose Install Cisco Nexus
1000V and bring up the new image.
This process can take as much as 5 minutes.


After the virtual machine is powered up, a boot menu appears. Choose Install Nexus 1000V.
This choice will bring up the new image.
Note: This process can take as much as 5 minutes.

Initial Setup
When prompted, enter and confirm the administrator password. Enter yes to enter basic configuration.

---- System Admin Account Setup ----
Enter the password for "admin": Qwer12345
Confirm the password for "admin": Qwer12345
[#########################################] 100%

---- Basic System Configuration Dialog VDC: 1 ----
This setup utility will guide you through the basic configuration of
the system. Setup configures only enough connectivity for management
of the system.
Press Enter at anytime to skip a dialog. Use ctrl-c at anytime to
skip the remaining dialogs.
Would you like to enter the basic configuration dialog (yes/no): yes

The Cisco Nexus 1000V runs the Cisco Nexus Operating System (NX-OS). As with most Cisco devices, a basic setup script is provided when you boot a new switch, when you issue a write erase and reload, or when you issue the setup command at the command-line interface (CLI).
The basic setup script leads the administrator through basic connectivity options and switch parameters.

Initial Setup (Cont.)


Enter the configuration for your VSM.
Create another login account (yes/no) [n]: no
Configure read-only SNMP community string (yes/no) [n]: no
Configure read-write SNMP community string (yes/no) [n]: no
Enter the switch name: VSM-1
Continue with Out-of-band management configuration? [yes/no]: yes
Mgmt0 IPv4 address: 192.168.110.31
Mgmt0 IPv4 netmask: 255.255.255.0
Configure the default-gateway: (yes/no) [y]: yes
IPv4 address of the default-gateway: 192.168.110.1
Configure Advanced IP options (yes/no)? [n]: no
Enable the telnet service? (yes/no) [y]: no
Enable the ssh service? (yes/no) [y]: yes
Type of ssh key you would like to generate (dsa/rsa) : rsa
Number of key bits <768-2048> : 1024

Assign an IP address, netmask, and default gateway for the management interface. By default,
Telnet is enabled and Secure Shell (SSH) is disabled. If you enable SSH, then you must
configure the security settings according to your needs.
You can configure Network Time Protocol (NTP) to ensure that the VSM clock is
synchronized with an external time source.
You will also assign VLAN IDs for the control and packet interfaces. Be careful to use the
control and packet VLANs that you configured in vCenter before the Cisco Nexus 1000V
installation. This simplistic example uses VLAN 1 for all the traffic (control, management, and
packet); however, in a production environment, you should use individual VLANs for each
type of traffic (control, management, and packet).

Initial Setup (Cont.)


Configure NTP server? (yes/no) [n]: no
Configure n1000v domain parameters: yes
Enter control vlan <1-3967, 4048-4093>: 111
Enter packet vlan <1-3967, 4048-4093>: 211
Enter domain id <1-4095>: 1

The following configuration will be applied:


Switchname VSM-1
interface Mgmt0
ip address 192.168.110.31 255.255.255.0
no shutdown
exit
vrf context management
...

After the dialog is complete, you are presented with a summary of all the configurations that
you entered. Review the configurations and enter n to quit the wizard and save the
configurations or y to make changes.


Configure the VSM-to-vCenter Connection


This topic discusses how to configure the VSM-to-vCenter connection.

Install and Register the Plug-In for the New VSM
Open a web browser on your management network and direct it to the IP address configured for management of the VSM (for example, http://10.1.1.10).
Download the extension.xml file.
This plug-in is specific to each VSM and must be downloaded and installed individually for each VSM.

For the VSM to connect to the vCenter server, a plug-in must be installed. You can download
the plug-in for the vCenter server from the Cisco Nexus 1000V. After installing the Cisco
Nexus 1000V, open an Internet browser. Navigate to the IP address of the Cisco Nexus 1000V
VSM and download the file cisco_nexus1000v_extension.xml.


VSM Plug-In
The VSM plug-in is an authentication method for the VSM. Without the plug-in installed, the VSM is not authorized to communicate with vCenter.

After you download the file, follow these steps:


1. Open the vSphere client. The local host VMware Infrastructure Client dialog box opens.
2. From the Plug-ins menu, choose Manage Plug-ins. The Plug-in Manager dialog box opens.
3. Right-click the white space within the dialog box. Choose New Plug-in from the pop-up
menu.
4. Choose Browse and select the cisco_nexus1000v_extension.xml file that you downloaded.


Install the Plug-In for the New VSM
You might need to expand the window to obtain clear white space.
Step 1: Open the vCenter Plug-ins menu.
Step 2: Right-click the white space and click New Plug-in.
Step 3: Choose the plug-in downloaded from the VSM and click Open.

Choose Register Plug-in. The plug-in is created and registered.


The VSM plug-in is unique for each VSM and must be installed manually. The plug-in acts as
the authentication method for each VSM to prevent unauthorized distributed virtual switch
(DVS) configuration from an unregistered VSM.


Install the Plug-In for the New VSM (Cont.)


Verify your plug-in and click Register Plug-in.


Choose Register Plug-in, to complete the process of making the Cisco Nexus 1000V a plug-in
of vCenter.


Verify Connectivity
Ping the default gateway to verify connectivity. The first ping is typically lost because the switch uses Address Resolution Protocol (ARP) to gain the gateway MAC address.

PING 10.1.1.1 (10.1.1.1): 56 data bytes
Request 0 timed out
64 bytes from 10.1.1.1: icmp_seq=1 ttl=63 time=3.848 ms
64 bytes from 10.1.1.1: icmp_seq=2 ttl=63 time=1.996 ms
64 bytes from 10.1.1.1: icmp_seq=3 ttl=63 time=1.724 ms
64 bytes from 10.1.1.1: icmp_seq=4 ttl=63 time=1.98 ms
--- 10.1.1.1 ping statistics ---
5 packets transmitted, 4 packets received, 20.00% packet loss
round-trip min/avg/max = 1.724/2.387/3.848 ms

After you complete the initial VSM setup and vCenter plug-in registration, you should verify
Layer 3 connectivity to the default gateway of the management network, by issuing the ping
command.
You can use additional pings to validate connectivity to the vCenter server and the ESX hosts
that will be added to the Cisco Nexus 1000V switch instance.


Configure VSM Connectivity
1. Enter configuration mode.
2. Create a new SVS connection.
3. Provide your data-center name.
4. Specify the VMware VIM protocol.
5. Provide the host IP address (the remote IP address of the vCenter server).
6. Connect and save the configuration.
VSM-1# conf
VSM-1(config)# svs connection VC
VSM-1(config-svs-conn)# vmware dvs datacenter-name DC-1
VSM-1(config-svs-conn)# protocol vmware-vim
VSM-1(config-svs-conn)# remote ip address 192.168.110.11
VSM-1(config-svs-conn)# connect
***Note this may take up to 10 seconds***

To connect from the VSM to the vCenter server, an SVS connection must be created. The SVS
connection requires these parameters:

SVS connection name

VMware datacenter name (case sensitive)

Protocol (currently, only VMware VIM is supported)

IP address of the vCenter server


Verify VSM Connectivity
After the VSM connects properly, you should see output showing the creation of a DVS. The DVS will also appear in the vCenter Inventory > Networking pane.

After the VSM is connected to vCenter, the DVS appears in the networking inventory panel of
vCenter. You should see the port groups that you configured for control, management, and packet
traffic. Some other port groups are also created by default: One is the Unused_Or_Quarantined
DVUplinks port group (which connects to physical NICs) and the other is the
Unused_Or_Quarantined VMData port group (which is VM facing).
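The connection can also be checked from the VSM CLI. This is a sketch of the verification step only; the exact fields in the output depend on the software release:

VSM-1# show svs connections
! confirm that the connection to the vCenter server shows as connected and
! lists the configured datacenter name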


Cisco Nexus 1000V High-Availability Configuration

This topic discusses high-availability configuration of the Cisco Nexus 1000V.

Deploy the Secondary VSM


Deploy the Secondary VSM
Follow the same steps as for the primary VSM.
Use the same domain ID.
Provide a unique name for the secondary VSM VM.
Define the VSM role as secondary.


If you run your VSM in a high-availability configuration, then you can configure the secondary
VSM as you did the primary VSM. There are two exceptions: You need a unique name for the
secondary VSM, and you must define its role as secondary to the primary VSM.
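As a sketch of how the secondary role might be assigned from the CLI (the domain ID 100 is illustrative and must match the value that is configured on the primary VSM; confirm command availability for your software release):

VSM-2# configure terminal
VSM-2(config)# svs-domain
VSM-2(config-svs-domain)# domain id 100
VSM-2(config-svs-domain)# exit
VSM-2(config)# system redundancy role secondary
VSM-2(config)# exit
VSM-2# copy running-config startup-config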


Cisco Nexus 1000V High Availability


Cisco Nexus 1000V High Availability
The Cisco Nexus 1000V supports high-availability configuration with dual VSMs.
The figure shows an active VSM and a standby VSM that are kept in state synchronization; each ESX host (ESX 1 and ESX 2) runs a VEM, and the VSMs communicate with vCenter.

The Nexus 1000V supports high availability by using primary and standby supervisors. Only
one active running configuration is managed by connecting to the primary VSM. The primary
and standby VSMs are state synchronized. In the event of a manual or automatic switchover,
the standby supervisor assumes the role of the primary, whereas the former primary (now the
standby) attempts to complete the state synchronization process after it reboots.
During initial setup of the Cisco Nexus 1000V, you need to configure only one management IP address for remote management. The primary supervisor assumes ownership of the management IP address because that supervisor is responsible for the active running configuration.
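A manual switchover can be initiated from the active VSM to validate this behavior; a minimal sketch follows. After the switchover, the show system redundancy status command on the new active supervisor should report it as active.

VSM-1# system switchover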


Supervisor Modes
Supervisor Modes
In Dual Supervisor mode, the active (primary) supervisor is the
supervisor in slot 1; the standby (secondary) supervisor resides in slot 2.
All services are run on the active supervisor and exist in standby mode
on the second supervisor. These services are maintained in synch for
failover.
All management and configuration is performed on the active supervisor.
Configuration of primary and secondary can be performed during initial
switch setup.


After the initial setup, the primary supervisor appears in slot 1 (or module 1, if the show
module command is issued from the CLI). The standby, if one is created, appears in slot 2 (or
module 2, if the show module command is issued from the CLI).
If only a standalone supervisor has been configured, then module 2 does not appear when the
show module command is issued from the CLI.


Verifying High Availability


Verifying High Availability
High-availability status for both supervisors can be verified from the VSM CLI.
VSM-1# show system redundancy status
Redundancy role
---------------
      administrative:   primary
         operational:   primary

Redundancy mode
---------------
      administrative:   HA
         operational:   HA

This supervisor (sup-1)
-----------------------
    Redundancy state:   Active
    Supervisor state:   Active
      Internal state:   Active with HA standby

Other supervisor (sup-2)
------------------------
    Redundancy state:   Standby
    Supervisor state:   HA standby
      Internal state:   HA standby

The output of the show system redundancy status command shows the primary VSM as the supervisor that is associated with the running configuration. The standby supervisor appears in the HA standby state. This output confirms the state-synchronized mode of operation when dual VSMs are installed and configured.


Verifying High Availability (Cont.)


The show module command shows all modules
controlled by the VSM and shows the high-availability
status of available supervisors.

VSM-1# show module
Mod  Ports  Module-Type                Model       Status
---  -----  -------------------------  ----------  ----------
1    0      Virtual Supervisor Module  Nexus1000V  active *
2    0      Virtual Supervisor Module  Nexus1000V  ha-standby
3    248    Virtual Ethernet Module    NA          ok
...


The output of the show module command shows the primary supervisor in slot 1 and the high-availability standby in slot 2. The first ESX host that is added to the Cisco Nexus 1000V switch instance appears in slot 3.


Cisco Nexus 1010 Virtual Services Appliance


This topic discusses the Cisco Nexus 1010 Virtual Services Appliance.

Cisco Nexus 1010 Virtual Services Appliance


Cisco Nexus 1010 Virtual Services Appliance
Dedicated appliance hosting:
Cisco Nexus 1000V VSMs
VSBs:
NAM VSB
Virtual Security Gateway (VSG)


The Cisco Nexus 1010 Virtual Services Appliance is a member of the Cisco Nexus 1000V
Series Switches. This appliance hosts the Cisco Nexus 1000V VSM and supports the Cisco
Nexus 1000V Network Analysis Module (NAM) Virtual Service Blade (VSB), to provide a
comprehensive solution for virtual access switching. Because the Cisco Nexus 1010 provides
dedicated hardware for the VSM, the platform makes virtual access switch deployment much
easier for the network administrator. With its support for additional VSBs, such as the Cisco
Nexus 1000V NAM VSB, the Cisco Nexus 1010 is a crucial component of a virtual access
switch solution.


Cisco Nexus 1010 Hardware Configuration


Cisco Nexus 1010 Hardware Configuration
Based on the Cisco UCS C200 M2 Physical Appliance
(2) Intel Xeon X5650 Six-Core Processor (2.66 GHz)
(4) 4 GB RDIMM RAM
(2) 500 GB SATA-II HDD
(1) Broadcom NetXtreme II 5709 Quad-port Gigabit Ethernet
NIC
(1) Serial port
(1) Rail kit


With the introduction of the Cisco Nexus 1010, the VSM now has dedicated hardware.
Therefore, network administrators can install and configure virtual access switches like they
install physical switches. The dedicated VSM hardware is especially helpful in a data-center
power-up, because there is no dependency on finding server resources for the VSM. Thus, the
Cisco Nexus 1010 enables network administrators to manage the Cisco Nexus 1000V virtual
access switch just like other physical switches and to scale server-virtualization deployments.
The figure summarizes the hardware characteristics, based on the Cisco UCS C200 Series
Rack-Mount Server architecture.


Cisco Nexus 1010 Virtual Services Appliance (Cont.)


Cisco Nexus 1010 Virtual Services
Appliance (Cont.)
Cisco NX-OS based Cisco Nexus 1010 Manager
Supports as many as four VSMs
Complete Cisco CLI VSM deployment
Automatic active/standby VSM deployment on Cisco Nexus
1010 pair
Automatic placement of active VSM on Cisco Nexus 1010 pair
Automatic restart of VSM
Supports VSBs
NAM VSB
Separately licensed


The Cisco Nexus 1010 runs Cisco NX-OS. The appliance supports as many as four instances of
the VSM, running in high-availability mode (active-standby pairs). The appliance also supports
automatic restart of the VSM, as well as automatic placement of the active VSM on the high-availability pair.
The Cisco Nexus 1010 also supports the VSB and NAM, both of which are licensed separately.


Cisco Nexus 1010 Internal Architecture


Cisco Nexus 1010 Internal Architecture

The figure shows the Cisco Nexus 1010 Manager hosting a NAM VSB and as many as four Cisco Nexus 1000V VSMs.
Cisco Nexus 1010 Manager: Cisco management experience; manages VSBs

This figure shows the internal architecture of the Cisco Nexus 1010. The Cisco Nexus 1010
contains Cisco Nexus 1010 Manager, which is based on Cisco NX-OS. The appliance can host
as many as four VSMs and supports the Cisco Nexus 1000V NAM VSB. Therefore, in addition
to hosting Cisco Nexus 1000V VSMs, the Cisco Nexus 1010 becomes a platform for other
networking services. The appliance will also support other VSBs in the future.
Because the Cisco Nexus 1010 uses the same VSM as the Cisco Nexus 1000V Series, the Cisco
Nexus 1000V Series solution with Cisco Nexus 1010 has all the features of the Cisco Nexus
1000V Series. Because Cisco Nexus 1010 Manager is based on Cisco NX-OS, the network
administrator has a familiar interface for installing and configuring Cisco Nexus 1010. Cisco
Nexus 1010 Manager also supports Cisco NX-OS high availability, allowing a standby Cisco
Nexus 1010 appliance to become active if the primary Cisco Nexus 1010 appliance fails.


Cisco Nexus 1010 Benefits


Cisco Nexus 1010 Benefits
Complete virtual access switching solution
Dedicated appliance for VSMs:
No vSphere or vCenter dependency for network team
Install and manage like a top-of-rack switch
High availability with active/standby Cisco Nexus 1010
Simple VSM deployment
No need to manually install active/standby VSMs

Support for VSBs:


Process restart of VSBs

Network monitoring with NAM VSB


Cisco Nexus 1000V Series Switches are intelligent VM access switches that are designed for
VMware vSphere environments running Cisco NX-OS. Operating inside the VMware ESX
hypervisor, the Cisco Nexus 1000V Series supports Cisco Virtual Network Link (VN-Link)
server-virtualization technology. This technology provides the following:

Policy-based VM connectivity

Mobile VM security and network policy

Nondisruptive operational model for server virtualization and networking teams

When server virtualization is deployed in the data center, virtual servers typically are not
managed like physical servers. Server virtualization is treated as a special deployment. This
treatment leads to longer deployment times and requires a greater degree of coordination
among server, network, storage, and security administrators. With the Cisco Nexus 1000V
Series, you can have a consistent networking feature set and provisioning process, all the way
from the VM access layer to the core of the data-center network infrastructure. Virtual servers
can now use the same network configuration, security policy, diagnostic tools, and operational
models as their physical-server counterparts that are attached to dedicated physical network
ports. Virtualization administrators can access predefined network policy that follows mobile
VMs to ensure proper connectivity. This approach saves valuable time that administrators can
use to focus on VM administration.


Cisco Nexus 1010 Connectivity


Cisco Nexus 1010 Connectivity
Serial port: used for initial installation if not using SoL
(2) 1 Gigabit Ethernet LOM ports
(4) 1 Gigabit Ethernet PCI card (optional)
Management port: provides access to the Cisco IMC


The rear panel image of the Cisco Nexus 1010 appliance illustrates the management
connectivity that is provided by the serial port that is used for serial over LAN (SoL), as well as
the management port, which provides access to the Cisco Integrated Management
Controller (IMC).
Network connectivity may be provided by using the two Gigabit Ethernet LAN on
Motherboard (LOM) ports or the four Gigabit Ethernet ports on the Broadcom PCI NIC.


Cisco Nexus 1010 Connectivity (Cont.)


There are four options to connect the Cisco Nexus 1010 to the
physical network by using the six available 1 Gigabit Ethernet
interfaces.

Within the Cisco Nexus 1010 CLI, these options are called network options and determine on which interfaces the different traffic types are carried.
There are four types of traffic available on the system: management, control, packet, and data.

The figure shows the rear panel of the Cisco Nexus 1010, with the physical connectivity
provided by the four-port, Broadcom Gigabit Ethernet NIC. The administrator has flexibility in
configuring connectivity of the control, management, packet, and data traffic types, by using
the network options command within the Cisco Nexus 1010 CLI.


Cisco Nexus 1010 High Availability


Cisco Nexus 1010 High Availability
Two Cisco Nexus 1010 appliances must be deployed for high availability; you cannot mix and match a Cisco Nexus 1010 with a VM-based VSM.
The high-availability pair is formed based on the control VLAN and the domain ID.
The VSMs on both Cisco Nexus 1010 appliances should back up each other. The primary VSM should be created on one Cisco Nexus 1010 appliance; the secondary VSM should be created on the second Cisco Nexus 1010 appliance.
The Cisco NX-OS manager takes charge of load-balancing active/standby VSMs across the two Cisco Nexus 1010 appliances in the high-availability pair.


The figure shows the high availability that is built into Cisco Nexus 1010 Manager. Because
Cisco Nexus 1010 Manager is built on Cisco NX-OS, it offers active-standby redundancy. If
one Cisco Nexus 1010 Manager instance fails, the other Cisco Nexus 1010 Manager instance
automatically takes over and continues operation. In addition, Cisco Nexus 1010 Manager
automatically places the active VSM to balance the distribution and reduce the potential fault
domain.


Cisco Nexus 1010 VSB


Cisco Nexus 1010 VSB
The Cisco Nexus 1010 appliance comes with a new concept of virtual services called a VSB.
Shipping with the product will be:
VSM VSB
NAM VSB (optional add-on)
VSG VSB
The figure shows the Cisco Nexus 1010 Manager hosting a Cisco Nexus 1000V VSM and an optional NAM VSB.

The VSB provides expansion capabilities so that new features can be added in the future. Cisco
Nexus 1010 Manager enables customers to install, configure, and manage VSBs. If the VSB
stops, Cisco Nexus 1010 Manager automatically restarts it. The Cisco Nexus 1000V NAM
VSB takes advantage of these capabilities to provide a robust, complete solution for the
virtualized data center.


Cisco Nexus 1010 VSB (Cont.)


The network administrator now has total control over
the VSB deployment.
pe-nexus1010-1# show virtual-service-blade
virtual-service-blade VSM-AV.1-1
  Description:
  Slot id:        1
  Host Name:      pe-nexus1010-VSM-1
  Management IP:  172.25.203.182
  VSB Type Name : VSM-1.0
  Interface:      control     vlan: 20
  Interface:      management  vlan: 1
  Interface:      packet      vlan: 20
  Interface:      internal    vlan: NA
<Output Cut>
  virtual-service-blade:
    HA Oper role:  ACTIVE
    Status:        VSB POWERED ON
    Location:      PRIMARY
    SW version:    4.0(4)SV1(3)

The network administrator can power the VSM up and down without the help of the server administrator.

The output of the show virtual-service-blade command, which is issued from the Cisco Nexus 1010 CLI, shows parameters that are associated with the VSB. The hostname, management IP address, and software version of the primary VSM are listed, as are the control, management, and packet VLANs.
The operational status is powered on and the high-availability operational role is active.


Comparing the Cisco Nexus 1000V and Cisco Nexus 1010 Virtual Services Appliance

This topic compares the Cisco Nexus 1000V and the Cisco Nexus 1010 Virtual Services Appliance.

Architectural Comparison
Architectural Comparison
VSM on VM vs. VSM on Cisco Nexus 1010: The figure contrasts a VSM that is deployed as a VM on a vSphere server that also runs the VEM (one VSM per Cisco Nexus 1000V instance) with as many as four VSMs hosted on a Cisco Nexus 1010 appliance that attaches to the physical switches; in both cases, each vSphere server runs a VEM.

The Cisco Nexus 1000V VSM installs as a VM on an ESX or ESXi host and supports high
availability. The VSM communicates directly with the VEM that is installed on every ESX or
ESXi host that is a part of the Cisco Nexus 1000V instance. A single instance of the Cisco
Nexus 1000V VSM can support as many as 64 ESX hosts that are managed within a single
data-center object within vCenter. Additional Cisco Nexus 1000V instances may be added to
expand beyond 64 hosts.
The Cisco Nexus 1010 VSM is supported in hardware as a virtual appliance. The VSM
connects directly to the physical access switch and requires the proper configuration of control,
management, and packet VLANs. The Cisco Nexus 1010 VSMs communicate directly with the
VEMs, which are installed on every ESX or ESXi host that is a part of the Cisco Nexus 1010
instance.
The Cisco Nexus 1010 can support as many as four high-availability instances (primary and
standby VSMs), scaling to manage as many as 256 ESX or ESXi hosts.


Feature Comparison
Feature Comparison

The figure compares a VSM that is deployed as a VM with a VSM on the Cisco Nexus 1010, across these items:
Cisco Nexus 1000V features and scalability
VEM running on vSphere 4 Enterprise Plus
Cisco NX-OS high availability of VSM
Software-only deployment
Installation like a standard Cisco switch
Network team owns and manages the VSM

The figure shows a comparison of a VSM that is deployed as a VM and a VSM that is deployed on the Cisco Nexus 1010 appliance. For customers who want a complete software deployment of the Cisco
Nexus 1000V Series, deployment of the VSM as a VM provides flexibility in VSM placement
and mobility with VMware vMotion.
For network administrators who want greater control over the management of the VSM, the
Cisco Nexus 1010 solution provides a complete Cisco NX-OS experience in installing the
Cisco Nexus 1000V virtual access switch. In addition, the Cisco Nexus 1010 appliance offers
fewer dependencies when the data center is powered on because the VSM can be initiated at the
same time as the VEMs.


Summary
This topic summarizes the key points that were discussed in this lesson.

Summary
The vSwitch on which the VSM resides must be configured with three port
groups: control, management, and packet.
A VSM VM must be configured with 2 GB memory and a 3 GB disk drive
on a 64-bit Linux VM.
Initial configuration of the VSM is performed via VMware console access;
this access is like console cable access for physical switches.
A plug-in XML file must be downloaded from each VSM and added to
vCenter before VSM attachment.
VSMs can be run in a redundant mode as an active/standby configuration.
The Cisco Nexus 1010 Virtual Services Appliance is a hardware device
that offers expanded capabilities over the Cisco Nexus 1000V, for larger
virtualized environments.


Lesson 5

Configuring Basic Cisco Nexus 1000V Networking
Overview
When the Virtual Supervisor Module (VSM) is installed and linked to the Virtual Ethernet
Modules (VEMs), implementers are faced with a vast array of features that need to be
integrated to meet the requirements of a customer's design. This lesson presents the basic
networking capabilities of the Cisco Nexus 1000V and describes the use of port profiles to
define VLANs, private VLANs (PVLANs), uplink and virtual Ethernet (vEthernet) port
profiles, and port channels. The lesson describes how to install VEMs by using the VMware
vCenter Update Manager, as well as how to back up a VSM configuration. Finally, the lesson
describes VMware vMotion in relation to the port profile mobility that Cisco Nexus 1000V
offers.

Objectives
Upon completing this lesson, you will be able to describe the networking capabilities of the
Cisco Nexus 1000V, including the use of port profiles, VEM installation, and VSM
configuration backup. This ability includes being able to meet these objectives:

Describe port profiles within the VSM

Describe VLAN configuration and assignment to port profiles

Describe PVLAN configuration

Describe the configuration of uplink port profiles

Describe the configuration of vEthernet data port profiles

Describe Cisco Nexus 1000V port channel configuration

Describe the process of adding hosts to a VSM

Describe how to back up a VSM configuration to a TFTP server

Describe the vMotion process and Cisco Nexus 1000V port profile mobility

Port Profiles Within the VSM


This topic discusses port profiles within the VSM.

Cisco Nexus 1000V Port Profiles


Cisco Nexus 1000V Port Profiles
The Cisco Nexus 1000V uses port profiles to
configure multiple ports that require similar policies.
Port profiles are tied to VMware port groups, which
can be selected by the VMware administrator.
There are two port profile types:
VM profiles are used for VM ports. These
profiles provide the configuration for vNIC ports
on the VMs.
Uplink profiles are used to provide outbound
connectivity from the VEM. These profiles can
carry VEM-to-VSM traffic, VM data traffic, or
both.


The Cisco Nexus 1000V uses a new concept called port profiles. Port profiles are used to
provide port configuration for the ports that are assigned to the profile. There are two types of
port profiles: virtual machine (VM) and uplink. Uplink profiles provide all outbound
communication for VMs, as well as all VSM-to-VEM communication on the control and
packet VLANs. VM port profiles provide the configuration for VM virtual network interface
card (vNIC) ports.
Port profiles are used as a central configuration point for multiple ports. By using port profiles,
groups of ports that require the same network configuration can be configured rapidly. When a
port profile has been created and configured, ports can be assigned to the profile and receive the
configuration that is contained in that profile.
Port profiles tie directly to VMware port groups on the distributed virtual switch (DVS) that the
Cisco Nexus 1000V controls. After a profile is created, a VMware port group name can be
assigned to the profile. By default, the port group name is the same as the port profile name, but
this setting is configurable. When a port profile is enabled, the corresponding port group is
created on the DVS within VMware vCenter Server.
VMware administrators assign ports to a port group from the VMware vSphere client. When a
port is made a member of a port group, the corresponding port profile configuration is applied
to the port on the VEM. Any configuration changes to the port profile are immediately assigned
to member ports and are stored in the Cisco Nexus 1000V running configuration file.


Port Profiles and Port Groups


Port Profiles and Port Groups
Port profiles correspond to port groups within VMware. By default, the
port group created within VMware for each port profile will have the
same name. VMware administrators use the port group to assign
network settings to VMs and uplink ports.
VSM-1(config)# port-profile type vEthernet pod1VMdata
VSM-1(config-port-prof)# switchport mode access
VSM-1(config-port-prof)# switchport access vlan 102
VSM-1(config-port-prof)# vmware port-group pod1VMdata
VSM-1(config-port-prof)# no shut
VSM-1(config-port-prof)# state enabled

The port profile pod1VMdata that is created on the Cisco Nexus 1000V VSM is pushed to vCenter as the port group pod1VMdata.

When a port profile is created and enabled, a corresponding port group is created in VMware.
By default, this port group has the same name as the profile, but this name is configurable.
VMware administrators use the port profile to assign network settings to VMs and uplink ports.
When a VMware ESX host port (a physical VM NIC, or VMNIC) is added to a DVS that the
Cisco Nexus 1000V switch controls, an available uplink port group is assigned and those
settings are applied. When a NIC is added to a VM, an available VM port group is assigned and
the network settings that are associated with that profile are inherited.
A NIC in VMware is represented by a VMNIC interface. The VMNIC number is allocated
during VMware installation.


Uplink Port Profiles


Uplink Port Profiles
Uplink port profiles can be assigned to VMNICs on VMware ESX
hosts.
These profiles can be one of two types:
System uplinks carry packet and control VLANs between the
VEM and the VSM.
VM uplinks carry data traffic from VMs destined for the
network.
A single uplink profile can also be configured to carry both types
of traffic.


Uplink profiles are used to provide outbound connections from the VEM to the VSM, as well
as to carry VM data traffic to the network. Uplink profiles are used to configure VMNICs or
physical host ports. These profiles typically contain information such as trunking and port
channel behavior.


VM Profiles: Type vEthernet
VM Profiles: Type vEthernet
VM profiles provide the configuration for VM ports.
VM vNIC ports are assigned to a VM port profile.
VM port profiles require an uplink port profile to access the
physical network.
VM profiles can be configured in a VLAN with no
corresponding uplink profile, to create internal VM networks.


VM profiles are used to provide configuration for VMs. These profiles require an uplink profile
to access the physical network.
VM profiles can be configured without a corresponding uplink profile, to create internal VM
networks. If a VM profile is created by accessing a VLAN that is not trunked to the physical
network, then the assigned VMs can communicate only with other VMs that are assigned to the
profile in the same host. This configuration is like creating internal-only VMware vNetwork
Standard Switches (vSwitches) or port groups within a standard VMware networking
environment.


Port Profile States


Port Profile States
Port profiles can behave in one of two states:
Disabled (default): In this state, port profile configuration is not
pushed to the assigned ports, nor is a port group exported to the
VMware vCenter server.
Enabled: In this state, all configuration is applied to assigned
interfaces and a port group is pushed to the VMware vCenter
server.
Enable a profile
VSM-1(config-port-prof)# state enabled
Disable a profile
VSM-1(config-port-prof)# no state enabled


Port profiles can be enabled or disabled as a whole. The configuration of a port profile is applied to assigned ports only if the profile is in an enabled state. Additionally, no VMware port group is created for a disabled port profile.
The enabled and disabled state of a port profile is separate from the shut and no shut parameters within the profile. The enabled and disabled state applies to the profile itself, whereas shut and no shut apply to the port configuration of member ports.


Port Profile Usage


Port Profile Usage
Port profiles should be used for all interface configurations. This
use ensures consistency for configuration of like devices.
Any configuration performed directly to a port overrides port profile
settings and should be used only for testing and troubleshooting.
All port profile configuration and changes are automatically pushed
to assigned ports.
The figure shows an uplink profile named vmData and two VM profiles, DB and WebApp, each of which is assigned to multiple VMs.

You should use port profiles for all port configurations on the Cisco Nexus 1000V. Using port
profiles for port configurations helps to ensure consistency among ports that have similar
characteristics and speeds the deployment process. Any configuration that is performed on an
individual port overrides the port profile configuration. Configuring individual ports is typically
necessary only for testing and troubleshooting purposes.
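As an illustration of such an override (the vEthernet interface number and VLAN are hypothetical), a command that is entered directly on an interface takes precedence over the inherited profile setting until the interface-level command is removed:

VSM-1# configure terminal
VSM-1(config)# interface vethernet 5
VSM-1(config-if)# switchport access vlan 999
VSM-1(config-if)# no switchport access vlan

Removing the interface-level command returns the port to the behavior that is defined in its port profile.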


Port Profile Properties


Port Profile Properties
Port profiles contain similar configuration properties
for physical and virtual ports. Port configuration
parameters within a port profile are pushed to the
individually assigned ports.
Port Profile State
Default State for Assigned Ports
VSM-1(config)# port-profile type vEthernet DataProfile
VSM-1(config-port-prof)# description VM Traffic
VSM-1(config-port-prof)# vmware port-group DataProfile
VSM-1(config-port-prof)# switchport mode access
VSM-1(config-port-prof)# switchport access vlan 102
VSM-1(config-port-prof)# no shutdown
VSM-1(config-port-prof)# state enabled
VSM-1(config-port-prof)# exit


Port profiles are a key element of port configuration on the Cisco Nexus 1000V. VM ports and
uplink ports are configured as port profiles. Port profiles are then assigned to virtual machines
or virtual switches.


Preventing VM Sprawl
Preventing VM Sprawl
A default state of shutdown within a port profile can assist
with the prevention of VM sprawl.
In this state, when a new VM is assigned to a VMware port
group, its port defaults to a shut state until enabled by a
network administrator.
This state can prevent new VMs from gaining network access
without network administrator approval.
VSM-1(config)# port-profile type vEthernet DataProfile
VSM-1(config-port-prof)# description VM Traffic
VSM-1(config-port-prof)# vmware port-group DataProfile
VSM-1(config-port-prof)# switchport mode access
VSM-1(config-port-prof)# switchport access vlan 102
VSM-1(config-port-prof)# shutdown
VSM-1(config-port-prof)# state enabled
VSM-1(config-port-prof)# exit


VM sprawl is a new data-center challenge that has come about with virtualization. Virtual
servers typically make new server deployment very easy. As a result, more servers may be deployed than would be typical in a standard physical architecture. By using the port profiles on the
Cisco Nexus 1000V, VM sprawl can be minimized by preventing VMs from accessing physical
network resources without network administrator intervention.
By dictating a port profile default port state of down, a network administrator can ensure that
new VMs have access to the network only with approval from network teams. After a new VM
is brought online, the VMware administrator needs to contact the network team to enable the
port. Until the port is enabled, the virtual server exists but has no outbound network
connectivity.
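A sketch of the commands that the network administrator might use to bring such a port into service after approval (the vEthernet interface number is hypothetical and would be identified from the VSM):

VSM-1# configure terminal
VSM-1(config)# interface vethernet 10
VSM-1(config-if)# no shutdown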


VLAN Configuration
This topic discusses configuring VLANs.

VLAN Configuration
VLAN Configuration
VLANs are used to separate traffic types on a single physical interface.
The Cisco Nexus 1000V supports VLAN options similar to physical
switches.
VSM-1(config)# vlan 100
VSM-1(config-vlan)# ?
  exit            Exit from command interpreter
  ip              Configure IP features
  media           Media type of the VLAN
  name            Ascii name of the VLAN
  no              Negate a command or set its defaults
  private-vlan    Configure a private VLAN
  service-policy  Configure service policy for an interface
  shutdown        Shutdown VLAN switching
  state           Operational state of the VLAN


The VLAN command is used to define a VLAN on the Cisco Nexus 1000V Virtual Supervisor
Module in a manner similar to VLAN creation on a physical switch. When in VLAN
configuration mode, additional options such as private VLANs, an IP address for a switched
virtual interface (SVI), or operational state can be configured.


VLAN Port Profile Configuration


VLAN Port Profile Configuration
Ports can either be assigned to a specific VLAN or be
designated as trunks.
Access ports are members of a single VLAN.
VSM-1(config-port-prof)# switchport mode access
VSM-1(config-port-prof)# switchport access vlan 100

Trunk ports carry traffic from one or more VLANs.


VSM-1(config-port-prof)# switchport mode trunk
VSM-1(config-port-prof)# switchport trunk allowed vlan 100-103
VSM-1(config-port-prof)# switchport trunk allowed vlan add 104


After you configure VLANs on the switch, you can assign port profiles to use them. Ports
inherit VLAN settings from the port profile. Two port modes are typically used for VLANs:

Trunk ports: Trunk ports carry traffic for multiple VLANs. By default, these ports carry
data for all VLANs, but the list of allowed VLANs can be pruned back to minimize traffic
or increase security.

Access ports: Access ports are members of a single VLAN. Access ports are typically
configured for the VLAN of which they are members. In most cases, single server instances
(one operating system and one application) running on bare metal or virtual hardware are
assigned to access ports.


Create VLANs on the VSM


Create VLANs on the VSM
Any VLANs that will be used for control, packet, or
VM data must be created on the VSM.
Create control and packet VLANs on your VSM.

VSM-1(config)# vlan 111
VSM-1(config-vlan)# name control
VSM-1(config)# vlan 211
VSM-1(config-vlan)# name packet

Create a data VLAN or VLANs on your VSM.

VSM-1(config)# vlan 411
VSM-1(config-vlan)# name vmTraffic

Because the Cisco Nexus 1000V acts as a fully functional switch within the virtual
environment, any VLANs that the attached ports use need to be created on the Cisco Nexus
1000V, to facilitate switching for those ports. VLAN creation is done from the configuration
level of the Cisco Nexus Operating System (Cisco NX-OS) command-line interface (CLI).


Private VLAN Configuration


This topic discusses configuring private VLANs.

Create Private VLANs and Assign Them to a Port Profile


Create Private VLANs and Assign Them to a Port Profile
In this configuration example, secondary VLAN 300 is being
mapped to primary VLAN 3.
The promiscuous port is also specified.
VSM-1(config)# vlan 3
VSM-1(config-vlan)# private-vlan primary
VSM-1(config)# vlan 300
VSM-1(config-vlan)# private-vlan isolated
VSM-1(config)# vlan 301
VSM-1(config-vlan)# private-vlan community
VSM-1(config)# port-profile type vethernet pvlanprof
VSM-1(config-port-prof)# switchport mode private-vlan promiscuous
VSM-1(config-port-prof)# switchport private-vlan host-association 3 300
VSM-1(config-port-prof)# switchport private-vlan mapping 3 add 300

In the configuration example that the figure shows, the secondary VLANs 300 and 301 are associated with the primary VLAN 3. In addition, the port profile specifies a promiscuous
port, which will manage traffic flow into and out of the private VLAN association.
The configuration has been applied to the port profile named pvlanprof and may be assigned to
vNIC interfaces.
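The example shows only the promiscuous side. As a sketch of the matching host-side profile (the profile name is illustrative), VM-facing ports could be associated with the isolated secondary VLAN as follows:

VSM-1(config)# port-profile type vethernet pvlan-isolated
VSM-1(config-port-prof)# switchport mode private-vlan host
VSM-1(config-port-prof)# switchport private-vlan host-association 3 300
VSM-1(config-port-prof)# vmware port-group pvlan-isolated
VSM-1(config-port-prof)# no shutdown
VSM-1(config-port-prof)# state enabled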


Validate the Private VLAN Port Profile


Validate the Private VLAN Port Profile
VSM-1(config-port-prof)# show port-profile name pvlanprof
port-profile pvlanprof
  description:
  type: vethernet
  status: disabled
  capability l3control: no
  pinning control-vlan:
  pinning packet-vlan:
  system vlans: none
  port-group:
  max ports: 32
  inherit:
  config attributes:
    switchport mode private-vlan promiscuous
    switchport private-vlan host-association 3 300
    switchport private-vlan mapping 3 300
(Output skipped)


The output of the show port-profile name pvlanprof command can be used to validate the
configuration and to assure correct secondary-to-primary VLAN association and the
promiscuous port assignment.
The output indicates that the pvlanprof port profile is configured as a promiscuous port on primary VLAN 3. VLANs 300 and 301 are secondary VLANs. Depending on the port profile that is created for the secondary VLANs, they can be isolated or community ports.


Creating Uplink Profiles


This topic discusses how to create uplink profiles.

Create a System Uplink


Create a System Uplink
Create a new port profile.
Configure trunk mode.
Provide the system VLANs.
Control
Data
Configure the VMware port group.
Enable ports for the group.
VSM-1(config)# port-profile type ethernet Pod1-uplink
VSM-1(config-port-prof)# switchport mode trunk
VSM-1(config-port-prof)# switchport trunk allowed vlan
110,111,211,311,411
VSM-1(config-port-prof)# system vlan 110,111,211
VSM-1(config-port-prof)# vmware port-group Pod1-uplink
VSM-1(config-port-prof)# no shut

The figure shows how to create a port profile for uplink ports.


Create a System Uplink (Cont.)


Set the profile mode to uplink.
Enable the profile.
Save the configuration.
VSM-1(config-port-prof)# state enabled
VSM-1(config-port-prof)# copy running-config startup-config
[########################################] 100%


The figure shows the next steps in creating a port profile for uplink ports.


Verify the System Uplink


Verify the System Uplink
You should see similar output when the configuration
is completed successfully.


The figure shows how to verify the port profile configuration within vSphere.


Create Uplink Profiles for VM Data


Create Uplink Profiles for VM Data
Create a new port profile.
Configure the VMware port group.
Configure the profile to allow VM data VLANs across the
trunk.
Enable ports for the group.
Enable the profile.
VSM-1(config)# port-profile type ethernet vmData-uplink
VSM-1(config-port-prof)# switchport mode trunk
VSM-1(config-port-prof)# vmware port-group vmData-uplink
VSM-1(config-port-prof)# switchport trunk allowed VLAN 105-110
VSM-1(config-port-prof)# no shut
VSM-1(config-port-prof)# state enabled


The figure describes how to create a VM data uplink profile.


Creating vEthernet Port Profiles


This topic discusses how to create vEthernet port profiles.

Create a VM Data Port Profile


Create a VM Data Port Profile
Create a new port profile.
Configure the ports as access ports.
Name the corresponding VMware port group. (By default, the profile name
is used.)
Enable the ports.
Set the maximum number of ports (default is 32).
VSM-1(config)# port-profile type vEthernet pod1VMdata
VSM-1(config-port-prof)# switchport mode access
VSM-1(config-port-prof)# switchport access vlan 102
VSM-1(config-port-prof)# vmware port-group pod1VMdata
VSM-1(config-port-prof)# no shut
VSM-1(config-port-prof)# state enabled
VSM-1(config-port-prof)# vmware max-ports 12

The figure describes the process of creating a VMware data port profile.


Verify the New Port Group


Verify the New Port Group
You should observe output similar to the following.


The figure shows the process of verifying VMware data port profile configuration. From within
vSphere, choose Inventory > Networking within the navigation pane. The network inventory
objects appear, including the newly created port profile named pod1VMdata. The vSphere
Recent Tasks window shows that the creation of the new port profile has been completed
successfully.


Configuring Cisco Nexus 1000V Port Channels


This topic discusses how to configure Cisco Nexus 1000V port channels.

Understanding Port Channels


Understanding Port Channels
Port channels are used to create link bundles.
Port channels act as a single link with bandwidth equal to the
combined bandwidth of the member links.
Load balancing is performed on available links within a port
channel.

(2) 1 Gigabit Ethernet Links = (1) 2 Gigabit Ethernet Port Channel


Port channels are used to bundle links, to increase availability. The Cisco Nexus 1000V uses
VMNICs to create port channels to upstream switches. These port channels allow VM data that
is destined for the physical network to be load-balanced actively across available links. Without
port channels, NIC teaming from the Cisco Nexus 1000V is performed in an active-passive
fashion.


Port Channel Load Balancing


Port Channel Load Balancing
Load balancing is the method of selecting a physical link to transmit
data within a logical port channel link.
Port channels can be load-balanced by using the following methods:
Destination MAC
Source MAC address
Source and destination MAC
Destination IP
Source IP
Source and destination IP
Source TCP/UDP port number
Destination TCP/UDP port number
Source and destination TCP/UDP port number


Port channels create a single logical link that comprises multiple physical links. When a port
channel has been selected as an egress interface, the switch must choose a physical link, within
the port channel, on which to transmit data. The process of selecting a physical link is based on
load-balancing algorithms that provide fairness among the links. The Cisco Nexus 1000V
supports several load-balancing algorithms, the most granular of which is source and
destination TCP/User Datagram Protocol (UDP) port number. This support means that using
the source and destination port number will provide the greatest fairness when distributing
traffic load across physical links.
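The load-balancing method is configured globally on the VSM. The following is a minimal sketch, assuming the source-and-destination TCP/UDP port method that is described above; verify the exact keyword list for your release.

VSM-1(config)# port-channel load-balance ethernet source-dest-port
VSM-1# show port-channel load-balance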


Port Channel Guidelines


Port Channel Guidelines
Port channels across VEMs are not supported.
Port channels can be formed with multiple upstream links only when they
satisfy the compatibility requirements, and under these conditions:
The uplinks from the host are going to the same upstream switch.
The uplinks from the host are going to multiple upstream switches,
but the load-balancing algorithm is source MAC-based.
Port channels can be configured by using a port profile.


The Cisco Nexus 1000V does not support port channels across multiple hosts. Each VEM on
each host requires its own port channel. These port channels can exist between the VEM and a
single upstream switch or between the VEM and two upstream switches. Multiple port channels
are supported on a single VEM or VSM.


Port Channel Types


Port Channel Types
The Cisco Nexus 1000V supports two types of port channels for uplink
ports or port profiles:
Standard: One or more uplink NICs attached to a single upstream switch
vPC-HM: One or more uplink NICs attached to two upstream switches

The Cisco Nexus 1000V supports two types of port channels: standard, and virtual port channel
host mode (vPC-HM).
Standard is the most common type of port channel and requires all members of the port channel
to belong to the same device on each side of the port channel. In this configuration, all
VMNICs that are members of the port channel must be cabled to the same upstream switch.
In vPC-HM, VMNIC port channel members can be cabled to two upstream switches, for
redundancy. This mode has the following requirements:

The uplinks from the host go to multiple upstream switches, but the load-balancing
algorithm is source MAC-based.

The upstream switch can and does enable Cisco Discovery Protocol.

The Cisco Nexus 1000V port channel is configured as enabled for Cisco Discovery
Protocol.

Cisco Discovery Protocol information allows the Cisco Nexus 1000V to address issues that
typically occur in asymmetric configurations:


MAC address flapping: The Cisco Nexus 1000V prevents MAC address flapping, by
using MAC-based load balancing.

Duplicate packets that are received during floods, broadcasts, and multicasts: The
Cisco Nexus 1000V checks for a match between the subgroup ID, where a packet was
received, and the destination subgroup ID, and accepts only matching packets.

Packets that are sent back to the host during floods and broadcasts: The Cisco Nexus
1000V verifies that ingress packets are not the same packets that it sent out.


vPC Host Mode


vPC Host Mode
Virtual port channel in vPC-HM allows member ports in a port channel to connect to multiple upstream switches. With vPC-HM, ports are grouped into subgroups (0-31) for traffic separation.


Before beginning this procedure, you must confirm or complete these tasks:

You are logged in to the CLI in EXEC mode.

In vPC-HM, the port channel member ports connect to multiple upstream switches, and the
traffic must be managed in separate subgroups.

When you create a port channel, an associated channel group is automatically created.

vPC-HM is supported only in port channels that are configured in the on mode. vPC-HM is
not supported for Link Aggregation Control Protocol (LACP) channels that use the active
and passive modes.

You need to know whether Cisco Discovery Protocol is configured in the upstream
switches. If configured, then Cisco Discovery Protocol creates a subgroup for each
upstream switch to manage its traffic separately. If Cisco Discovery Protocol is not
configured, then you must manually configure subgroups to manage the traffic flow on the
separate switches.

If you are using Cisco Discovery Protocol with the default Cisco Discovery Protocol timer
(60 seconds), then links that advertise that they are in service and then out of service in
quick succession can take as much as 60 seconds to be returned to service.

If any subgroup has more than one member port, then you must configure a port channel
for the member ports of each subgroup on the upstream switch.

If vPC-HM is not configured when port channels connect to multiple upstream switches,
then the VMs behind the Cisco Nexus 1000V receive duplicate packets from the network
for unknown unicast floods, multicast floods, and broadcasts.

The subgroup command that is used in this procedure overrides any subgroup configuration
that is specified in the port profile that the port channel interface inherits.


Standard Port Channel Configuration


Standard Port Channel Configuration
Create the port profile.
Configure port profile parameters.
Turn on channel-group auto.
After the first physical port has been added to the port profile, a
channel group is created automatically.
Separate port channels are created by using assigned VMNICs on
each VEM.
VSM-1# conf
VSM-1(config)# port-profile type ethernet UplinkChannel
VSM-1(config-port-prof)# description Upstream Port-Channel
VSM-1(config-port-prof)# channel-group auto mode on
VSM-1(config-port-prof)# interface ethernet3/2-3
VSM-1(config-port-prof)# state enabled
VSM-1(config-port-prof)# switchport mode trunk

Port channels should be configured by using port profiles rather than at the individual port
level. This method ensures consistent configuration across hosts and compatibility for advanced
functionality, such as VMware vMotion, high availability, and Distributed Resource Scheduler
(DRS).
For standard port channels, enter the channel-group auto mode on command under the port
profile configuration. When VMNICs (physical NICs on ESX hosts) are added to this profile,
port channels are automatically created on each host, and additional VMNICs are added to the
channel. A port channel cannot span multiple hosts, so an individual port channel is created
from the VEM of each ESX host.
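After VMNICs are assigned, the automatically created channels can be checked from the VSM. The following is a sketch of the kind of output to expect; the port channel and member interface numbers depend on the order in which hosts and uplinks were added.

VSM-1# show port-channel summary
...
Group  Port-Channel  Type  Protocol  Member Ports
-----  ------------  ----  --------  --------------------
1      Po1(SU)       Eth   NONE      Eth3/2(P)  Eth3/3(P)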


Standard Port Channel Verification


Standard Port Channel Verification
VSM-1(config-port-prof)# show port-profile name UplinkChannel
port-profile UplinkChannel
description: Upstream Port-Channel
status: enabled
port-group: UplinkChannel
config attributes:
switchport mode trunk
channel-group auto mode on
evaluated config attributes:
switchport mode trunk
channel-group auto mode on
assigned interfaces:


The figure illustrates the verification of the uplink port profile standard port channel
configuration.


vPC-HM Port Channel Configuration


vPC-HM Port Channel Configuration
When creating asymmetric port profiles, the sub-group cdp command
must be used to have Cisco Discovery Protocol information gathered
for subgroup creation. Subgroups are used to manage traffic. You
must verify that Cisco Discovery Protocol is enabled in the upstream
switch.
Create the port profile.
Configure port profile parameters.
Enable sub-group cdp.
Turn on channel-group auto.
VSM-1# config
VSM-1(config)# port-profile type ethernet UplinkChannel
VSM-1(config-port-prof)# sub-group cdp
VSM-1(config-port-prof)# channel-group auto mode on
VSM-1(config-port-prof)# state enabled
VSM-1(config-port-prof)# switchport mode trunk


vPC-HM port channels rely on source MAC address load balancing and subgroups, to prevent
MAC address instability and multiple frame copies on upstream switches. Loops are prevented
automatically because frames that are received on uplink ports are not forwarded to other
uplink ports.


MAC table instability: In asymmetric mode, the Cisco Nexus 1000V relies on source
MAC hashing to avoid MAC address instability. Source MAC hashing ensures that traffic
from a source VM always uses the same uplink port in asymmetric configurations. This
method prevents upstream switches from having issues that can be caused when a source
MAC is seen on multiple switches and from having to make constant MAC table changes.

Subgroups: Using Cisco Discovery Protocol, the Cisco Nexus 1000V can analyze traffic
and create subgroups for the separate upstream switches in a vPC-HM port channel. This
method allows the Cisco Nexus 1000V to prevent multiple copies of the same frame that
are received on separate ports from being forwarded to a VM.


Adding VEMs to the VSM


This topic discusses adding VEMs to the VSM.

Add a Host
Add a Host
This procedure assumes that
VMware Update Manager is
being used or that the VEM has
previously been installed on the
host.
To manually install a VEM on a
VMware ESX host, refer to the
Cisco Nexus 1000V VEM
Software Installation and
Upgrade Guide.
1. Choose the data center Networking view.
2. Right-click the VSM.
3. Choose Add Host.


The VMware administrator adds hosts to a VSM. The administrator uses the data-center
Networking view to assign hosts and their VMNICs to a Cisco Nexus 1000V DVS.
The procedure assumes that VMware vCenter Update Manager is being used or that the VEM
has been installed on the host.
If you need to manually install a VEM on an ESX host, refer to the Cisco Nexus 1000V VEM
Software Installation and Upgrade Guide.
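If you need to confirm that the VEM software is already present on an ESX host before adding it, a quick check can be run from the ESX host console (not from the VSM); the hostname and output shown here are an abbreviated sketch.

[root@esx-host ~]# vem status
VEM modules are loaded
...
VEM Agent (vemdpa) is running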


Add a Host (Cont.)


4. Choose the host to add.
5. Choose the uplink NIC or NICs to add.
6. Click Next.


On the Add Host to Distributed Virtual Switch screen, follow these steps:


Choose the host that you want to add.

Choose the uplink NICs that you want to add.

Click Next.


Add a Host (Cont.)


7. Verify the configuration.
8. Click Finish.
9. Repeat as necessary for all hosts.


After you have added the host, verify the configuration, then click Finish. Repeat these steps
for all the hosts that need to be added.


Verify the New VEMs from the VSM


Verify the New VEMs from the VSM
VSM-1# show module
Mod  Ports  Module-Type                Model       Status
---  -----  -------------------------  ----------  ----------
1    0      Virtual Supervisor Module  Nexus1000V  active *
3    248    Virtual Ethernet Module    NA          ok
4    248    Virtual Ethernet Module    NA          ok
...
Mod  MAC-Address(es)                         Serial-Num
---  --------------------------------------  ----------
1    00-19-07-6c-5a-a8 to 00-19-07-6c-62-a8  NA
3    02-00-0c-00-03-00 to 02-00-0c-00-03-80  NA
4    02-00-0c-00-04-00 to 02-00-0c-00-04-80  NA
...

Slots 1 and 2 are reserved for VSMs. New host VEMs will begin at slot 3.

After a host has been added and the VEM has been installed successfully, the VEM appears as
a module on the VSM CLI, like modules that are added to a physical chassis.
Note: Slots 1 and 2 are reserved for the VSMs. New host VEMs start from slot 3.


Verify the New VEMs from the VSM (Cont.)


The show module vem map command shows the
status of all VEMs, as well as the UUID of the host on
which the VEM runs.
VSM-1# show module vem map
Mod  Status      UUID
---  ----------  ------------------------------------
3    powered-up  34343937-3638-3355-5630-393037415833
4    powered-up  34343937-3638-3355-5630-393037415834

This is the UUID of the host on which the VEM resides.


The show module vem map command shows the status of all VEMs, as well as the
Universally Unique Identifier (UUID) of the host on which the VEM runs. This command can
be used for verification purposes.


Add a VM to a VSM Port Group


Add a VM to a VSM Port Group
1. Right-click the desired VM.
2. Choose Edit Settings.


To add a VM to a VSM port group, right-click the VM, then choose Edit Settings.

Add a VM to a VSM Port Group (Cont.)


3. Select the NIC to configure.
4. Change the Network Label to the desired VSM port profile.


The VMware administrator adds VMs to the Cisco Nexus 1000V DVS. After the port profile has been created and the corresponding port group has been pushed to vCenter, the port group becomes available for selection as a network label within the VM configuration.
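As a quick cross-check after the VM is connected, the resulting vEthernet interface and the port profile usage can be viewed from the VSM (a sketch; interface numbers and profile names depend on the deployment):

VSM-1# show interface virtual
VSM-1# show port-profile usage

The first command lists each vEthernet interface along with its owning VM, adapter, and host; the second shows which interfaces are currently using each port profile.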


Backing Up a VSM Configuration


This topic discusses how to back up a VSM configuration.

Managing VSM Files and Configurations
- Copy a file from the specified source location to the specified destination location:
  VSM-1# copy [source filesystem:]filename [destination filesystem:]filename
- Save a copy of the running configuration to a remote switch:
  VSM-1# copy system:running-config tftp://10.10.1.1/home/configs/switch3-run.cfg
- Copy a file from bootflash in the active supervisor module to bootflash in the standby supervisor module:
  VSM-1# copy bootflash:system_image bootflash://sup2/system_image

The figure shows the various copy commands that can be used within the Cisco Nexus 1000V CLI to manage configuration files and Cisco NX-OS kickstart and system images. These commands can be issued from EXEC mode.
The first command illustrates the command structure to copy a file from a source to a
destination location. The second command saves a copy of the active running configuration to a
remote switch. The third command copies a file in the bootflash of the active supervisor to the
bootflash of the standby supervisor.


Managing VSM Files and Configurations (Cont.)
- Copy a running configuration to the bootflash file system:
  VSM-1# copy system:running-config bootflash:my-config
- Copy a system image file from the SCP server, identified by an IP version 4 (IPv4) address, to bootflash:
  VSM-1# copy scp://[email protected]/system-image bootflash:system-image
- Copy a script file from the SFTP server, identified by an IPv4 address, to the volatile file system:
  VSM-1# copy sftp://172.16.10.100/myscript.txt volatile:myscript.txt

The first command in this figure can be used to copy the active running configuration to the
bootflash. The saved file will survive a system reset (write erase and reload command sequence).
The second command downloads a copy of the system image file from a Secure Copy Protocol
(SCP) server to the bootflash. The third command copies a script file from a Secure File Transfer
Protocol (SFTP) server to the volatile file system.


Managing VSM Files and Configurations (Cont.)
- Place a backup copy of the running configuration on the bootflash file system (ASCII file):
  VSM-1# copy system:running-config bootflash:my-config
- Copy the file samplefile from the root directory of the bootflash file system to the mystorage directory:
  VSM-1# copy bootflash:samplefile bootflash:mystorage/samplefile
- Copy the source file to the running configuration on the switch, and configure the switch as the file is parsed line by line:
  VSM-1# copy tftp://10.10.1.1/home/configs/switch3-run.cfg system:running-config

The first command in this figure places a backup copy of the active running configuration on the bootflash. The second command saves a file on the bootflash root directory to a different bootflash directory. The third command copies the source file to the active running configuration and configures the switch, line by line, as the file is parsed.
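A minimal backup-and-restore workflow that combines these commands might look as follows (the directory and file name are illustrative; 10.10.1.1 is the example TFTP server address used in the figures):

VSM-1# copy running-config startup-config
VSM-1# copy running-config tftp://10.10.1.1/home/configs/vsm-1-backup.cfg
VSM-1# copy tftp://10.10.1.1/home/configs/vsm-1-backup.cfg running-config

The first command makes the current configuration persistent across a reload, the second keeps an off-box copy, and the third merges a saved copy back into the running configuration.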


vMotion and Port Profile Mobility


This topic discusses vMotion and port profile mobility.

Policy-Based VM Connectivity
Policy definition supports:
- VLAN and PVLAN settings
- ACL, port security, and ACL redirect
- NetFlow collection
- Rate limiting
- QoS marking (CoS/DSCP)
- Remote port mirror (ERSPAN)
(Figure: VMs on a VMware ESX server attach to a Cisco Nexus 1000V VEM, which is managed by the VSM through vCenter.)

Port profiles are created within the Cisco Nexus 1000V CLI by the network administrator and
are pushed out to vCenter. On vCenter, the profiles appear in the vCenter inventory as a port
group.
These port profiles represent policies that include VLANs and PVLANs, ACLs, Layer 2 port
security, NetFlow collection, rate limiting, quality of service (QoS) marking that uses either the
differentiated services code point (DSCP) or class of service (CoS) values, and Encapsulated Remote Switched Port Analyzer (ERSPAN).


Policy-Based VM Connectivity (Cont.)
Example port profile "Web Apps":
- PVLAN 108, isolated
- Security policy = ports 80 and 443
- Rate limit = 100 Mb/s
- QoS priority = medium
- Remote port mirror = yes
Available port groups in vCenter: WEB Apps, HR, DB, Compliance
(Figure: the WEB Apps port group is pushed from the VSM to vCenter and assigned to VM 1 on Server 1.)

In this configuration, a port profile named Web Apps has been created and assigns a security
policy for TCP ports 80 and 443 to the isolated secondary PVLAN 108. The profile rate-limits
the port to 100 Mb/s with a QoS medium priority level and enables remote port mirroring.
This policy has been state enabled and pushed out to vCenter, where it appears within the vCenter
inventory. The policy has been assigned to a vNIC within VM 1 on server 1.
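A sketch of how such a policy might be expressed on the VSM is shown below. Only the isolated PVLAN 108 and the stated intent come from the figure; the primary PVLAN number, the ACL name, and the QoS policy name are illustrative assumptions, and the referenced ACL and policy map would be defined separately before the profile is enabled.

VSM-1(config)# port-profile type vethernet WEB-Apps
VSM-1(config-port-prof)# vmware port-group
VSM-1(config-port-prof)# switchport mode private-vlan host
VSM-1(config-port-prof)# switchport private-vlan host-association 100 108
VSM-1(config-port-prof)# ip port access-group WEB-80-443 in
VSM-1(config-port-prof)# service-policy type qos input WEB-RATE-LIMIT
VSM-1(config-port-prof)# no shutdown
VSM-1(config-port-prof)# state enabled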


Mobility of Security and Network Properties
1. vCenter initiates vMotion (manual/DRS) and notifies the Cisco Nexus 1000V.
2. During VM replication, the Cisco Nexus 1000V copies the VM port state to the new host.
Mobile properties include:
- Port policy
- Interface state and counters
- Flow statistics
- Remote port mirror session
(Figure: VM1 moves from Server 1 to Server 2; each VMware ESX server runs a Cisco Nexus 1000V VEM, and both VEMs are managed by the same VSM through vCenter.)

If a manual or automatic vMotion event occurs, the Cisco Nexus 1000V is notified of the
process. During VM replication on the secondary host, the primary VSM copies the port state
to the new host.
The mobility properties that are retained and copied to the new host include the port policy,
interface state and counters, flow statistics, and remote Switched Port Analyzer (SPAN)
session.


Mobility of Security and Network Properties (Cont.)
1. vCenter initiates vMotion (manual/DRS) and notifies the Cisco Nexus 1000V.
2. During VM replication, the Cisco Nexus 1000V copies the VM port state to the new host.
3. When vMotion completes, the port on the new ESX host is brought up and the VM MAC address is announced to the network.
Network update:
- An ARP for VM1 is sent to the network.
- Flows to the VM1 MAC address are redirected to Server 2.

After the server administrator, operating within vSphere, assigns these port profiles to either
uplink or vNIC ports, these policies survive both manual and automatic vMotion events.
When the vMotion process is completed, the port on the new ESX host is made active and the
VM MAC address is announced to the network.


Summary
This topic summarizes the key points that were discussed in this lesson.

- Port profiles are used to configure multiple ports globally.
- Port profiles reduce administrative time and minimize opportunities for error.
- After a port profile has been created, it is passed to vCenter; new port groups will be available for VMs.
- VLANs and PVLANs can be configured within the CLI and associated with a port profile.
- Port channels are used for link aggregation, redundancy, and load balancing.
- The Cisco Nexus 1000V supports two types of port channels: symmetric, from the Cisco Nexus 1000V to a single upstream switch, and asymmetric, from the Cisco Nexus 1000V to two upstream switches.

- All port configuration for VM and uplink ports should be performed by using port profiles.
- Cisco Nexus 1000V image and configuration files can be managed from the VSM CLI.
- Port profile assignment to a VM assures policy mobility during a manual or automatic vMotion event.

Lesson 6

Configuring Cisco UCS Manager for VMware PTS

Overview
The Cisco M81KR represents a quantum leap forward in performance and management in VMware vSphere environments. Implementers must be able to configure Cisco UCS Manager to integrate the M81KR into VMware vSphere deployments. This lesson presents the installation and configuration of the Cisco Virtual Interface Card (VIC) and its connectivity to VMware vCenter. The lesson also describes the creation of static and dynamic virtual network interface cards (vNICs) and static virtual host bus adapters (vHBAs), and the creation and assignment of port profiles.

Objectives
Upon completing this lesson, you will be able to describe the installation and configuration of the Cisco VIC and the creation of vNICs, vHBAs, and port profiles. This ability includes being able to meet these objectives:
- Describe how to install the Cisco UCS Manager extension in vCenter
- Describe how to configure Cisco UCS Manager to connect to vCenter
- Describe how to configure uplink and vEthernet profiles
- Describe how to configure service profiles that contain dynamic NICs
- Describe how to configure vMotion hosts and port profile mobility with the Cisco M81KR

Install Cisco UCS Manager Extension in vCenter


This topic discusses how to install the Cisco Unified Computing System (UCS) Manager
extension in vCenter.

Preparing the Environment
- VMware license required: VMware Enterprise Plus
- ESX versions supported: ESX 4.x, ESXi 4.x
- Cisco UCS Manager versions supported: 1.2(1d) or higher
- Cisco UCS and upstream switches configured with VLANs
- Datacenter object configured in Cisco UCS Manager

Before integrating Cisco UCS Manager with vCenter, you should verify that a few
requirements are in place. VMware vCenter requires an Enterprise Plus license to create a
distributed virtual switch (DVS), which is how Cisco Virtual Network Link (VN-Link) in
hardware appears in vCenter.
Support for DVSs in vCenter starts in VMware ESX 4.0, and support for the Pass-Through
Switch (PTS) DVS starts with ESX 4.0 Update 1. Cisco UCS Manager supports the VIC PTS,
starting in Cisco UCS Manager Release 1.2(1d).
Other requirements include configuring your upstream network switches with the proper
VLANs that you want to expose to your virtual machines (VMs). You must also configure a
datacenter object in vCenter to contain the PTS DVS.


vCenter Integration Methods
With wizard:
- Install plug-in on vCenter server.
- Define DVS.
- Define port profile and port profile client.
- Apply port profile to VMs.
Without wizard:
- Same steps as wizard.
- No guidance in process.

You can use two methods to integrate vCenter with Cisco UCS Manager: using a wizard or
manual integration. The wizard guides you through all the required steps. Manual integration
requires that you know which steps to perform and in which order.


vCenter Integration Wizard

The figure depicts the vCenter integration wizard. The left-hand side of the window shows the
four steps that the wizard guides you through:
1. Install the plug-in.
2. Configure the DVS.
3. Produce port profiles in Cisco UCS Manager.
4. Consume port profiles in vCenter.


Downloading the Security Plug-In in Cisco UCS Manager

The security plug-in is a Secure Sockets Layer (SSL) certificate that Cisco UCS Manager
generates. You can use the default key that Cisco UCS Manager creates (as shown in the
figure), or you can manually modify the key. Either way, you must download the certificate in a
file to your client machine so that you can then upload it into vCenter.
In a lab environment, set the lifecycle times to a range of 1 to 5 minutes. If you need to remove
all port profiles, Cisco UCS Manager locks the configurations for a default period of 15
minutes.


Installing the Security Plug-In in vCenter
(Figure: Before installation, right-click the white space in the Plug-in Manager to open the menu. A second screenshot shows the Plug-in Manager after installation.)

To install the certificate into vCenter, navigate to the plug-in manager. Right-click the available
plug-ins area, and choose Install Plug-in from the menu.
You are prompted to navigate your client machine file system to the location to which you
downloaded the plug-in from Cisco UCS Manager.


Configure Cisco UCS Manager to Connect to vCenter

This topic discusses how to configure Cisco UCS Manager to connect to vCenter.

Using Folders
Folders in vCenter:
- Organize objects (servers, VMs, switches, datacenters, and so on).
- Assign privileges to users to act on folder objects.
Folders in Cisco UCS Manager contain:
- DVS objects (you must create a folder to contain DVS objects).
- Datacenter objects (you can create a folder to contain datacenter objects).

Folders are container objects in vCenter. Folders are used to organize other vCenter objects,
such as servers, VMs, switches, and data centers. You can also use folders to assign privileges
to users so that they can act on the objects in the folder.
When you integrate vCenter with Cisco UCS Manager, the folders are used to contain DVS and
datacenter objects only. You must configure a folder to contain the DVS objects, whereas you
may decide whether or not to create folders to contain datacenter objects.
Note: Although your folder names do not need to match those used in vCenter, you should use the same names to simplify correlation.

Configure Uplink and vEthernet Profiles


This topic discusses configuring uplink and vEthernet profiles.

QoS Policy

To associate a quality of service (QoS) policy with a port profile, you first must enable the
corresponding QoS system class and create a QoS policy with that class. By default, all
Ethernet traffic is delivered with best effort priority. You can enable and use bronze, silver,
gold, and platinum classes, as shown in the figure.
For each of these classes, you can define the class of service (CoS) value, whether to tolerate
packet drop, associated weight (for platinum class only, relative to best effort and Fibre
Channel traffic), maximum transmission unit (MTU), and whether to optimize multicast traffic.
If you enable jumbo frames for any interface, you further limit the total number of PTS
interfaces that you can configure, as described earlier in this lesson.


Creating a Port Profile
Port profile contains:
- QoS policy
- Network control policy
- Max ports
- Pin group
- VLANs

A port profile contains configuration information that ultimately is associated with a VM network interface. The configuration includes QoS, network control, maximum number of ports, a pin group, and one or more VLANs.


Network Control Policy

The network control policy allows you to enable or disable Cisco Discovery Protocol
membership, define which action Cisco UCS Manager takes on link failures, and decide
whether to allow MAC forging.


LAN Pin Group
Dynamic pinning:
- Cisco UCS Manager assigns vNICs to uplinks
- Round robin
- Automatic repinning on uplink failure
Static pinning:
- Manually assign vNICs to uplinks
- Admin responsible for load
- No repinning on uplink failure

The default behavior for Cisco UCS Manager is to dynamically assign (or pin) all vNICs to
available uplinks, by using a round-robin algorithm. The benefit of this approach is that Cisco
UCS Manager automatically repins a vNIC when the uplink to which it is pinned has failed.
The disadvantage of this approach is that the round-robin algorithm might not yield optimal
overall traffic distribution across uplinks.
Alternatively, you can statically pin vNICs to uplinks. The benefit of this approach is that you
can define the overall traffic distribution for all the vNICs that you statically pin. The
disadvantage of this approach is that Cisco UCS Manager does not automatically repin such a
vNIC upon uplink failure.


Creating a Port Profile Client
- Triplet (datacenter, folder, DVS)
- Can have multiple clients for a profile

A port profile client defines the consumer of a port profile. The consumer is defined as a DVS
in a folder that is associated with a datacenter object in vCenter. You can associate the same
port profile with multiple port profile clients.


Configure Service Profiles with Dynamic NICs


This topic discusses configuring service profiles with dynamic NICs.

Calculating the Deployment Limits with the M81KR VIC
Per Cisco UCS blade:
- Max number of vHBAs = 16
- Max number of interfaces (vHBAs + vNICs + PTS) = (15 * n) - 2
- Max number of PTS interfaces = [26 - (#vHBA + #vNIC)] * 4
- Further limits apply if using jumbo frames.
Per Cisco UCS 6100:
- 64 vHBAs
- 1000 virtual interfaces (vHBAs + vNICs + PTS)

There are several limits to consider when you are trying to figure out how many static and
dynamic interfaces you can configure on an M81KR VIC. The M81KR can support a
maximum of 128 interfaces, but this number is further reduced by the limits that are defined in
the figure.
The maximum number of interfaces that you can configure is based on a physical limit that is defined in the Cisco UCS 6100 Series Fabric Interconnects. The formula is (15 * n) - 2, where n is the number of server links that connect the I/O module (IOM) to the fabric interconnect. For example, with a single link, you can configure (15 * 1) - 2, or 13, total interfaces. These interfaces include the sum of vHBAs, static vNICs, and dynamic (or PTS) vNICs.
The maximum number of PTS interfaces that you can configure is given as [26 - (#vHBA + #vNIC)] * 4. For example, if you configure two vHBAs and four vNICs, then you can configure no more than [26 - (2 + 4)] * 4, or 80, PTS interfaces. Recall, however, that you are also bound by the first formula as a function of the number of server links.
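As a worked example that combines both limits (the link and interface counts are illustrative): with two server links per IOM (n = 2), a blade supports (15 * 2) - 2 = 28 total interfaces. If the service profile defines 2 vHBAs and 4 static vNICs, the PTS formula allows [26 - (2 + 4)] * 4 = 80 dynamic interfaces, but the link-based limit leaves only 28 - 6 = 22, so 22 is the effective maximum number of PTS interfaces for that blade.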


Configuring a Dynamic vNIC Connection Policy
- Can configure multiple policies per Cisco Unified Computing System.
- Adapter policy for PTS is VMware PassThru.
- Interfaces are protected by default.

A dynamic vNIC connection policy defines the number of PTS interfaces that you want to instantiate on an M81KR VIC. Recall the limitations that were defined earlier in this lesson.
Because you can configure varying numbers of server links to connect IOMs from different
chassis to the fabric interconnects, you can feasibly have some M81KR VICs with more PTS
interfaces than others. However, note that when you migrate a service profile with an
associated connection policy from one chassis to another, you might run into problems.
Specifically, if the number of IOM links on the target chassis is fewer than the number of IOM
links on the source chassis, then fewer PTS interfaces will be available on those blades.


Associating a Dynamic vNIC Connection Policy to a Profile
- Cisco UCS Manager performs the calculation at policy association.
- If there are insufficient resources, you get a config failure.
- This profile change incurs a server reboot.

When you associate a dynamic vNIC connection policy to a service profile, Cisco UCS
Manager calculates the total number of PTS interfaces that you can configure. If that number is
fewer than the number that you specify in your connection policy, Cisco UCS Manager gives
you a configuration failure.
Note: The server must reboot when you associate a connection policy to a service profile.

Viewing Dynamic vNICs in Cisco UCS Manager

After successfully associating a connection policy to a service profile, you can view the
instantiated PTS interfaces in Cisco UCS Manager, as shown in the figure. These interfaces are
consumed dynamically as you configure network interfaces on your VMs.


Viewing Dynamic vNICs in ESX

The dynamic vNICs that have been configured on the ESX server can be viewed from the ESX
CLI, as shown in the figure. These dynamic vNICs are named vf_vmnicX.


Configure vMotion and M81KR Port Profile Mobility

This topic discusses configuring vMotion and M81KR port profile mobility.

Add the Hosts to the DVS in vCenter

To add a host to the DVS, navigate to the DVS in the networking inventory view, right-click the DVS, and choose Add Host to the Distributed Virtual Switch from the drop-down menu.
When adding a host to the DVS, you must select static interfaces that have already been
configured in Cisco UCS Manager to use as uplinks to the DVS. Unlike with Cisco Nexus
1000V, you do not manually configure the uplink port groups. These ports are configured
automatically for you.
Note: If you use VMware Update Manager to install the VEMs, then when adding hosts to the DVS, VMware Update Manager remediates the ESX servers to ensure that they have the correct version of the VEM installed.

View Hosts in vCenter

After adding all hosts to the DVS, you can view their status from the networking inventory
view, as shown in the figure.


Viewing Available Port Profiles in vCenter

Port profiles that are defined in Cisco UCS Manager appear as port groups in vCenter. You can
view all available port groups in your port profile client by navigating to the appropriate
datacenter, folder, and DVS in your VMware vSphere client. In the figure, the port group
named rhvmprofile is available for consumption.


Assigning a Port Profile to the VM Interface

You can associate any port group available in the port profile client to any VM network
interface. The figure shows the association of the port group named rhprofile to the RH1 VM
network adapter.


Viewing Port Profile Consumption in Cisco UCS Manager
(Figure: the vNIC/vEthernet interface number on the DVS in vCenter and the corresponding virtual interface number in Cisco Unified Computing System.)

Going back to the Cisco UCS Manager interface, you can now see, under the VM tab, which
port profiles have been consumed by which port profile clients, as shown in the figure. In this
example, an interface vNIC 1692 was instantiated on the DVS in the port profile client. This
interface is associated with a virtual interface 2448 in Cisco Unified Computing System.


Displaying Virtual Interface/vEthernet Packet Counters in the Cisco UCS Manager CLI

Now that you know the vEthernet interface number that is associated with your VM network
interface, you can view packet counters and status for that interface in the connect nxos shell of
Cisco UCS Manager.
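A sketch of that check from the Cisco UCS Manager CLI, using the virtual interface number cited in the previous example (2448 here; the number differs in every deployment):

UCS-A# connect nxos
UCS-A(nxos)# show interface vethernet 2448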


Summary
This topic summarizes the key points that were discussed in this lesson.

- The PTS feature requires configuration on both Cisco UCS Manager and vCenter, to provide two-way communication.
- The Cisco UCS Manager extension is installed as a plug-in to vCenter.
- Datacenter, PTS, and folder objects are used to organize and assign parameters to hosts and VMs within Cisco UCS Manager.
- Uplink and vEthernet port profiles are created within Cisco UCS Manager and pushed out to vCenter.
- Dynamic NICs are created by using a policy wizard within Cisco UCS Manager, are assigned within the service profile, and appear within both the Cisco UCS Manager and vCenter inventories.

Module Summary
This topic summarizes the key points that were discussed in this module.

- Cisco VN-Link enables policy-based VM connectivity and automated virtual switch port provisioning.
- VMware vDS allows a centralized configuration point for vSwitches within a VMware ESX cluster.
- The Cisco Nexus 1000V architecture consists of a virtual switch chassis with a VSM and one VEM per ESX host.
- Port profiles should be used for all interface configurations, to ensure consistency across configuration of like devices.
- The Cisco Nexus 1010 hardware appliance supports as many as four domain instances of software-based virtual switching.
- The Cisco M81KR/P81E VIC uses VNTag to deliver a hardware-based VN-Link solution.

This module introduced the Cisco Nexus 1000V Distributed Virtual Switch (DVS), which is a
third-party plug-in to VMware vCenter. The features and benefits of the Cisco Nexus 1000V
DVS were discussed, as were detailed installation and configuration methods. In addition, a
comparison of the Cisco Nexus 1000V, the Cisco Nexus 1010 Virtual Services Appliance, and the Cisco M81KR/P81E Virtual Interface Card (VIC) was presented.


Module Self-Check
Use the questions here to review what you learned in this module. The correct answers and
solutions are found in the Module Self-Check Answer Key.
Q1) What is a feature of the Cisco Nexus 1000V Series Switch? (Source: Evaluating the Cisco Nexus 1000V Switch)
A) port profiles
B) Fibre Channel
C) hypervisor
D) vMotion

Q2) Which VMware feature enables automated migration of VMs between physical hosts, in response to defined thresholds? (Source: Evaluating the Cisco Nexus 1000V Switch)
A) fault tolerance
B) DRS
C) PTS
D) clustering

Q3) Which feature does the VMware vDS introduce that does not exist in traditional virtual switches? (Source: Evaluating the Cisco Nexus 1000V Switch)
A) hypervisor
B) service console
C) port groups that span the data center
D) VMkernel

Q4) Which VN-Link implementation does the Cisco M81KR VIC use? (Source: Configuring Cisco UCS Manager for VMware PTS)
A) software
B) virtual
C) hardware
D) integrated

Q5) In traditional VMware deployments, which group configures virtual switches? (Source: Evaluating the Cisco Nexus 1000V Switch)
A) network
B) development
C) storage
D) server

Q6) Which property can a Cisco Nexus 1000V port profile define? (Source: Configuring Basic Cisco Nexus 1000V Networking)
A) link speed
B) VLANs
C) VM guest operating system
D) switch high availability

Q7) Cisco Nexus 1000V VSMs can be configured for which type of high availability? (Source: Characterizing Cisco Nexus 1000V Architecture)
A) standalone
B) high-availability standby
C) primary
D) secondary

Q8) What are the two types of Cisco Nexus 1000V port profiles? (Choose two.) (Source: Configuring Basic Cisco Nexus 1000V Networking)
A) uplink
B) physical
C) vEthernet
D) server based


Module Self-Check Answer Key
Q1)
Q2)
Q3)
Q4)
Q5)
Q6)
Q7)
Q8) A, C
