Cisco UCS Manager Infrastructure Management Using the CLI, Release 4.0
First Published: 2018-08-14
Last Modified: 2022-09-21
Americas Headquarters
Cisco Systems, Inc.
170 West Tasman Drive
San Jose, CA 95134-1706
USA
http://www.cisco.com
Tel: 408 526-4000
800 553-NETS (6387)
Fax: 408 527-0883
© 2018–2022 Cisco Systems, Inc. All rights reserved.
CONTENTS
PREFACE Preface xi
Audience xi
Conventions xi
Related Cisco UCS Documentation xiii
Documentation Feedback xiii
CHAPTER 2 Overview 3
Recommissioning a Chassis 47
Renumbering a Chassis 48
Turning On the Locator LED for a Chassis 50
Turning Off the Locator LED for a Chassis 51
Acknowledging an IO Module 53
Resetting the I/O Module 54
Resetting an I/O Module from a Peer I/O Module 55
Preface
• Audience, on page xi
• Conventions, on page xi
• Related Cisco UCS Documentation, on page xiii
• Documentation Feedback, on page xiii
Audience
This guide is intended primarily for data center administrators with responsibilities and expertise in one or
more of the following:
• Server administration
• Storage administration
• Network administration
• Network security
Conventions
Text Type: Indication
• GUI elements: GUI elements such as tab titles, area names, and field labels appear in this font. Main titles such as window, dialog box, and wizard titles appear in this font.
• TUI elements: In a Text-based User Interface, text the system displays appears in this font.
• System output: Terminal sessions and information that the system displays appear in this font.
• string: A nonquoted set of characters. Do not use quotation marks around the string or the string will include the quotation marks.
• !, #: An exclamation point (!) or a pound sign (#) at the beginning of a line of code indicates a comment line.
Note Means reader take note. Notes contain helpful suggestions or references to material not covered in the
document.
Tip Means the following information will help you solve a problem. The tip information might not be troubleshooting or even an action, but could be useful information, similar to a Timesaver.
Timesaver Means the described action saves time. You can save time by performing the action described in the
paragraph.
Caution Means reader be careful. In this situation, you might perform an action that could result in equipment
damage or loss of data.
Related Cisco UCS Documentation
Documentation Feedback
To provide technical feedback on this document, or to report an error or omission, please send your comments
to [email protected]. We appreciate your feedback.
CHAPTER 1
New and Changed Information
• New and Changed Information, on page 1
Table 1: New Features and Changed Behavior in Cisco UCS Manager, Release 4.0(4a)
• Cisco UCS 6454 Fabric Interconnect supports 16 unified ports: With release 4.0(4a) and later, the Cisco UCS 6454 Fabric Interconnect supports 16 unified ports (ports 1 - 16). See Cisco UCS 6454 Fabric Interconnect Overview, on page 9.
• Cisco UCS-IOM-2304V2 I/O module: This release introduces the Cisco UCS-IOM-2304V2 I/O module, which is based on the Cisco UCS-IOM-2304 I/O module. See I/O Module Management in Cisco UCS Manager CLI, on page 53.
This section provides information on new features and changed behavior in Cisco UCS Manager, Release 4.0(2a).
Table 2: New Features and Changed Behavior in Cisco UCS Manager, Release 4.0(2a)
• Breakout Uplink Ports: Cisco UCS Manager Release 4.0(2) and later releases support splitting a single 40/100G QSFP port into four 10/25G ports using a supported breakout cable. These ports can be used only as Ethernet uplink or FCoE uplink ports connecting to a 10/25G switch. They cannot be configured as server ports, FCoE storage ports, appliance ports, or monitoring ports. See Port Breakout Functionality on Cisco UCS 6454 Fabric Interconnects, on page 13.
This section provides information on new features and changed behavior in Cisco UCS Manager, Release 4.0(1a).
Table 3: New Features and Changed Behavior in Cisco UCS Manager, Release 4.0(1a)
• Cisco UCS 6454 Fabric Interconnect: This release introduces the Cisco UCS 6454 Fabric Interconnect, which supports 10/25 Gigabit ports in the fabric with 40/100 Gigabit uplink ports. See Cisco UCS 6454 Fabric Interconnect Overview, on page 9.
• Cisco UCS VIC 1455: In Cisco UCS Manager Release 4.0(1a), the Cisco UCS VIC 1455 is supported. See Port-Channeling, on page 31.
• Cisco UCS C125 M5 Server: Cisco UCS Manager extends support for all existing features to the Cisco UCS C125 M5 Server. See Rack-Enclosure Server Management, on page 108.
CHAPTER 2
Overview
• Cisco UCS Manager User CLI Documentation, on page 3
• Infrastructure Management Guide Overview, on page 4
• Cisco Unified Computing System Overview, on page 5
• Cisco UCS Building Blocks and Connectivity, on page 7
• Cisco UCS Manager Getting Started Guide: Discusses Cisco UCS architecture and Day 0 operations, including Cisco UCS Manager initial configuration and configuration best practices.
• Cisco UCS Manager Administration Guide: Discusses password management, role-based access configuration, remote authentication, communication services, CIMC session management, organizations, backup and restore, scheduling options, BIOS tokens, and deferred deployments.
• Cisco UCS Manager Infrastructure Management Guide: Discusses the physical and virtual infrastructure components used and managed by Cisco UCS Manager.
• Cisco UCS Manager Firmware Management Guide: Discusses downloading and managing firmware, upgrading through Auto Install, upgrading through service profiles, directly upgrading at endpoints using firmware auto sync, managing the capability catalog, deployment scenarios, and troubleshooting.
• Cisco UCS Manager Server Management Guide: Discusses the new licenses, registering Cisco UCS domains with Cisco UCS Central, power capping, server boot, server profiles, and server-related policies.
• Cisco UCS Manager Storage Management Guide: Discusses all aspects of storage management, such as SAN and VSAN, in Cisco UCS Manager.
• Cisco UCS Manager Network Management Guide: Discusses all aspects of network management, such as LAN and VLAN connectivity, in Cisco UCS Manager.
• Cisco UCS Manager System Monitoring Guide: Discusses all aspects of system and health monitoring, including system statistics, in Cisco UCS Manager.
• Cisco UCS S3260 Server Integration with Cisco UCS Manager: Discusses all aspects of management of UCS S-Series servers that are managed through Cisco UCS Manager.
• I/O Module Management: Overview of I/O modules and procedures to manage them.
• Power Management in Cisco UCS: Overview of UCS power management policies, global power policies, and power capping.
• Blade Server Management: Overview of blade servers and procedures to manage them.
• S3X60 Server Node Management: Overview of the S3X60 server node and procedures to manage it.
Cisco Unified Computing System Overview
Architectural Simplification
The simplified architecture of Cisco UCS reduces the number of required devices and centralizes switching
resources. By eliminating switching inside a chassis, network access-layer fragmentation is significantly
reduced. Cisco UCS implements Cisco unified fabric within racks and groups of racks, supporting Ethernet
and Fibre Channel protocols over 10 Gigabit Cisco Data Center Ethernet and Fibre Channel over Ethernet
(FCoE) links. This radical simplification reduces the number of switches, cables, adapters, and management
points by up to two-thirds. All devices in a Cisco UCS domain remain under a single management domain,
which remains highly available through the use of redundant components.
High Availability
The management and data planes of Cisco UCS are designed for high availability, with redundant access-layer fabric interconnects. In addition, Cisco UCS supports existing high availability and disaster recovery solutions for the data center, such as data replication and application-level clustering technologies.
Scalability
A single Cisco UCS domain supports multiple chassis and their servers, all of which are administered through one Cisco UCS Manager. For more detailed information about scalability, contact your Cisco representative.
Flexibility
A Cisco UCS domain allows you to quickly align computing resources in the data center with rapidly changing
business requirements. This built-in flexibility is determined by whether you choose to fully implement the
stateless computing feature. Pools of servers and other system resources can be applied as necessary to respond
to workload fluctuations, support new applications, scale existing software and business services, and
accommodate both scheduled and unscheduled downtime. Server identity can be abstracted into a mobile
service profile that can be moved from server to server with minimal downtime and no need for additional
network configuration.
With this level of flexibility, you can quickly and easily scale server capacity without having to change the
server identity or reconfigure the server, LAN, or SAN. During a maintenance window, you can quickly do
the following:
• Deploy new servers to meet unexpected workload demand and rebalance resources and traffic.
• Shut down an application, such as a database management system, on one server and then boot it up
again on another server with increased I/O capacity and memory resources.
Cisco UCS Building Blocks and Connectivity
As shown in the figure above, the primary components included within Cisco UCS are as follows:
• Cisco UCS Manager—Cisco UCS Manager is the centralized management interface for Cisco UCS. For more information on Cisco UCS Manager, see Introduction to Cisco UCS Manager in the Cisco UCS Manager Getting Started Guide.
• Cisco UCS Fabric Interconnects—The Cisco UCS Fabric Interconnect is the core component of Cisco
UCS deployments, providing both network connectivity and management capabilities for the Cisco UCS
system. The Cisco UCS Fabric Interconnects run the Cisco UCS Manager control software and consist
of the following components:
• Cisco UCS 6454 Fabric Interconnect, Cisco UCS 6332 Series Fabric Interconnects, Cisco UCS
6200 Series Fabric Interconnects, and Cisco UCS Mini
• Transceivers for network and storage connectivity
• Expansion modules for the various Fabric Interconnects
• Cisco UCS Manager software
For more information on Cisco UCS Fabric Interconnects, see Cisco UCS Fabric Infrastructure Portfolio,
on page 8.
• Cisco UCS I/O Modules and Cisco UCS Fabric Extender—IOM modules are also known as Cisco
FEXs or simply FEX modules. These modules serve as line cards to the FIs in the same way that Cisco
Nexus Series switches can have remote line cards. IOM modules also provide interface connections to
blade servers. They multiplex data from blade servers and provide this data to FIs and do the same in
the reverse direction. In production environments, IOM modules are always used in pairs to provide
redundancy and failover.
Important The 40G backplane setting is not applicable for 22xx IOMs.
• Cisco UCS Blade Server Chassis—The Cisco UCS 5100 Series Blade Server Chassis is a crucial
building block of Cisco UCS, delivering a scalable and flexible architecture for current and future data
center needs, while helping reduce total cost of ownership.
• Cisco UCS Blade and Rack Servers—Cisco UCS Blade servers are at the heart of the UCS solution.
They come in various system resource configurations in terms of CPU, memory, and hard disk capacity.
The Cisco UCS rack-mount servers are standalone servers that can be installed and controlled individually.
Cisco provides Fabric Extenders (FEXs) for the rack-mount servers. FEXs can be used to connect and
manage rack-mount servers from FIs. Rack-mount servers can also be directly attached to the fabric
interconnect.
Small and Medium Businesses (SMBs) can choose from different blade configurations as per business
needs.
• Cisco UCS I/O Adapters—Cisco UCS B-Series Blade Servers are designed to support up to two network
adapters. This design can reduce the number of adapters, cables, and access-layer switches by as much
as half because it eliminates the need for multiple parallel infrastructure for both LAN and SAN at the
server, chassis, and rack levels.
Note The Cisco UCS 6100 Series Fabric Interconnects and Cisco UCS 2104 I/O Modules have reached end
of life.
Expansion Modules
The Cisco UCS 6200 Series supports expansion modules that can be used to increase the number of 10G,
FCoE, and Fibre Channel ports.
• The Cisco UCS 6248 UP has 32 ports on the base system. It can be upgraded with one expansion module
providing an additional 16 ports.
• The Cisco UCS 6296 UP has 48 ports on the base system. It can be upgraded with three expansion
modules providing an additional 48 ports.
Note The Cisco UCS 6454 Fabric Interconnect supported 8 unified ports (ports 1 - 8) with Cisco UCS Manager
4.0(1) and 4.0(2), but with release 4.0(4) and later it supports 16 unified ports (ports 1 - 16).
The Cisco UCS 6454 Fabric Interconnect also has one network management port, one console port for setting
the initial configuration, and one USB port for saving or loading configurations. The FI also includes L1/L2
ports for connecting two fabric interconnects for high availability.
The Cisco UCS 6454 Fabric Interconnect also contains a CPU board that consists of:
• Intel Xeon D-1528 v4 Processor, 1.6 GHz
• 64 GB of RAM
• 8 MB of NVRAM (4 x NVRAM chips)
• 128 GB SSD (bootflash)
1. Ports 1-16: Unified Ports (10/25 Gbps Ethernet or FCoE, or 8/16/32 Gbps Fibre Channel).
Note: When using Cisco UCS Manager releases earlier than 4.0(4), only ports 1-8 are Unified Ports.
2. Ports 17-44: 10/25 Gbps Ethernet or FCoE.
Note: When using Cisco UCS Manager releases earlier than 4.0(4), ports 9-44 are 10/25 Gbps Ethernet or FCoE.
3. Ports 45-48: 1/10/25 Gbps Ethernet or FCoE.
4. Uplink Ports 49-54: 40/100 Gbps Ethernet or FCoE. Each of these ports can be 4 x 10/25 Gbps Ethernet or FCoE uplink ports when using an appropriate breakout cable.
The Cisco UCS 6454 Fabric Interconnect chassis has two power supplies and four fans. Two of the fans
provide front to rear airflow.
Figure 4: Cisco UCS 6454 Fabric Interconnect Front View
1. Power supply and power cord connector.
2. Fans 1 through 4, numbered left to right, when facing the front of the chassis.
Ports on the Cisco UCS 6454 Fabric Interconnects
Note When you configure a port on a Fabric Interconnect, the administrative state is automatically set to
enabled. If the port is connected to another device, this may cause traffic disruption. The port can be
disabled and enabled after it has been configured.
The following table summarizes the Cisco UCS 6454 Fabric Interconnects.
• Compatibility with the IOM: UCS 2204, UCS 2208, UCS 2408
• Fan Modules: 4
Port Configuration
The front ports on the Cisco UCS 6454 Fabric Interconnect can be configured as the following port types:
Port Breakout Functionality on Cisco UCS 6454 Fabric Interconnects
When you break out a 40G port into 10G ports or a 100G port into 25G ports, the resulting ports are numbered
using a 3-tuple naming convention. For example, the breakout ports of the second 40-Gigabit Ethernet port
are numbered as 1/50/1, 1/50/2, 1/50/3, 1/50/4.
The following image shows the rear view of the Cisco UCS 6454 fabric interconnect, and includes the ports
that support breakout port functionality:
Figure 6: Cisco UCS 6454 Fabric Interconnect Rear View
1. Ports 1-16: Unified Ports (10/25 Gbps Ethernet or FCoE, or 8/16/32 Gbps Fibre Channel).
2. Ports 17-44: 10/25 Gbps Ethernet or FCoE.
3. Ports 45-48: 1/10/25 Gbps Ethernet or FCoE.
4. Uplink Ports 49-54: 40/100 Gbps Ethernet or FCoE.
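As a sketch of how one of the uplink ports above can be broken out from the Cisco UCS Manager CLI (the fabric ID a and port 1/49 are illustrative; the scope cabling / create breakout path is the documented 4.0 syntax, but verify the exact commands against the configuration guide for your release):

```
UCS-A# scope cabling
UCS-A /cabling # scope fabric a
UCS-A /cabling/fabric # create breakout 1 49
UCS-A /cabling/fabric/breakout* # commit-buffer
```

Committing a breakout configuration causes the fabric interconnect to reboot, so plan the change for a maintenance window.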
Software Feature Configuration
• MAC Security—In Cisco UCS Manager Release 4.0(1), Cisco UCS 6454 Fabric Interconnects did not
support MAC security. Cisco UCS Manager Release 4.0(2) and later releases support MAC security on
Cisco UCS 6454 Fabric Interconnects.
• Breakout Uplink Ports—Cisco UCS Manager Release 4.0(2) and later releases support splitting a single
40/100G QSFP port into four 10/25G ports using a supported breakout cable. These ports can be used
only as Ethernet uplink or FCoE uplink ports connecting to a 10/25G switch. They cannot be configured
as server ports, FCoE storage ports, appliance ports or monitoring ports.
Cisco UCS 6454 Fabric Interconnects do not support the following software features that were supported on
UCS 6200 and 6300 Series Fabric Interconnects in Cisco UCS Manager 3.2 and earlier releases:
• Chassis Discovery Policy in Non-Port Channel Mode—Cisco UCS 6454 Fabric Interconnects support
only Port Channel mode.
• Chassis Connectivity Policy in Non-Port Channel Mode—Cisco UCS 6454 Fabric Interconnects support
only Port Channel mode.
• Multicast Hardware Hash—Cisco UCS 6454 Fabric Interconnects do not support multicast hardware
hash.
• Service Profiles with Dynamic vNICS—Cisco UCS 6454 Fabric Interconnects do not support Dynamic
vNIC Connection Policies.
• Multicast Optimize—Cisco UCS 6454 Fabric Interconnects do not support Multicast Optimize for QoS.
• NetFlow—Cisco UCS 6454 Fabric Interconnects do not support NetFlow related configuration.
• Port profiles and DVS Related Configurations—Cisco UCS 6454 Fabric Interconnects do not support
configurations related to port profiles and distributed virtual switches (DVS).
Configuration of the following software features has changed for Cisco UCS 6454 Fabric Interconnects:
• Unified Ports—Cisco UCS 6454 Fabric Interconnects support up to 16 unified ports (8 in releases earlier than 4.0(4)), which can be configured as FC. These ports appear at the beginning of the module. On UCS 6200 Series Fabric Interconnects, all ports are unified ports, and FC ports appear towards the end of the module. The Ethernet ports must be contiguous, followed by contiguous FC ports.
• VLAN Optimization—On Cisco UCS 6454 Fabric Interconnects, VLAN port count optimization is
performed through port VLAN (VP) grouping when the PV count exceeds 16000. The following table
illustrates the PV Count with VLAN port count optimization enabled and disabled on Cisco UCS 6454
Fabric Interconnects, Cisco UCS 6300 Series Fabric Interconnects, and Cisco UCS 6200 Series Fabric
Interconnects.
When the Cisco UCS 6454 Fabric Interconnect is in Ethernet switching mode:
• The Fabric Interconnect does not support VLAN Port Count Optimization Enabled
• The Fabric Interconnect supports 16000 PVs, similar to EHM mode, when set to VLAN Port Count
Optimization Disabled
• Limited Restriction on VLAN—Cisco UCS 6454 Fabric Interconnects reserve 128 additional VLANs
for system purposes.
Cisco UCS 6332 Fabric Interconnect:
1. Port lane switch button, port lane LEDs, and L1 and L2 ports.
2. Ports 1-12 and ports 15-26 can operate as 40-Gbps QSFP+ ports, or as 4 x 10-Gbps SFP+ breakout ports. Ports 1-4 support QSFP to SFP or SFP+ (QSA) adapters to provide 1-Gbps/10-Gbps operation. Ports 13 and 14 can operate as 40-Gbps QSFP+ ports; they cannot operate as 4 x 10-Gbps SFP+ breakout ports.
1. Power supply and power cord connector.
2. Fans 1 through 4, numbered left to right, when facing the front of the chassis.
Ports on the Cisco UCS 6300 Series Fabric Interconnects
Cisco UCS 6332-16UP Fabric Interconnect:
1. Port lane switch button, port lane LEDs, and L1 and L2 ports.
2. Ports 1-16 are Unified Ports (UP) that operate either as 1- or 10-Gbps SFP+ fixed Ethernet ports, or as 4-, 8-, or 16-Gigabit Fibre Channel ports.
3. Ports 17-34 operate as 40-Gbps QSFP+ ports, in breakout mode as 4 x 10-Gigabit SFP+ breakout ports, or with a QSA adapter for 10G.
4. Ports 35-40 operate as 40-Gbps QSFP+ ports.
1. Power supply and power cord connector.
2. Fans 1 through 4, numbered left to right, when facing the front of the chassis.
Note When you configure a port on a fabric interconnect, the administrative state is automatically set to
enabled. If the port is connected to another device, this may cause traffic disruption. You can disable
the port after it has been configured.
The following table summarizes the second and third generation ports for the Cisco UCS fabric interconnects.
• Cisco UCS 6324: Fabric Interconnect with 4 unified ports and 1 scalability port; 1 RU form factor; 4 fan modules.
• Cisco UCS 6248 UP: 48-port Fabric Interconnect; 1 RU form factor; 2 fan modules.
• Cisco UCS 6296 UP: 96-port Fabric Interconnect; 2 RU form factor; 5 fan modules.
• Cisco UCS 6332: 32-port Fabric Interconnect; 1 RU form factor; 4 fan modules.
• Cisco UCS 6332-16UP: 40-port Fabric Interconnect; 1 RU form factor; 4 fan modules.
Note Cisco UCS 6300 Series Fabric Interconnects support breakout capability for ports. For more information
on how the 40G ports can be converted into four 10G ports, see Port Breakout Functionality on Cisco
UCS 6300 Series Fabric Interconnects, on page 20.
Port Modes
The port mode determines whether a unified port on the fabric interconnect is configured to carry Ethernet
or Fibre Channel traffic. You configure the port mode in Cisco UCS Manager. However, the fabric interconnect
does not automatically discover the port mode.
Changing the port mode deletes the existing port configuration and replaces it with a new logical port. Any objects associated with that port configuration, such as VLANs and VSANs, are also removed. There is no restriction on the number of times you can change the port mode for a unified port.
Port Types
The port type defines the type of traffic carried over a unified port connection.
By default, unified ports changed to Ethernet port mode are set to the Ethernet uplink port type. Unified ports
changed to Fibre Channel port mode are set to the Fibre Channel uplink port type. You cannot unconfigure
Fibre Channel ports.
Changing the port type does not require a reboot.
Ethernet Port Mode
When you set the port mode to Ethernet, you can configure the following port types:
• Server ports
• Ethernet uplink ports
• Ethernet port channel members
• FCoE ports
• Appliance ports
• Appliance port channel members
• SPAN destination ports
• SPAN source ports
Note For SPAN source ports, configure one of the port types and then configure
the port as SPAN source.
Port Breakout Functionality on Cisco UCS 6300 Series Fabric Interconnects
The process of changing the configuration from 40G to 10G is called breakout, and the process of changing the configuration from [4X]10G to 40G is called unconfigure.
When you break out a 40G port into 10G ports, the resulting ports are numbered using a 3-tuple naming
convention. For example, the breakout ports of the second 40-Gigabit Ethernet port are numbered as 1/2/1,
1/2/2, 1/2/3, 1/2/4.
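Following the numbering example above, a hedged sketch of breaking out port 1/2 on a Cisco UCS 6332 from the Cisco UCS Manager CLI (the slot/port values are illustrative, and the command path may vary by release):

```
UCS-A# scope cabling
UCS-A /cabling # scope fabric a
UCS-A /cabling/fabric # create breakout 1 2
UCS-A /cabling/fabric/breakout* # commit-buffer
```

After the fabric interconnect reboots to apply the change, the resulting ports appear as 1/2/1 through 1/2/4.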
The following image shows the front view for the Cisco UCS 6332 series fabric interconnects, and includes
the ports that may support breakout port functionality:
Figure 11: Cisco UCS 6332 Series Fabric Interconnects Front View
The following image shows the front view for the Cisco UCS 6332-16UP series fabric interconnects, and
includes the ports that may support breakout port functionality:
Figure 12: Cisco UCS 6332-16UP Series Fabric Interconnects Front View
The following image shows the rear view of the Cisco UCS 6300 series fabric interconnects.
Figure 13: Cisco UCS 6300 Series Fabric Interconnects Rear View
1 Power supply
2 Four fans
3 Power supply
4 Serial ports
The breakout configurable ports, and the ports without breakout functionality support, vary by Cisco UCS 6300 Series Fabric Interconnect model.
Important Up to four breakout ports are allowed if QoS jumbo frames are used.
Cisco UCS Mini Infrastructure
The Cisco UCS 5108 Blade Server Chassis is supported with all generations of fabric interconnects.
In the Cisco UCS Mini solution, the Cisco UCS 6324 fabric interconnect is collapsed into the IO Module
form factor, and is inserted into the IOM slot of the blade server chassis. The Cisco UCS 6324 fabric
interconnect has 24 10G ports available on it. Sixteen of these ports are server facing: two 10G ports are dedicated to each of the eight half-width blade slots. The remaining eight ports are divided into groups of four
1/10G Enhanced Small Form-Factor Pluggable (SFP+) ports and one 40G Quad Small Form-factor Pluggable
(QSFP) port, which is called the 'scalability port'.
Cisco UCS Manager Release 3.1(1) introduces support for a second UCS 5108 chassis to an existing
single-chassis Cisco UCS 6324 fabric interconnect setup. This extended chassis enables you to configure an
additional 8 servers. Unlike the primary chassis, the extended chassis supports IOMs. Currently, it supports
UCS-IOM-2204XP and UCS-IOM-2208XP IOMs. The extended chassis can only be connected through the
scalability port on the FI-IOM.
Important Currently, Cisco UCS Manager supports only one extended chassis for UCS Mini.
Cisco UCS Infrastructure Virtualization
Cable Virtualization
The physical cables that connect to physical switch ports provide the infrastructure for logical and virtual
cables. These virtual cables connect to virtual adapters on any given server in the system.
Adapter Virtualization
On the server, you have physical adapters, which provide physical infrastructure for virtual adapters. A virtual
network interface card (vNIC) or virtual host bus adapter (vHBA) logically connects a host to a virtual interface
on the fabric interconnect and allows the host to send and receive traffic through that interface. Each virtual
interface in the fabric interconnect corresponds to a vNIC.
An adapter that is installed on the server appears to the server as multiple adapters through standard PCIe
virtualization. When the server scans the PCIe bus, the virtual adapters that are provisioned appear to be
physically plugged into the PCIe bus.
Server Virtualization
Server virtualization provides the ability to deploy stateless servers. As part of the physical infrastructure,
you have physical servers. However, the configuration of a server is derived from the service profile to which
it is associated. All service profiles are centrally managed and stored in a database on the fabric interconnect.
A service profile defines all the settings of the server, for example, the number of adapters, virtual adapters,
the identity of these adapters, the firmware of the adapters, and the firmware of the server. It contains all the
settings of the server that you typically configure on a physical machine. Because the service profile is
abstracted from the physical infrastructure, you can apply it to any physical server and the physical server
will be configured according to the configuration defined in the service profile. Cisco UCS Manager Server
Management Guide provides detailed information about managing service profiles.
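As an illustrative sketch of the service-profile mobility described above, a profile can be associated with a physical server entirely from the Cisco UCS Manager CLI (the profile name ServInst90 and server location 1/4 are hypothetical; verify the exact syntax against the Server Management Guide for your release):

```
UCS-A# scope org /
UCS-A /org # scope service-profile ServInst90
UCS-A /org/service-profile # associate server 1/4
UCS-A /org/service-profile* # commit-buffer
```

Because the service profile carries the server's identity, the target server boots with that identity without any additional LAN or SAN reconfiguration.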
CHAPTER 3
Equipment Policies
• Chassis/FEX Discovery Policy, on page 27
• Chassis Connectivity Policy, on page 34
• Rack Server Discovery Policy, on page 35
• Aging Time for the MAC Address Table, on page 37
• HA Version Holder Replacement, on page 38
Chassis Links
If you have a Cisco UCS domain in which some chassis are wired with one link, some with two links, some with four links, and some with eight links, Cisco recommends configuring the chassis/FEX discovery policy for the minimum number of links in the domain so that Cisco UCS Manager can discover all chassis.
Tip To establish the highest available chassis connectivity in a Cisco UCS domain where the fabric interconnects are connected to different types of IO Modules that support different maximum numbers of uplinks, select the platform max value. Setting platform max ensures that Cisco UCS Manager discovers the chassis, including its connections and servers, only when the maximum supported number of IOM uplinks is connected per IO Module.
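The tip above can be sketched in the CLI; the chassis/FEX discovery policy is scoped under the root organization (keyword names follow the documented 4.x conventions, but verify them against the configuration guide for your release):

```
UCS-A# scope org /
UCS-A /org # scope chassis-disc-policy
UCS-A /org/chassis-disc-policy # set action platform-max
UCS-A /org/chassis-disc-policy* # commit-buffer
```

Other accepted values for set action include 1-link, 2-link, 4-link, and 8-link.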
After the initial discovery of a chassis, if you change the chassis/FEX discovery policy, acknowledge the IO Modules rather than the entire chassis to avoid disruption. Discovery policy changes can include increasing the number of links between the Fabric Interconnect and the IO Module, or changing the Link Grouping preference. Monitor for faults before and after each IO Module acknowledgement to ensure that connectivity is restored before proceeding to the other IO Module for the chassis.
Cisco UCS Manager cannot discover any chassis that is wired for fewer links than are configured in the
chassis/FEX discovery policy. For example, if the chassis/FEX discovery policy is configured for four links,
Cisco UCS Manager cannot discover any chassis that is wired for one link or two links. Re-acknowledgement
of the chassis resolves this issue.
The following table provides an overview of how the chassis/FEX discovery policy works in a multi-chassis
Cisco UCS domain:
Link Grouping
For hardware configurations that support fabric port channels, link grouping determines whether all of the
links from the IOM to the fabric interconnect are grouped into a fabric port channel during chassis discovery.
If the link grouping preference is set to Port Channel, all of the links from the IOM to the fabric interconnect
are grouped in a fabric port channel. If it is set to None, links from the IOM are pinned to the fabric interconnect.
Important For Cisco UCS 6454 Fabric Interconnects, the link grouping preference is always set to Port Channel.
After a fabric port channel is created through Cisco UCS Manager, you can add or remove links by changing
the link group preference and re-acknowledging the chassis, or by enabling or disabling the chassis from the
port channel.
Note The link grouping preference only takes effect if both sides of the links between an IOM or FEX and
the fabric interconnect support fabric port channels. If one side of the links does not support fabric port
channels, this preference is ignored and the links are not grouped in a port channel.
Note Cisco UCS 6454 Fabric Interconnects do not support multicast hardware hashing.
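If you change the link grouping preference after discovery, the change takes effect only once the chassis is re-acknowledged. A sketch of that sequence (the chassis number is illustrative):

```
UCS-A# scope org /
UCS-A /org # scope chassis-disc-policy
UCS-A /org/chassis-disc-policy # set link-aggregation-pref port-channel
UCS-A /org/chassis-disc-policy* # commit-buffer
UCS-A /org/chassis-disc-policy # exit
UCS-A /org # exit
UCS-A# acknowledge chassis 1
UCS-A* # commit-buffer
```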
Pinning
Pinning in Cisco UCS is only relevant to uplink ports. If you configure Link Grouping Preference as None
during chassis discovery, the IOM forwards traffic from a specific server to the fabric interconnect through
its uplink ports by using static route pinning.
The following table shows how pinning is done between an IOM and the fabric interconnect, based on
the number of active fabric links between them.
1-Link: All the HIF ports are pinned to the active link.
Only 1, 2, 4, and 8 links are supported; 3, 5, 6, and 7 links are not valid configurations.
Port-Channeling
While pinning traffic from a specific server to an uplink port provides you with greater control over the unified
fabric and ensures optimal utilization of uplink port bandwidth, it could also mean excessive traffic over
certain circuits. This issue can be overcome by using port channeling. Port channeling groups all links between
the IOM and the fabric interconnect into one port channel. The port channel uses a load balancing algorithm
to decide the link over which to send traffic. This results in optimal traffic management.
Cisco UCS supports port-channeling only through the Link Aggregation Control Protocol (LACP). For
hardware configurations that support fabric port channels, link grouping determines whether all of the links
from the IOM to the fabric interconnect are grouped into a fabric port channel during chassis discovery. If
the Link Grouping Preference is set to Port Channel, all of the links from the IOM to the fabric interconnect
are grouped in a fabric port channel. If this parameter is set to None, links from the IOM to the fabric
interconnect are not grouped in a fabric port channel.
Once a fabric port channel is created, links can be added or removed by changing the link group preference
and reacknowledging the chassis, or by enabling or disabling the chassis from the port channel.
Note In a setup with Cisco UCS 6454 Fabric Interconnects, the Link Grouping Preference value for
Chassis/FEX Discovery Policy is not user configurable. The value is set to Port Channel.
Configuring the Chassis/FEX Discovery Policy
SUMMARY STEPS
1. UCS-A# scope org /
2. UCS-A /org # scope chassis-disc-policy
3. UCS-A /org/chassis-disc-policy # set action {1-link | 2-link | 4-link | 8-link | platform-max}
4. (Optional) UCS-A /org/chassis-disc-policy # set descr description
5. UCS-A /org/chassis-disc-policy # set link-aggregation-pref {none | port-channel}
6. UCS-A /org/chassis-disc-policy # set multicast-hw-hash {disabled | enabled}
7. (Optional) UCS-A /org/chassis-disc-policy # set qualifier qualifier
8. UCS-A /org/chassis-disc-policy # commit-buffer
DETAILED STEPS
Step 2 UCS-A /org # scope chassis-disc-policy Enters organization chassis/FEX discovery policy mode.
Step 3 UCS-A /org/chassis-disc-policy # set action {1-link | 2-link | 4-link | 8-link | platform-max}
Specifies the minimum threshold for the number of links between the chassis or FEX and the fabric interconnect.
Step 4 (Optional) UCS-A /org/chassis-disc-policy # set descr description
Provides a description for the chassis/FEX discovery policy.
Note If your description includes spaces, special
characters, or punctuation, you must begin and
end your description with quotation marks. The
quotation marks will not appear in the description
field of any show command output.
Step 5 UCS-A /org/chassis-disc-policy # set link-aggregation-pref {none | port-channel}
Specifies whether the links from the IOMs or FEXes to the fabric interconnects are grouped in a port channel.
Note The link grouping preference only takes effect
if both sides of the links between an IOM or FEX
and the fabric interconnect support fabric port
channels. If one side of the links does not support
fabric port channels, this preference is ignored
and the links are not grouped in a port channel.
Step 6 UCS-A /org/chassis-disc-policy # set multicast-hw-hash {disabled | enabled}
Specifies whether all the links between the IOM and the fabric interconnect in a port channel can be used for
multicast traffic.
• disabled—Only one link between the IOM and the
fabric interconnect is used for multicast traffic
• enabled—All links between the IOM and the fabric
interconnect can be used for multicast traffic
Step 7 (Optional) UCS-A /org/chassis-disc-policy # set qualifier qualifier
Uses the specified server pool policy qualifications to associate this policy with a server pool.
Example
The following example scopes to the default chassis/FEX discovery policy, sets it to discover chassis
with four links to a fabric interconnect, provides a description for the policy, specifies the server
pool policy qualifications that will be used to qualify the chassis, and commits the transaction:
UCS-A# scope org /
UCS-A /org # scope chassis-disc-policy
UCS-A /org/chassis-disc-policy* # set action 4-link
UCS-A /org/chassis-disc-policy* # set descr "This is an example chassis/FEX discovery
policy."
UCS-A /org/chassis-disc-policy* # set qualifier ExampleQual
UCS-A /org/chassis-disc-policy* # commit-buffer
UCS-A /org/chassis-disc-policy #
The following example scopes to the default chassis/FEX discovery policy, sets it to discover chassis
with eight links to a fabric interconnect, provides a description for the policy, sets the link grouping
preference to port channel, specifies the server pool policy qualifications that will be used to qualify
the chassis, and commits the transaction:
UCS-A# scope org /
UCS-A /org # scope chassis-disc-policy
UCS-A /org/chassis-disc-policy* # set action 8-link
UCS-A /org/chassis-disc-policy* # set descr "This is an example chassis/FEX discovery
policy."
UCS-A /org/chassis-disc-policy* # set link-aggregation-pref port-channel
UCS-A /org/chassis-disc-policy* # set qualifier ExampleQual
UCS-A /org/chassis-disc-policy* # commit-buffer
UCS-A /org/chassis-disc-policy #
The following example scopes to the default chassis/FEX discovery policy, sets it to discover chassis
with four links to a fabric interconnect, provides a description for the policy, sets the link grouping
preference to port channel, enables multicast hardware hashing, specifies the server pool policy
qualifications that will be used to qualify the chassis, and commits the transaction:
UCS-A# scope org /
UCS-A /org # scope chassis-disc-policy
UCS-A /org/chassis-disc-policy* # set action 4-link
UCS-A /org/chassis-disc-policy* # set descr "This is an example chassis/FEX discovery
policy."
UCS-A /org/chassis-disc-policy* # set link-aggregation-pref port-channel
UCS-A /org/chassis-disc-policy* # set multicast-hw-hash enabled
UCS-A /org/chassis-disc-policy* # set qualifier ExampleQual
UCS-A /org/chassis-disc-policy* # commit-buffer
UCS-A /org/chassis-disc-policy #
What to do next
To customize fabric port channel connectivity for a specific chassis, configure the chassis connectivity policy.
Chassis Connectivity Policy
Important The 40G backplane setting is not applicable for 22xx IOMs.
The chassis connectivity policy is created by Cisco UCS Manager only when the hardware configuration
supports fabric port channels.
Important For Cisco UCS 6454 Fabric Interconnects, the chassis connectivity policy is always Port Channel.
In a Cisco UCS Mini setup, the creation of a chassis connectivity policy is supported only on the extended
chassis.
Caution Changing the connectivity mode for a chassis results in chassis re-acknowledgement. Traffic might be
disrupted during this time.
SUMMARY STEPS
1. UCS-A# scope org org-name
2. UCS-A /org # scope chassis-conn-policy chassis-num {a | b}
3. UCS-A /org/chassis-conn-policy # set link-aggregation-pref {global | none | port-channel}
4. UCS-A /org/chassis-conn-policy # commit-buffer
DETAILED STEPS
Step 3 UCS-A /org/chassis-conn-policy # set link-aggregation-pref {global | none | port-channel}
Specifies whether the links from the IOMs or FEXes to the fabric interconnects are grouped in a port channel.
• None—No links are grouped in a port channel
• Port Channel—All links from an IOM to a fabric
interconnect are grouped in a port channel.
Note Cisco UCS 6454 Fabric Interconnects
support only Port Channel mode.
Step 4 UCS-A /org/chassis-conn-policy # commit-buffer Commits the transaction to the system configuration.
Example
The following example shows how to change the fabric port channel connectivity for two chassis.
Chassis 6, fabric A is changed to port channel and chassis 12, fabric B is changed to discrete links:
UCS-A# scope org /
UCS-A /org # scope chassis-conn-policy 6 a
UCS-A /org/chassis-conn-policy # set link-aggregation-pref port-channel
UCS-A /org/chassis-conn-policy* # up
UCS-A /org* # scope chassis-conn-policy 12 b
UCS-A /org/chassis-conn-policy* # set link-aggregation-pref none
UCS-A /org/chassis-conn-policy* # commit-buffer
UCS-A /org/chassis-conn-policy #
Rack Server Discovery Policy
Cisco UCS Manager uses the settings in the rack server discovery policy to determine whether any data on
the hard disks is scrubbed and whether server discovery occurs immediately or waits for explicit user
acknowledgement.
Cisco UCS Manager cannot discover any rack-mount server that has not been correctly cabled and connected
to the fabric interconnects. For information about how to integrate a supported Cisco UCS rack-mount server
with Cisco UCS Manager, see the appropriate rack-mount server integration guide.
Important Cisco UCS VIC 1400 Series adapters support cables of 10G and 25G speed. However, the cables
connecting Cisco UCS VIC 1400 Series adapter ports to each Cisco UCS 6400 Series Fabric Interconnect
must be of uniform speed: either all 10G or all 25G cables. If you connect Cisco UCS VIC 1400 Series or
UCS VIC 15000 Series adapter ports to a Cisco UCS 6400 Series Fabric Interconnect through a mix of 10G
and 25G cables, UCS server discovery fails and ports may go to a suspended state. Cisco UCS Manager does
not raise any faults in this scenario.
Configuring the Rack Server Discovery Policy
DETAILED STEPS
Step 2 UCS-A /org # scope rackserver-disc-policy Enters organization rack server discovery policy mode.
Step 3 UCS-A /org/rackserver-disc-policy # set action {immediate | user-acknowledged}
Specifies the way the system reacts when you perform any of the following actions:
• Add a new rack server
• Decommission/recommission a previously added or
discovered rack server
Step 4 (Optional) UCS-A /org/rackserver-disc-policy # set descr description
Provides a description for the rack server discovery policy.
Note If your description includes spaces, special
characters, or punctuation, you must begin and
end your description with quotation marks. The
quotation marks will not appear in the description
field of any show command output.
Step 5 UCS-A /org/rackserver-disc-policy # set scrub-policy scrub-pol-name
Specifies the scrub policy that should run on a newly discovered rack server or a decommissioned/recommissioned
server.
Example
The following example scopes to the default rack server discovery policy, sets it to immediately
discover new or decommissioned/recommissioned rack servers, provides a description for the
policy, specifies a scrub policy called scrubpol1, and commits the transaction:
UCS-A# scope org /
UCS-A /org # scope rackserver-disc-policy
UCS-A /org/rackserver-disc-policy* # set action immediate
UCS-A /org/rackserver-disc-policy* # set descr "This is an example rackserver discovery
policy."
UCS-A /org/rackserver-disc-policy* # set scrub-policy scrubpol1
UCS-A /org/rackserver-disc-policy* # commit-buffer
UCS-A /org/rackserver-disc-policy #
DETAILED STEPS
Step 2 UCS-A /eth-uplink # set mac-aging {dd hh mm ss | mode-default | never}
Specifies the aging time for the MAC address table. Use the mode-default keyword to set the aging time to
a default value dependent on the configured Ethernet switching mode. Use the never keyword to never remove
MAC addresses from the table regardless of how long they have been idle.
Example
The following example sets the aging time for the MAC address table to one day and 12 hours and
commits the transaction:
UCS-A# scope eth-uplink
UCS-A /eth-uplink # set mac-aging 01 12 00 00
UCS-A /eth-uplink* # commit-buffer
UCS-A /eth-uplink #
HA Version Holder Replacement
• For a device to be selected as a version holder, the following requirements must be met:
• There must be less than three devices selected for active HA access.
• Chassis removal must not be in progress.
• A chassis that has been removed from the system must not be used as a version holder.
• The connection path must be both fabric interconnect A and B.
• Replacement of HA version holders can be done only through Cisco UCS Manager CLI.
Creating a Preferred Version Holder
DETAILED STEPS
Step 2 UCS-A /system # create preferred-ha-device device-serial Creates the specified preferred HA device.
Step 3 UCS-A /system/ preferred-ha-device # commit-buffer Commits the transaction to the system configuration.
Step 5 UCS-A /system # show preferred-ha-devices Displays the list of preferred HA version holders and
whether they are active or not.
Example
This example shows how to create a preferred version holder:
UCS-A# scope system
UCS-A /system # create preferred-ha-device FCH1606V02F
UCS-A /system/ preferred-ha-device* # commit-buffer
UCS-A /system/ preferred-ha-device # exit
UCS-A /system # show preferred-ha-devices
What to do next
Trigger a reelection of version holders.
Deleting a Preferred Version Holder
DETAILED STEPS
Step 2 UCS-A /system # delete preferred-ha-device device-serial Deletes the specified preferred HA device.
Step 3 UCS-A /system/ preferred-ha-device* # commit-buffer Commits the transaction to the system configuration.
Step 5 UCS-A /system # show preferred-ha-devices Displays the list of preferred HA version holders and
whether they are active or not.
Example
This example shows how to delete a preferred version holder:
UCS-A# scope system
UCS-A /system # delete preferred-ha-device FCH1606V02F
UCS-A /system/ preferred-ha-device* # commit-buffer
UCS-A /system/ preferred-ha-device # exit
UCS-A /system # show preferred-ha-devices
Triggering the Reelection of Version Holders
DETAILED STEPS
Step 2 UCS-A /system # re-elect-ha-devices Triggers reelection of version holders for HA devices.
Example
This example shows how to trigger the reelection of version holders:
UCS-A# scope system
UCS-A /system # re-elect-ha-devices
Displaying Operational Version Holders
SUMMARY STEPS
1. UCS-A# scope system
2. UCS-A /system # show operational-ha-devices
DETAILED STEPS
Step 2 UCS-A /system # show operational-ha-devices Displays the list of all currently operational HA version
holders.
Example
This example shows how to display all currently operational version holders:
UCS-A# scope system
UCS-A /system # show operational-ha-devices
FOX1636H6R5
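Putting the preceding procedures together, a complete version holder replacement might look like the following sketch (the serial numbers are illustrative):

```
UCS-A# scope system
UCS-A /system # delete preferred-ha-device FCH1606V02F
UCS-A /system/ preferred-ha-device* # commit-buffer
UCS-A /system/ preferred-ha-device # exit
UCS-A /system # create preferred-ha-device FOX1636H6R5
UCS-A /system/ preferred-ha-device* # commit-buffer
UCS-A /system/ preferred-ha-device # exit
UCS-A /system # re-elect-ha-devices
UCS-A /system # show operational-ha-devices
```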
CHAPTER 4
Chassis Management
• Chassis Management in Cisco UCS Manager CLI , on page 43
• Guidelines for Removing and Decommissioning Chassis, on page 45
• Acknowledging a Chassis, on page 45
• Decommissioning a Chassis, on page 46
• Removing a Chassis, on page 46
• Recommissioning a Chassis, on page 47
• Renumbering a Chassis, on page 48
• Turning On the Locator LED for a Chassis, on page 50
• Turning Off the Locator LED for a Chassis, on page 51
Note The second server slot in the chassis can be utilized by an HDD expansion
tray module for an additional four 3.5” drives.
• 56 3.5” drive bays with an optional 4 x 3.5” HDD expansion tray module instead of the second server
Cisco UCS 5108 Blade Server Chassis
The blade server chassis has flexible partitioning with removable dividers to handle two blade server form
factors:
• Half-width blade servers have access to power and two 10GBASE-KR connections, one to each fabric
extender slot.
• Full-width blade servers have access to power and two connections to each fabric extender.
Important Currently, Cisco UCS Manager supports only one extended chassis for UCS Mini.
Guidelines for Removing and Decommissioning Chassis
• Configure the server ports and wait for the second chassis to be discovered.
Decommissioning a Chassis
Decommissioning is performed when a chassis is physically present and connected, but you want to temporarily
remove it from the Cisco UCS Manager configuration. Because a decommissioned chassis is expected to be
eventually recommissioned, a portion of the chassis information is retained by Cisco UCS Manager for future
use.
Removing a Chassis
Removing is performed when you physically remove a chassis from the system. Once the physical removal
of the chassis is completed, the configuration for that chassis can be removed in Cisco UCS Manager.
Note You cannot remove a chassis from Cisco UCS Manager if it is physically present and connected.
If you need to add a removed chassis back to the configuration, it must be reconnected and then rediscovered.
During rediscovery, Cisco UCS Manager assigns the chassis a new ID that may be different from the ID that
it held before.
Acknowledging a Chassis
Acknowledging the chassis ensures that Cisco UCS Manager is aware of the change in the number of links
and that traffic flows along all available links.
After you enable or disable a port on a fabric interconnect, wait for at least 1 minute before you re-acknowledge
the chassis. If you re-acknowledge the chassis too soon, the pinning of server traffic from the chassis might
not get updated with the changes to the port that you enabled or disabled.
SUMMARY STEPS
1. UCS-A# acknowledge chassis chassis-num
2. UCS-A# commit-buffer
DETAILED STEPS
Example
The following example acknowledges chassis 2 and commits the transaction:
UCS-A# acknowledge chassis 2
UCS-A* # commit-buffer
UCS-A #
Decommissioning a Chassis
SUMMARY STEPS
1. UCS-A# decommission chassis chassis-num
2. UCS-A# commit-buffer
DETAILED STEPS
Example
The following example decommissions chassis 2 and commits the transaction:
UCS-A# decommission chassis 2
UCS-A* # commit-buffer
UCS-A # show chassis
Chassis:
Chassis Overall Status Admin State
---------- ------------------------ -----------
1 Operable Acknowledged
2 Accessibility Problem Decommission
UCS-A #
Removing a Chassis
Before you begin
Physically remove the chassis before performing the following procedure.
SUMMARY STEPS
1. UCS-A# remove chassis chassis-num
2. UCS-A# commit-buffer
DETAILED STEPS
Example
The following example removes chassis 2 and commits the transaction:
UCS-A# remove chassis 2
UCS-A* # commit-buffer
UCS-A #
Recommissioning a Chassis
This procedure returns the chassis to the configuration and applies the chassis discovery policy to the chassis.
After this procedure, you can access the chassis and any servers in it.
Note This procedure is not applicable for Cisco UCS S3260 Chassis.
SUMMARY STEPS
1. UCS-A# recommission chassis vendor-name model-name serial-num
2. UCS-A# commit-buffer
DETAILED STEPS
Example
The following example recommissions a Cisco UCS 5108 chassis and commits the transaction:
UCS-A# show chassis
Chassis:
Chassis Overall Status Admin State
---------- ------------------------ -----------
1 Accessibility Problem Decommission
Renumbering a Chassis
Note You cannot renumber a blade server through Cisco UCS Manager. The ID assigned to a blade server is
determined by its physical slot in the chassis. To renumber a blade server, you must physically move
the server to a different slot in the chassis.
Note This procedure is not applicable for Cisco UCS S3260 Chassis.
SUMMARY STEPS
1. UCS-A# show chassis inventory
2. Verify that the chassis inventory does not include the following:
3. UCS-A# recommission chassis vendor-name model-name serial-num [chassis-num]
4. UCS-A# commit-buffer
DETAILED STEPS
Step 2 Verify that the chassis inventory does not include the following:
• The chassis you want to renumber
• A chassis with the number you want to use
Step 3 UCS-A# recommission chassis vendor-name model-name serial-num [chassis-num]
Recommissions and renumbers the specified chassis.
Step 4 UCS-A# commit-buffer Commits the transaction to the system configuration.
Example
The following example decommissions two Cisco UCS chassis (chassis 8 and 9), switches their IDs,
and commits the transaction:
UCS-A# show chassis inventory
Turning On the Locator LED for a Chassis
DETAILED STEPS
Step 2 UCS-A /chassis # enable locator-led Turns on the chassis locator LED.
Step 3 UCS-A /chassis # commit-buffer Commits the transaction to the system configuration.
Example
The following example turns on the locator LED for chassis 2 and commits the transaction:
UCS-A# scope chassis 2
UCS-A /chassis # enable locator-led
UCS-A /chassis* # commit-buffer
UCS-A /chassis #
Turning Off the Locator LED for a Chassis
DETAILED STEPS
Step 2 UCS-A /chassis # disable locator-led Turns off the chassis locator LED.
Step 3 UCS-A /chassis # commit-buffer Commits the transaction to the system configuration.
Example
The following example turns off the locator LED for chassis 2 and commits the transaction:
UCS-A# scope chassis 2
UCS-A /chassis # disable locator-led
UCS-A /chassis* # commit-buffer
UCS-A /chassis #
CHAPTER 5
I/O Module Management
• I/O Module Management in Cisco UCS Manager CLI , on page 53
• Acknowledging an IO Module, on page 53
• Resetting the I/O Module, on page 54
• Resetting an I/O Module from a Peer I/O Module, on page 55
Acknowledging an IO Module
Cisco UCS Manager Release 2.2(4) introduces the ability to acknowledge a specific IO module in a chassis.
Note • After adding or removing physical links between the fabric interconnect and the I/O module, an
acknowledgement of the I/O module is required to properly configure the connection.
• The ability to re-acknowledge each I/O module individually allows you to rebuild the network
connectivity between a single I/O module and its parent fabric interconnect without disrupting
production traffic on the other fabric interconnect.
SUMMARY STEPS
1. UCS-A# scope chassis chassis-num
DETAILED STEPS
Step 2 UCS-A /chassis # acknowledge iom {1 | 2} Acknowledges the specified IOM in the chassis.
Step 3 UCS-A /chassis* # commit-buffer Commits the transaction to the system configuration.
Example
The following example acknowledges IO Module 1 and commits the transaction:
UCS-A# scope chassis 1
UCS-A /chassis # acknowledge iom 1
UCS-A /chassis* # commit-buffer
UCS-A /chassis #
Resetting the I/O Module
DETAILED STEPS
Step 2 UCS-A /chassis # scope iom {a | b} Enters chassis IOM mode for the specified IOM.
Step 3 UCS-A /chassis/iom # reset Resets the I/O module.
Step 4 UCS-A /chassis/iom # commit-buffer Commits the transaction to the system configuration.
Example
The following example resets the IOM on fabric A and commits the transaction:
UCS-A# scope chassis 1
UCS-A /chassis # scope iom a
UCS-A /chassis/iom # reset
UCS-A /chassis/iom* # commit-buffer
UCS-A /chassis/iom #
Resetting an I/O Module from a Peer I/O Module
SUMMARY STEPS
1. UCS-A# scope chassis chassis-num
2. UCS-A /chassis # scope iom {a | b}
3. UCS-A /chassis/iom # reset-peer
4. UCS-A /chassis/iom* # commit-buffer
DETAILED STEPS
Step 2 UCS-A /chassis # scope iom {a | b} Enters chassis IOM mode for the specified IOM.
Specify the peer IOM of the IOM that you want to reset.
Step 3 UCS-A /chassis/iom # reset-peer Resets the peer IOM of the specified IOM.
Step 4 UCS-A /chassis/iom* # commit-buffer Commits the transaction to the system configuration.
Example
This example shows how to reset IOM b from IOM a:
UCS-A# scope chassis 1
UCS-A /chassis # scope iom a
UCS-A /chassis/iom # reset-peer
UCS-A /chassis/iom* # commit-buffer
CHAPTER 6
SIOC Management
• SIOC Management in Cisco UCS Manager , on page 57
• Acknowledging an SIOC, on page 58
• Migrating to SIOC with PCIe Support, on page 59
• Resetting the CMC, on page 60
• CMC Secure Boot, on page 60
SIOC Removal
Do the following to remove an SIOC from the system:
1. Shut down and remove power from the entire chassis. You must disconnect all power cords to completely
remove power.
2. Disconnect the cables connecting the SIOC to the system.
3. Remove the SIOC from the system.
SIOC Replacement
Do the following to remove an SIOC from the system and replace it with another SIOC:
1. Shut down and remove power from the entire chassis. You must disconnect all power cords to completely
remove power.
2. Disconnect the cables connecting the SIOC to the system.
3. Remove the SIOC from the system.
4. Connect the new SIOC to the system.
5. Connect the cables to the SIOC.
6. Connect power cords and then power on the system.
7. Acknowledge the new SIOC.
The server connected to the replaced SIOC is rediscovered.
Note If the firmware of the replaced SIOC is not the same version as that of the peer SIOC, then we recommend
that you update the firmware of the replaced SIOC by re-triggering chassis profile association.
Acknowledging an SIOC
Cisco UCS Manager has the ability to acknowledge a specific SIOC in a chassis. Perform the following
procedure when you replace an SIOC in a chassis.
Caution This operation rebuilds the network connectivity between the SIOC and the fabric interconnects to which
it is connected. The server corresponding to this SIOC becomes unreachable, and traffic is disrupted.
SUMMARY STEPS
1. UCS-A# scope chassis chassis-num
2. UCS-A /chassis # acknowledge sioc {1 | 2}
3. UCS-A /chassis* # commit-buffer
DETAILED STEPS
Step 2 UCS-A /chassis # acknowledge sioc {1 | 2} Acknowledges the specified SIOC in the chassis.
Step 3 UCS-A /chassis* # commit-buffer Commits the transaction to the system configuration.
Example
The following example acknowledges SIOC 1 and commits the transaction:
UCS-A# scope chassis 3
UCS-A /chassis # acknowledge sioc 1
UCS-A /chassis* # commit-buffer
UCS-A /chassis #
Migrating to SIOC with PCIe Support
SUMMARY STEPS
1. Update the chassis and server firmware to 4.0(1) release.
2. Decommission the chassis.
3. Shut down and remove power from the entire chassis. You must disconnect all power cords to completely
remove power.
4. Disconnect the cables connecting the SIOC to the system.
5. Remove the SIOC from the system.
6. Connect the new SIOC to the system.
7. Connect the cables to the SIOC.
8. Connect power cords and then power on the system.
9. Acknowledge the new SIOC.
DETAILED STEPS
Resetting the CMC
DETAILED STEPS
Step 2 UCS-A /chassis # scope sioc {1 | 2} Enters the specified SIOC in the chassis.
Step 3 UCS-A /chassis/sioc # scope cmc Enters the CMC of the selected SIOC slot.
Step 4 UCS-A /chassis/sioc/cmc # reset Resets the CMC of the selected SIOC.
Step 5 UCS-A /chassis/sioc/cmc* # commit-buffer Commits the transaction to the system configuration.
Example
The following example resets the CMC on SIOC 1 and commits the transaction:
UCS-A# scope chassis 1
UCS-A /chassis # scope sioc 1
UCS-A /chassis/sioc # scope cmc
UCS-A /chassis/sioc/cmc # reset
UCS-A /chassis/sioc/cmc* # commit-buffer
CMC Secure Boot
• Beginning with Release 4.0(1), the secure boot operational state is Enabled by default and is not user
configurable. The option is grayed out.
Enabling CMC Secure Boot
SUMMARY STEPS
1. UCS-A# scope chassis chassis-num
2. UCS-A /chassis # scope sioc {1 | 2}
3. UCS-A /chassis/sioc # scope cmc
4. UCS-A /chassis/sioc/cmc # enable secure-boot
5. UCS-A /chassis/sioc/cmc* # commit-buffer
DETAILED STEPS
Step 2 UCS-A /chassis # scope sioc {1 | 2} Enters the specified SIOC in the chassis.
Step 3 UCS-A /chassis/sioc # scope cmc Enters the CMC of the selected SIOC slot.
Step 4 UCS-A /chassis/sioc/cmc # enable secure-boot Enables CMC secure boot on the SIOC. This operation is irreversible.
Step 5 UCS-A /chassis/sioc/cmc* # commit-buffer Commits the transaction to the system configuration.
Example
The following example enables CMC secure boot on SIOC 1 and commits the transaction:
UCS-A# scope chassis 1
UCS-A /chassis # scope sioc 1
UCS-A /chassis/sioc # scope cmc
UCS-A /chassis/sioc/cmc # enable secure-boot
Warning: This is an irreversible operation.
Do you want to proceed? [Y/N] Y
UCS-A /chassis/sioc/cmc* # commit-buffer
CHAPTER 7
Power Management in Cisco UCS
• Power Capping in Cisco UCS, on page 63
• Power Policy Configuration, on page 64
• Policy Driven Power Capping, on page 66
• Blade Level Power Capping, on page 73
• Global Power Profiling Policy Configuration, on page 77
• Global Power Allocation Policy, on page 78
• Power Management During Power-on Operations, on page 79
• Power Sync Policy Configuration, on page 80
• Rack Server Power Management, on page 87
• UCS Mini Power Management, on page 87
You can use the Policy Driven Chassis Group Power Cap or the Manual Blade Level Power Cap method to allocate power to all of the servers in a chassis.
Cisco UCS Manager provides the following power management policies to help you allocate power to your
servers:
Power Control Policies Specifies the priority used to calculate the initial power allocation for each blade in a chassis.
Global Power Allocation Specifies whether the Policy Driven Chassis Group Power Cap or the Manual Blade Level Power Cap applies to all servers in a chassis.
Global Power Profiling Specifies how the power cap values of the servers are calculated. If enabled, the servers are profiled during discovery through benchmarking. This policy applies when the Global Power Allocation Policy is set to Policy Driven Chassis Group Cap.
Configuring the Power Policy

DETAILED STEPS
Step 1 UCS-A# scope org / Enters the root organization mode.
Step 2 UCS-A /org # scope psu-policy Enters power supply (PSU) policy mode.
Step 3 UCS-A /org/psu-policy # set redundancy {grid | n-plus-1 | non-redund} Specifies the redundancy method for the power policy.
Step 4 Required: UCS-A /org/psu-policy # commit-buffer Commits the transaction to the system configuration.
Example
The following example configures the power policy to use grid redundancy and commits the
transaction:
UCS-A# scope org /
UCS-A /org # scope psu-policy
UCS-A /org/psu-policy # set redundancy grid
UCS-A /org/psu-policy* # commit-buffer
UCS-A /org/psu-policy #
Note This table is valid if there are four PSUs installed in the chassis.
Policy Driven Power Capping
Note The system reserves enough power to boot a server in each slot, even if that slot is empty. This reserved
power cannot be leveraged by servers requiring more power. Blades that fail to comply with the power
cap are penalized.
Note If all blade servers are set to no-cap priority and all of them run high power-consuming loads, some blade servers might be capped under high power usage, depending on how power is distributed through dynamic balancing.
Creating a Power Control Policy
Global power control policy options are inherited by all the chassis managed by Cisco UCS Manager.
Note You must include the power control policy in a service profile and that service profile must be associated
with a server for it to take effect.
SUMMARY STEPS
1. UCS-A# scope org org-name
2. UCS-A /org # create power-control-policy power-control-pol-name
3. UCS-A /org/power-control-policy # set fanspeed {any | balanced | high-power | low-power | max-power | performance | acoustic}
4. UCS-A /org/power-control-policy # set priority {priority-num | no-cap}
5. UCS-A /org/power-control-policy # commit-buffer
DETAILED STEPS
Step 1 UCS-A# scope org org-name Enters organization mode for the specified organization. To enter the root organization mode, type / as the org-name.
Step 2 UCS-A /org # create power-control-policy power-control-pol-name Creates a power control policy and enters power control policy mode.
Step 3 UCS-A /org/power-control-policy # set fanspeed {any | balanced | high-power | low-power | max-power | performance | acoustic} Specifies the fan speed for the power control policy. Note: The performance option is not supported on Cisco UCS C-Series M5 and M6 servers.
Step 4 UCS-A /org/power-control-policy # set priority {priority-num | no-cap} Specifies the priority for the power control policy.
Step 5 UCS-A /org/power-control-policy # commit-buffer Commits the transaction to the system configuration.
Example
The following example creates a power control policy called powerpolicy15, sets the priority at level
2, and commits the transaction:
UCS-A# scope org /
UCS-A /org # create power-control-policy powerpolicy15
UCS-A /org/power-control-policy* # set priority 2
UCS-A /org/power-control-policy* # commit-buffer
UCS-A /org/power-control-policy #
What to do next
Include the power control policy in a service profile.
Configuring Acoustic Mode
SUMMARY STEPS
1. UCS-A# scope org org-name
2. UCS-A /org # create power-control-policy fan-policy-name
3. UCS-A /org/power-control-policy # set fanspeed acoustic
4. UCS-A /org/power-control-policy # set priority {priority-num | no-cap}
5. UCS-A /org/power-control-policy # commit-buffer
DETAILED STEPS
Step 1 UCS-A# scope org org-name Enters organization mode for the specified organization. To enter the root organization mode, type / as the org-name.
Step 2 UCS-A /org # create power-control-policy fan-policy-name Creates a fan control policy and enters power control policy mode. Fan policies are created through the power control interface.
Step 3 UCS-A /org/power-control-policy # set fanspeed acoustic Specifies Acoustic Mode as the fan speed for the power control policy.
Step 4 UCS-A /org/power-control-policy # set priority {priority-num | no-cap} Specifies the priority for the fan's power control policy.
Step 5 UCS-A /org/power-control-policy # commit-buffer Commits the transaction to the system configuration.
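Example
The following hypothetical session creates a fan policy with Acoustic Mode and commits the transaction; the policy name fan-policy-acoustic and the priority value 5 are illustrative:
UCS-A# scope org /
UCS-A /org # create power-control-policy fan-policy-acoustic
UCS-A /org/power-control-policy* # set fanspeed acoustic
UCS-A /org/power-control-policy* # set priority 5
UCS-A /org/power-control-policy* # commit-buffer
UCS-A /org/power-control-policy #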
What to do next
Include the power control policy in a service profile.
Deleting a Power Control Policy
SUMMARY STEPS
1. UCS-A# scope org org-name
2. UCS-A /org # delete power-control-policy power-control-pol-name
3. UCS-A /org # commit-buffer
DETAILED STEPS
Step 1 UCS-A# scope org org-name Enters organization mode for the specified organization. To enter the root organization mode, type / as the org-name.
Step 2 UCS-A /org # delete power-control-policy power-control-pol-name Deletes the specified power control policy.
Step 3 UCS-A /org # commit-buffer Commits the transaction to the system configuration.
Example
The following example deletes a power control policy called powerpolicy15 and commits the
transaction:
UCS-A# scope org /
UCS-A /org # delete power-control-policy powerpolicy15
UCS-A /org* # commit-buffer
UCS-A /org #
The peak power cap is a static value that represents the maximum power available to all blade servers within
a given power group. If you add or remove a blade from a power group, but do not manually modify the peak
power value, the power group adjusts the peak power cap to accommodate the basic power-on requirements
of all blades within that power group.
A minimum of 890 AC watts should be set for each chassis. This converts to 800 watts of DC power, which
is the minimum amount of power required to power an empty chassis. To associate a half-width blade, the
group cap needs to be set to 1475 AC watts. For a full-width blade, it needs to be set to 2060 AC watts.
Power Groups in UCS Manager
After a chassis is added to a power group, all service profiles associated with the blades in that chassis become part of the power group. Similarly, if you add a new blade to a chassis, that blade inherently becomes part of the chassis's power group.
Note Creating a power group is not the same as creating a server pool. However, you can populate a server pool with members of the same power group by creating a power qualifier and adding it to a server pool policy.
When a chassis is removed or deleted, the chassis gets removed from the power group.
UCS Manager supports explicit and implicit power groups.
• Explicit: You can create a power group, add chassis and racks, and assign a budget for the group.
• Implicit: Ensures that the chassis is always protected by limiting the power consumption within safe
limits. By default, all chassis that are not part of an explicit power group are assigned to the default group
and the appropriate caps are placed. New chassis that connect to UCS Manager are added to the default
power group until you move them to a different power group.
The following table describes the error messages you might encounter while assigning power budget and
working with power groups.
Error message: P-State lowered as consumption hit power cap for server
Explanation: Displays when the server is capped to reduce power consumption below the allocated power. This is an information message.
Recommended action: If a server should not be capped, set the value of the Power Capping field in the service profile's power control policy to no-cap.

Error message: Chassis N has a mix of high-line and low-line PSU input power sources.
Explanation: This fault is raised when a chassis has a mix of high-line and low-line PSU input sources connected.
Recommended action: This is an unsupported configuration. All PSUs must be connected to similar power sources.
Creating a Power Group

SUMMARY STEPS
1. UCS-A# scope power-cap-mgmt
2. UCS-A /power-cap-mgmt # create power-group power-group-name
3. UCS-A /power-cap-mgmt/power-group # set peak {peak-num | disabled | uninitialized}
4. UCS-A /power-cap-mgmt/power-group # create chassis chassis-id
5. UCS-A /power-cap-mgmt/power-group # create rack rack-id
6. UCS-A /power-cap-mgmt/power-group # create fex fex-id
7. UCS-A /power-cap-mgmt/power-group # create fi fi-id
8. UCS-A /power-cap-mgmt/power-group/chassis # commit-buffer
DETAILED STEPS
Step 1 UCS-A# scope power-cap-mgmt Enters power cap management mode.
Step 2 UCS-A /power-cap-mgmt # create power-group power-group-name Creates a power group and enters power group mode.
Step 3 UCS-A /power-cap-mgmt/power-group # set peak {peak-num | disabled | uninitialized} Specifies the maximum peak power, in watts, available to the power group.
Step 4 UCS-A /power-cap-mgmt/power-group # create chassis chassis-id Adds the specified chassis to the power group and enters power group chassis mode.
Step 5 UCS-A /power-cap-mgmt/power-group # create rack rack-id Adds the specified rack to the power group.
Step 6 UCS-A /power-cap-mgmt/power-group # create fex fex-id Adds the specified FEX to the power group.
Step 7 UCS-A /power-cap-mgmt/power-group # create fi fi-id Adds the specified FI to the power group.
Step 8 UCS-A /power-cap-mgmt/power-group/chassis # commit-buffer Commits the transaction to the system configuration.
Example
The following example creates a power group called powergroup1, specifies the maximum peak
power for the power group (10000 watts), adds chassis 1 to the group, and commits the transaction:
UCS-A# scope power-cap-mgmt
UCS-A /power-cap-mgmt # create power-group powergroup1
UCS-A /power-cap-mgmt/power-group* # set peak 10000
UCS-A /power-cap-mgmt/power-group* # create chassis 1
UCS-A /power-cap-mgmt/power-group/chassis* # commit-buffer
UCS-A /power-cap-mgmt/power-group/chassis #
Deleting a Power Group

SUMMARY STEPS
1. UCS-A# scope power-cap-mgmt
2. UCS-A /power-cap-mgmt # delete power-group power-group-name
3. UCS-A /power-cap-mgmt* # commit-buffer
DETAILED STEPS
Step 1 UCS-A# scope power-cap-mgmt Enters power cap management mode.
Step 2 UCS-A /power-cap-mgmt # delete power-group power-group-name Deletes the specified power group.
Step 3 UCS-A /power-cap-mgmt* # commit-buffer Commits the transaction to the system configuration.
Example
The following example deletes a power group called powergroup1 and commits the transaction:
UCS-A# scope power-cap-mgmt
UCS-A /power-cap-mgmt # delete power-group powergroup1
UCS-A /power-cap-mgmt* # commit-buffer
UCS-A /power-cap-mgmt #
Blade Level Power Capping

Note B480 M5 systems using 256 GB DIMMs must have a manual blade-level power cap set at 1300 W.
• Unbounded—No power usage limitations are imposed on the server. The server can use as much power
as it requires.
If the server encounters a spike in power usage that meets or exceeds the maximum configured for the server,
Cisco UCS Manager does not disconnect or shut down the server. Instead, Cisco UCS Manager reduces the
power that is made available to the server. This reduction can slow down the server, including a reduction in
CPU speed.
Note If you configure the manual blade-level power cap using Equipment > Policies > Global Policies >
Global Power Allocation Policy, the priority set in the Power Control Policy is no longer relevant.
Setting the Blade-Level Power Cap for a Server
Procedure
Step 1 UCS-A# scope server chassis-num / server-num Enters chassis server mode for the specified server.
Step 2 UCS-A /chassis/server # set power-budget committed {unbounded | watts} Commits the server to one of the following power usage levels:
• unbounded Does not impose any power usage limitations on the server.
• watts Allows you to specify the upper limit for power usage by the server. If you choose this setting, enter the maximum number of watts that the server can use. The range is 0 to 10000000 watts.
Step 3 UCS-A /chassis/server # commit-buffer Commits the transaction to the system configuration.
Step 4 UCS-A /chassis/server # show power-budget (Optional) Displays the power usage level setting.
Example
The following example limits the power usage for a server to unbounded and then to 1000 watts and
commits the transaction:
UCS-A# scope server 1/7
UCS-A /chassis/server # show power-budget
Budget:
AdminCommitted (W)
-----------------
139
UCS-A /chassis/server # set power-budget committed unbounded
UCS-A /chassis/server* # commit-buffer
UCS-A /chassis/server # show power-budget
Budget:
AdminCommitted (W)
-----------------
Unbounded
UCS-A /chassis/server # set power-budget committed 1000
UCS-A /chassis/server* # commit-buffer
UCS-A /chassis/server # show power-budget
Budget:
AdminCommitted (W)
-----------------
1000
UCS-A /chassis/server #
Configuring a Chassis Level Fan Policy
The new option takes effect when the new selection is saved. Use Low Power to save on system power.
SUMMARY STEPS
1. In the Navigation pane, click Equipment.
2. Click the Equipment node.
3. In the Work pane, click the Policies tab.
4. Click the Global Policies subtab.
5. In the Fan Control Policy area, click one of the following radio buttons:
• Balanced—This is the default option.
• Low Power
6. Click Save Changes.
DETAILED STEPS
Viewing Server Statistics
Step 2 UCS-A /chassis/server # show stats Displays the following server statistics:
• Ethernet Port Error
• Ethernet Port Multicast
• Ethernet Port
• Virtual Interface
• Motherboard Power
• PCIe Fatal Completion Error
• PCIe Fatal Protocol Error
• PCIe Fatal Receiving Error
• PCIe Fatal Error
• Memory Error
• DIMM Env
• CPU Env
Example
The following example shows the section on motherboard power usage statistics:
UCS-A# scope server 2/4
UCS-A /chassis/server # show stats
UCS-A /chassis/server #
Global Power Profiling Policy Configuration
Note After enabling the Global Power Profiling Policy, you must re-acknowledge the blades to obtain the
minimum and maximum power cap.
DETAILED STEPS
Step 1 UCS-A# scope power-cap-mgmt Enters power cap management mode.
Step 2 UCS-A /power-cap-mgmt # set profile-policy {no | yes} Enables or disables the global power profiling policy.
Step 3 UCS-A /power-cap-mgmt # commit-buffer Commits the transaction to the system configuration.
Example
The following example shows how to enable the global power profile policy and commit the
transaction:
UCS-A# scope power-cap-mgmt
UCS-A /power-cap-mgmt # set profile-policy yes
UCS-A /power-cap-mgmt* # commit-buffer
UCS-A /power-cap-mgmt #
Global Power Allocation Policy
Important Any change to the Manual Blade Level Power Cap configuration results in the loss of any groups or configuration options set for the Policy Driven Chassis Group Power Cap.
DETAILED STEPS
Step 1 UCS-A# scope power-cap-mgmt Enters power cap management mode.
Step 2 UCS-A /power-cap-mgmt # set cap-policy {manual-blade-level-cap | policy-driven-chassis-group-cap} Sets the global cap policy to the specified power cap management mode. By default, the global cap policy is set to policy driven chassis group cap.
Step 3 UCS-A /power-cap-mgmt # commit-buffer Commits the transaction to the system configuration.
Example
The following example sets the global cap policy to manual blade power cap and commits the
transaction:
UCS-A# scope power-cap-mgmt
UCS-A /power-cap-mgmt # set cap-policy manual-blade-level-cap
UCS-A /power-cap-mgmt* # commit-buffer
UCS-A /power-cap-mgmt #
Viewing the Power Cap Values for Servers
DETAILED STEPS
Step 1 UCS-A# scope power-cap-mgmt Enters power cap management mode.
Step 2 UCS-A /power-cap-mgmt # show power-measured Displays the minimum and maximum power cap values.
Example
The following example shows how to display the minimum and maximum power cap values:
UCS-A# scope power-cap-mgmt
UCS-A /power-cap-mgmt # show power-measured
Measured Power:
Device Id (W) Minimum power (W) Maximum power (W) OperMethod
-------------- ----------------- ----------------- ----------
blade 1/1 234 353 Pnuos
UCS-A /power-cap-mgmt #
Note When the power budget that was allocated to the blade is reclaimed, the allocated power displays as 0
Watts.
Limitation
If you power on a blade outside of Cisco UCS Manager and there is not enough power available for
allocation, the following fault is raised:
Power cap application failed for server x/y
Power Sync Policy Configuration
Note If the priority of an associated blade is changed to no-cap and Cisco UCS Manager is not able to allocate the maximum power cap, you might see one of the following faults:
• PSU-insufficient—There is not enough available power for the PSU.
• Group-cap-insufficient—The group cap value is not sufficient for the blade.
The following example illustrates power sync behavior:

Event: Shallow Association
Preferred Power State Before Event: ON
Actual Power State: ON
Actual Power State After Event: ON

Displaying the Global Power Sync Policy
Step 1 UCS-A# scope org Enters the root organization mode.
Step 2 UCS-A /org # scope power-sync-policy default Enters the global power sync policy mode.
Step 3 UCS-A /org/power-sync-policy # show {detail | expand | detail expand} Displays the global power sync policy information.
Example
The following example displays the global (default) power sync policy:
UCS-A # scope org
UCS-A /org # scope power-sync-policy default
UCS-A /org/power-sync-policy # show expand
UCS-A /org/power-sync-policy #
Setting Global Policy Reference for a Service Profile
Procedure
Step 1 UCS-A# scope org org-name Enters organization mode for the specified organization. To enter the root organization mode, type / as the org-name.
Step 2 UCS-A /org # scope service-profile service-profile-name Enters the service profile mode for the specified service profile. The name of the service profile can be a minimum of two characters and a maximum of 32 characters.
Step 3 UCS-A /org/service-profile # set power-sync-policy default Specifies the global power sync policy to reference in the service profile. You can also change the policy reference from the default to another power sync policy using this command.
Step 4 UCS-A /org/service-profile* # commit-buffer Commits the transaction to the system configuration.
Example
The following example sets the reference to the global power sync policy for use in the service
profile.
UCS-A # scope org
UCS-A/org # scope service-profile spnew
UCS-A/org/service-profile # set power-sync-policy default
UCS-A/org/service-profile* # commit-buffer
Creating a Power Sync Policy

Step 1 UCS-A# scope org org-name Enters organization mode for the specified organization. To enter the root organization mode, type / as the org-name.
Step 2 UCS-A /org # create power-sync-policy power-sync-pol-name Creates a power sync policy and enters power sync policy mode. The power sync policy name can be up to 16 characters.
Step 3 (Optional) UCS-A /org/power-sync-policy* # set descr description Specifies a description for the power sync policy. You can also modify the description later using the descr keyword.
Step 4 UCS-A /org/power-sync-policy* # set sync-option {always-sync | default-sync | initial-only-sync} Specifies the power synchronization option applied to the physical server. You can also modify the power synchronization option using this command.
Step 5 UCS-A /org/power-sync-policy* # commit-buffer Commits the transaction to the system configuration.
Example
The following example creates a power sync policy called newSyncPolicy, sets the default sync-option,
and commits the transaction to the system configuration:
UCS-A # scope org
UCS-A /org # create power-sync-policy newSyncPolicy
UCS-A /org/power-sync-policy* # set descr newSyncPolicy
UCS-A /org/power-sync-policy* # set sync-option default-sync
UCS-A /org/power-sync-policy* # commit-buffer
UCS-A /org/power-sync-policy #
What to do next
Include the power sync policy in a service profile or in a service profile template.
Deleting a Power Sync Policy
Example
The following example deletes the power sync policy called spnew and commits the transaction to
the system:
UCS-A # scope org
UCS-A /org # delete power-sync-policy spnew
UCS-A /org* # commit-buffer
Displaying All Power Sync Policies

Step 1 UCS-A# scope org org-name Enters organization mode for the specified organization. To enter the root organization mode, type / as the org-name.
Step 2 UCS-A /org # show power-sync-policy {detail | expand | detail expand} Displays the default, local, and other power sync policies.
Example
The following example displays power sync policies that are defined:
UCS-A # scope org
UCS-A /org # show power-sync-policy expand
Power Sync Policy:
Name Power Sync Option
-------------------- -----------------
default Default Sync
policy-1 Default Sync
UCS-A /org #
Creating a Local Policy
Procedure
Step 1 UCS-A# scope org org-name Enters organization mode for the specified organization. To enter the root organization mode, type / as the org-name.
Step 2 UCS-A /org # scope service-profile service-profile-name Enters the service profile mode for the specified service profile. The name of the service profile can be a minimum of two characters and a maximum of 32 characters.
Step 3 UCS-A /org/service-profile # create power-sync-definition Enters power sync definition mode, where you can create a local power sync policy definition for the service profile.
Example
The following example creates a local policy using the policy sync definition, sets the sync-option,
and commits the transaction to the system configuration:
UCS-A # scope org
UCS-A/org # scope service-profile spnew
UCS-A/org/service-profile # create power-sync-definition
UCS-A/org/service-profile/power-sync-definition* # set descr spnew
UCS-A/org/service-profile/power-sync-definition* # set sync-option default-sync
UCS-A/org/service-profile/power-sync-definition* # commit-buffer
Displaying a Local Policy
Step 3 (Optional) UCS-A /org/service-profile # show power-sync-policy {detail | expand | detail expand} Displays the local policy in power sync policy mode.
Step 4 UCS-A /org/service-profile # show power-sync-definition {detail | expand | detail expand} Displays the local policy for the specified service profile in power sync definition mode.
Note If no definition exists for the power sync policy, the command still runs but displays nothing.
Example
The following example displays the local policy in use by the service profile spnew:
UCS-A # scope org
UCS-A/org # scope service-profile spnew
UCS-A/org/service-profile # show power-sync-definition expand
UCS-A/org/service-profile #
Deleting a Local Policy

Step 2 UCS-A /org # scope service-profile service-profile-name Enters the service profile mode for the specified service profile. The name of the service profile can be a minimum of two characters and a maximum of 32 characters.
Step 3 UCS-A /org/service-profile # delete power-sync-definition Deletes the local power sync policy definition for the service profile.
Step 4 UCS-A /org/service-profile* # commit-buffer Commits the transaction to the system configuration.
Example
The following example deletes the local policy in use by the service profile.
UCS-A # scope org
UCS-A/org # scope service-profile spnew
UCS-A/org/service-profile # delete power-sync-definition
UCS-A/org/service-profile* # commit-buffer
Rack Server Power Management
UCS Mini Power Management
CHAPTER 8
Blade Server Hardware Management
• Blade Server Management, on page 89
• Guidelines for Removing and Decommissioning Blade Servers, on page 90
• Recommendations for Avoiding Unexpected Server Power Changes, on page 90
• Booting a Blade Server, on page 91
• Shutting Down a Blade Server, on page 92
• Resetting a Blade Server to Factory Default Settings, on page 93
• Power Cycling a Blade Server, on page 95
• Performing a Hard Reset on a Blade Server, on page 95
• Acknowledging a Blade Server, on page 96
• Removing a Blade Server from a Chassis, on page 97
• Decommissioning a Blade Server, on page 97
• Turning On the Locator LED for a Blade Server, on page 98
• Turning Off the Locator LED for a Blade Server, on page 99
• Resetting the CMOS for a Blade Server, on page 100
• Resetting the CIMC for a Blade Server, on page 100
• Clearing TPM for a Blade Server, on page 101
• Resetting the BIOS Password for a Blade Server, on page 102
• Issuing an NMI from a Blade Server, on page 103
• Health LED Alarms, on page 103
• Smart SSD, on page 104
Guidelines for Removing and Decommissioning Blade Servers
Note Only servers added to a server pool automatically during discovery are removed automatically. Servers
that were manually added to a server pool must be removed manually.
To add a removed blade server back to the configuration, it must be reconnected, then rediscovered. When a
server is reintroduced to Cisco UCS Manager, it is treated as a new server and is subject to the deep discovery
process. For this reason, it is possible for Cisco UCS Manager to assign the server a new ID that might be
different from the ID that it held before.
Recommendations for Avoiding Unexpected Server Power Changes
Important Do not use any of the following options on an associated server that is currently powered off:
• Reset in the GUI
• cycle cycle-immediate or reset hard-reset-immediate in the CLI
• The physical Power or Reset buttons on the server
If you reset, cycle, or use the physical power buttons on a server that is currently powered off, the server's
actual power state might become out of sync with the desired power state setting in the service profile. If the
communication between the server and Cisco UCS Manager is disrupted or if the service profile configuration
changes, Cisco UCS Manager might apply the desired power state from the service profile to the server,
causing an unexpected power change.
Power synchronization issues can lead to an unexpected server restart. For example, if the desired power state in the service profile is up while the server is currently powered off, the server can power on unexpectedly after communication is disrupted.
Booting a Blade Server

SUMMARY STEPS
1. UCS-A# scope org org-name
2. UCS-A /org # scope service-profile profile-name
3. UCS-A /org/service-profile # power up
4. UCS-A /org/service-profile # commit-buffer
DETAILED STEPS
Step 1 UCS-A# scope org org-name Enters organization mode for the specified organization. To enter the root organization mode, type / as the org-name.
Step 2 UCS-A /org # scope service-profile profile-name Enters organization service profile mode for the specified service profile.
Step 3 UCS-A /org/service-profile # power up Boots the blade server associated with the service profile.
Step 4 UCS-A /org/service-profile # commit-buffer Commits the transaction to the system configuration.
Example
The following example boots the blade server associated with the service profile named ServProf34
and commits the transaction:
UCS-A# scope org /
UCS-A /org # scope service-profile ServProf34
UCS-A /org/service-profile # power up
UCS-A /org/service-profile* # commit-buffer
UCS-A /org/service-profile #
Shutting Down a Blade Server

Note When a blade server that is associated with a service profile is shut down, the VIF down alerts F0283 and F0479 are automatically suppressed.
SUMMARY STEPS
1. UCS-A# scope org org-name
2. UCS-A /org # scope service-profile profile-name
3. UCS-A /org/service-profile # power down
4. UCS-A /org/service-profile # commit-buffer
DETAILED STEPS
Step 2 UCS-A /org # scope service-profile profile-name Enters organization service profile mode for the specified service profile.
Step 3 UCS-A /org/service-profile # power down Shuts down the blade server associated with the service profile.
Step 4 UCS-A /org/service-profile # commit-buffer Commits the transaction to the system configuration.
Example
The following example shuts down the blade server associated with the service profile named
ServProf34 and commits the transaction:
UCS-A# scope org /
UCS-A /org # scope service-profile ServProf34
UCS-A /org/service-profile # power down
UCS-A /org/service-profile* # commit-buffer
UCS-A /org/service-profile #
Resetting a Blade Server to Factory Default Settings

Perform the following procedure to reset the server to factory default settings.
SUMMARY STEPS
1. UCS-A# scope server [chassis-num/server-num | dynamic-uuid]
2. UCS-A /chassis/server # reset factory-default [delete-flexflash-storage | delete-storage
[create-initial-storage-volumes] ]
3. UCS-A /chassis/server* # commit-buffer
DETAILED STEPS
Example
The following example resets the server settings to factory default without deleting storage, and
commits the transaction:
UCS-A# scope server 2/4
UCS-A /chassis/server # reset factory-default
UCS-A /chassis/server* # commit-buffer
The following example resets the server settings to factory default, deletes flexflash storage, and
commits the transaction:
UCS-A# scope server 2/4
UCS-A /chassis/server # reset factory-default delete-flexflash-storage
UCS-A /chassis/server* # commit-buffer
The following example resets the server settings to factory default, deletes all storage, and commits
the transaction:
UCS-A# scope server 2/4
UCS-A /chassis/server # reset factory-default delete-storage
UCS-A /chassis/server* # commit-buffer
The following example resets the server settings to factory default, deletes all storage, sets all disks
to their initial state, and commits the transaction:
UCS-A# scope server 2/4
UCS-A /chassis/server # reset factory-default delete-storage create-initial-storage-volumes
UCS-A /chassis/server* # commit-buffer
Power Cycling a Blade Server
DETAILED STEPS
Step 1 UCS-A# scope server chassis-num / server-num Enters chassis server mode for the specified server.
Step 2 UCS-A /chassis/server # cycle {cycle-immediate | cycle-wait} Power cycles the blade server. Use the cycle-immediate keyword to immediately begin power cycling the blade server; use the cycle-wait keyword to schedule the power cycle to begin after all pending management operations have completed.
Step 3 UCS-A /chassis/server # commit-buffer Commits the transaction to the system configuration.
Example
The following example immediately power cycles blade server 4 in chassis 2 and commits the
transaction:
UCS-A# scope server 2/4
UCS-A /chassis/server # cycle cycle-immediate
UCS-A /chassis/server* # commit-buffer
UCS-A /chassis/server #
Note If you are trying to boot a server from a power-down state, you should not use Reset.
If you continue the power-up with this process, the desired power state of the servers become out of
sync with the actual power state and the servers might unexpectedly shut down at a later time. To safely
reboot the selected servers from a power-down state, click Cancel, then select the Boot Server action.
SUMMARY STEPS
1. UCS-A# scope server chassis-num / server-num
2. UCS-A /chassis/server # reset {hard-reset-immediate | hard-reset-wait}
3. UCS-A /chassis/server # commit-buffer
DETAILED STEPS
Step 2: UCS-A /chassis/server # reset {hard-reset-immediate | hard-reset-wait}
Performs a hard reset of the blade server. Use the hard-reset-immediate keyword to immediately begin hard resetting the server; use the hard-reset-wait keyword to schedule the hard reset to begin after all pending management operations have completed.
Step 3: UCS-A /chassis/server # commit-buffer
Commits the transaction to the system configuration.
Example
The following example performs an immediate hard reset of blade server 4 in chassis 2 and commits
the transaction:
UCS-A# scope server 2/4
UCS-A /chassis/server # reset hard-reset-immediate
UCS-A /chassis/server* # commit-buffer
UCS-A /chassis/server #
SUMMARY STEPS
1. UCS-A# acknowledge server chassis-num / server-num
2. UCS-A# commit-buffer
DETAILED STEPS
Example
The following example acknowledges server 4 in chassis 2 and commits the transaction:
UCS-A# acknowledge server 2/4
UCS-A* # commit-buffer
UCS-A #
DETAILED STEPS
Step 3: Go to the physical location of the chassis and remove the server hardware from the slot.
For instructions on how to remove the server hardware, see the Cisco UCS Hardware Installation Guide for your chassis.
Example
The following example removes blade server 4 in chassis 2 and commits the transaction:
UCS-A# remove server 2/4
UCS-A* # commit-buffer
UCS-A #
What to do next
If you physically re-install the blade server, you must re-acknowledge the slot for Cisco UCS Manager to rediscover the server.
For more information, see Acknowledging a Blade Server, on page 96.
2. UCS-A# commit-buffer
DETAILED STEPS
Example
The following example decommissions blade server 4 in chassis 2 and commits the transaction:
UCS-A# decommission server 2/4
UCS-A* # commit-buffer
UCS-A #
DETAILED STEPS
Step 2: UCS-A /chassis/server # enable locator-led [multi-master | multi-slave]
Turns on the blade server locator LED. For the Cisco UCS B460 M4 blade server, you can add the following keywords:
• multi-master—Turns on the LED for the master node only.
• multi-slave—Turns on the LED for the slave node only.
Step 3: UCS-A /chassis/server # commit-buffer
Commits the transaction to the system configuration.
Example
The following example turns on the locator LED on blade server 4 in chassis 2 and commits the
transaction:
The following example turns on the locator LED for the master node only on blade server 7 in chassis
2 and commits the transaction:
UCS-A# scope server 2/7
UCS-A /chassis/server # enable locator-led multi-master
UCS-A /chassis/server* # commit-buffer
UCS-A /chassis/server #
DETAILED STEPS
Step 2: UCS-A /chassis/server # disable locator-led [multi-master | multi-slave]
Turns off the blade server locator LED. For the Cisco UCS B460 M4 blade server, you can add the following keywords:
• multi-master—Turns off the LED for the master node only.
• multi-slave—Turns off the LED for the slave node only.
Step 3: UCS-A /chassis/server # commit-buffer
Commits the transaction to the system configuration.
Example
The following example turns off the locator LED on blade server 4 in chassis 2 and commits the
transaction:
UCS-A# scope server 2/4
UCS-A /chassis/server # disable locator-led
UCS-A /chassis/server* # commit-buffer
UCS-A /chassis/server #
The following example turns off the locator LED for the master node on blade server 7 in chassis 2
and commits the transaction:
SUMMARY STEPS
1. UCS-A# scope server chassis-num / server-num
2. UCS-A /chassis/server # reset-cmos
3. UCS-A /chassis/server # commit-buffer
DETAILED STEPS
Step 2 UCS-A /chassis/server # reset-cmos Resets the CMOS for the blade server.
Step 3 UCS-A /chassis/server # commit-buffer Commits the transaction to the system configuration.
Example
The following example resets the CMOS for blade server 4 in chassis 2 and commits the transaction:
UCS-A# scope server 2/4
UCS-A /chassis/server # reset-cmos
UCS-A /chassis/server* # commit-buffer
UCS-A /chassis/server #
SUMMARY STEPS
1. UCS-A# scope server chassis-num / server-num
DETAILED STEPS
Step 2 UCS-A /chassis/server # scope CIMC Enters CIMC mode for the blade server.
Step 3 UCS-A /chassis/server/CIMC # reset Resets the CIMC for the blade server.
Step 4 UCS-A /chassis/server/CIMC # commit-buffer Commits the transaction to the system configuration.
Example
The following example resets the CIMC for blade server 4 in chassis 2 and commits the transaction:
UCS-A# scope server 2/4
UCS-A /chassis/server # scope CIMC
UCS-A /chassis/server/cimc # reset
UCS-A /chassis/server/cimc* # commit-buffer
UCS-A /chassis/server/cimc #
Caution Clearing the TPM is a potentially hazardous operation. The OS might fail to boot, and data loss can occur.
SUMMARY STEPS
1. UCS-A# scope server [chassis-num/server-num | dynamic-uuid]
2. UCS-A /chassis/server # scope tpm tpm-ID
3. UCS-A /chassis/server/tpm # set adminaction clear-config
4. UCS-A /chassis/server/tpm # commit-buffer
Cisco UCS Manager Infrastructure Management Using the CLI, Release 4.0
101
Blade Server Hardware Management
Resetting the BIOS Password for a Blade Server
DETAILED STEPS
Step 3: UCS-A /chassis/server/tpm # set adminaction clear-config
Specifies that the TPM is to be cleared.
Step 4: UCS-A /chassis/server/tpm # commit-buffer
Commits the transaction to the system configuration.
Example
The following example shows how to clear TPM for a blade server:
UCS-A# scope server 2/4
UCS-A /chassis/server # scope tpm 1
UCS-A /chassis/server/tpm # set adminaction clear-config
UCS-A /chassis/server/tpm* # commit-buffer
UCS-A /chassis/server/tpm #
SUMMARY STEPS
1. UCS-A# scope server [chassis-num/server-num | dynamic-uuid]
2. UCS-A /chassis/server # diagnostic-interrupt
3. UCS-A /chassis/server* # commit-buffer
DETAILED STEPS
Example
The following example sends an NMI from server 4 in chassis 2 and commits the transaction:
UCS-A# scope server 2/4
UCS-A /chassis/server # diagnostic-interrupt
UCS-A /chassis/server* # commit-buffer
UCS-A /chassis/server #
Name Description
Severity column The severity of the alarm. This can be one of the following:
• Critical—The blade health LED is blinking amber.
• Minor—The blade health LED is amber.
Sensor Name column The name of the sensor that triggered the alarm.
DETAILED STEPS
Step 2: UCS-A /chassis/server # show health-led expand
Displays the health LED and sensor alarms for the selected server.
Example
The following example shows how to display the health LED status and sensor alarms for chassis 1
server 3:
UCS-A# scope server 1/3
UCS-A /chassis/server # show health-led expand
Health LED:
Severity: Normal
Reason:
Color: Green
Oper State: On
UCS-A /chassis/server #
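If you capture this output in a script, the key/value lines are straightforward to parse. The helper below is hypothetical (its field names come from the sample output above), not a Cisco-provided API:

```python
# Parse the key/value lines of "show health-led expand" sample output.
# Hypothetical helper for scripted monitoring; not a Cisco-provided API.

def parse_health_led(output):
    """Return a dict of the Health LED fields from CLI output."""
    fields = {}
    for line in output.splitlines():
        line = line.strip()
        if ":" in line:
            key, _, value = line.partition(":")
            if key != "Health LED":          # skip the section header
                fields[key.strip()] = value.strip()
    return fields

sample = """Health LED:
Severity: Normal
Reason:
Color: Green
Oper State: On"""
```

A Severity other than Normal, together with the Color field, maps back to the alarm table above.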
Smart SSD
Beginning with release 3.1(3), Cisco UCS Manager supports monitoring SSD health through a feature called Smart SSD. It provides statistical information about properties such as wear status in days and percentage life remaining. For every property, a minimum, a maximum, and an average value are recorded and displayed. The feature also allows you to set threshold limits for the properties.
Note The Smart SSD feature is supported only for a selected range of SSDs. It is not supported for any HDDs.
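The per-property recording of minimum, maximum, and average values, plus an optional threshold, can be sketched as a small rolling-statistics helper. This is a conceptual Python illustration, not the UCS Manager implementation:

```python
# Rolling min/max/average tracking with an optional threshold,
# sketching the Smart SSD property monitoring described above.
# Conceptual illustration only; not UCS Manager code.

class PropertyStats:
    def __init__(self, threshold=None):
        self.samples = []
        self.threshold = threshold

    def record(self, value):
        """Record one sample of the monitored property."""
        self.samples.append(value)

    def summary(self):
        """Return the min, max, and average of all samples so far."""
        return {
            "min": min(self.samples),
            "max": max(self.samples),
            "avg": sum(self.samples) / len(self.samples),
        }

    def exceeded(self):
        """True if any sample crossed the configured threshold."""
        return self.threshold is not None and max(self.samples) > self.threshold
```

A property such as wear status in days would be recorded over time and compared against the configured threshold limit.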
• Micron
SUMMARY STEPS
1. UCS-A# scope server chassis-id / server-id
2. UCS-A /chassis/server # show stats
DETAILED STEPS
Step 2 UCS-A /chassis/server # show stats Displays the SSD health statistics for the specified server.
Example
The following example displays the SSD health statistics for blade 3 in chassis 1:
UCS-A# scope server 1/3
UCS-A /chassis/server # show stats
CHAPTER 9
Rack-Mount Server Hardware Management
• Rack-Mount Server Management, on page 107
• Rack-Enclosure Server Management, on page 108
• Guidelines for Removing and Decommissioning Rack-Mount Servers, on page 109
• Recommendations for Avoiding Unexpected Server Power Changes, on page 109
• Booting a Rack-Mount Server, on page 110
• Shutting Down a Rack-Mount Server, on page 111
• Resetting a Rack-Mount Server to Factory Default Settings, on page 112
• Performing Persistent Memory Scrub, on page 113
• Power Cycling a Rack-Mount Server, on page 114
• Performing a Hard Reset on a Rack-Mount Server, on page 114
• Acknowledging a Rack-Mount Server, on page 115
• Decommissioning a Rack-Mount Server, on page 116
• Recommissioning a Rack-Mount Server, on page 116
• Renumbering a Rack-Mount Server, on page 117
• Removing a Rack-Mount Server, on page 119
• Turning On the Locator LED for a Rack-Mount Server, on page 119
• Turning Off the Locator LED for a Rack-Mount Server, on page 120
• Resetting the CMOS for a Rack-Mount Server, on page 120
• Resetting the CIMC for a Rack-Mount Server, on page 121
• Clearing TPM for a Rack-Mount Server, on page 122
• Showing the Status for a Rack-Mount Server, on page 123
• Issuing an NMI from a Rack-Mount Server, on page 123
• Viewing the Power Transition Log, on page 124
• Viewing Rack Enclosure Slot Statistics, on page 125
Tip For information on how to integrate a supported Cisco UCS rack-mount server with Cisco UCS Manager,
see the Cisco UCS C-series server integration guide or Cisco UCS S-series server integration guide for
your Cisco UCS Manager release.
The fan-module and psu components can be managed in the same way as on other rack servers. For slot, see Viewing Rack Enclosure Slot Statistics, on page 125.
You can also use the show command to view the following in rack-enclosure mode:
• detail
• event
• expand
• fan-module
• fault
• fsm
• psu
• slot
• stats
Note Only those servers added to a server pool automatically during discovery will be removed automatically.
Servers that have been manually added to a server pool have to be removed manually.
If you need to add a removed rack-mount server back to the configuration, it must be reconnected and then rediscovered. When a server is reintroduced to Cisco UCS Manager, it is treated like a new server and is subject to the deep discovery process. For this reason, Cisco UCS Manager might assign the server a new ID that differs from the ID it held before.
Important Do not use any of the following options on an associated server that is currently powered off:
• Reset in the GUI
• cycle cycle-immediate or reset hard-reset-immediate in the CLI
• The physical Power or Reset buttons on the server
If you reset, cycle, or use the physical power buttons on a server that is currently powered off, the server's
actual power state might become out of sync with the desired power state setting in the service profile. If the
communication between the server and Cisco UCS Manager is disrupted or if the service profile configuration
changes, Cisco UCS Manager might apply the desired power state from the service profile to the server,
causing an unexpected power change.
Power synchronization issues can lead to an unexpected server restart. The outcome depends on the desired power state in the service profile, the current server power state, and the resulting server power state after communication is disrupted.
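The out-of-sync condition described above can be modeled in a few lines. The following Python sketch is purely illustrative of why a stale desired state causes an unexpected power change; it is not UCS Manager code:

```python
# Conceptual model of desired-vs-actual power state drift
# (illustration of the caution above; not UCS Manager code).

class Server:
    def __init__(self, desired, actual):
        self.desired = desired    # power state recorded in the service profile
        self.actual = actual      # real power state of the hardware

    def physical_button(self, state):
        # The physical Power/Reset buttons change only the actual state;
        # the service profile's desired state is NOT updated.
        self.actual = state

    def resync(self):
        # When communication is restored or the profile changes,
        # UCS Manager reapplies the desired state from the profile.
        self.actual = self.desired
```

In the scenario the caution warns about, an administrator powers on a server at the hardware while the profile still records "down"; the next resync shuts the server down unexpectedly.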
SUMMARY STEPS
1. UCS-A# scope org org-name
2. UCS-A /org # scope service-profile profile-name
3. UCS-A /org/service-profile # power up
4. UCS-A /org/service-profile # commit-buffer
DETAILED STEPS
Step 3: UCS-A /org/service-profile # power up
Boots the rack-mount server associated with the service profile.
Step 4: UCS-A /org/service-profile # commit-buffer
Commits the transaction to the system configuration.
Example
The following example boots the rack-mount server associated with the service profile named
ServProf34 and commits the transaction:
UCS-A# scope org /
UCS-A /org # scope service-profile ServProf34
UCS-A /org/service-profile # power up
UCS-A /org/service-profile* # commit-buffer
UCS-A /org/service-profile #
SUMMARY STEPS
1. UCS-A# scope org org-name
2. UCS-A /org # scope service-profile profile-name
3. UCS-A /org/service-profile # power down
4. UCS-A /org/service-profile # commit-buffer
DETAILED STEPS
Step 2: UCS-A /org # scope service-profile profile-name
Enters organization service profile mode for the specified service profile.
Step 3: UCS-A /org/service-profile # power down
Shuts down the rack-mount server associated with the service profile.
Step 4: UCS-A /org/service-profile # commit-buffer
Commits the transaction to the system configuration.
Example
The following example shuts down the rack-mount server associated with the service profile named
ServProf34 and commits the transaction:
UCS-A# scope org /
UCS-A /org # scope service-profile ServProf34
UCS-A /org/service-profile # power down
UCS-A /org/service-profile* # commit-buffer
UCS-A /org/service-profile #
Perform the following procedure if you need to reset the server to factory default settings.
SUMMARY STEPS
1. UCS-A# scope server server-num
2. UCS-A /server # reset factory-default [delete-flexflash-storage | delete-storage
[create-initial-storage-volumes] ]
3. UCS-A /server # commit-buffer
DETAILED STEPS
Step 2: UCS-A /server # reset factory-default [delete-flexflash-storage | delete-storage [create-initial-storage-volumes]]
Resets server settings to factory default using the following command options:
• factory-default—Resets the server to factory defaults without deleting storage
• delete-flexflash-storage—Resets the server to factory defaults and deletes FlexFlash storage
• delete-storage—Resets the server to factory defaults and deletes all storage
• create-initial-storage-volumes—Resets the server to factory defaults, deletes all storage, and sets all disks to their initial state
Step 3 UCS-A /server # commit-buffer Commits the transaction to the system configuration.
Example
The following example resets the server settings to factory default without deleting storage, and
commits the transaction:
UCS-A# scope server 2
UCS-A /server # reset factory-default
UCS-A /server* # commit-buffer
UCS-A /server #
The following example resets the server settings to factory default, deletes flexflash storage, and
commits the transaction:
UCS-A# scope server 2
UCS-A /server # reset factory-default delete-flexflash-storage
UCS-A /server* # commit-buffer
The following example resets the server settings to factory default, deletes all storage, and commits
the transaction:
UCS-A# scope server 2
UCS-A /server # reset factory-default delete-storage
UCS-A /server* # commit-buffer
The following example resets the server settings to factory default, deletes all storage, sets all disks
to their initial state, and commits the transaction:
UCS-A# scope server 2
UCS-A /server # reset factory-default delete-storage create-initial-storage-volumes
UCS-A /server* # commit-buffer
DETAILED STEPS
Step 2: UCS-A /server # cycle {cycle-immediate | cycle-wait}
Power cycles the rack-mount server. Use the cycle-immediate keyword to immediately begin power cycling the rack-mount server; use the cycle-wait keyword to schedule the power cycle to begin after all pending management operations have completed.
Example
The following example immediately power cycles rack-mount server 2 and commits the transaction:
UCS-A# scope server 2
UCS-A /server # cycle cycle-immediate
UCS-A /server* # commit-buffer
UCS-A /server #
Note If you are trying to boot a server from a power-down state, you should not use Reset.
If you continue the power-up with this process, the desired power state of the servers becomes out of
sync with the actual power state and the servers might unexpectedly shut down at a later time. To safely
reboot the selected servers from a power-down state, click Cancel, then select the Boot Server action.
SUMMARY STEPS
1. UCS-A# scope server server-num
2. UCS-A /server # reset {hard-reset-immediate | hard-reset-wait}
3. UCS-A /server # commit-buffer
DETAILED STEPS
Step 2: UCS-A /server # reset {hard-reset-immediate | hard-reset-wait}
Performs a hard reset of the rack-mount server. Use the hard-reset-immediate keyword to immediately begin hard resetting the rack-mount server; use the hard-reset-wait keyword to schedule the hard reset to begin after all pending management operations have completed.
Step 3: UCS-A /server # commit-buffer
Commits the transaction to the system configuration.
Example
The following example performs an immediate hard reset of rack-mount server 2 and commits the
transaction:
UCS-A# scope server 2
UCS-A /server # reset hard-reset-immediate
UCS-A /server* # commit-buffer
UCS-A /server #
SUMMARY STEPS
1. UCS-A# acknowledge server server-num
2. UCS-A# commit-buffer
DETAILED STEPS
Example
The following example acknowledges rack-mount server 2 and commits the transaction:
UCS-A# acknowledge server 2
UCS-A* # commit-buffer
UCS-A #
DETAILED STEPS
Example
The following example decommissions rack-mount server 2 and commits the transaction:
UCS-A# decommission server 2
UCS-A* # commit-buffer
UCS-A #
DETAILED STEPS
Example
The following example recommissions rack-mount server 2 and commits the transaction:
UCS-A# recommission server 2
UCS-A* # commit-buffer
UCS-A #
SUMMARY STEPS
1. UCS-A# show server inventory
2. Verify that the server inventory does not include the following:
3. UCS-A# recommission server vendor-name model-name serial-num new-id
4. UCS-A# commit-buffer
DETAILED STEPS
Step 2: Verify that the server inventory does not include the following:
• The rack-mount server you want to renumber
• A rack-mount server with the number you want to use
Step 3: UCS-A# recommission server vendor-name model-name serial-num new-id
Recommissions and renumbers the specified rack-mount server.
Example
The following example decommissions a rack-mount server with ID 2, changes the ID to 3,
recommissions that server, and commits the transaction:
UCS-A# show server inventory
Server Equipped PID Equipped VID Equipped Serial (SN) Slot Status Ackd Memory (MB)
Ackd Cores
------- ------------ ------------ -------------------- ---------------- ----------------
----------
1/1 UCSB-B200-M4 V01 FCH1532718P Equipped 131072
16
1/2 UCSB-B200-M4 V01 FCH153271DF Equipped 131072
16
1/3 UCSB-B200-M4 V01 FCH153271DL Equipped 114688
16
1/4 UCSB-B200-M4 V01 Empty
1/5 Empty
1/6 Empty
1/7 N20-B6730-1 V01 JAF1432CFDH Equipped 65536
16
1/8 Empty
1 R200-1120402W V01 QCI1414A02J N/A 49152
12
2 R210-2121605W V01 QCI1442AHFX N/A 24576 8
4 UCSC-BSE-SFF-C200 V01 QCI1514A0J7 N/A 8192 8
Server Equipped PID Equipped VID Equipped Serial (SN) Slot Status Ackd Memory (MB)
Ackd Cores
------- ------------ ------------ -------------------- ---------------- ----------------
----------
1/1 UCSB-B200-M4 V01 FCH1532718P Equipped 131072
16
1/2 UCSB-B200-M4 V01 FCH153271DF Equipped 131072
16
1/3 UCSB-B200-M4 V01 FCH153271DL Equipped 114688
16
1/4 UCSB-B200-M4 V01 Empty
1/5 Empty
1/6 Empty
1/7 N20-B6730-1 V01 JAF1432CFDH Equipped 65536
16
1/8 Empty
1 R200-1120402W V01 QCI1414A02J N/A 49152
12
3 R210-2121605W V01 QCI1442AHFX N/A 24576 8
4 UCSC-BSE-SFF-C200 V01 QCI1514A0J7 N/A 8192 8
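Step 2 of this procedure amounts to two checks against the inventory before recommissioning. The following hypothetical Python helper (not part of the CLI) expresses those preconditions:

```python
# Validate the renumbering preconditions from Step 2: neither the
# server being renumbered nor the target ID may still appear in
# "show server inventory". Illustrative helper only.

def can_renumber(inventory_ids, old_id, new_id):
    """Return True if it is safe to recommission old_id as new_id."""
    if old_id in inventory_ids:
        return False   # the server must be decommissioned first
    if new_id in inventory_ids:
        return False   # the target ID must not already be in use
    return True
```

In the example above, server 2 is absent from the inventory after decommissioning and ID 3 is unused, so the recommission with the new ID can proceed.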
SUMMARY STEPS
1. UCS-A# remove server server-num
2. UCS-A# commit-buffer
DETAILED STEPS
Example
The following example removes rack-mount server 4 and commits the transaction:
UCS-A# remove server 4
UCS-A* # commit-buffer
UCS-A #
What to do next
If you physically reconnect the rack-mount server, you must re-acknowledge it for Cisco UCS Manager to rediscover the server.
For more information, see Acknowledging a Rack-Mount Server, on page 115.
DETAILED STEPS
Step 3 UCS-A /server # commit-buffer Commits the transaction to the system configuration.
Example
The following example turns on the locator LED for rack-mount server 2 and commits the transaction:
UCS-A# scope server 2
UCS-A /server # enable locator-led
UCS-A /server* # commit-buffer
UCS-A /server #
DETAILED STEPS
Step 2 UCS-A /server # disable locator-led Turns off the rack-mount server locator LED.
Step 3 UCS-A /server # commit-buffer Commits the transaction to the system configuration.
Example
The following example turns off the locator LED for rack-mount server 2 and commits the transaction:
UCS-A# scope server 2
UCS-A /server # disable locator-led
UCS-A /server* # commit-buffer
UCS-A /server #
SUMMARY STEPS
1. UCS-A# scope server server-num
2. UCS-A /server # reset-cmos
3. UCS-A /server # commit-buffer
DETAILED STEPS
Step 2 UCS-A /server # reset-cmos Resets the CMOS for the rack-mount server.
Step 3 UCS-A /server # commit-buffer Commits the transaction to the system configuration.
Example
The following example resets the CMOS for rack-mount server 2 and commits the transaction:
UCS-A# scope server 2
UCS-A /server # reset-cmos
UCS-A /server* # commit-buffer
UCS-A /server #
SUMMARY STEPS
1. UCS-A# scope server server-num
2. UCS-A /server # scope CIMC
3. UCS-A /server/CIMC # reset
4. UCS-A /server/CIMC # commit-buffer
DETAILED STEPS
Step 3 UCS-A /server/CIMC # reset Resets the CIMC for the rack-mount server.
Step 4 UCS-A /server/CIMC # commit-buffer Commits the transaction to the system configuration.
Example
The following example resets the CIMC for rack-mount server 2 and commits the transaction:
UCS-A# scope server 2
UCS-A /server # scope CIMC
UCS-A /server/cimc # reset
UCS-A /server/cimc* # commit-buffer
UCS-A /server/cimc #
Caution Clearing the TPM is a potentially hazardous operation. The OS might fail to boot, and data loss can occur.
SUMMARY STEPS
1. UCS-A# scope server server-num
2. UCS-A /server # scope tpm tpm-ID
3. UCS-A /server/tpm # set adminaction clear-config
4. UCS-A /server/tpm # commit-buffer
DETAILED STEPS
Step 2: UCS-A /server # scope tpm tpm-ID
Enters TPM mode for the specified TPM.
Step 3: UCS-A /server/tpm # set adminaction clear-config
Specifies that the TPM is to be cleared.
Step 4: UCS-A /server/tpm # commit-buffer
Commits the transaction to the system configuration.
Example
The following example shows how to clear TPM for a rack-mount server:
UCS-A# scope server 2
UCS-A /server # scope tpm 1
UCS-A /server/tpm # set adminaction clear-config
UCS-A /server/tpm* # commit-buffer
UCS-A /server/tpm #
DETAILED STEPS
Example
The following example shows the status for all servers in the Cisco UCS domain. The servers
numbered 1 and 2 do not have a slot listed in the table because they are rack-mount servers.
SUMMARY STEPS
1. UCS-A# scope server [chassis-num/server-num | dynamic-uuid]
2. UCS-A /chassis/server # diagnostic-interrupt
3. UCS-A /chassis/server* # commit-buffer
DETAILED STEPS
Example
The following example sends an NMI from server 4 in chassis 2 and commits the transaction:
UCS-A# scope server 2/4
UCS-A /chassis/server # diagnostic-interrupt
UCS-A /chassis/server* # commit-buffer
UCS-A /chassis/server #
Step 2: UCS-A /chassis/server # show power-transition-log
Displays the computeRebootLog instances for the specified server.
Example
The following example shows how to view the power transition log for server 3.
SUMMARY STEPS
1. UCS-A# scope rack-enclosure rack-enclosure-num
2. UCS-A# /rack-enclosure # show slot
3. UCS-A# /rack-enclosure # scope slot slot_ID
4. UCS-A# /rack-enclosure/slot # show detail
DETAILED STEPS
Example
The following example shows how to view the slot statistics for a rack enclosure and the details for an individual slot:
UCS-A# scope rack-enclosure 1
UCS-A /rack-enclosure # show slot
Slot:
Id Presence State
---------- --------------
1 Equipped
2 Empty
3 Equipped
4 Empty
UCS-A /rack-enclosure # scope slot 1
UCS-A /rack-enclosure/slot # show detail
Slot:
Id: 1
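The slot table in this output is easy to consume from a monitoring script. The parser below is a hypothetical helper written against the sample output shown above, not a Cisco-provided API:

```python
# Parse the "show slot" table from the sample output above into
# a slot-ID -> presence-state mapping. Hypothetical helper only.

def parse_slots(output):
    slots = {}
    for line in output.splitlines():
        parts = line.split()
        # Data rows start with a numeric slot ID followed by a state.
        if len(parts) == 2 and parts[0].isdigit():
            slots[int(parts[0])] = parts[1]
    return slots

sample = """Slot:
Id Presence State
---------- --------------
1 Equipped
2 Empty
3 Equipped
4 Empty"""
```

The header and separator rows are skipped because their first token is not a slot number.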
CHAPTER 10
S3X60 Server Node Hardware Management
• Cisco UCS S3260 Server Node Management, on page 127
• Booting a Server from the Service Profile, on page 128
• Acknowledging a Server, on page 128
• Power Cycling a Server, on page 129
• Shutting Down a Server, on page 130
• Performing a Hard Reset on a Server, on page 130
• Resetting a Cisco UCS S3260 Server Node to Factory Default Settings, on page 131
• Removing a Server from a Chassis, on page 133
• Decommissioning a Server, on page 134
• Recommissioning a Server, on page 134
• Turning On the Locator LED for a Server, on page 135
• Turning Off the Locator LED for a Server, on page 136
• Resetting All Memory Errors, on page 137
• Resetting IPMI to Factory Default Settings, on page 137
• Resetting the CIMC for a Server, on page 138
• Resetting the CMOS for a Server, on page 139
• Resetting KVM, on page 139
• Issuing an NMI from a Server, on page 140
• Recovering a Corrupt BIOS, on page 141
• Health LED Alarms, on page 141
SUMMARY STEPS
1. UCS-A# scope org org-name
2. UCS-A /org # scope service-profile profile-name
3. UCS-A /org/service-profile # power up
4. UCS-A /org/service-profile* # commit-buffer
DETAILED STEPS
Step 2: UCS-A /org # scope service-profile profile-name
Enters organization service profile mode for the specified service profile.
Step 3: UCS-A /org/service-profile # power up
Boots the server associated with the service profile.
Step 4: UCS-A /org/service-profile* # commit-buffer
Commits the transaction to the system configuration.
Example
The following example boots the server associated with the service profile named ServProf34 and
commits the transaction:
UCS-A# scope org /
UCS-A /org # scope service-profile ServProf34
UCS-A /org/service-profile # power up
UCS-A /org/service-profile* # commit-buffer
UCS-A /org/service-profile #
Acknowledging a Server
Perform the following procedure to rediscover the server and all endpoints in the server. For example, you
can use this procedure if a server is stuck in an unexpected state, such as the discovery state.
SUMMARY STEPS
1. UCS-A# acknowledge server chassis-num / server-num
2. UCS-A*# commit-buffer
DETAILED STEPS
Example
The following example acknowledges server 1 in chassis 3 and commits the transaction:
UCS-A# acknowledge server 3/1
UCS-A* # commit-buffer
UCS-A #
DETAILED STEPS
Step 3 UCS-A /chassis/server* # commit-buffer Commits the transaction to the system configuration.
Example
The following example immediately power cycles server 1 in chassis 3 and commits the transaction:
UCS-A# scope server 3/1
UCS-A /chassis/server # cycle cycle-immediate
UCS-A /chassis/server* # commit-buffer
UCS-A /chassis/server #
SUMMARY STEPS
1. UCS-A# scope org org-name
2. UCS-A /org # scope service-profile profile-name
3. UCS-A /org/service-profile # power down
4. UCS-A /org/service-profile* # commit-buffer
DETAILED STEPS
Step 2: UCS-A /org # scope service-profile profile-name
Enters organization service profile mode for the specified service profile.
Step 3: UCS-A /org/service-profile # power down
Shuts down the server associated with the service profile.
Step 4: UCS-A /org/service-profile* # commit-buffer
Commits the transaction to the system configuration.
Example
The following example shuts down the server associated with the service profile named ServProf34
and commits the transaction:
UCS-A# scope org /
UCS-A /org # scope service-profile ServProf34
UCS-A /org/service-profile # power down
UCS-A /org/service-profile* # commit-buffer
UCS-A /org/service-profile #
Note If you are trying to boot a server from a power-down state, you should not use Reset.
If you continue the power-up with this process, the desired power state of the servers becomes out of
sync with the actual power state and the servers might unexpectedly shut down at a later time. To safely
reboot the selected servers from a power-down state, click Cancel, then select the Boot Server action.
SUMMARY STEPS
1. UCS-A# scope server chassis-num / server-num
2. UCS-A /chassis/server # reset {hard-reset-immediate | hard-reset-wait}
3. UCS-A /server* # commit-buffer
DETAILED STEPS
Step 2 UCS-A /chassis/server # reset {hard-reset-immediate | hard-reset-wait}
Performs a hard reset of the server. Use the:
• hard-reset-immediate keyword to immediately begin hard resetting the server.
• hard-reset-wait keyword to schedule the hard reset to begin after all pending management operations have completed.
Step 3 UCS-A /server* # commit-buffer Commits the transaction to the system configuration.
Example
The following example performs an immediate hard reset of server 1 in chassis 3 and commits the
transaction:
UCS-A# scope server 3/1
UCS-A /chassis/server # reset hard-reset-immediate
UCS-A /chassis/server* # commit-buffer
UCS-A /chassis/server #
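The immediate/wait choice maps cleanly to a boolean in a script. A hedged sketch (illustrative helper, not a UCS Manager API):

```python
def hard_reset_commands(chassis: int, server: int, wait: bool = False) -> list[str]:
    """Build the UCSM CLI sequence for a hard server reset.

    wait=False issues hard-reset-immediate; wait=True issues hard-reset-wait,
    which defers the reset until pending management operations complete.
    """
    keyword = "hard-reset-wait" if wait else "hard-reset-immediate"
    return [
        f"scope server {chassis}/{server}",
        f"reset {keyword}",
        "commit-buffer",
    ]
```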
Resetting a Cisco UCS S3260 Server Node to Factory Default Settings
The following guidelines apply to Cisco UCS S3260 Server Nodes when using scrub policies:
• For Cisco UCS S3260 Server Nodes, you cannot delete storage by using the scrub policy.
• Cisco UCS S3260 Server Nodes do not support FlexFlash drives.
• For Cisco UCS S3260 Server Nodes, you can only reset the BIOS by using the scrub policy.
Perform the following procedure to reset the server to factory default settings.
SUMMARY STEPS
1. UCS-A# scope server chassis-num / server-num
2. UCS-A /chassis/server # reset factory-default [delete-flexflash-storage | delete-storage
[create-initial-storage-volumes] ]
3. UCS-A /chassis/server* # commit-buffer
DETAILED STEPS
Step 2 UCS-A /chassis/server # reset factory-default [delete-flexflash-storage | delete-storage [create-initial-storage-volumes]]
Resets server settings to factory default using the following command options:
• factory-default—Resets the server to factory defaults without deleting storage
• delete-flexflash-storage—Resets the server to factory defaults and deletes FlexFlash storage
• delete-storage—Resets the server to factory defaults and deletes all storage
• create-initial-storage-volumes—Sets all disks to their initial state; available only with delete-storage
Note This operation resets the BIOS.
Example
The following example resets the server settings to factory default without deleting storage, and
commits the transaction:
UCS-A# scope server 3/1
UCS-A /chassis/server # reset factory-default
UCS-A /chassis/server* # commit-buffer
The following example resets the server settings to factory default, deletes FlexFlash storage, and
commits the transaction:
UCS-A# scope server 3/1
UCS-A /chassis/server # reset factory-default delete-flexflash-storage
UCS-A /chassis/server* # commit-buffer
The following example resets the server settings to factory default, deletes all storage, and commits
the transaction:
UCS-A# scope server 3/1
UCS-A /chassis/server # reset factory-default delete-storage
UCS-A /chassis/server* # commit-buffer
The following example resets the server settings to factory default, deletes all storage, sets all disks
to their initial state, and commits the transaction:
UCS-A# scope server 3/1
UCS-A /chassis/server # reset factory-default delete-storage create-initial-storage-volumes
UCS-A /chassis/server* # commit-buffer
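The bracket syntax above encodes two constraints worth enforcing in automation: delete-flexflash-storage and delete-storage are alternatives, and create-initial-storage-volumes is valid only with delete-storage. A hedged sketch (illustrative helper, not a UCS Manager API):

```python
def factory_default_commands(
    chassis: int,
    server: int,
    delete_flexflash_storage: bool = False,
    delete_storage: bool = False,
    create_initial_storage_volumes: bool = False,
) -> list[str]:
    """Build the UCSM CLI sequence to reset a server to factory defaults,
    enforcing the documented option syntax."""
    if delete_flexflash_storage and delete_storage:
        raise ValueError("delete-flexflash-storage and delete-storage are exclusive")
    if create_initial_storage_volumes and not delete_storage:
        raise ValueError("create-initial-storage-volumes requires delete-storage")
    cmd = "reset factory-default"
    if delete_flexflash_storage:
        cmd += " delete-flexflash-storage"
    if delete_storage:
        cmd += " delete-storage"
        if create_initial_storage_volumes:
            cmd += " create-initial-storage-volumes"
    return [f"scope server {chassis}/{server}", cmd, "commit-buffer"]
```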
Removing a Server from a Chassis
DETAILED STEPS
Step 3 Go to the physical location of the chassis and remove the server hardware from the slot.
For instructions on how to remove the server hardware, see the Cisco UCS Hardware Installation Guide for your chassis.
Example
The following example removes server 1 in chassis 3 and commits the transaction:
UCS-A# remove server 3/1
UCS-A* # commit-buffer
UCS-A #
What to do next
If you physically re-install the blade server, you must re-acknowledge the slot for Cisco UCS Manager to
rediscover the server.
For more information, see Acknowledging a Server, on page 128.
Decommissioning a Server
SUMMARY STEPS
1. UCS-A# decommission server chassis-num / server-num
2. UCS-A*# commit-buffer
DETAILED STEPS
Example
The following example decommissions server 1 in chassis 3 and commits the transaction:
UCS-A# decommission server 3/1
UCS-A* # commit-buffer
UCS-A #
Recommissioning a Server
SUMMARY STEPS
1. UCS-A# recommission server chassis-num / server-num
2. UCS-A*# commit-buffer
DETAILED STEPS
Example
The following example recommissions server 1 in chassis 3 and commits the transaction:
UCS-A# recommission server 3/1
UCS-A* # commit-buffer
UCS-A #
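The acknowledge, remove, decommission, and recommission procedures all share one shape: a single top-level command followed by commit-buffer. A hedged sketch of a generic helper (illustrative, not a UCS Manager API):

```python
LIFECYCLE_ACTIONS = ("acknowledge", "remove", "decommission", "recommission")

def lifecycle_commands(action: str, chassis: int, server: int) -> list[str]:
    """Build the two-command UCSM CLI sequence for a server lifecycle action."""
    if action not in LIFECYCLE_ACTIONS:
        raise ValueError(f"unsupported action: {action}")
    return [f"{action} server {chassis}/{server}", "commit-buffer"]
```

lifecycle_commands("decommission", 3, 1) reproduces the decommission example above.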
Turning On the Locator LED for a Server
DETAILED STEPS
Step 2 UCS-A /chassis/server # enable locator-led [multi-master | multi-slave]
Turns on the server locator LED. The following command options are not applicable to Cisco UCS S3260 Server Nodes:
• multi-master—Turns on the LED for the master node only.
• multi-slave—Turns on the LED for the slave node only.
Step 3 UCS-A /chassis/server* # commit-buffer Commits the transaction to the system configuration.
Example
The following example turns on the locator LED on server 1 in chassis 3 and commits the transaction:
UCS-A# scope server 3/1
UCS-A /chassis/server # enable locator-led
UCS-A /chassis/server* # commit-buffer
UCS-A /chassis/server #
The following example turns on the locator LED for the master node only on server 1 in chassis 3
and commits the transaction:
UCS-A# scope server 3/1
UCS-A /chassis/server # enable locator-led multi-master
UCS-A /chassis/server* # commit-buffer
UCS-A /chassis/server #
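The enable and disable forms differ only in the verb and optional node keyword, so one helper covers both. A hedged sketch (illustrative, not a UCS Manager API):

```python
def locator_led_commands(chassis: int, server: int, on: bool = True, node=None) -> list[str]:
    """Build the UCSM CLI sequence to turn a server locator LED on or off.

    node may be "multi-master" or "multi-slave" to target one node only;
    per the documentation, these options do not apply to S3260 server nodes.
    """
    if node not in (None, "multi-master", "multi-slave"):
        raise ValueError("node must be None, 'multi-master', or 'multi-slave'")
    verb = "enable" if on else "disable"
    cmd = f"{verb} locator-led" + (f" {node}" if node else "")
    return [f"scope server {chassis}/{server}", cmd, "commit-buffer"]
```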
Turning Off the Locator LED for a Server
DETAILED STEPS
Step 2 UCS-A /chassis/server # disable locator-led [multi-master | multi-slave]
Turns off the server locator LED. The following command options are not applicable to Cisco UCS S3260 Server Nodes:
• multi-master—Turns off the LED for the master node only.
• multi-slave—Turns off the LED for the slave node only.
Step 3 UCS-A /chassis/server* # commit-buffer Commits the transaction to the system configuration.
Example
The following example turns off the locator LED on server 1 in chassis 3 and commits the transaction:
UCS-A# scope server 3/1
UCS-A /chassis/server # disable locator-led
UCS-A /chassis/server* # commit-buffer
UCS-A /chassis/server #
The following example turns off the locator LED for the master node on server 1 in chassis 3 and
commits the transaction:
UCS-A# scope server 3/1
UCS-A /chassis/server # disable locator-led multi-master
Resetting All Memory Errors
SUMMARY STEPS
1. UCS-A# scope server chassis-num / server-num
2. UCS-A /chassis/server # reset-all-memory-errors
3. UCS-A /chassis/server* # commit-buffer
DETAILED STEPS
Step 3 UCS-A /chassis/server* # commit-buffer Commits the transaction to the system configuration.
Example
The following example resets all memory errors encountered by server 1 in chassis 3 and commits the
transaction:
UCS-A# scope server 3/1
UCS-A /chassis/server # reset-all-memory-errors
UCS-A /chassis/server* # commit-buffer
UCS-A /chassis/server #
Resetting IPMI
SUMMARY STEPS
1. UCS-A# scope server chassis-num / server-num
2. UCS-A /chassis/server # reset-ipmi
3. UCS-A /chassis/server* # commit-buffer
DETAILED STEPS
Example
The following example resets the IPMI settings to factory default and commits the transaction:
UCS-A# scope server 3/1
UCS-A /chassis/server # reset-ipmi
UCS-A /chassis/server* # commit-buffer
UCS-A /chassis/server #
Resetting the CIMC for a Server
SUMMARY STEPS
1. UCS-A# scope server chassis-num / server-num
2. UCS-A /chassis/server # scope cimc
3. UCS-A /chassis/server/cimc # reset
4. UCS-A /chassis/server/cimc* # commit-buffer
DETAILED STEPS
Step 2 UCS-A /chassis/server # scope cimc Enters chassis server CIMC mode.
Step 3 UCS-A /chassis/server/cimc # reset Resets the CIMC for the server.
Step 4 UCS-A /chassis/server/cimc* # commit-buffer Commits the transaction to the system configuration.
Example
The following example resets the CIMC for server 1 in chassis 3 and commits the transaction:
UCS-A# scope server 3/1
UCS-A /chassis/server # scope cimc
UCS-A /chassis/server/cimc # reset
UCS-A /chassis/server/cimc* # commit-buffer
UCS-A /chassis/server/cimc #
Resetting the CMOS for a Server
SUMMARY STEPS
1. UCS-A# scope server chassis-num / server-num
2. UCS-A /chassis/server # reset-cmos
3. UCS-A /chassis/server* # commit-buffer
DETAILED STEPS
Step 2 UCS-A /chassis/server # reset-cmos Resets the CMOS for the server.
Step 3 UCS-A /chassis/server* # commit-buffer Commits the transaction to the system configuration.
Example
The following example resets the CMOS for server 1 in chassis 3 and commits the transaction:
UCS-A# scope server 3/1
UCS-A /chassis/server # reset-cmos
UCS-A /chassis/server* # commit-buffer
UCS-A /chassis/server #
Resetting KVM
Perform the following procedure if you need to reset and clear all KVM sessions.
SUMMARY STEPS
1. UCS-A# scope server chassis-num / server-num
2. UCS-A /chassis/server # reset-kvm
3. UCS-A /chassis/server* # commit-buffer
DETAILED STEPS
Step 2 UCS-A /chassis/server # reset-kvm Resets and clears all KVM sessions.
Example
The following example resets and clears all KVM sessions and commits the transaction:
UCS-A# scope server 3/1
UCS-A /chassis/server # reset-kvm
UCS-A /chassis/server* # commit-buffer
UCS-A /chassis/server #
Issuing an NMI from a Server
SUMMARY STEPS
1. UCS-A# scope server chassis-num / server-num
2. UCS-A /chassis/server # diagnostic-interrupt
3. UCS-A /chassis/server* # commit-buffer
DETAILED STEPS
Example
The following example sends an NMI from server 1 in chassis 3 and commits the transaction:
UCS-A# scope server 3/1
UCS-A /chassis/server # diagnostic-interrupt
UCS-A /chassis/server* # commit-buffer
UCS-A /chassis/server #
Recovering a Corrupt BIOS
SUMMARY STEPS
1. UCS-A# scope server chassis-num / server-num
2. UCS-A /chassis/server # recover-bios version
3. UCS-A /chassis/server* # commit-buffer
DETAILED STEPS
Step 2 UCS-A /chassis/server # recover-bios version Loads and activates the specified BIOS version.
Step 3 UCS-A /chassis/server* # commit-buffer Commits the transaction to the system configuration.
Example
The following example shows how to recover the BIOS:
UCS-A# scope server 3/1
UCS-A /chassis/server # recover-bios S5500.0044.0.3.1.010620101125
UCS-A /chassis/server* # commit-buffer
UCS-A /chassis/server #
Viewing Health LED Status
Name Description
Severity column The severity of the alarm. This can be one of the following:
• Critical—The server health LED blinks amber. This is indicated with a red dot.
• Minor—The server health LED is amber. This is indicated with an orange dot.
Sensor Name column The name of the sensor that triggered the alarm.
DETAILED STEPS
Step 2 UCS-A /chassis/server # show health-led expand
Displays the health LED and sensor alarms for the selected server.
Example
The following example shows how to display the health LED status and sensor alarms for chassis 1
server 3:
UCS-A# scope server 1/3
UCS-A /chassis/server # show health-led expand
Health LED:
Severity: Normal
Reason:
Color: Green
Oper State: On
UCS-A /chassis/server #
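When collecting status across many servers, the show health-led output can be parsed mechanically. A minimal sketch, assuming the key/value layout shown in the transcript above:

```python
def parse_health_led(output: str) -> dict:
    """Extract the key/value fields under the 'Health LED:' banner."""
    fields = {}
    in_block = False
    for line in output.splitlines():
        stripped = line.strip()
        if stripped == "Health LED:":
            in_block = True
            continue
        if in_block:
            if ":" in stripped:
                key, _, value = stripped.partition(":")
                fields[key.strip()] = value.strip()
            elif stripped:  # a non-blank line without a colon ends the block
                break
    return fields
```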
CHAPTER 11
Virtual Interface Management
• Virtual Circuits, on page 143
• Virtual Interfaces, on page 143
• Virtual Interface Subscription Management and Error Handling, on page 144
• Virtualization in Cisco UCS, on page 144
Virtual Circuits
A virtual circuit, or virtual path, refers to the path that a frame takes from its source vNIC to its destination
virtual switch port (vEth), or from a source virtual switch port to its destination vNIC. Many possible
virtual circuits can traverse a single physical cable. Cisco UCS Manager uses virtual network tags (VN-TAG)
to identify these virtual circuits and differentiate between them. The virtual circuit that a frame
traverses is determined by a series of decisions.
In the server, the OS decides the Ethernet interface from which to send the frame.
Note During service profile configuration, you can select the fabric interconnect to be associated with a vNIC.
You can also choose whether fabric failover is enabled for the vNIC. If fabric failover is enabled, the
vNIC can access the second fabric interconnect when the default fabric interconnect is unavailable.
Cisco UCS Manager Server Management Guide provides more details about vNIC configuration during
service profile creation.
After the host vNIC is selected, the frame exits the selected vNIC and, through the host interface port (HIF),
enters the IOM to which the vNIC is pinned. The frame is then forwarded to the corresponding network
interface port (NIF) and then to the fabric interconnect to which the IOM is pinned.
The NIF is selected based on the number of physical connections between the IOM and the Fabric Interconnect,
and on the server ID from which the frame originated.
Virtual Interfaces
In a blade server environment, the number of vNICs and vHBAs configurable for a service profile is determined
by adapter capability and the amount of virtual interface (VIF) namespace available on the adapter. In Cisco
UCS, portions of the VIF namespace are allotted in chunks called VIFs. Depending on your hardware, the maximum
number of VIFs is allocated on a predefined, per-port basis.
Virtual Interface Subscription Management and Error Handling
The maximum number of VIFs varies based on hardware capability and port connectivity. For each configured
vNIC or vHBA, one or two VIFs are allocated. Stand-alone vNICs and vHBAs use one VIF and failover
vNICs and vHBAs use two.
The following variables affect the number of VIFs available to a blade server and, therefore, how many vNICs
and vHBAs you can configure for a service profile:
• Maximum number of VIFs supported on your fabric interconnect
• How the fabric interconnects are cabled
• Whether your fabric interconnect and IOM are configured in fabric port channel mode
For more information about the maximum number of VIFs supported by your hardware configuration, see
the appropriate Cisco UCS Configuration Limits for Cisco UCS Manager for your software release.
If you change your configuration in a way that decreases the number of VIFs available to a blade, Cisco UCS
Manager displays a warning and asks whether you want to proceed. This applies to several scenarios, including
those in which adding or moving a connection decreases the number of VIFs.
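The VIF accounting described above is easy to check before committing a service profile change. A sketch, assuming you have looked up the per-adapter VIF limit in the Cisco UCS Configuration Limits document for your release (the limit value in the usage note is a placeholder):

```python
def vifs_needed(interfaces) -> int:
    """Count VIFs consumed by a list of (name, failover_enabled) vNICs/vHBAs.

    Stand-alone vNICs and vHBAs use one VIF; failover-enabled ones use two.
    """
    return sum(2 if failover else 1 for _name, failover in interfaces)

def fits_in_namespace(interfaces, vif_limit: int) -> bool:
    """True if the planned interfaces fit within the available VIF namespace."""
    return vifs_needed(interfaces) <= vif_limit
```

For example, a plan with one stand-alone vNIC and two failover interfaces needs five VIFs, so fits_in_namespace(plan, 4) would flag it.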
Overview of Cisco Virtual Machine Fabric Extender
Important VM-FEX is not supported with Cisco UCS 6454 Fabric Interconnects.
Virtualization with a Virtual Interface Card Adapter
VIC adapters support VM-FEX to provide hardware-based switching of traffic to and from virtual machine
interfaces.
CHAPTER 12
Troubleshoot Infrastructure
• Recovering the Corrupt BIOS on a Blade Server, on page 147
• Recovering the Corrupt BIOS on a Rack-Mount Server, on page 148
Recovering the Corrupt BIOS on a Blade Server
Important Remove all attached or mapped USB storage from a server before you attempt to recover the corrupt
BIOS on that server. If an external USB drive is attached or mapped from vMedia to the server, BIOS
recovery fails.
SUMMARY STEPS
1. UCS-A# scope server chassis-id / server-id
2. UCS-A /chassis/server # recover-bios version
3. UCS-A /chassis/server* # commit-buffer
DETAILED STEPS
Step 2 UCS-A /chassis/server # recover-bios version Loads and activates the specified BIOS version.
Example
The following example shows how to recover the BIOS:
UCS-A# scope server 1/7
UCS-A /chassis/server # recover-bios S5500.0044.0.3.1.010620101125
UCS-A /chassis/server* # commit-buffer
UCS-A /chassis/server #
Recovering the Corrupt BIOS on a Rack-Mount Server
Important Remove all attached or mapped USB storage from a server before you attempt to recover the corrupt
BIOS on that server. If an external USB drive is attached or mapped from vMedia to the server, BIOS
recovery fails.
SUMMARY STEPS
1. UCS-A# scope server server-id
2. UCS-A /server # recover-bios version
3. UCS-A /server* # commit-buffer
DETAILED STEPS
Step 2 UCS-A /server # recover-bios version Loads and activates the specified BIOS version.
Example
The following example shows how to recover the BIOS:
UCS-A# scope server 1
UCS-A /server # recover-bios S5500.0044.0.3.1.010620101125
UCS-A /server* # commit-buffer
UCS-A /server #