

From CCNP Data Center Application Centric Infrastructure 300-620 DCACI Official Cert Guide, by Ammar Ahmadi. Published by Cisco Press, 2021.
Chapter 3
Initializing an ACI Fabric

This chapter covers the following topics:

Understanding ACI Fabric Initialization: This section describes the planning needed prior to fabric initialization and the process of initializing a new ACI fabric.

Initializing an ACI Fabric: This section walks through the process of initializing an ACI fabric.

Basic Post-Initialization Tasks: This section touches on some of the basic tasks often performed right after fabric initialization.

This chapter covers the following exam topics:


 1.4 Describe ACI fabric discovery
 5.1 Implement out-of-band and in-band management
 5.3 Implement configuration backup (snapshot/config import
export)
 5.5 Configure an upgrade
Not all ACI engineers will be initializing new fabrics. Some will be more
operations focused; others will be more implementation or design focused. But
understanding the fabric discovery and initialization process is important for all
ACI engineers.
For operations engineers, there is a possibility that new switch onboarding may
necessitate troubleshooting of the switch discovery process. Implementation-
focused individuals, on the other hand, may be more interested in understanding
the planning necessary to deploy ACI fabrics.
This chapter first reviews the fabric discovery process. It then reviews the steps
necessary for initializing an ACI fabric, discovering and onboarding switches,
and completing basic post-initialization tasks, such as APIC and switch
upgrades.

“DO I KNOW THIS ALREADY?” QUIZ


The “Do I Know This Already?” quiz allows you to assess whether you should
read this entire chapter thoroughly or jump to the “Exam Preparation Tasks”
section. If you are in doubt about your answers to these questions or your own
assessment of your knowledge of the topics, read the entire chapter. Table 3-
1 lists the major headings in this chapter and their corresponding “Do I Know
This Already?” quiz questions. You can find the answers in Appendix A,
“Answers to the ‘Do I Know This Already?’ Questions.”
Table 3-1 “Do I Know This Already?” Section-to-Question Mapping

Foundation Topics Section | Questions

Understanding ACI Fabric Initialization | 1

Initializing an ACI Fabric | 5

Basic Post-Initialization Tasks | 7

Caution
The goal of self-assessment is to gauge your mastery of the topics in this chapter. If you
do not know the answer to a question or are only partially sure of the answer, you should
mark that question as wrong for purposes of the self-assessment. Giving yourself credit
for an answer you correctly guess skews your self-assessment results and might provide
you with a false sense of security.
1. A company has purchased APICs for an ACI deployment. Which of the
following switch platforms is the best candidate for connecting the APICs to the
fabric?
1. Nexus 9364C
2. Nexus 9336PQ
3. Nexus 9332C
4. Nexus 93180YC-FX
2. Changing which of the following parameters necessitates a fabric rebuild?
(Choose all that apply.)
1. Infrastructure VLAN
2. APIC OOB IP address
3. Fabric ID
4. Active or standby status of a controller
3. At the end of which stage in the switch discovery process are switches
considered to be fully activated?
1. Switch software upgrades
2. IFM establishment
3. LLDP neighbor discovery
4. TEP IP assignment to nodes
4. An ACI engineer is initializing a fabric, but the first APIC is unable to add a
seed switch to the Fabric Membership view. Which of the following could
potentially be the causes? (Choose all that apply.)
1. No spines have yet been discovered.
2. The active APIC in-band interface connects to an NX-OS switch.
3. The APIC has not received a DHCP Discover message from the seed leaf.
4. The APICs need to form a cluster first.
5. An administrator has made several changes pertinent to the Cisco IMC while
bootstrapping an APIC. Which of the following might be preventing fabric
discovery?
1. The IP address assigned to the Cisco IMC is incorrect.
2. The NIC mode has been updated to Shared LOM.
3. The Cisco IMC default gateway setting is incorrect.
4. The Cisco IMC firmware has been updated.
6. Which of the following is associated exclusively with spine switches?
1. VTEP
2. PTEP
3. DTEP
4. Proxy-TEP
7. Which of the following import types and modes enables a user to overwrite
all current configurations with settings from a backup file?
1. Atomic Merge
2. Best Effort Merge
3. Atomic Replace
4. Best Effort Replace
8. Which of the following are valid protocols for forwarding ACI backups to a
remote server? (Choose all that apply.)
1. TFTP
2. FTP
3. SFTP
4. SCP
9. An administrator wants to conduct an upgrade of an ACI fabric. How can he
best group the switches to ensure minimal outage, assuming that servers are
dual-homed?
1. Create two upgrade groups: one for spines and one for leafs.
2. Create two upgrade groups: one for odd switch node IDs and one for even
switch node IDs.
3. Create four upgrade groups and randomly assign node IDs to each.
4. Create four upgrade groups: one for odd leafs, one for even leafs, one for
odd spines, one for even spines.
10. True or false: ACI can take automated scheduled backups.
1. True
2. False

FOUNDATION TOPICS
UNDERSTANDING ACI FABRIC
INITIALIZATION
Before administrators can create subnets within ACI and configure switch ports
for server traffic, an ACI fabric needs to be initialized.
The process of fabric initialization involves attaching APICs to leaf switches,
attaching leaf switches to spines, configuring APICs to communicate with leaf
switches, and activating the switches one by one until the APICs are able to
configure all switches in the fabric. Let’s look first at the planning needed for
fabric initialization.

Planning Fabric Initialization


The planning necessary for fabric initialization can be divided into two
categories:
 Cabling and physical deployment planning: This category of
tasks includes racking and stacking of hardware, cabling, powering on
devices, and guaranteeing proper cooling. This book addresses only some
of the basic cabling requirements because facilities issues are not the
focus of the Implementing Cisco Application Centric Infrastructure
DCACI 300-620 exam.
 Planning of minimal configuration parameters: This includes
preparation of all the configurations needed to bootstrap the APICs,
enable all ACI switches, and join APICs to a cluster.
One way to approach planning an ACI fabric initialization is to create a fabric
initialization checklist or a basic table that includes all the information needed
to set up the fabric.
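Such a checklist can be captured in a simple data structure so that nothing is left blank before initialization begins. The following is a minimal sketch; every value shown is a hypothetical placeholder, not a recommendation:

```python
# Hypothetical fabric initialization checklist captured as a dictionary.
# Values are illustrative placeholders only.
fabric_checklist = {
    "fabric_name": "DC1-Fabric1",
    "fabric_id": 1,                      # valid range: 1-128
    "active_controllers": 3,             # valid range: 1-9
    "pod_id": 1,                         # default for single-pod fabrics
    "tep_pool": "10.233.44.0/22",        # Pod 1 TEP pool
    "infra_vlan": 3600,                  # must be unused elsewhere
    "gipo_range": "225.0.0.0/15",        # BD multicast (GiPo) default
    "oob_addresses": {"DC1-APIC1": "172.23.142.29/21"},
    "oob_gateway": "172.23.136.1",
}

# Flag any parameter that was left empty before initialization day.
missing = [k for k, v in fabric_checklist.items() if v in (None, "", {})]
print("Ready to initialize!" if not missing else f"Missing: {missing}")
```

Spreadsheets work just as well; the point is to have every Table 3-3 parameter decided before the first APIC boots.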

Understanding Cabling Requirements


Before initializing a fabric, you need to run cabling between leaf and spine
fabric ports. By default, fabric ports are the high-order ports on the right side of
leaf switches. They are generally high-bandwidth ports compared to the server
downlinks. Figure 3-1 shows a Nexus 93180YC-FX leaf switch. The six ports to
the right are all fabric ports by default. The phrase “by default” is intentional
here: On leaf switches, fabric ports can be converted to server downlinks and
vice versa, but the switch must first be initialized into a fabric.
Figure 3-1 Nexus 93180YC-FX Leaf with Six Fabric Ports
Unlike the Nexus 93180YC-FX, a number of leaf platforms have default fabric
ports that cannot be easily distinguished by their physical appearance.
Leaf fabric ports can generally be connected to any spine ports (except the spine
out-of-band [OOB] management port and any 10 Gbps ports), as long as the
transceivers and port speeds are compatible.
Not all leaf-to-spine connections need to be run for fabric discovery to be
possible, but there needs to be enough physical connectivity to allow all
switches and APICs to have at least a single path to one another.
For example, Figure 3-2 does not represent a full-mesh connectivity between
the leaf and spine layers, but it is a perfectly valid topology for the purpose of
enabling a full fabric initialization.
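The "at least a single path" requirement is just graph connectivity over the cabling plan, which can be verified with a simple breadth-first search. A sketch, with hypothetical device names:

```python
from collections import deque

def fully_discoverable(links):
    """Return True if every device has at least one path to every other.

    links: list of (device_a, device_b) cabling pairs, e.g. APIC-to-leaf
    and leaf-to-spine connections. A plain BFS over the cabling graph.
    """
    graph = {}
    for a, b in links:
        graph.setdefault(a, set()).add(b)
        graph.setdefault(b, set()).add(a)
    start = next(iter(graph))
    seen, queue = {start}, deque([start])
    while queue:
        for nbr in graph[queue.popleft()]:
            if nbr not in seen:
                seen.add(nbr)
                queue.append(nbr)
    return seen == set(graph)

# Hypothetical partial mesh: not every leaf connects to every spine,
# yet discovery can still reach every switch through some path.
cabling = [("APIC1", "Leaf101"), ("Leaf101", "Spine201"),
           ("Spine201", "Leaf102"), ("Leaf102", "Spine202")]
print(fully_discoverable(cabling))  # True: a single path apiece suffices
```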

Figure 3-2 Sample Topology Enabling Complete Fabric Discovery

Connecting APICs to the Fabric


In addition to leaf-to-spine fabric port connectivity, the APICs need to be able
to establish an in-band communication path through the fabric.
On the back of an APIC, you can see a number of different types of
ports. Figure 3-3 shows a rear-panel view of a third-generation APIC populated
with a VIC 1455 card.
Figure 3-3 Rear View of a Third-Generation APIC
Table 3-2 provides a legend highlighting the components shown in Figure 3-3.

Table 3-2 Legend for Components Numbered in Figure 3-3

1 USB 3.0 ports (2)

2 Dual 1/10 Gigabit Ethernet ports (LAN1 and LAN2)

3 VGA video port (DB-15 connector)

4 1 Gigabit Ethernet dedicated management port

5 Serial port (RJ-45 connector)

6 Rear unit identification button

7 Power supplies (two, redundant)

8 PCIe riser 1/slot 1 (x16 lane)

9 VIC 1455 with external 10/25 Gigabit Ethernet ports (4)

10 Threaded holes for dual-hole grounding lug

Out of the components depicted in Figure 3-3, the VIC 1455 ports are of most
importance for the fabric discovery process because they form the in-band
communication channel into the fabric. The VIC 1455 card has four 10/25
Gigabit Ethernet ports. VIC adapters in earlier generations of APICs had two 10
Gigabit Ethernet ports instead. At least one VIC port on each APIC needs to be
cabled to a leaf to enable full APIC cluster formation. For redundancy purposes,
it is best to diversify connectivity from each APIC across a pair of leaf switches
by connecting at least two ports.

In first- and second-generation APICs sold with variants of dual-port VIC 1225
cards, ports 1 and 2 would need to be cabled up to leaf switches to diversify
connectivity. In third-generation APICs, however, ports 1 and 2 together
represent logical port eth2-1, and ports 3 and 4 together represent eth2-2. Ports
eth2-1 and eth2-2 are then bundled together into an active/standby team at the
operating system level. For this reason, diversifying in-band APIC connectivity
across two leaf switches in third-generation APICs requires that one cable be
connected to either port 1 or port 2 and another cable be attached to either port 3
or port 4. Connecting both ports that represent a logical port (for example, ports
1 and 2) to leaf switches in third-generation APICs can result in unpredictable
failover issues.
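The port-pairing rule for third-generation APICs can be expressed as a quick check. This is a hypothetical helper that encodes the rule described above, not a Cisco-provided tool:

```python
def diverse_cabling_ok(cabled_ports):
    """Check third-generation APIC VIC 1455 cabling diversity (a sketch).

    Ports 1 and 2 form logical port eth2-1; ports 3 and 4 form eth2-2.
    Diversity across two leaf switches means one cable on port 1 or 2
    AND one cable on port 3 or 4, never both members of one logical port.
    """
    eth2_1 = {p for p in cabled_ports if p in (1, 2)}
    eth2_2 = {p for p in cabled_ports if p in (3, 4)}
    both_halves = bool(eth2_1) and bool(eth2_2)
    same_logical_pair = len(eth2_1) > 1 or len(eth2_2) > 1
    return both_halves and not same_logical_pair

print(diverse_cabling_ok({1, 3}))  # True: one cable per logical port
print(diverse_cabling_ok({1, 2}))  # False: both members of eth2-1
```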
Not all ACI leaf switches support 10/25 Gigabit Ethernet cabling. During the
deployment planning stage, it is important to ensure that the leaf nodes to which
the APICs connect actually support the available VIC port speeds and that
proper transceivers and cabling are available.

Initial Configuration of APICs


Out of the box, APICs come with ACI code installed. Normally, switch
configuration involves establishing console connectivity to the switch and
implementing a basic configuration that allows remote SSH access to the
switch. APICs, on the other hand, are servers and not network switches. As
such, it is easiest to configure APICs using a crash cart with a standard DB-15
VGA connector and a USB keyboard.

APIC OOB Configuration Requirements

In addition to cabling the in-band communication channel, APICs have two embedded LAN on motherboard (LOM) ports for out-of-band management of
the APIC. In third-generation APICs, these dual LAN ports support both 1 and
10 Gigabit Ethernet. (In Figure 3-3, these two LOM ports are shown with the
number 2.) As part of the initialization process, users enter an out-of-band IP
address for each APIC. The APIC then bonds these two LOM interfaces
together and assigns the out-of-band IP address to the bond. From the out-of-
band switch to which these ports connect, these connections appear as
individual links and should not be misinterpreted as port channels. Basically,
the APIC binds the OOB MAC and IP address to a single link and repins the
traffic over to the second link if the active interface fails.

OOB management interfaces should not be confused with the Cisco Integrated
Management Controller (Cisco IMC) port on the APICs. The APIC Cisco
IMC allows lights-out management of the physical server, firmware upgrades,
and monitoring of server hardware health. While the dual 1/10 Gigabit Ethernet
LOM ports enable out-of-band access to the APIC operating system, the Cisco
IMC provides out-of-band access to the server hardware itself. With Cisco IMC
access, an engineer can gain virtual KVM access to the server and reinstall the
APIC operating system remotely in the event that the APIC is no longer
accessible. But the Cisco IMC cannot be used to gain HTTPS access to the ACI
management interface. Because of the significance of Cisco IMC in APIC
recovery, assigning an IP address to the Cisco IMC is often viewed as a
critically important fabric initialization task.
APIC OOB IP addresses and Cisco IMC IP addresses are often selected from
the same subnet even though it is not required for them to be in the same subnet.
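Whether two management addresses actually fall in the same subnet is easy to confirm with Python's standard ipaddress module. A sketch using documentation-range placeholder addresses:

```python
import ipaddress

def same_subnet(addr_a, addr_b, prefix_len):
    """Report whether two host addresses fall in the same subnet."""
    net_a = ipaddress.ip_interface(f"{addr_a}/{prefix_len}").network
    net_b = ipaddress.ip_interface(f"{addr_b}/{prefix_len}").network
    return net_a == net_b

# Hypothetical management addressing: APIC OOB and Cisco IMC drawn
# from the same /24 is common practice, but not a requirement.
print(same_subnet("192.0.2.10", "192.0.2.110", 24))   # True
print(same_subnet("192.0.2.10", "198.51.100.7", 24))  # False
```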

Out-of-Band Versus In-Band Management


By default, administrators configure ACI fabrics through the dual OOB
interfaces on the APICs. The APICs, in turn, configure switches and
communicate with one another using the in-band channel over the VIC adapters.
If the default behavior of managing the fabric through the OOB interfaces is not
desirable, administrators can implement in-band management.
There are many factors to consider when determining whether to use in-band
management, but the only configuration option available during APIC
initialization is to implement OOB management. Administrators can then log in
to the ACI GUI and manually implement in-band management.
Out-of-band management of ACI fabrics is the most popular deployment
option.
Chapter 13, “Implementing Management,” discusses in-band management, its
implications, and implementation in detail.

Configuration Information for Fabric Initialization


Table 3-3 describes the basic configuration parameters that need to be planned
before an ACI fabric can be initialized and that you need to understand for the
DCACI 300-620 exam.

Table 3-3 Basic Configuration Parameters for Fabric Initialization

Fabric Name: A user-friendly name for the fabric. If no name is entered, ACI uses the default name ACI Fabric1.

Fabric ID: A numeric identifier between 1 and 128 for the ACI fabric. If no ID is entered, ACI uses 1 as the fabric ID.

Number of active controllers: A self-explanatory parameter whose valid values are 1 through 9. The default is 3 for three APICs. If the intent is to add additional APICs to the fabric at a later date, select 3 and modify this parameter when it is time to add new APICs.

Pod ID: A parameter that determines the unique pod ID to which the APIC being initialized is attached. When ACI Multi-Pod is not being deployed, use the default pod ID of 1.

Standby Controller: An APIC added to a fabric solely to aid in fabric recovery and in reestablishing APIC quorum during a prolonged outage. If the APIC being initialized is a standby APIC, select Yes for this parameter.

Controller ID: The unique ID number for the APIC being configured. Valid values are 1 through 32. The first three active APICs should always be assigned IDs between 1 and 3; node ID values for standby APICs range from 16 to 32.

Controller Name: The unique APIC hostname.

Pod 1 TEP Pool: The TEP pool assigned to the seed pod. A TEP pool is a subnet used for intra-fabric communication. This subnet can potentially be advertised outside the fabric to an IPN or ISN or when a fabric is extended to virtual environments using AVS or AVE. TEP pool subnets should ideally be unique across an enterprise. Subnet sizes do impact pod scalability, and use of /16 or /17 ranges is highly recommended. Each pod needs a separate TEP pool. However, during APIC initialization, the TEP pool assigned to the seed pod (Pod 1) is what should be entered in the initialization script, because all APICs in Multi-Pod environments pull their TEP addresses from the Pod 1 TEP pool.

Infrastructure (infra) VLAN: The VLAN ID used for control communication between ACI fabric components (leaf switches, spine switches, and APICs). The infrastructure VLAN is also used when extending an ACI fabric to AVS or AVE virtual switches. The infrastructure VLAN should be unique and unused elsewhere in the environment. Acceptable IDs are 2 through 4094. Because the VLAN may need to be extended outside ACI, ensure that the chosen infrastructure VLAN does not fall into the reserved VLAN range of other switches in the environment.

BD Multicast Addresses (GiPo): The IP address range used for multicast within a fabric. In ACI Multi-Site environments, the same range can be used across sites. If the administrator does not change the default range, 225.0.0.0/15 will be selected for this parameter. Valid ranges are between 225.0.0.0/15 and 231.254.0.0/15. A prefix length of /15 is required.

APIC OOB Addresses and Default Gateway: Addresses assigned to the OOB LOM ports for access to the APIC GUI and CLI. These ports are separate from the Cisco IMC ports.

Password Strength: A parameter that determines whether to enforce the use of passwords of adequate strength for all users. The default behavior is to enforce strong passwords.
Some of the configuration parameters listed in Table 3-3 cannot be changed and require that a fabric be wiped clean and re-initialized in case of a misconfiguration. The parameters that most critically demand attention are Fabric Name, Fabric ID, Pod 1 TEP Pool, and Infrastructure VLAN.
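Because these parameters are permanent until a fabric wipe, it is worth sanity-checking them before they are typed into the setup script. The sketch below encodes the documented ranges from Table 3-3; the TEP pool prefix bounds assume the commonly documented /16 through /23 supported range:

```python
import ipaddress

def validate_bootstrap(params):
    """Sanity-check permanent bootstrap parameters (a sketch).

    Checks documented numeric ranges only, not environment-specific
    constraints such as infra VLAN overlap with other platforms.
    Returns a list of problems; an empty list means the inputs look sane.
    """
    problems = []
    if not 1 <= params["fabric_id"] <= 128:
        problems.append("fabric ID must be 1-128")
    if not 2 <= params["infra_vlan"] <= 4094:
        problems.append("infra VLAN must be 2-4094")
    tep = ipaddress.ip_network(params["tep_pool"])
    if not 16 <= tep.prefixlen <= 23:   # assumed supported range
        problems.append("TEP pool prefix outside the supported range")
    gipo = ipaddress.ip_network(params["gipo"])
    if gipo.prefixlen != 15:
        problems.append("GiPo range requires a /15 prefix length")
    return problems

print(validate_bootstrap({"fabric_id": 1, "infra_vlan": 3600,
                          "tep_pool": "10.233.44.0/22",
                          "gipo": "225.0.0.0/15"}))  # []
```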

Switch Discovery Process


Following a minimal configuration bootstrap of the first APIC, switch discovery
can begin. So how do APICs use the parameters in Table 3-3 to discover
switches and enable them to join the fabric? Figure 3-4 provides a high-level
illustration of the process that takes place.
Figure 3-4 Switch Discovery Process

The process depicted in Figure 3-4 includes the following steps:


Step 1.LLDP neighbor discovery: After a minimal configuration bootstrap,
the first APIC begins sending out LLDP packets on its in-band interfaces.
Unregistered leaf switches send LLDP packets on all operational ports. The
APIC should eventually pick up LLDP packets from the neighboring leaf if the
switch is fully operational and has ACI code installed. From the LLDP packets,
the APIC can determine the serial number and hardware platform of the
attached device.
Step 2.TEP IP assignment to nodes: In addition to LLDP packets,
unregistered ACI switches send DHCP Discover packets on operational
interfaces. Once an APIC detects a switch via LLDP and is able to process
DHCP Discover packets from the leaf, it adds the device to the Fabric
Membership tab. An administrator then needs to register the switch to authorize
it to join the fabric. The registration process maps a node ID to the switch and
configures its hostname. The switch registration begins with the APIC
responding to the switch DHCP requests with a DHCP Offer packet. The leaf
confirms that it does want the offered IP address using a DHCP Request
message, following which the APIC confirms the IP assignment with a DHCP
ACK packet. APICs pull the IP addresses assigned during this process from the
TEP pool range configured during APIC initialization. Each leaf switch is
assigned a TEP address. These TEP addresses reside in a VRF instance called
overlay-1 in a tenant called infra.
Step 3.Switch software upgrades, if necessary: APICs are able to
communicate to switches that they need to undergo upgrades to a particular
code level before they can be moved into production status. If a switch upgrade
is required, the switch downloads the necessary firmware from the APICs,
performs an upgrade, and reboots. The Default Firmware Version setting
determines whether a switch upgrade is necessary. This setting is detailed later
in this chapter.
Step 4.Policy element intra-fabric messaging (IFM) setup: After the switch
boots up with the intended code revision, the APIC authenticates the switch by
using the switch certificate signed at the factory and opens communication with
the switch TEP address over the infrastructure VLAN using intra-fabric
messaging (IFM). All IFM channel communication over the infrastructure
VLAN is encrypted using TLS Version 1.2, and every message that comes to
the switch over the IFM channel must be decrypted before it is processed by the
switch. Once APICs establish IFM communication with a switch, the switch is
fully activated. Any policy push from the APICs to switches rides this
encrypted IFM communication channel.
Depending on the switch being discovered, some minor tasks may be added to
the overall discovery process. For example, a Remote Leaf discovery would
additionally require DHCP relay functionality to be enabled for DHCP packets
from the Remote Leaf to reach the APICs. (The task of enabling DHCP relay
does not conflict with the four primary steps outlined for switch discovery.)
Another example of minor tasks added to the process is establishment of IS-IS
adjacencies between leaf and spine switches using the switch loopback 0
interfaces.
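The four primary steps above can be sketched as an ordered progression, with the upgrade step conditional on the node's firmware. The step names below paraphrase the stages described in this section:

```python
# A sketch of the switch discovery sequence as an ordered progression.
DISCOVERY_STEPS = [
    "lldp_neighbor_discovery",  # APIC learns serial/platform via LLDP
    "tep_ip_assignment",        # DHCP Discover/Offer/Request/ACK from TEP pool
    "software_upgrade",         # only if the node is off the target version
    "ifm_establishment",        # TLS 1.2 encrypted policy channel; node active
]

def discover(node, needs_upgrade=False):
    """Yield the steps a node passes through; upgrades are conditional."""
    for step in DISCOVERY_STEPS:
        if step == "software_upgrade" and not needs_upgrade:
            continue  # skip: the node already runs the target firmware
        yield step

print(list(discover("Leaf101")))
print(list(discover("Leaf102", needs_upgrade=True))[-1])  # ifm_establishment
```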

Fabric Discovery Stages


After the bootstrapping of the first APIC, fabric initialization happens in the
following three phases:

1. Seed leaf initialization: Even when an APIC VIC adapter attaches to two or more operational leaf switches, the APIC can detect only one of the leaf
switches. This is because APIC VIC adapters operate in active/standby mode.
Activation of the first leaf switch by an administrator allows the leaf to function
as a seed switch for further discovery of the fabric.
2. Spine initialization: After the seed leaf initialization, any spines with
fabric ports attached to the seed leaf are detected and added to the Fabric
Membership view to allow spine activation.
3. Initialization of leaf switches and additional APICs: As spines are
brought into the fabric, ACI can detect other leaf switches connected to them.
Administrators can then activate the leaf switches. Once the leaf switches
connected to additional APICs join the fabric, the APIC cluster forms, and
APIC synchronization begins. Controllers join the cluster based on node ID. In
other words, the third APIC (whose node ID is 3) joins the cluster only after the
first and second APICs have joined. If any critical bootstrap configuration
parameters have been entered incorrectly on the additional controllers, the
APIC fails to join the cluster and needs to be wiped clean and re-initialized.
Note that the phases outlined here describe cluster formation as part of the final
leaf initialization phase. However, if active in-band interfaces on all APICs
connect to the seed leaf switch, the APIC cluster can form during the seed leaf
initialization phase.

Switch Discovery States


During the discovery process, switches transition between various states. Table
3-4 describes the different discovery states.

Table 3-4 Fabric Node Discovery States

Unknown: The node has been detected, but a node ID has not yet been assigned by an administrator in the Fabric Membership view.

Undiscovered: An administrator has prestaged a switch activation by manually mapping a serial number to a node ID, but a switch with the specified serial number has not yet been detected via LLDP and DHCP.

Discovering: The node has been detected, and the APICs are in the process of assigning the specified node ID as well as a TEP IP address to the switch.

Unsupported: The node is a Cisco switch, but it is not supported, or the firmware is not compatible with the ACI fabric.

Disabled/Decommissioned: The node has been discovered and activated, but a user disabled or decommissioned it. The node can be reenabled.

Maintenance: An ACI administrator has put the switch into maintenance mode (graceful insertion and removal).

Inactive: The node has been discovered and activated, but it is not currently reachable. For example, it may be powered off, or its cables may be disconnected.

Active: The node is an active member of the fabric.

INITIALIZING AN ACI FABRIC


Once all cabling has been completed and the APICs and ACI switches have
been turned on, it is time to initialize the fabric. The tasks in this section lead to
the configuration of the APIC Cisco IMC addresses, the initialization of the
APICs, and the activation of ACI switches.

Changing the APIC BIOS Password


One of the things ACI implementation engineers usually do during APIC setup
is to change the default BIOS password.
To change the BIOS password, you press the F2 key during the boot process to
enter the BIOS setup. Then you can enter the default BIOS
password password in the Enter Password dialog box and navigate to the
Security tab, choose Set Administrator Password, and enter the current
password in the Enter Current Password dialog box. When the Create New
Password dialog box appears, enter the new password and then enter the new
password again in the Confirm New Password dialog box. Finally, navigate to
the Save & Exit tab and choose Yes in the Save & Exit Setup dialog box. The
next time BIOS setup is accessed, the new BIOS password will be needed.

Configuring the APIC Cisco IMC


After changing the BIOS password, it is a good idea to configure a static IP
address for the APIC Cisco IMC addresses.
To configure a static IP address for remote Cisco IMC access, press the F8 key
during the boot process to enter Cisco IMC. Enter the desired IP addressing
details in the section IP (Basic), as shown in Figure 3-5. Then press the F10 key
to save the Cisco IMC configuration and wait up to 20 seconds for the
configuration change to take effect before rebooting the server.
Figure 3-5 Enter IP Addressing Details for Cisco IMC

As a best practice, do not modify the NIC Mode or NIC Redundancy settings in
Cisco IMC. If there are any discovery issues, ensure that Cisco IMC has been
configured with the default NIC Mode setting Dedicated and not Shared. The
NIC Redundancy setting should also be left at its default value None.

Initializing the First APIC


When the APIC boots up, basic configuration parameters need to be entered in
line with the pre-installation data captured in earlier steps. Example 3-1 shows
how the first APIC in a fabric with ID 1 and the name DC1-Fabric1 might be
configured. Note that you can leave certain parameters at their default values by
pressing the Enter key without modifying associated values. The BD multicast
addresses range, for instance, is left at its default value of 225.0.0.0/15 in the
following example.
Example 3-1 Initialization of First APIC



Cluster configuration ..

Enter the fabric name [ACI Fabric1]: DC1-Fabric1


Enter the fabric ID (1-128) [1]: 1

Enter the number of active controllers in the fabric (1-9) [3]: 3

Enter the POD ID (1-9) [1]: 1

Is this a standby controller? [NO]: NO

Enter the controller ID (1-3) [1]: 1

Enter the controller name [apic1]: DC1-APIC1

Enter address pool for TEP addresses [10.0.0.0/16]: 10.233.44.0/22

Note: The infra VLAN ID should not be used elsewhere in your environment

and should not overlap with any other reserved VLANs on other platforms.

Enter the VLAN ID for infra network (2-4094): 3600

Enter address pool for BD multicast addresses (GIPO) [225.0.0.0/15]:

Out-of-band management configuration ..

Enable IPv6 for Out of Band Mgmt Interface? [N]:

Enter the IPv4 address [192.168.10.1/24]: 172.23.142.29/21

Enter the IPv4 address of the default gateway [None]: 172.23.136.1

Enter the interface speed/duplex mode [auto]:

admin user configuration ..

Enable strong passwords? [Y]:

Enter the password for admin:

Reenter the password for admin:

Cluster configuration ..

Fabric name: DC1-Fabric1


Fabric ID: 1

Number of controllers: 3

Controller name: DC1-APIC1

POD ID: 1

Controller ID: 1

TEP address pool: 10.233.44.0/22

Infra VLAN ID: 3600

Multicast address pool: 225.0.0.0/15

Out-of-band management configuration ..

Management IP address: 172.23.142.29/21

Default gateway: 172.23.136.1

Interface speed/duplex mode: auto

admin user configuration ..

Strong Passwords: Y

User name: admin

Password: ********

The above configuration will be applied ..

Warning: TEP address pool, Infra VLAN ID and Multicast address pool

cannot be changed later, these are permanent until the

fabric is wiped.

Would you like to edit the configuration? (y/n) [n]:


After you complete the minimal configuration bootstrap for the first controller,
the APIC starts various services, and the APIC web GUI eventually becomes
accessible via the APIC out-of-band management IP address. Figure 3-6 shows
the ACI login page. By default, APICs allow web access via HTTPS and not
HTTP.

Figure 3-6 The Default ACI Login Screen


Enter admin as the username along with the password entered during setup to
log in to the APIC.

Discovering and Activating Switches


The switch activation process involves selection of node IDs for all switches.
The first three active APICs need to be assigned node IDs 1, 2, and 3. ACI
design engineers have more flexibility in the selection of switch node IDs. As of
ACI Release 4.2, valid switch node IDs are between 101 and 4000. Node IDs
are cornerstones of ACI stateless networking. Once a switch is commissioned,
node ID changes require that the node be decommissioned and cleanly rebooted.
Figure 3-7 shows a hypothetical node ID selection scheme in which spine
switches have node ID numbers between 201 and 299 and leaf switches have
node numbers between 101 and 199. It is a Cisco best practice to assign
consecutive node IDs to leaf switches that are paired into a vPC domain.
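A scheme like this can be expressed as a pair of simple checks. This sketch encodes the node ID ranges stated above and the vPC pairing convention (the function names are illustrative, not part of any ACI tooling):

```python
def valid_node_id(node_id: int, role: str) -> bool:
    # Controllers take node IDs 1-3; switch node IDs run from 101 to 4000
    # as of ACI Release 4.2 (per the text above).
    if role == "controller":
        return 1 <= node_id <= 3
    return 101 <= node_id <= 4000

def vpc_pair_ok(a: int, b: int) -> bool:
    # Best practice: vPC-paired leaf switches get consecutive node IDs.
    return abs(a - b) == 1

print(valid_node_id(4001, "leaf"), vpc_pair_ok(101, 102))  # False True
```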
Figure 3-7 Node ID Assignment in a Topology Under Discovery
Following the initialization of DC1-APIC1 in Figure 3-7, the APIC should
detect that a leaf switch is connected to its active VIC interface and add it to the
Fabric Membership view. Navigate to Fabric, select Inventory, and then click
on Fabric Membership. In the Fabric Membership view, select Nodes Pending
Registration, right-click the detected switch entry, and select Register, as
demonstrated in Figure 3-8. This first leaf switch added to the fabric will serve
as the seed leaf for the discovery of the remaining switches in the fabric.
Figure 3-8 Selecting the Entry for Unknown Switch and Launch Registration
Wizard
In the node registration wizard, enter values in the fields Pod ID, Node ID, and
Node Name (hostname) and then click Register (see Figure 3-9). If the switch
has been auto-detected by ACI, the role should be auto-populated. The Rack
Name parameter is optional. The RL TEP Pool field should be populated only
during configuration of a Remote Leaf switch.
Figure 3-9 The Node Registration Wizard
Aside from the leaf and spine roles, the node registration wizard allows
assignment of virtual leaf and virtual spine roles for ACI Virtual Pod (vPod)
switches, the controller role for APICs, the remote leaf role, and the tier-2
leaf role for Tier 2 leaf switches.
Minutes after registering the seed switch, it should move into an active state.
The state of commissioned fabric nodes can be verified under the Status column
in the Registered Nodes subtab of the Fabric Membership menu.
Figure 3-10 shows that all node IDs depicted in Figure 3-7 earlier in this chapter
have been initialized one by one and have moved to an active state, completing
the fabric initialization process.

Figure 3-10 Registered Nodes Submenu of the Fabric Membership View
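The same node state shown in the GUI can be read through the APIC REST API by querying the fabricNode class. The sketch below parses an illustrative (not captured from a live APIC) response of the kind returned by GET /api/node/class/fabricNode.json:

```python
import json

# Trimmed, hypothetical fabricNode query response for illustration;
# attribute names follow the APIC object model.
sample = json.loads("""
{"imdata": [
 {"fabricNode": {"attributes":
   {"id": "101", "name": "LEAF101", "role": "leaf", "fabricSt": "active"}}},
 {"fabricNode": {"attributes":
   {"id": "201", "name": "SPINE201", "role": "spine", "fabricSt": "active"}}},
 {"fabricNode": {"attributes":
   {"id": "102", "name": "LEAF102", "role": "leaf", "fabricSt": "unknown"}}}
]}
""")

def active_nodes(resp: dict) -> list:
    """Return (id, name, role) for each node whose fabricSt is 'active'."""
    return [
        (int(a["id"]), a["name"], a["role"])
        for item in resp["imdata"]
        for a in [item["fabricNode"]["attributes"]]
        if a["fabricSt"] == "active"
    ]

print(active_nodes(sample))  # [(101, 'LEAF101', 'leaf'), (201, 'SPINE201', 'spine')]
```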

Understanding Graceful Insertion and Removal (GIR)


Figure 3-11 shows that one of the menu options that appears when you right-
click a fabric node is Maintenance (GIR). Moving a switch into maintenance
mode simulates an uplink failure from the perspective of downstream servers.
This feature enables a more graceful way of moving a switch out of the data
plane forwarding topology when minor maintenance or switch upgrades are
necessary.
Figure 3-11 Graceful Insertion and Removal Feature

Initializing Subsequent APICs


The minimal configuration bootstrap for subsequent APICs can be performed
simultaneously with the initialization of the first APIC. However, the APICs do
not form a complete cluster until the end-to-end path between the APICs has
been established over the infrastructure VLAN.
Remember that even when multiple APICs have connections to the seed leaf
switch, it is still possible that they may not be able to form a cluster through the
one seed leaf due to the active/standby status of the VIC adapter interfaces at
the time of initialization.
But beyond the process and order of node activation, there is also the issue of
bootstrapping requirements to form a cluster. If the fabric ID, fabric name, or
Pod 1 TEP pool configured on the subsequent APICs are not the same as what
has been configured for the initial controller, the APIC cluster will never form.
In such cases, when the underlying problem is a misconfiguration on the second
or third APIC, that APIC needs to be wiped clean and re-initialized. If the first
APIC has been misconfigured, the entire fabric needs to be wiped clean and re-
initialized.
Some APIC configuration parameters that should not be the same as those
entered for the initial APIC include the out-of-band IP address and the APIC
node ID.
After establishing end-to-end connectivity, you can verify the health of an APIC
cluster by navigating to the System menu, selecting Controllers, opening the
Controllers folder, double-clicking an APIC, and then selecting Cluster as Seen
by Node. If the controllers are healthy and fully synchronized, all APICs should
display Fully Fit in the Health State column, as shown in Figure 3-12.

Figure 3-12 Verifying Health and Synchronization Status of APICs
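The Fully Fit check can also be scripted against the infraWiNode class, which reports cluster health per APIC. This is a sketch over a hypothetical response; real data would come from GET /api/node/class/infraWiNode.json:

```python
def cluster_fully_fit(resp: dict) -> bool:
    """True only when every APIC view reports health 'fully-fit'."""
    return all(
        item["infraWiNode"]["attributes"]["health"] == "fully-fit"
        for item in resp["imdata"]
    )

# Illustrative two-entry response (values assumed, not captured live)
sample = {"imdata": [
    {"infraWiNode": {"attributes": {"id": "1", "health": "fully-fit"}}},
    {"infraWiNode": {"attributes": {"id": "2", "health": "fully-fit"}}},
]}
print(cluster_fully_fit(sample))  # True
```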

Understanding Connectivity Following Switch Initialization

What actually happens during the switch node activation process from a routing
perspective? One of the first things that happens is that IS-IS adjacencies are
established between the leaf and spine switches, as shown in Example 3-2.
Here, interfaces Ethernet 1/49 and 1/50 are the leaf fabric ports.
Example 3-2 Verifying IS-IS Adjacencies Within the Fabric



LEAF101# show isis adjacency detail vrf overlay-1

IS-IS process: isis_infra VRF:overlay-1

IS-IS adjacency database:

System ID SNPA Level State Hold Time Interface


212E.E90A.0000 N/A 1 UP 00:01:01 Ethernet1/49.34

Up/Down transitions: 1, Last transition: 21d17h ago

Circuit Type: L1

IPv4 Address: 10.233.46.33

232E.E90A.0000 N/A 1 UP 00:00:55 Ethernet1/50.35

Up/Down transitions: 1, Last transition: 21d17h ago

Circuit Type: L1

IPv4 Address: 10.233.46.35

A look at the addresses with which LEAF101 has established adjacencies
indicates that IS-IS adjacencies are sourced from and destined to loopback 0
interfaces on leaf and spine switches. Furthermore, loopback 0 interfaces get
associated with all operational fabric ports, as indicated in Example 3-3. The IP
address ACI assigns to the loopback 0 interface of a given switch is a specific
type of TEP address referred to as a physical tunnel endpoint (PTEP) address.
Example 3-3 Verifying Switch TEP Addresses



LEAF101# show ip int brief | grep -E "lo0|unnumbered"

eth1/49.34 unnumbered protocol-up/link-up/admin-up

(lo0)

eth1/50.35 unnumbered protocol-up/link-up/admin-up

(lo0)

lo0 10.233.46.32/32 protocol-up/link-up/admin-up

SPINE201# show ip int brief | grep -E "lo0|unnumbered"

eth1/1.37 unnumbered protocol-up/link-up/admin-up

(lo0)

eth1/2.38 unnumbered protocol-up/link-up/admin-up

(lo0)
lo0 10.233.46.33/32 protocol-up/link-up/admin-up

SPINE202# show ip int brief | grep -E "lo0|unnumbered"

eth1/1.35 unnumbered protocol-up/link-up/admin-up

(lo0)

eth1/2.36 unnumbered protocol-up/link-up/admin-up

(lo0)

lo0 10.233.46.35/32 protocol-up/link-up/admin-up

In addition to loopback 0 interfaces, ACI creates loopback 1023 interfaces on
all leaf switches. A loopback 1023 interface is used for assignment of a single
fabricwide pervasive IP address called a fabric tunnel endpoint
(FTEP) address. The FTEP address represents the entire fabric and is used to
encapsulate traffic in VXLAN to an AVS or AVE virtual switch, if present.
ACI also assigns an SVI and IP address to leaf switches in the infrastructure
VLAN. In Example 3-4, internal VLAN 8 on LEAF101 actually maps to VLAN
3600, which is the infrastructure VLAN configured during fabric initialization.
Note that the infrastructure VLAN SVI should contain the same IP address for
all leaf switches.
Example 3-4 Additional Auto-Established Connectivity in the Overlay-1 VRF Instance



LEAF101# show ip int brief vrf overlay-1

(...output truncated for brevity...)

IP Interface Status for VRF "overlay-1"(4)

Interface Address Interface Status

eth1/49 unassigned protocol-up/link-up/admin-up

eth1/49.34 unnumbered protocol-up/link-up/admin-up

(lo0)

eth1/50 unassigned protocol-up/link-up/admin-up

eth1/50.35 unnumbered protocol-up/link-up/admin-up

(lo0)
vlan8 10.233.44.30/27 protocol-up/link-up/admin-up

lo0 10.233.46.32/32 protocol-up/link-up/admin-up

lo1023 10.233.44.32/32 protocol-up/link-up/admin-up

LEAF101# show vlan extended

VLAN Name Encap Ports

-------- ------------------------ -------------------- -----------------------------

8 infra:default vxlan-16777209, Eth1/1, Eth1/2, Eth1/47

vlan-3600

Once an ACI fabric has been fully initialized, each switch should have dynamic
tunnel endpoint (DTEP) entries that include PTEP addresses for all other
devices in the fabric as well as entries pointing to spine proxy (proxy TEP)
addresses. Example 3-5 shows DTEP entries from the perspective of LEAF101
with the proxy TEP addresses highlighted.
Example 3-5 Dynamic Tunnel Endpoint (DTEP) Database



LEAF101# show isis dteps vrf overlay-1

IS-IS Dynamic Tunnel End Point (DTEP) database:

DTEP-Address Role Encapsulation Type

10.233.46.33 SPINE N/A PHYSICAL

10.233.47.65 SPINE N/A PHYSICAL,PROXY-ACAST-MAC

10.233.47.66 SPINE N/A PHYSICAL,PROXY-ACAST-V4

10.233.47.64 SPINE N/A PHYSICAL,PROXY-ACAST-V6

10.233.46.34 LEAF N/A PHYSICAL

10.233.46.35 SPINE N/A PHYSICAL


If a leaf switch knows the destination leaf behind which an endpoint resides, it
is able to tunnel the traffic directly to the destination leaf without using
resources on the intermediary spine switches. If a leaf switch does not know
where the destination endpoint resides, it can forward the traffic to the spine
proxy addresses, and the recipient spine can then perform a lookup in its local
Council of Oracle Protocol (COOP) database and forward the traffic to the
intended recipient leaf. This spine proxy forwarding behavior is more efficient
than forwarding via broadcasts and learning destination switches through ARP.
Reliance on spine proxy forwarding instead of flooding of broadcast, unknown
unicast, and multicast traffic is called hardware proxy forwarding. The benefit
of using hardware proxy forwarding is that ACI is able to potentially eliminate
flooding within the fabric, allowing the fabric to better scale while also limiting
the amount of traffic servers need to process.
Because ACI leaf switches are able to use IS-IS to dynamically learn all PTEP
and spine proxy addresses within the fabric, they are able to create tunnel
interfaces to various destinations in the fabric. A tunnel in ACI can be simply
interpreted as a reference to the next-hop addresses to reach a particular
destination. Example 3-6 lists the tunnels on LEAF101. Tunnels 1, 3, and 4 are
destined to leaf and spine PTEP addresses. Tunnels 5 through 7 reference proxy
TEP addresses. Finally, tunnels 8, 9, and 10 refer to the TEP addresses assigned
to APIC 1, APIC 2, and APIC 3, respectively.
Example 3-6 Tunnel Interfaces Sourced from lo0 with Different Destinations



LEAF101# show interface tunnel 1-20 | grep -E 'destination|up'

Tunnel1 is up

Tunnel destination 10.233.46.33

Tunnel3 is up

Tunnel destination 10.233.46.34

Tunnel4 is up

Tunnel destination 10.233.46.35

Tunnel5 is up

Tunnel destination 10.233.47.65

Tunnel6 is up

Tunnel destination 10.233.47.66


Tunnel7 is up

Tunnel destination 10.233.47.64

Tunnel8 is up

Tunnel destination 10.233.44.1

Tunnel9 is up

Tunnel destination 10.233.44.2

Tunnel10 is up

Tunnel destination 10.233.44.3

Note that aside from IS-IS, ACI enables COOP functionality on all available
spine switches as part of the fabric initialization process. This ensures that leaf
switches can communicate endpoint mapping information (location and
identity) to spine switches. However, fabric initialization does not result in the
automatic establishment of control plane adjacencies for protocols such as MP-
BGP. As of the time of this writing, a BGP autonomous system number needs to
be selected, and at least one spine has to be designated as a route reflector
before MP-BGP can be effectively used within an ACI fabric.

BASIC POST-INITIALIZATION TASKS


After the initialization of APICs and switches, there are a number of tasks that
are generally seen as basic prerequisites for putting a fabric into production.
This section gives a rundown of such tasks.

Assigning Static Out-of-Band Addresses to Switches and APICs

Assigning out-of-band addresses to switches ensures that administrators can
access switches via SSH. Out-of-band addresses can be assigned statically by an
administrator or dynamically out of a pool of addresses.

Figure 3-13 shows how to assign static out-of-band addresses to fabric nodes
through the Create Static Node Management Addresses page. To create static
out-of-band addresses for a node, navigate to Tenants and select the tenant
named mgmt. Within the tenant, double-click the Node Management Addresses
folder. Then right-click the Static Node Management Addresses folder and
select Create Static Node Management Addresses. Select default from the Out-
of-Band Management EPG drop-down box. Chapter 5, “Tenants Building
Blocks,” describes EPGs thoroughly, but for now you just need to know that the
default out-of-band management EPG is an object that represents one or more
out-of-band subnets or specific out-of-band addresses used for ACI switches
and APICs. Out-of-band management EPGs other than the default object can be
created if desired to enable application of granular security policies to different
nodes. After entering the node ID in both Node Range fields, the out-of-band
IPv4 address, and the out-of-band IPv4 gateway details for a given switch or
APIC, click Submit. The static node address mapping should then appear under
the Static Node Management Addresses folder.
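Behind the GUI workflow above sits a small object tree in the mgmt tenant. The sketch below builds the kind of JSON body that creates a static OOB address (class mgmtRsOoBStNode under the default OOB EPG); treat the exact class and attribute names as a best-effort reading of the APIC object model rather than a definitive API reference:

```python
def oob_static_addr_payload(pod: int, node: int, addr: str, gw: str) -> dict:
    """Sketch of a body POSTed to the mgmt tenant to create a static
    out-of-band address mapping for one switch or APIC."""
    return {
        "mgmtRsOoBStNode": {
            "attributes": {
                "tDn": f"topology/pod-{pod}/node-{node}",  # node being addressed
                "addr": addr,                              # OOB IPv4 address/mask
                "gw": gw,                                  # OOB default gateway
            }
        }
    }

# Example using the addressing scheme from earlier in the chapter
print(oob_static_addr_payload(1, 101, "172.23.142.101/21", "172.23.136.1"))
```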

Figure 3-13 Creating Static Out-of-Band Addresses for Switches and APICs


Even though APIC addresses are manually assigned through the controller
initialization process, they do not by default appear under the Static Node
Management Addresses folder. This is a problem if monitoring solutions such
as SNMP are used to query ACI. Assigning static out-of-band addresses to
APICs following fabric initialization helps ensure that certain monitoring
functions work as expected. Figure 3-14 illustrates how the original node IDs
used during APIC initialization should be used to add static OOB IP addressing
entries to APICs.
Figure 3-14 Static Node Management Addresses View After OOB IP
Configuration
Chapter 13 covers dynamic out-of-band and in-band management in more
detail.

Applying a Default Contract to Out-of-Band Subnet


From a high level, contracts enable the enforcement of security and other
policies to the endpoints to which the contract associates. As a fabric
initialization task, administrators can assign an out-of-the-box contract called
default from a tenant called common to the OOB EPG to allow all
communication to and from the OOB subnet.
While assignment of contracts permitting all communication is not an ideal
long-term approach, it does enable the gradual enforcement of security policies
as requirements are better understood. Moreover, the application of a contract is
necessary when enabling certain management protocols, such as Telnet. Also,
even though it is not required to implement an OOB contract for certain features
like syslog forwarding to work, it is best practice to do so.
To apply the default OOB contract to the OOB management EPG, navigate to
the mgmt tenant, open the Node Management EPG folder, and select Out-of-
Band EPG - default. Then, in the Provided Out-of-Band Contracts section,
select the contract common/default and click Update and click Submit
(see Figure 3-15).

After application of a contract on an OOB EPG, a mechanism is needed to
define the subnets outside the fabric that will have open access to the ACI
out-of-band IP addresses assigned to the OOB management EPG. The mechanism
used for management connectivity is an external management network instance
profile. Navigate to the mgmt tenant, right-click the External Management
Network Instance Profile folder, and select Create External Management
Network Instance Profile. Provide a name for the object and select the default
contract from the common tenant in the Consumed Out-of-Band Contracts
section. Finally, enter the subnets that should be allowed to communicate with
the ACI OOB EPG in the Subnets section, select Update, and then click Submit.
To enable all subnets to communicate with ACI over the OOB interfaces, enter
the subnet 0.0.0.0/0. Alternatively, you can enter all private IP address ranges or
specific subnets assigned to administrators. Figure 3-16 shows the creation of an
external management network instance profile.

Figure 3-15 Assigning a Contract to an OOB Management EPG


Figure 3-16 Creating an External Management Network Instance Profile
To recap, it is important to enforce contracts for access to the OOB management
interface of an ACI fabric because certain configurations rely on contract
enforcement. For open communication to the OOB subnets through use of
contracts, take the following three steps:

Step 1. Assign static node management addresses: Assign out-of-band
addresses to all switches and APICs in the fabric and ensure that all nodes are
shown in the Static Node Management Addresses view.
Step 2. Assign contract to the desired out-of-band EPG: By default, the
object called Out-of-Band EPGs - default represents OOB subnets. Assigning a
contract that allows all traffic, such as the contract named common/default, can
enable open communication to OOB subnets.
Step 3. Define external management network instance profiles and associate
contracts: An external management network instance profile determines the
subnets that can gain management access to ACI. Allocate the same contract
applied in the previous step to the external management network instance
profile you create to ensure that the contract is enforced between the external
subnets you define and the ACI OOB subnets.
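The external management network instance profile from step 3 can likewise be expressed as an object tree. This sketch assembles a plausible body for class mgmtInstP with a consumed OOB contract (mgmtRsOoBCons) and its allowed subnets (mgmtSubnet); the class and attribute names reflect the APIC object model as best understood here, so verify them against your APIC before use:

```python
def ext_mgmt_profile_payload(name: str, subnets, contract: str = "default") -> dict:
    """Sketch of an external management network instance profile that
    consumes an OOB contract and scopes allowed source subnets."""
    children = [
        {"mgmtRsOoBCons": {"attributes": {"tnVzOOBBrCPName": contract}}}
    ]
    children += [{"mgmtSubnet": {"attributes": {"ip": s}}} for s in subnets]
    return {"mgmtInstP": {"attributes": {"name": name}, "children": children}}

# Allow all subnets to reach the OOB EPG, as described in the text
print(ext_mgmt_profile_payload("ALL-EXTERNAL", ["0.0.0.0/0"]))
```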

Upgrading an ACI Fabric


As a best practice, all nodes within an ACI fabric should operate at the same
code revision. Upon purchasing ACI switches and APICs and setting up an ACI
fabric, it is highly likely that components may have been shipped at different
code levels. For this reason, it is common practice for engineers to upgrade ACI
fabrics right after initialization.
If there are version disparities between APICs and ACI switch code, it is also
possible for the APICs to flag certain switches as requiring electronic
programmable logic device (EPLD) upgrades. EPLD upgrades enhance
hardware functionality and resolve known issues with hardware firmware.
EPLD upgrade code is sometimes slipstreamed into ACI firmware images, and
therefore an EPLD upgrade may take place automatically as part of ACI fabric
upgrades.
The first thing to do when upgrading a fabric is to decide on a target code.
Consult the release notes for candidate target software revisions and review any
associated open software defects. Also, use the APIC Upgrade/Downgrade
Support Matrix from Cisco to determine if there are any intermediary code
upgrades required to reach the targeted code.
After selecting a target software revision, download the desired APIC and
switch code from the Cisco website. ACI switch and APIC firmware images
that can be used for upgrades have the file extensions .bin and .iso, respectively.
The ACI fabric upgrade process involves three steps:
Step 1. Download APIC and switch software images and then upload them to
the APICs.
Step 2. Upgrade APICs.
Step 3. Upgrade spine and leaf switches in groups.
To upload firmware images to ACI, navigate to the Admin tab, click on
Firmware, select Images, click the Tools icon, and then select Add Firmware to
APIC (see Figure 3-17).
Figure 3-17 Navigating to the Add Firmware to APIC Page
In the Add Firmware to APIC page, keep Firmware Image Location set at its
default value, Local, and then click Browse to select a file for upload from the
local device from which the web session to the APIC has been established.
Alternatively, either HTTP or SCP (Secure Copy Protocol) can be used to
download the target software code from a remote server. To download the
image from a remote server, select the Remote option under Firmware Image
Location and enter a name and URL for the download operation. SCP
authenticates and encrypts file transfers and therefore additionally requires entry
of a username and password with access to download rights on the SCP server.
Instead of using a password, you can have ACI leverage SSH key data for the
SCP download. Figure 3-18 shows sample data for downloading a file from an
SCP server using a local username and password configured on the SCP server.
Figure 3-18 Downloading Firmware from a Remote SCP Server
Once the firmware images have been uploaded to the APICs, they appear in the
Images view (refer to Figure 3-17).

Unless release notes or the APIC Upgrade/Downgrade Support Matrix for a
target release indicate otherwise, APICs should always be upgraded first.
Navigate to the Admin menu, select Firmware, and click Infrastructure. Under
the Controllers menu, click the Tools icon and select Schedule Controller
Upgrade, as shown in Figure 3-19.
Figure 3-19 Navigating to the Schedule Controller Upgrade Page

The Schedule Controller Upgrade page opens. ACI advises against the upgrade
if any critical or major faults exist in the fabric. These faults point to important
problems in the fabric and can lead to traffic disruption during or after the
upgrade. Engineers are responsible for fully understanding the caveats
associated with active faults within a fabric. Do not upgrade a fabric when there
are doubts about the implications of a given fault. After resolving any critical
and major faults, select the target firmware version, define the upgrade mode
via the Upgrade Start Time field (that is, whether the upgrade should begin right
away or at a specified time in the future), and then click Submit to confirm the
selected APIC upgrade schedule. During APIC upgrades, users lose
management access to the APICs and need to reconnect.
Figure 3-20 shows how to kick off an immediate upgrade by selecting Upgrade
Now and clicking Submit.
Figure 3-20 Schedule Controller Upgrade Page
By default, ACI verifies whether the upgrade path from the currently running
version of the system to a specific newer version is supported. If, for any
reason, ACI does not allow an upgrade due to the compatibility checks, and this
is determined to be a false positive or if you wish to proceed with the upgrade
anyway, you can enable the Ignore Compatibility Checks setting shown
in Figure 3-20.
Following completion of any APIC upgrades, switch upgrades can begin. Cisco
ACI uses the concept of upgrade groups to execute a group of switch upgrades
consecutively. The idea behind upgrade groups is that if all servers have been
dual connected to an odd and even switch, then an upgrade group consisting of
all odd leaf switches should not lead to server traffic disruption as long as the
even leaf upgrades do not happen until all odd leaf switches have fully
recovered. Furthermore, if only half of all available spine switches are upgraded
simultaneously and an even number of spines have been deployed, then there is
little likelihood of unexpected traffic disruption.
In a hypothetical upgrade group setup, a fabric could be divided into the
following four groups:
 Odd spine switches
 Even spine switches
 Odd leaf switches
 Even leaf switches
Note
Cisco only provides general guidance on configuration of upgrade groups. To maintain
connectivity in a production environment, Cisco suggests that administrators define
a minimum of two upgrade groups and upgrade one group at a time. Performing a
minimally disruptive upgrade with two upgrade groups requires an administrator to
group and upgrade a set of spine switches and leaf switches together. Most
environments, however, tend to separate switches out into four or more upgrade groups
to reduce the risk and extent of downtime if, for any reason, something goes wrong.

To configure an upgrade group, navigate to the Admin menu, select Firmware,


click Infrastructure, and then select Nodes. Open the Tools menu and select
Schedule Node Upgrade, as shown in Figure 3-21.

Figure 3-21 Navigating to Schedule Node Upgrade

In the Schedule Node Upgrade window, select New in the Upgrade Group field,
choose a target firmware version, select an upgrade start time, and then select
the switches that should be placed in the upgrade group by clicking the + sign in
the All Nodes view. Nodes can be selected from a range based on node IDs or
manually one by one. Finally, click Submit to execute the upgrade group
creation and confirm scheduling of the upgrade of all switches that are members
of this new upgrade group. Figure 3-22 shows the creation of an upgrade group
called ODD-SPINES and scheduling of the upgrade of relevant nodes to take
place right away. The completion of upgrades of all switches in an upgrade
group can take anywhere from 12 to 30 minutes.
The Graceful Maintenance option ensures that the switches in the upgrade
group are put into maintenance mode and removed from the server traffic
forwarding path before the upgrade begins. The Run Mode option determines
whether ACI will proceed with any subsequently triggered upgrades that may
be in queue if a failure of the current upgrade group takes place. The default
value for this parameter is Pause upon Upgrade Failure, and in most cases it is
best not to modify this setting from its default.

Figure 3-22 Creating an Upgrade Group and Scheduling Node Upgrades


One of the checkboxes shown but disabled in Figure 3-22 is Manual Silent Roll
Package Upgrade. A silent roll package upgrade is an internal package upgrade
for an ACI switch hardware SDK, drivers, or other internal components without
an upgrade of the entire ACI switch software operating system. Typically, you
do not need to perform a silent roll upgrade because upgrading the ACI switch
operating system takes care of internal packages as well. Each upgrade group
can be dedicated to either silent roll package upgrades or firmware upgrades but
not both. Thus, the selection of a firmware code revision from the Target
Firmware Version pull-down disables the Manual Silent Roll Package
checkbox.
The triggering of an upgrade group places all switches in the specified upgrade
group into queue for upgrades to the targeted firmware version. If upgrades for
a group of nodes have been scheduled to start right away and no prior upgrade
group is undergoing upgrades, the node upgrades can begin right away.
Otherwise, the nodes are placed into queue for upgrades of previous upgrade
groups to complete (see Figure 3-23). As indicated in Figure 3-23, the EVEN-
SPINES group needs to wait its turn and allow upgrades of nodes in the ODD-
LEAFS group to finish first.

Figure 3-23 An Upgrade Group Placed into Queue Due to Ongoing Upgrades
Cisco recommends that ACI switches be divided into two or more upgrade
groups. No more than 20 switches can be placed into a single upgrade group.
Switches should be placed into upgrade groups to ensure maximum redundancy.
If, for example, all spine switches are placed into a single upgrade group, major
traffic disruption should be expected.
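The odd/even grouping and the 20-switch cap described above are easy to encode. This sketch (a convenience, not any official Cisco tooling) splits a fabric into the four groups discussed earlier and rejects oversized groups:

```python
def build_upgrade_groups(leaf_ids, spine_ids, max_size=20):
    """Split switches into four odd/even upgrade groups (one common scheme)
    and enforce the 20-switch-per-group limit noted above."""
    groups = {
        "ODD-SPINES":  sorted(n for n in spine_ids if n % 2),
        "EVEN-SPINES": sorted(n for n in spine_ids if n % 2 == 0),
        "ODD-LEAFS":   sorted(n for n in leaf_ids if n % 2),
        "EVEN-LEAFS":  sorted(n for n in leaf_ids if n % 2 == 0),
    }
    for name, nodes in groups.items():
        if len(nodes) > max_size:
            raise ValueError(f"{name} has {len(nodes)} switches; max is {max_size}")
    return groups

print(build_upgrade_groups([101, 102, 103, 104], [201, 202]))
```

With vPC peers on consecutive node IDs (as recommended earlier), this split naturally keeps each vPC pair in different groups, so dual-homed servers keep one active uplink throughout.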
Once an upgrade group has been created, the grouping can be reused for
subsequent fabric upgrades. Figure 3-24 shows how the selection of Existing in
the Upgrade Group field allows administrators to reuse previously created
upgrade group settings and trigger new upgrades simply by modifying the target
firmware revision.
Figure 3-24 Reusing a Previously Created Upgrade Group for Subsequent
Upgrades

Understanding Schedulers

An administrator can create a scheduler to specify a window of time for ACI to
execute operations such as switch upgrades and configuration backups.
Schedulers can be triggered on a one-time-only basis or can recur on a regular
basis.
When an administrator creates an upgrade group, ACI automatically generates a
scheduler object with the same name as the group.
In Figure 3-24 in the previous section, Schedule for Later has been selected for
the Upgrade Start Time parameter, which in the installed APIC code version
defaults to a scheduler with the same name as the upgrade group.
The administrator can edit the selected scheduler by clicking on the blue link
displayed in front of it. Figure 3-25 shows the Trigger Scheduler window, from
which a one-time schedule can be implemented by hovering on the + sign and
clicking Create.
Figure 3-25 Creating a One-Time Trigger for a Scheduler
Figure 3-26 demonstrates the selection of a one-time window trigger, which
involves the selection of a window name, the desired date and time, and the
maximum number of nodes to upgrade simultaneously.

Figure 3-26 Parameters Needed for Adding a One-Time Window Trigger to a
Scheduler

Enabling Automatic Upgrades of New Switches

Earlier in this chapter, we mentioned that APICs can force new switches to
undergo upgrades to a certain firmware version prior to moving them into an
active state.

The code version to which new switches should be upgraded needs to be
selected using the Default Firmware Version setting. This setting, however, may
be unavailable in certain APIC code versions by default. Figure 3-27 shows that
after the Enforce Bootscript Version Validation setting is enabled, an
administrator can then select a value for the Default Firmware Version setting.

Figure 3-27 Selecting the Default Firmware Version


To execute the change, an administrator needs to click Submit. ACI then
requests confirmation of the change by using an alert like the one shown
in Figure 3-28.
Figure 3-28 Confirming Enforcement of Bootscript Version Validation
From the alert message, it is clear that the code version selected for Default
Firmware Version is indeed what is passed along to any new switches as part of
the boot process. The alert message also clarifies that any switches whose node
IDs have been added to an upgrade group will not be bound to the bootscript
version requirements, as manual configuration supersedes the Default Firmware
Version setting. Click Yes to confirm.

Understanding Backups and Restores in ACI


ACI allows both scheduled backups and on-demand backups of user
configurations. The act of making a backup is referred to as a configuration
export. Restoring ACI configurations from a backup is referred to as
a configuration import.
ACI also enables recovery of the fabric configuration to a previous known good
state. This process, called configuration rollback, is very useful when backing
out of a change window is deemed necessary. For configuration rollback to a
specific point in time (for example, prior to a change window), it is important
for administrators to have taken a snapshot of the fabric configuration at the
specified time. Snapshots are stored locally on the APICs.
In addition to snapshots, ACI can export configurations to remote FTP, SFTP,
or SCP servers.
Note
For rapid rollback of configurations, it is best to take very regular configuration
snapshots. To ease disaster recovery, administrators are also advised to retain two or
more remote copies of recent backups at all times. These should be stored in easily
accessible locations outside the local ACI fabric and potentially offsite. To automate
backups, administrators can tie ACI backup operations to schedulers.
When performing a configuration import, ACI wants to know the desired import
type and import mode. Import Type can be set to either Merge or Replace. As
indicated in Table 3-5, the Import Type setting primarily determines what
happens when the configuration being imported conflicts with the current
configuration.

Table 3-5 Import Types


Merge: The import operation combines the configuration in the backup file
with the current configuration.

Replace: The import operation overwrites the current configuration with the
configuration in the backup file.

The options for the Import Mode parameter are Best Effort and Atomic. The
Import Mode parameter primarily determines what happens when configuration
errors are identified in the imported settings. Table 3-6 describes the Import
Mode options.
Table 3-6 Import Mode
Import Mode  Definition

Best         Each shard is imported, but if there are objects within a shard
Effort       that are invalid, those objects are ignored and not imported. If
             the version of the configuration being imported is incompatible
             with the current system, shards that can be imported are
             imported, and all other shards are ignored.

Atomic       The import operation is attempted for each shard, but if a shard
             has any invalid configuration, the entire shard is ignored and
             not imported. Also, if the version of the configuration being
             imported is incompatible with the current system, the import
             operation terminates.

An import operation configured for atomic replacement, therefore, attempts to
import all configurations from the backup and to overwrite all settings with
those specified in the backup file. Where a backup file may be used to import
configurations to a different fabric, a best-effort merge operation may be a
more suitable fit.

Note that when an administrator selects Replace as the import type in the ACI
GUI, the administrator no longer has the option to choose an import mode. This
is because the import mode is automatically set to the default value, Atomic,
to prevent a situation in which an import type of Replace combined with an
import mode of Best Effort might break the fabric.
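The Replace-requires-Atomic restriction just described can be captured in a short validation helper. The following Python sketch is purely illustrative (the function name and value strings are hypothetical, not part of any Cisco tooling):

```python
VALID_IMPORT_TYPES = {"merge", "replace"}
VALID_IMPORT_MODES = {"best-effort", "atomic"}


def validate_import_settings(import_type: str, import_mode: str) -> None:
    """Reject the one combination ACI disallows: Replace + Best Effort."""
    if import_type not in VALID_IMPORT_TYPES:
        raise ValueError(f"unknown import type: {import_type}")
    if import_mode not in VALID_IMPORT_MODES:
        raise ValueError(f"unknown import mode: {import_mode}")
    if import_type == "replace" and import_mode == "best-effort":
        raise ValueError("Replace requires the Atomic import mode")
```

Any tooling that drives imports programmatically benefits from rejecting the invalid combination before contacting the APIC.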
Another important aspect of backup and restore operations is whether secure
properties are exported into backup files or processed from imported files.
Secure properties are parameters such as SNMP or SFTP credentials or
credentials used for integration with third-party appliances. For ACI to include
these parameters in backup files and process secure properties included in a
backup, the fabric needs to be configured with global AES encryption settings.

Making On-Demand Backups in ACI


To take an on-demand backup of an ACI fabric, navigate to the Admin tab,
select Import/Export, open the Export Policies folder, right-click Configuration,
and select Create Configuration Export Policy, as shown in Figure 3-29.

Figure 3-29 Navigating to the Create Configuration Export Policy Wizard


In the Create Configuration Export Policy wizard, select a name, select whether
the backup file should conform to JSON or XML format, indicate that a
backup should be generated right after clicking Submit by toggling Start Now to
Yes, and select to create a new remote server destination by right-clicking
Export Destination and selecting Create Remote Location (see Figure 3-30).

Figure 3-30 The Create Configuration Export Policy Wizard


Figure 3-31 shows the Create Remote Location wizard. Enter the details
pertinent to the remote server to which ACI should copy the file, and then
click Submit.
Figure 3-31 The Create Remote Location Wizard
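Behind the GUI, a remote location corresponds to a managed object in the ACI REST API. The following Python sketch builds such an object as a JSON-style body; the class name fileRemotePath and its attribute names reflect the ACI object model as I understand it and should be verified against your APIC version before use:

```python
def remote_location_payload(name: str, host: str, path: str,
                            user: str, protocol: str = "sftp") -> dict:
    """Build a remote location body for configuration exports.

    ACI supports FTP, SFTP, and SCP as transfer protocols, so reject
    anything else up front.
    """
    if protocol not in ("ftp", "sftp", "scp"):
        raise ValueError("protocol must be ftp, sftp, or scp")
    return {
        "fileRemotePath": {
            "attributes": {
                "name": name,
                "host": host,
                "remotePath": path,
                "userName": user,
                "protocol": protocol,
            }
        }
    }
```

A body like this would typically be POSTed to the APIC; credentials (userPasswd) are omitted here intentionally.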
Finally, back in the Create Configuration Export Policy wizard, update the
global AES encryption settings, if desired. Click the Modify Global AES
Encryption Settings checkbox to enable encryption of secure properties, as
shown in Figure 3-32.

Figure 3-32 Navigating to Global AES Encryption from the Export Window


In the Global AES Encryption Settings for All Configuration Import and Export
page, shown in Figure 3-33, select the Enable Encryption checkbox and then
enter the passphrase for encryption. The passphrase needs to be between 16 and
32 characters.

Figure 3-33 Entering Encryption Settings in the Wizard
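The global AES encryption settings map to a single managed object in the ACI REST API. A hedged sketch follows; the class name pkiExportEncryptionKey, the dn uni/exportcryptkey, and the attribute names are assumptions based on the ACI object model and should be confirmed on your APIC:

```python
def aes_encryption_payload(passphrase: str, enable: bool = True) -> dict:
    """Build the body for the global AES encryption settings object.

    ACI enforces a 16-32 character passphrase, so validate before
    building the payload.
    """
    if not 16 <= len(passphrase) <= 32:
        raise ValueError("passphrase must be 16-32 characters")
    return {
        "pkiExportEncryptionKey": {
            "attributes": {
                "dn": "uni/exportcryptkey",
                "strongEncryptionEnabled": "true" if enable else "false",
                "passphrase": passphrase,
            }
        }
    }
```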


Click Submit to return to the Create Configuration Export Policy wizard. With
encryption enabled, secure properties will also be included in backup files.
Finally, click Submit to execute the configuration backup.
Note that one of the options available when making configuration backups is to
specify the target DN field. This field limits the backup to a specific portion of
the ACI object hierarchy. When this field is not populated, the policy universe
and all subtrees are captured in the backup file. Chapter 4, “Exploring ACI,”
introduces the ACI object hierarchy in detail.
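The export policy options discussed in this section (format, snapshot, target DN, immediate start) come together in the configExportP class of the ACI object model. The sketch below builds such a policy as a JSON body; the attribute names are my reading of the API and should be checked against your APIC version:

```python
import json


def export_policy_payload(name: str, fmt: str = "json",
                          snapshot: bool = False,
                          target_dn: str = "",
                          trigger_now: bool = True) -> str:
    """Build a configuration export policy as a JSON REST body.

    Leaving target_dn empty backs up the full policy universe and all
    subtrees, mirroring the GUI behavior described above.
    """
    if fmt not in ("json", "xml"):
        raise ValueError("format must be json or xml")
    attrs = {
        "name": name,
        "format": fmt,
        "snapshot": "true" if snapshot else "false",
        "adminSt": "triggered" if trigger_now else "untriggered",
    }
    if target_dn:
        attrs["targetDn"] = target_dn
    return json.dumps({"configExportP": {"attributes": attrs}})
```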

Making Scheduled Backups in ACI


Scheduled backups are very similar to one-time backups. However, a scheduled
backup also includes a reference to a scheduler object. For instance, an
administrator who wants the entire fabric to be backed up every four hours
could enter settings similar to the ones shown in Figure 3-34.
Figure 3-34 Configuring Automated Backups Using a Recurring Schedule
A scheduler that enables backups every four hours would need six entries, each
configured for execution on a specific hour of the day, four hours apart (see Figure
3-35).

Figure 3-35 A Scheduler That Triggers an Action Every Four Hours
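The six scheduler entries can also be generated programmatically. In the sketch below, trigRecurrWindowP is the recurring-window class from the ACI object model as I understand it; treat the class and attribute names as assumptions to verify:

```python
def four_hour_windows(scheduler_name: str) -> list:
    """Return six recurring-window children, one every four hours.

    Each window fires daily at a fixed hour: 00:00, 04:00, ..., 20:00.
    """
    windows = []
    for hour in range(0, 24, 4):
        windows.append({
            "trigRecurrWindowP": {
                "attributes": {
                    "name": f"{scheduler_name}-h{hour:02d}",
                    "day": "every-day",
                    "hour": str(hour),
                    "minute": "0",
                }
            }
        })
    return windows
```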

Taking Configuration Snapshots in ACI


In addition to backing up configurations to remote locations, ACI allows users
to take a snapshot of the configuration for local storage on the APICs. This can
be done by enabling the Snapshot checkbox in the Create Configuration Export
Policy wizard. Figure 3-36 shows that when the Snapshot checkbox is enabled,
ACI removes the option to export backups to remote destinations.

Figure 3-36 Creating Snapshots of ACI Configurations on a Recurring Basis

Importing Configuration Backups from Remote Servers


To restore a configuration from a backup that resides on a remote server,
navigate to the Admin tab, select Import/Export, drill into the Import Policies
folder, right-click on Configuration, and then select Create Configuration
Import Policy (see Figure 3-37).
Figure 3-37 Navigating to the Create Configuration Import Policy Wizard
In the Create Configuration Import Policy wizard, enter a name for the import
operation, enter details of the backup filename, select the import type and
import mode, select the encryption settings, indicate whether the process should
start right away, and enter the remote destination from which the backup file
should be downloaded. Figure 3-38 shows a sample import operation using
Atomic Replace to restore all configuration to that specified in the backup file.
Remember that when Import Type is set to Replace, Import Mode cannot be set
to Best Effort.
Figure 3-38 Restoring the Configuration from a Backup Residing on an
External Server
Once executed, the status of the import operation can be verified in the
Operational tab of the newly created object, as shown in Figure 3-39.

Figure 3-39 Verifying the Status of an Import Operation


In instances in which secure properties are not encrypted or a test of a backup
and restore operation is desired, use of a configuration merge may be more
desirable. Figure 3-40 shows that if Import Type is set to Merge, Import Mode
can be set to Best Effort.

Figure 3-40 Merging a Configuration Backup with Current Configurations
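An import operation corresponds to the configImportP class in the ACI object model. The following sketch builds an import policy body; the attribute names are my reading of the API and should be verified against your APIC version:

```python
def import_policy_payload(name: str, file_name: str,
                          import_type: str = "merge",
                          import_mode: str = "best-effort") -> dict:
    """Build a configuration import policy body, triggered immediately."""
    if import_type == "replace" and import_mode == "best-effort":
        raise ValueError("Replace requires the Atomic import mode")
    return {
        "configImportP": {
            "attributes": {
                "name": name,
                "fileName": file_name,
                "importType": import_type,
                "importMode": import_mode,
                "adminSt": "triggered",
            }
        }
    }
```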

Executing Configuration Rollbacks


When a misconfiguration occurs and there is a need to revert to an earlier
configuration, you can execute a configuration rollback. To do so, navigate to
the Admin tab and select Config Rollback. Then select the configuration to
which ACI should roll back from the list and select Rollback to This
Configuration, as shown in Figure 3-41.
Figure 3-41 Executing a Configuration Rollback
Note that one of the beauties of configuration rollbacks and backups in ACI in
general is that configurations can be backed up and restored fabricwide, for a
single tenant, or for any specific portion of the ACI fabric object hierarchy.
ACI also simplifies pre-change snapshot creations by allowing users to take
snapshots directly from within the Config Rollback page.
In instances in which a user does not know which snapshot is the most suitable
to revert to, ACI can be directed to compare the contents of snapshots with one
another and log differences between the selected snapshots.

Pod Policy Basics

All switches in ACI reside in a pod. This is true whether ACI Multi-Pod has
been deployed or not. In single-pod deployments, ACI places all switches under
a pod profile called default. Because each pod runs different control plane
protocol instances, administrators need to have a way to modify configurations
that apply to pods. Another reason for the need to tweak pod policies is that
different pods may be in different locations and therefore may need to
synchronize to different NTP servers or talk to different SNMP servers.
A pod profile specifies date and time, podwide SNMP, COOP settings, and IS-
IS and Border Gateway Protocol (BGP) route reflector policies for one or more
pods. Pod profiles map pod policy groups to pods by using pod selectors:
 A pod policy group is a group of individual protocol settings that
are collectively applied to a pod.
 A pod selector is an object that references the pod IDs to which
pod policies apply. Pod policy groups get bound to a pod through a pod
selector.
Figure 3-42 illustrates how the default pod profile (shown as Pod Profile -
default) in an ACI deployment binds a pod policy group called Pod-PolGrp to
all pods within the fabric.

Figure 3-42 Pod Profiles, Pod Policy Groups, and Pod Selectors

Configuring Network Time Protocol (NTP) Synchronization


One of the day 0 tasks that may require changes to the default pod profile
settings is NTP synchronization. Since multiple data centers may house pods
from a single ACI Multi-Pod deployment, each pod may need to synchronize to
different NTP servers. This is why NTP synchronization needs to be configured
at the pod level.
To modify the list of NTP servers a pod points to, navigate to Fabric, select
Fabric Policies, open the Pods folder, double-click Profiles, double-click the
pod profile for the pod in question, select the relevant pod policy group, and
click on the blue icon in front of the pod policy group to open the pod policy
group applicable to the pod. Pod policy groups are also called fabric policy
groups in several spots in the ACI GUI (see Figure 3-43).

Figure 3-43 Opening the Pod Policy Group for the Relevant Pod
In the Pod Policy Group view, validate the name of the date and time policy
currently applicable to the pod in question. According to Figure 3-44, the date
and time policy that ACI resolves for all pods in a particular deployment is a
date and time policy called default.
Figure 3-44 Verifying the Date and Time Policy Applied to a Pod
After identifying the date and time policy object that has been applied to the pod
of interest, an administrator can either modify the applicable date and time
policy or create and apply a new policy object. Figure 3-45 shows how the
administrator can create a new date and time policy from the Pod Policy Group
view.
Figure 3-45 Creating a New Date and Time Policy in the Pod Policy Group
View
Enter a name for the new policy in the Create Date and Time Policy window
and set the policy Administrative State to enabled, as shown in Figure 3-46, and
click Next. Note that the Server State parameter allows administrators to
configure ACI switches as NTP servers for downstream servers. The
Authentication State option determines whether authentication will be required
for any downstream clients in cases in which ACI functions as an NTP server.

Figure 3-46 Creating a Date and Time Policy


Next, NTP servers need to be defined. Click the + sign on the top-right side of
the NTP servers page to create an NTP provider, as shown in Figure 3-47. Enter
the IP or DNS address of the NTP server in the Name field and set Minimum
Polling Interval, Maximum Polling Interval, and the Management EPG (in-band or
out-of-band) from which communication will be established. Finally, select
whether the NTP server being configured should be preferred and then click OK.
Figure 3-47 Configuring NTP Providers
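The date and time policy and its NTP providers map to the datetimePol and datetimeNtpProv classes in the ACI object model. The sketch below mirrors the wizard fields just described; the class names, the datetimeRsNtpProvToEpg relation, and the out-of-band EPG dn are assumptions to verify on your APIC:

```python
def ntp_policy_payload(name: str, servers: list) -> dict:
    """Build a date and time policy with NTP provider children.

    servers: list of (address, preferred) tuples. The polling intervals
    match the values seen in Example 3-7 (minpoll 4, maxpoll 6), and
    each provider reaches its server via the out-of-band management EPG.
    """
    children = []
    for addr, preferred in servers:
        children.append({
            "datetimeNtpProv": {
                "attributes": {
                    "name": addr,
                    "preferred": "true" if preferred else "false",
                    "minPoll": "4",
                    "maxPoll": "6",
                },
                "children": [{
                    "datetimeRsNtpProvToEpg": {
                        "attributes": {
                            "tDn": "uni/tn-mgmt/mgmtp-default/oob-default"
                        }
                    }
                }],
            }
        })
    return {
        "datetimePol": {
            "attributes": {"name": name, "adminSt": "enabled"},
            "children": children,
        }
    }
```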
Once all NTP providers have been configured, as shown in Figure 3-48, select
Finish.

Figure 3-48 Completing the NTP Provider Configuration


As shown in Figure 3-49, the new date and time policy should appear to be
selected in the Date Time Policy drop-down. Click Submit to apply the change.
Figure 3-49 Applying Changes to a Pod Policy Group
To verify that the changes have taken effect, log in to the APIC CLI via SSH
and run the commands cat /etc/ntp.conf and ntpstat, as shown in Example 3-7.
Example 3-7 Verifying NTP Configuration and Synchronization on an APIC

Click here to view code image


apic1# cat /etc/ntp.conf

# Permit time synchronization with our time source, but do not

# permit the source to query or modify the service on this system.

tinker panic 501996547

restrict default kod nomodify notrap nopeer noquery

restrict -6 default kod nomodify notrap nopeer noquery

# Permit all access over the loopback interface. This could

# be tightened as well, but to do so would effect some of

# the administrative functions.

#restrict default ignore


restrict 127.0.0.1

#restrict -6 ::1

keysdir /etc/ntp/

keys /etc/ntp/keys

server 10.233.48.10 prefer minpoll 4 maxpoll 6

server 10.133.48.10 minpoll 4 maxpoll 6

apic1# ntpstat

synchronised to NTP server (10.233.48.10) at stratum 4

time correct to within 72 ms

polling server every 16 s

Example 3-8 shows how to verify NTP settings on ACI switches. Execution of
the commands show ntp peers and show ntp peer-status on a switch confirms
that the APICs have deployed the NTP configuration to the switch and that an
NTP server has been selected for synchronization.
Use the command show ntp statistics peer ipaddr in conjunction with the IP
address of a configured NTP server to verify that the NTP server is consistently
sending response packets to the switch.
Example 3-8 Verifying NTP Configuration and Synchronization on an ACI Switch

Click here to view code image


LEAF101# show ntp peers

---------------------------------------------------------------------------------------

Peer IP Address Serv/Peer Prefer KeyId Vrf

---------------------------------------------------------------------------------------

10.233.48.10 Server yes None management

10.133.48.10 Server no None management


LEAF101# show ntp peer-status

Total peers : 3

* - selected for sync, + - peer mode(active),

- - peer mode(passive), = - polled in client mode

remote local st poll reach delay vrf

----------------------------------------------------------------------------------------

*10.233.48.10 0.0.0.0 4 64 3 0.040 management

=10.133.48.10 0.0.0.0 4 64 3 0.040 management

LEAF101# show ntp statistics peer ipaddr 10.233.48.10

remote host: 10.233.48.10

local interface: Unresolved

time last received: 6s

time until next send: 59s

reachability change: 89s

packets sent: 3

packets received: 3

bad authentication: 0

bogus origin: 0

duplicate: 0

bad dispersion: 0

bad reference time: 0

candidate order: 0

Note that if you know the name of the date and time policy applicable to a pod
of interest, you can navigate to the date and time policy directly by going to
Fabric, selecting Fabric Policies, double-clicking Policies, opening Pod, and
selecting the desired policy under the Date and Time folder (see Figure 3-50). If
there is any question as to whether the right policy has been selected, you can
click the Show Usage button to verify that the policy applies to the nodes of
interest.
Figure 3-50 Navigating Directly to a Specific Date and Time Policy
If the time for a pod should reflect a specific time zone, the Datetime Format
object needs to be modified. You can modify the Datetime Format object by
navigating to System, selecting System Settings, and clicking on Date and
Time.
The Display Format field allows you to toggle between Coordinated Universal
Time (UTC) and local time. Selecting Local exposes the Time Zone field.
The Offset parameter, when enabled, lets users view the difference between the
local time and the reference time. Figure 3-51 shows the Datetime Format
object.
Figure 3-51 Selecting a Time Zone via the Datetime Format Object
Note
NTP is considered a critical service for ACI fabrics. Atomic counters, a capability that
measures traffic between leaf switches, require active NTP synchronization across ACI
fabrics. Without NTP synchronization, ACI is unable to accurately report on packet loss
within the fabric.

Configuring DNS Servers for Lookups


Even though DNS is not explicitly within the scope of the DCACI 300-620
exam, DNS is considered a critical service. Various forms of integrations that
are within the scope of the exam, such as VMM integration, sometimes rely on
DNS. Therefore, this section provides basic coverage of ACI configurations for
DNS lookups.
As a multitenant platform, ACI needs a mechanism for each tenant to be able
to conduct lookups against different DNS servers. ACI enables such a capability
through DNS profiles. Each profile can point to a different set of DNS servers
and leverage a different set of domains. Administrators can associate a different
DNS profile or DNS label to each tenant to ensure that DNS lookups for
endpoints within the specified tenant take place using DNS settings from the
desired DNS profile.
Where multiple DNS profiles are not needed, a global DNS profile called
default can be used to reference corporate DNS servers.
To create a DNS profile, navigate to the Fabric tab, select Fabric Policies, drill
into the Policies folder, open the Global folder, right-click DNS Profiles, and
select Create DNS Profile. Figure 3-52 shows that the DNS profile name,
management EPG (in-band or out-of-band management connections of APICs),
DNS domains, and DNS providers should be defined as part of the DNS profile
creation process.

Figure 3-52 Creating a DNS Profile


Once a DNS profile has been created, the DNS label should then be associated
with VRF instances within user tenants for ACI to be able to run queries against
servers in the DNS profile. Figure 3-53 shows how to assign the DNS label
Public to a VRF instance called DCACI within a tenant by navigating to the
tenant and selecting Networking, opening VRF instances, selecting the desired
VRF instance, clicking on the Policy menu, and entering the DNS profile name
in the DNS Labels field.
Figure 3-53 Assigning a DNS Label Under a VRF Instance
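In the ACI object model, the profile and label just described correspond to the dnsProfile class (with dnsProv and dnsDomain children) and to a dnsLbl object under the VRF instance. The sketch below builds both; the class names and the dnslbl- dn format are my reading of the API and should be verified:

```python
def dns_profile_payload(name: str, providers: list, domains: list) -> dict:
    """Build a DNS profile with provider and domain children.

    providers: list of (address, preferred) tuples.
    domains:   list of (domain, is_default) tuples.
    """
    children = [
        {"dnsProv": {"attributes": {
            "addr": addr,
            "preferred": "true" if pref else "false",
        }}}
        for addr, pref in providers
    ]
    children += [
        {"dnsDomain": {"attributes": {
            "name": dom,
            "isDefault": "true" if dflt else "false",
        }}}
        for dom, dflt in domains
    ]
    return {"dnsProfile": {"attributes": {"name": name},
                           "children": children}}


def dns_label_dn(tenant: str, vrf: str, label: str) -> str:
    """Return the dn of a DNS label attached to a VRF instance."""
    return f"uni/tn-{tenant}/ctx-{vrf}/dnslbl-{label}"
```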
It is important to differentiate between manually selecting a DNS profile for a
user tenant and associating a DNS profile that enables the APICs themselves to
conduct global lookups. For the APICs to conduct lookups within the CLI and
for critical functions, the DNS profile named default needs to be
configured, and the label default needs to be associated with the in-band or out-
of-band management VRF instances. Figure 3-54 shows the default label being
associated with the VRF instance named oob. Association of any DNS label
other than default with the inb and oob VRF instances triggers faults in ACI.
Figure 3-54 Assigning the Default DNS Label to the oob VRF Instance
Following association of the default label to the oob VRF instance, the APICs
should be able to execute pings against servers using their fully qualified
domain names.

Verifying COOP Group Configurations


Council of Oracle Protocol (COOP) is used to communicate endpoint mapping
information (location and identity) to spine switches. A leaf switch forwards
endpoint address information to the spine switch Oracle by using ZeroMQ.
COOP running on the spine nodes ensures that every spine switch maintains a
consistent copy of endpoint address and location information and additionally
maintains the distributed hash table (DHT) repository of the endpoint
identity-to-location mapping database.
COOP has been enhanced to support two modes: strict and compatible. In strict
mode, COOP allows only MD5-authenticated ZeroMQ connections, protecting
against malicious traffic injection. In compatible mode, COOP accepts both
MD5-authenticated and nonauthenticated ZMQ connections for message
transportation.
While COOP is automatically configured by ACI, it is helpful to be able to see
the COOP configuration. To validate COOP settings, navigate to System, select
System Settings, and click COOP Group.
Figure 3-55 shows COOP enabled on both spines with the authentication mode
Compatible Type within a given fabric. When spines are selected to run COOP,
ACI automatically populates the Address field with the loopback 0 address of
the spines selected. If enforcement of COOP authentication is required within
an environment, you need to update the authentication mode to Strict Type.

Figure 3-55 Verifying COOP Settings in ACI
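The strict/compatible toggle corresponds to a single attribute on the COOP group policy object. The sketch below assumes the class name coopPol and the dn uni/fabric/pol-default; both are assumptions worth confirming (for example, via the API Inspector) before posting:

```python
def coop_policy_payload(strict: bool) -> dict:
    """Toggle the COOP group policy between compatible and strict modes.

    The dn below is an assumption; verify it on your APIC first.
    """
    return {
        "coopPol": {
            "attributes": {
                "dn": "uni/fabric/pol-default",
                "type": "strict" if strict else "compatible",
            }
        }
    }
```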

EXAM PREPARATION TASKS


As mentioned in the section “How to Use This Book” in the Introduction, you
have a couple of choices for exam preparation: Chapter 17, “Final Preparation,”
and the exam simulation questions in the Pearson Test Prep Software Online.

REVIEW ALL KEY TOPICS


Review the most important topics in this chapter, noted with the Key Topic icon
in the outer margin of the page. Table 3-7 lists these key topics and the page
number on which each is found.

Table 3-7 Key Topics for Chapter 3

Key Topic Element  Description

Paragraph   Describes APIC in-band ports and minimal versus recommended
            connectivity requirements

Paragraph   Describes APIC OOB ports and connectivity requirements

Paragraph   Contrasts APIC OOB ports with Cisco IMC ports

Table 3-3   Calls out basic configuration parameters that need to be planned
            for fabric initialization

List        Outlines the steps involved in ACI switch discovery

List        Describes fabric discovery stages

Table 3-4   Describes switch discovery states and what each one means

Paragraph   Describes the NIC mode and NIC redundancy settings required for
            proper fabric discovery

Paragraph   Describes the process of assigning OOB management addresses to
            ACI nodes

Paragraph   Explains why it is important to configure entries for APICs in
            the Static Node Management Addresses folder

Paragraph   Describes how to assign the default contract to the OOB
            management EPG

Paragraph   Outlines what external management network instance profiles are
            and how they can be used to define external subnets that should
            be allowed to communicate with ACI from a management perspective

List        Recaps the process of assigning an open contract to the
            out-of-band network

Paragraph   Describes how to upload firmware to APICs

Paragraph   Describes how to kick off APIC upgrades

Paragraph   Provides additional critical details on executing APIC upgrades

Paragraph   Explains how to configure an upgrade group

Paragraph   Provides additional critical details on configuring and
            triggering an upgrade group

Paragraph   Explains the use of schedulers in ACI

Paragraph   Describes the process of setting a default firmware version to
            enforce code upgrades for new switches that are introduced into
            the fabric

Table 3-5   Describes import types

Table 3-6   Describes import modes

Paragraph   Explains how all switches are by default placed into the default
            pod

Paragraph   Explains pod profiles, pod policy groups, and pod selectors

COMPLETE TABLES AND LISTS FROM MEMORY


Print a copy of Appendix C, “Memory Tables” (found on the companion
website), or at least the section for this chapter, and complete the tables and lists
from memory. Appendix D, “Memory Tables Answer Key” (also on the
companion website), includes completed tables and lists you can use to check
your work.

DEFINE KEY TERMS


Define the following key terms from this chapter and check your answers in the
glossary:
APIC in-band port
APIC OOB port
APIC Cisco IMC
TEP pool
infrastructure VLAN
intra-fabric messaging (IFM)
physical tunnel endpoint (PTEP)
fabric tunnel endpoint (FTEP)
dynamic tunnel endpoint (DTEP)
scheduler
pod profile
pod policy group
pod selector