CCNP Data Center Application Centric Infrastructure 300-620 DCACI Official Cert Guide, by Ammar Ahmadi. Published by Cisco Press, 2021.
Chapter 3
Initializing an ACI Fabric
FOUNDATION TOPICS
UNDERSTANDING ACI FABRIC INITIALIZATION
Before administrators can create subnets within ACI and configure switch ports
for server traffic, an ACI fabric needs to be initialized.
The process of fabric initialization involves attaching APICs to leaf switches,
attaching leaf switches to spines, configuring APICs to communicate with leaf
switches, and activating the switches one by one until the APICs are able to
configure all switches in the fabric. Let’s look first at the planning needed for
fabric initialization.
Figure 3-3 callouts (partial): 2 - Dual 1/10 Gigabit Ethernet ports (LAN1 and LAN2); 3 - VGA video port (DB-15 connector); 7 - Power supplies (two, redundant); 8 - PCIe riser 1/slot 1 (x16 lane).
Out of the components depicted in Figure 3-3, the VIC 1455 ports are of most
importance for the fabric discovery process because they form the in-band
communication channel into the fabric. The VIC 1455 card has four 10/25
Gigabit Ethernet ports. VIC adapters in earlier generations of APICs had two 10
Gigabit Ethernet ports instead. At least one VIC port on each APIC needs to be
cabled to a leaf to enable full APIC cluster formation. For redundancy purposes,
it is best to diversify connectivity from each APIC across a pair of leaf switches
by connecting at least two ports.
In first- and second-generation APICs sold with variants of dual-port VIC 1225
cards, ports 1 and 2 would need to be cabled up to leaf switches to diversify
connectivity. In third-generation APICs, however, ports 1 and 2 together
represent logical port eth2-1, and ports 3 and 4 together represent eth2-2. Ports
eth2-1 and eth2-2 are then bundled together into an active/standby team at the
operating system level. For this reason, diversifying in-band APIC connectivity
across two leaf switches in third-generation APICs requires that one cable be
connected to either port 1 or port 2 and another cable be attached to either port 3
or port 4. Connecting both ports that represent a logical port (for example, ports
1 and 2) to leaf switches in third-generation APICs can result in unpredictable
failover issues.
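The diversification rule for third-generation APICs can be captured in a short planning check. The following is a minimal sketch, not from the book, that encodes the guidance above; the data format and function name are purely illustrative.

# Illustrative check of the third-generation APIC VIC 1455 cabling rule described
# above: cable one port from logical eth2-1 (physical ports 1-2) and one from
# eth2-2 (physical ports 3-4), landing on two different leaf switches.
def validate_apic_cabling(connections):
    """connections: list of (vic_port, leaf_name) tuples, e.g. [(1, "LEAF101"), (3, "LEAF102")]."""
    pair_a = [(p, leaf) for p, leaf in connections if p in (1, 2)]   # logical eth2-1
    pair_b = [(p, leaf) for p, leaf in connections if p in (3, 4)]   # logical eth2-2
    if len(pair_a) > 1 or len(pair_b) > 1:
        return "Both members of a logical port pair are cabled; failover may be unpredictable."
    if not pair_a or not pair_b:
        return "Cable one port from each logical pair (1/2 and 3/4) for redundancy."
    if pair_a[0][1] == pair_b[0][1]:
        return "Diversify: connect the eth2-1 and eth2-2 cables to two different leaf switches."
    return "Cabling plan satisfies the diversification guidance."

print(validate_apic_cabling([(1, "LEAF101"), (3, "LEAF102")]))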
Not all ACI leaf switches support 10/25 Gigabit Ethernet cabling. During the
deployment planning stage, it is important to ensure that the leaf nodes to which
the APICs connect actually support the available VIC port speeds and that
proper transceivers and cabling are available.
OOB management interfaces should not be confused with the Cisco Integrated
Management Controller (Cisco IMC) port on the APICs. The APIC Cisco
IMC allows lights-out management of the physical server, firmware upgrades,
and monitoring of server hardware health. While the dual 1/10 Gigabit Ethernet
LOM ports enable out-of-band access to the APIC operating system, the Cisco
IMC provides out-of-band access to the server hardware itself. With Cisco IMC
access, an engineer can gain virtual KVM access to the server and reinstall the
APIC operating system remotely in the event that the APIC is no longer
accessible. But the Cisco IMC cannot be used to gain HTTPS access to the ACI
management interface. Because of the significance of Cisco IMC in APIC
recovery, assigning an IP address to the Cisco IMC is often viewed as a
critically important fabric initialization task.
APIC OOB IP addresses and Cisco IMC IP addresses are often selected from
the same subnet even though it is not required for them to be in the same subnet.
Table 3-3 Basic Configuration Parameters Planned for Fabric Initialization

Fabric Name: A user-friendly name for the fabric. If no name is entered, ACI uses the default name ACI Fabric1.

Fabric ID: A numeric identifier between 1 and 128 for the ACI fabric. If no ID is entered, ACI uses 1 as the fabric ID.

Pod ID: A parameter that determines the unique pod ID to which the APIC being initialized is attached. When ACI Multi-Pod is not being deployed, use the default value of 1.

Standby Controller: An APIC added to a fabric solely to aid in fabric recovery and in retaining APIC quorum during a prolonged outage. If the APIC being initialized is a standby APIC, select Yes for this parameter.

Controller ID: The unique ID number for the APIC being configured. Valid values range from 1 to 32. The first three active APICs should always be assigned IDs between 1 and 3; recommended node ID values for standby APICs range from 16 to 32.

Pod 1 TEP Pool: The TEP pool assigned to the seed pod. A TEP pool is a subnet used for intra-fabric communication. This subnet can potentially be advertised outside the fabric toward an IPN or ISN or when a fabric is extended to virtual environments using Cisco ACI Virtual Edge (AVE). TEP pool subnets should ideally be unique across an enterprise. Cisco recommends TEP pool subnet sizes between /16 and /23; subnet sizes do impact pod scalability, and use of /16 or /17 ranges is highly recommended. Each pod needs a separate TEP pool, but only the seed pod TEP pool is defined during APIC initialization.

Infrastructure (infra) VLAN: The VLAN ID used for control communication between ACI fabric nodes (leaf switches, spine switches, and APICs). The infrastructure VLAN is also used when extending an ACI fabric to AVS or AVE virtual switches. The infra VLAN must be unique and unused elsewhere in the environment. Because the VLAN may need to be extended outside ACI, ensure that the infrastructure VLAN does not fall into the reserved VLAN range of other switches in the environment.

BD Multicast Addresses (GiPo): The IP address range used for multicast within a fabric. In ACI Multi-Site environments, the same range can be used across sites. If the administrator does not change the default range, 225.0.0.0/15 is selected for this parameter. Valid ranges are between 225.0.0.0/15 and 231.254.0.0/15, and a prefix length of /15 is required.

APIC OOB Addresses and Default Gateway: Addresses assigned to the OOB LOM ports for access to the APIC GUI and CLI. These are separate from the Cisco IMC addresses.

Password Strength: A parameter that determines whether to enforce the use of passwords of adequate strength for all users. The default behavior is to enforce strong passwords.
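Several of the Table 3-3 values are easy to sanity-check before running the setup dialog. The following is a minimal planning aid, not from the book; the function name is illustrative, and the VLAN check is only a general VLAN ID sanity test rather than a book-specific rule.

# Sanity-checks a few Table 3-3 parameters against the documented ranges.
import ipaddress

def check_fabric_plan(tep_pool, gipo, infra_vlan, fabric_id):
    issues = []
    tep = ipaddress.ip_network(tep_pool, strict=True)
    if not 16 <= tep.prefixlen <= 23:
        issues.append(f"TEP pool {tep_pool}: Cisco recommends a prefix length between /16 and /23.")
    mcast = ipaddress.ip_network(gipo, strict=True)
    if mcast.prefixlen != 15 or not 225 <= mcast.network_address.packed[0] <= 231:
        issues.append(f"GiPo range {gipo}: must be a /15 between 225.0.0.0/15 and 231.254.0.0/15.")
    if not 2 <= infra_vlan <= 4094:   # general VLAN sanity check only
        issues.append(f"Infra VLAN {infra_vlan}: not a usable VLAN ID.")
    if not 1 <= fabric_id <= 128:
        issues.append(f"Fabric ID {fabric_id}: must be between 1 and 128.")
    return issues or ["Plan is consistent with the documented ranges."]

print("\n".join(check_fabric_plan("10.0.0.0/16", "225.0.0.0/15", 3967, 1)))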
Table 3-4 Switch Discovery States

Unknown: The node has been detected, but a node ID has not yet been assigned to it by an administrator in the Fabric Membership view.

Discovering: The node has been detected, and the APICs are in the process of assigning the specified node ID as well as a TEP IP address to the switch.

Unsupported: The node is a Cisco switch, but it is not supported, or its firmware is not compatible with the ACI fabric.

Disabled/Decommissioned: The node has been discovered and activated, but a user disabled or decommissioned it. The node can be reenabled.

Inactive: The node has been discovered and activated, but it is not currently active in the fabric; for example, it may be powered off, or its cables may be disconnected.
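Discovery progress can also be tracked programmatically. The sketch below, which is not from the book, assumes the fabricNode class exposes id, name, role, and fabricSt attributes; the APIC address and credentials are placeholders, and certificate verification is disabled purely for brevity.

# Minimal sketch: list fabric nodes and their state via the APIC REST API.
import requests

APIC = "https://apic1.example.com"   # hypothetical APIC address
session = requests.Session()
login = {"aaaUser": {"attributes": {"name": "admin", "pwd": "password"}}}
session.post(f"{APIC}/api/aaaLogin.json", json=login, verify=False).raise_for_status()

resp = session.get(f"{APIC}/api/node/class/fabricNode.json", verify=False)
resp.raise_for_status()
for obj in resp.json()["imdata"]:
    attrs = obj["fabricNode"]["attributes"]
    print(f'{attrs["id"]:>4} {attrs["name"]:<12} {attrs["role"]:<10} {attrs["fabricSt"]}')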
As a best practice, do not modify the NIC Mode or NIC Redundancy settings in
Cisco IMC. If there are any discovery issues, ensure that Cisco IMC has been
configured with the default NIC Mode setting Dedicated and not Shared. The
NIC Redundancy setting should also be left at its default value None.
Note: The infra VLAN ID should not be used elsewhere in your environment
and should not overlap with any other reserved VLANs on other platforms.
During the initial setup dialog, the APIC displays a summary of the entered cluster configuration before applying it, for example:

Cluster configuration ...
Number of controllers: 3
POD ID: 1
Controller ID: 1
Strong Passwords: Y
Password: ********

Warning: TEP address pool, Infra VLAN ID and Multicast address pool
cannot be changed later; these are permanent until the fabric is wiped.
[Example output fragments: IS-IS adjacencies formed with Circuit Type L1, and interfaces such as lo0 10.233.46.33/32 and vlan8 10.233.44.30/27 reported protocol-up/link-up/admin-up.]
Once an ACI fabric has been fully initialized, each switch should have dynamic
tunnel endpoint (DTEP) entries that include PTEP addresses for all other
devices in the fabric as well as entries pointing to spine proxy (proxy TEP)
addresses. Example 3-5 shows DTEP entries from the perspective of LEAF101
with the proxy TEP addresses highlighted.
Example 3-5 Dynamic Tunnel Endpoint (DTEP) Database
[Example 3-5 output fragment: tunnel interfaces Tunnel1, Tunnel3 through Tunnel6, and Tunnel8 through Tunnel10 are reported up.]
Note that aside from IS-IS, ACI enables COOP functionality on all available
spine switches as part of the fabric initialization process. This ensures that leaf
switches can communicate endpoint mapping information (location and
identity) to spine switches. However, fabric initialization does not result in the
automatic establishment of control plane adjacencies for protocols such as MP-
BGP. As of the time of this writing, a BGP autonomous system number needs to
be selected, and at least one spine has to be designated as a route reflector
before MP-BGP can be effectively used within an ACI fabric.
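The presence of an ASN and at least one spine route reflector can be verified through the REST API. The following is a hedged sketch, not from the book: the DN uni/fabric/bgpInstP-default and the bgpAsP and bgpRRNodePEp class names follow common APIC object-model naming but should be confirmed against your own fabric (for example, with moquery); the APIC address and credentials are placeholders.

# Hedged sketch: inspect the fabric BGP Route Reflector policy.
import requests

APIC = "https://apic1.example.com"   # hypothetical address
session = requests.Session()
session.post(f"{APIC}/api/aaaLogin.json",
             json={"aaaUser": {"attributes": {"name": "admin", "pwd": "password"}}},
             verify=False).raise_for_status()

url = f"{APIC}/api/node/mo/uni/fabric/bgpInstP-default.json?query-target=subtree"
data = session.get(url, verify=False).json()["imdata"]
asn = [o["bgpAsP"]["attributes"]["asn"] for o in data if "bgpAsP" in o]
rrs = [o["bgpRRNodePEp"]["attributes"]["id"] for o in data if "bgpRRNodePEp" in o]
print(f"BGP ASN: {asn or 'not set'}; route reflector spine node IDs: {rrs or 'none'}")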
The Schedule Controller Upgrade page opens. ACI advises against the upgrade
if any critical or major faults exist in the fabric. These faults point to important
problems in the fabric and can lead to traffic disruption during or after the
upgrade. Engineers are responsible for fully understanding the caveats
associated with active faults within a fabric. Do not upgrade a fabric when there
are doubts about the implications of a given fault. After resolving any critical
and major faults, select the target firmware version, define the upgrade mode
via the Upgrade Start Time field (that is, whether the upgrade should begin right
away or at a specified time in the future), and then click Submit to confirm the
selected APIC upgrade schedule. During APIC upgrades, users lose
management access to the APICs and need to reconnect.
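The pre-upgrade fault review can be scripted as well. The sketch below, which is not from the book, pulls critical and major faults using the faultInst class and the standard query-target-filter syntax; the APIC address and credentials are placeholders.

# Minimal pre-upgrade check: list critical and major faults.
import requests

APIC = "https://apic1.example.com"   # hypothetical address
session = requests.Session()
session.post(f"{APIC}/api/aaaLogin.json",
             json={"aaaUser": {"attributes": {"name": "admin", "pwd": "password"}}},
             verify=False).raise_for_status()

flt = 'or(eq(faultInst.severity,"critical"),eq(faultInst.severity,"major"))'
faults = session.get(f"{APIC}/api/node/class/faultInst.json",
                     params={"query-target-filter": flt}, verify=False).json()["imdata"]
if faults:
    print(f"{len(faults)} critical/major faults present; investigate before upgrading:")
    for f in faults[:10]:
        a = f["faultInst"]["attributes"]
        print(f'  {a["severity"]:<8} {a["code"]:<8} {a["descr"]}')
else:
    print("No critical or major faults; upgrade prerequisites look clear.")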
Figure 3-20 shows how to kick off an immediate upgrade by selecting Upgrade
Now and clicking Submit.
Figure 3-20 Schedule Controller Upgrade Page
By default, ACI verifies whether the upgrade path from the currently running
version of the system to a specific newer version is supported. If, for any
reason, ACI does not allow an upgrade due to the compatibility checks, and this
is determined to be a false positive or if you wish to proceed with the upgrade
anyway, you can enable the Ignore Compatibility Checks setting shown
in Figure 3-20.
Following completion of any APIC upgrades, switch upgrades can begin. Cisco
ACI uses the concept of upgrade groups to upgrade batches of switches:
switches within a group are upgraded together, and groups are upgraded
consecutively. The idea behind upgrade groups is that if all servers have been
dual connected to an odd and even switch, then an upgrade group consisting of
all odd leaf switches should not lead to server traffic disruption as long as the
even leaf upgrades do not happen until all odd leaf switches have fully
recovered. Furthermore, if only half of all available spine switches are upgraded
simultaneously and an even number of spines have been deployed, then there is
little likelihood of unexpected traffic disruption.
In a hypothetical upgrade group setup, a fabric could be divided into the
following four groups:
Odd spine switches
Even spine switches
Odd leaf switches
Even leaf switches
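As a simple illustration of this grouping strategy, the sketch below, which is not from the book, partitions node IDs into the four hypothetical groups listed above; the node IDs and group names are illustrative only.

# Partition node IDs into odd/even spine and leaf upgrade groups.
def build_upgrade_groups(spines, leaves):
    """spines/leaves: lists of node IDs, e.g. spines=[201, 202], leaves=[101, 102, 103, 104]."""
    return {
        "ODD-SPINES":  [n for n in spines if n % 2],
        "EVEN-SPINES": [n for n in spines if n % 2 == 0],
        "ODD-LEAVES":  [n for n in leaves if n % 2],
        "EVEN-LEAVES": [n for n in leaves if n % 2 == 0],
    }

groups = build_upgrade_groups(spines=[201, 202], leaves=[101, 102, 103, 104])
for name, nodes in groups.items():
    print(f"{name}: {nodes}")   # upgrade one group at a time, in sequence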
Note
Cisco only provides general guidance on configuration of upgrade groups. To maintain
connectivity in a production environment, Cisco suggests that administrators define
a minimum of two upgrade groups and upgrade one group at a time. Performing a
minimally disruptive upgrade with two upgrade groups requires an administrator to
group and upgrade a set of spine switches and leaf switches together. Most
environments, however, tend to separate switches out into four or more upgrade groups
to reduce the risk and extent of downtime if, for any reason, something goes wrong.
In the Schedule Node Upgrade window, select New in the Upgrade Group field,
choose a target firmware version, select an upgrade start time, and then select
the switches that should be placed in the upgrade group by clicking the + sign in
the All Nodes view. Nodes can be selected from a range based on node IDs or
manually one by one. Finally, click Submit to execute the upgrade group
creation and confirm scheduling of the upgrade of all switches that are members
of this new upgrade group. Figure 3-22 shows the creation of an upgrade group
called ODD-SPINES and scheduling of the upgrade of relevant nodes to take
place right away. The completion of upgrades of all switches in an upgrade
group can take anywhere from 12 to 30 minutes.
The Graceful Maintenance option ensures that the switches in the upgrade
group are put into maintenance mode and removed from the server traffic
forwarding path before the upgrade begins. The Run Mode option determines
whether ACI will proceed with any subsequently triggered upgrades that may
be in queue if a failure of the current upgrade group takes place. The default
value for this parameter is Pause upon Upgrade Failure, and in most cases it is
best not to modify this setting from its default.
Figure 3-23 An Upgrade Group Placed into Queue Due to Ongoing Upgrades
Cisco recommends that ACI switches be divided into two or more upgrade
groups. No more than 20 switches can be placed into a single upgrade group.
Switches should be placed into upgrade groups to ensure maximum redundancy.
If, for example, all spine switches are placed into a single upgrade group, major
traffic disruption should be expected.
Once an upgrade group has been created, the grouping can be reused for
subsequent fabric upgrades. Figure 3-24 shows how the selection of Existing in
the Upgrade Group field allows administrators to reuse previously created
upgrade group settings and trigger new upgrades simply by modifying the target
firmware revision.
Figure 3-24 Reusing a Previously Created Upgrade Group for Subsequent
Upgrades
Understanding Schedulers
The options for the Import Type parameter are Merge and Replace:

Merge: The import operation combines the configuration in the backup file with the current configuration.

Replace: The import operation overwrites the current configuration with the configuration in the backup file.
The options for the Import Mode parameter are Best Effort and Atomic. The
Import Mode parameter primarily determines what happens when configuration
errors are identified in the imported settings. Table 3-6 describes the Import
Mode options.
Table 3-6 Import Mode Options

Best Effort: Each shard is imported, but if there are objects within a shard that are invalid, those objects are ignored and not imported. If the version of the configuration being imported is incompatible with the current system, shards that can be imported are imported, and all other shards are ignored.

Atomic: The import operation is attempted for each shard, but if a shard has any invalid configuration, the entire shard is ignored and not imported. Also, if the version of the configuration being imported is incompatible with the current system, the import operation terminates.
Note that when an administrator selects Replace as the import type in the ACI
GUI, the administrator no longer has the option to choose an import mode. This
is because the import mode is automatically set at the default value Atomic to
prevent a situation in which an import type Replace and an import mode Best
Effort might break the fabric.
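Configuration imports can also be triggered through the REST API. The following is a hedged sketch, not from the book: the configImportP class with importType (merge or replace) and importMode (best-effort or atomic) attributes reflects common APIC usage, but the exact object model, file name, and remote-location setup must be confirmed in your own fabric; the APIC address and credentials are placeholders. Note that, consistent with the text above, the example pairs merge with best-effort rather than replace with best-effort.

# Hedged sketch: trigger a configuration import.
import requests

APIC = "https://apic1.example.com"   # hypothetical address
session = requests.Session()
session.post(f"{APIC}/api/aaaLogin.json",
             json={"aaaUser": {"attributes": {"name": "admin", "pwd": "password"}}},
             verify=False).raise_for_status()

import_policy = {
    "configImportP": {
        "attributes": {
            "name": "restore-weekly",           # illustrative policy name
            "fileName": "backup-file.tar.gz",   # placeholder backup file name
            "importType": "merge",              # merge or replace
            "importMode": "best-effort",        # best-effort or atomic (Table 3-6)
            "adminSt": "triggered",             # start the import immediately
        }
    }
}
session.post(f"{APIC}/api/node/mo/uni/fabric.json", json=import_policy, verify=False).raise_for_status()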
Another important aspect of backup and restore operations is whether secure
properties are exported into backup files or processed from imported files.
Secure properties are parameters such as SNMP or SFTP credentials or
credentials used for integration with third-party appliances. For ACI to include
these parameters in backup files and process secure properties included in a
backup, the fabric needs to be configured with global AES encryption settings.
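Global AES encryption can likewise be enabled programmatically. This is a hedged sketch, not from the book: the pkiExportEncryptionKey object at uni/exportcryptkey and its strongEncryptionEnabled and passphrase attributes reflect common APIC object naming and should be verified against your fabric; the APIC address, credentials, and passphrase are placeholders.

# Hedged sketch: enable global AES encryption for configuration exports.
import requests

APIC = "https://apic1.example.com"   # hypothetical address
session = requests.Session()
session.post(f"{APIC}/api/aaaLogin.json",
             json={"aaaUser": {"attributes": {"name": "admin", "pwd": "password"}}},
             verify=False).raise_for_status()

payload = {
    "pkiExportEncryptionKey": {
        "attributes": {
            "strongEncryptionEnabled": "true",
            "passphrase": "Use-A-Long-Passphrase-Here",   # placeholder passphrase
        }
    }
}
session.post(f"{APIC}/api/node/mo/uni/exportcryptkey.json", json=payload, verify=False).raise_for_status()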
All switches in ACI reside in a pod. This is true whether ACI Multi-Pod has
been deployed or not. In single-pod deployments, ACI places all switches under
a pod profile called default. Because each pod runs different control plane
protocol instances, administrators need to have a way to modify configurations
that apply to pods. Another reason for the need to tweak pod policies is that
different pods may be in different locations and therefore may need to
synchronize to different NTP servers or talk to different SNMP servers.
A pod profile specifies date and time, podwide SNMP, COOP settings, and IS-
IS and Border Gateway Protocol (BGP) route reflector policies for one or more
pods. Pod profiles map pod policy groups to pods by using pod selectors:
A pod policy group is a group of individual protocol settings that
are collectively applied to a pod.
A pod selector is an object that references the pod IDs to which
pod policies apply. Pod policy groups get bound to a pod through a pod
selector.
Figure 3-42 illustrates how the default pod profile (shown as Pod Profile -
default) in an ACI deployment binds a pod policy group called Pod-PolGrp to
all pods within the fabric.
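The same bindings can be inspected through the REST API. The sketch below is not from the book and is hedged: the class names fabricPodP (pod profile), fabricPodS (pod selector), and fabricRsPodPGrp (relation to the pod policy group) follow common APIC object naming but should be verified in your fabric; the APIC address and credentials are placeholders.

# Hedged sketch: walk pod profiles to see which pod policy group each selector binds.
import requests

APIC = "https://apic1.example.com"   # hypothetical address
session = requests.Session()
session.post(f"{APIC}/api/aaaLogin.json",
             json={"aaaUser": {"attributes": {"name": "admin", "pwd": "password"}}},
             verify=False).raise_for_status()

resp = session.get(f"{APIC}/api/node/class/fabricPodP.json",
                   params={"query-target": "subtree"}, verify=False)
for obj in resp.json()["imdata"]:
    cls, body = next(iter(obj.items()))
    attrs = body["attributes"]
    if cls == "fabricPodS":
        print(f'Pod selector: {attrs["name"]} (type {attrs.get("type", "?")})')
    elif cls == "fabricRsPodPGrp":
        print(f'  bound pod policy group: {attrs["tDn"]}')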
Figure 3-43 Opening the Pod Policy Group for the Relevant Pod
In the Pod Policy Group view, validate the name of the date and time policy
currently applicable to the pod in question. According to Figure 3-44, the date
and time policy that ACI resolves for all pods in a particular deployment is a
date and time policy called default.
Figure 3-44 Verifying the Date and Time Policy Applied to a Pod
After identifying the date and time policy object that has been applied to the pod
of interest, an administrator can either modify the applicable date and time
policy or create and apply a new policy object. Figure 3-45 shows how the
administrator can create a new date and time policy from the Pod Policy Group
view.
Figure 3-45 Creating a New Date and Time Policy in the Pod Policy Group
View
Enter a name for the new policy in the Create Date and Time Policy window
and set the policy Administrative State to enabled, as shown in Figure 3-46, and
click Next. Note that the Server State parameter allows administrators to
configure ACI switches as NTP servers for downstream servers. The
Authentication State option determines whether authentication will be required
for any downstream clients in cases in which ACI functions as an NTP server.
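The same workflow can be mirrored through the REST API. The following is a hedged sketch, not from the book: the datetimePol and datetimeNtpProv classes and the uni/fabric/time-<name> DN follow common APIC object naming but should be confirmed in your own fabric; the policy name, NTP server address, APIC address, and credentials are placeholders.

# Hedged sketch: create a date and time policy with one NTP provider.
import requests

APIC = "https://apic1.example.com"   # hypothetical address
session = requests.Session()
session.post(f"{APIC}/api/aaaLogin.json",
             json={"aaaUser": {"attributes": {"name": "admin", "pwd": "password"}}},
             verify=False).raise_for_status()

policy = {
    "datetimePol": {
        "attributes": {"name": "NTP-Policy", "adminSt": "enabled"},   # illustrative name
        "children": [
            {"datetimeNtpProv": {"attributes": {"name": "192.0.2.10",   # placeholder NTP server
                                                "preferred": "true"}}}
        ],
    }
}
session.post(f"{APIC}/api/node/mo/uni/fabric/time-NTP-Policy.json",
             json=policy, verify=False).raise_for_status()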
[Example output fragments: an APIC ntp.conf excerpt referencing keysdir /etc/ntp/ and keys /etc/ntp/keys, the apic1# ntpstat command, and NTP statistics reporting Total peers: 3, packets sent: 3, packets received: 3, bad authentication: 0, bogus origin: 0, duplicate: 0, bad dispersion: 0, and candidate order: 0.]
Note that if you know the name of the date and time policy applicable to a pod
of interest, you can navigate to the date and time policy directly by going to
Fabric, selecting Fabric Policies, double-clicking Policies, opening Pod, and
selecting the desired policy under the Date and Time folder (see Figure 3-50). If
there is any question as to whether the right policy has been selected, you can
click the Show Usage button to verify that the policy applies to the nodes of
interest.
Figure 3-50 Navigating Directly to a Specific Date and Time Policy
If the time for a pod should reflect a specific time zone, the Datetime Format
object needs to be modified. You can modify the Datetime Format object by
navigating to System, selecting System Settings, and clicking on Date and
Time.
The Display Format field allows you to toggle between Coordinated Universal
Time (UTC) and local time. Selecting Local exposes the Time Zone field.
Enabling the Offset parameter allows users to view the difference between the
local time and the reference time. Figure 3-51 shows the Datetime Format
object.
Figure 3-51 Selecting a Time Zone via the Datetime Format Object
Note
NTP is considered a critical service for ACI fabrics. Atomic counters, a capability that
measures traffic between leaf switches, require active NTP synchronization across ACI
fabrics. Without NTP synchronization, ACI is unable to accurately report on packet loss
within the fabric.
Key Topic Element: Description

Paragraph: Describes APIC in-band ports and minimal versus recommended connectivity requirements

Table 3-3: Calls out basic configuration parameters that need to be planned for fabric initialization

Table 3-4: Describes switch discovery states and what each one means

Paragraph: Describes the NIC mode and NIC redundancy settings required for proper fabric discovery

Paragraph: Describes the process of assigning OOB management addresses to ACI nodes

Paragraph: Explains why it is important to configure entries for APICs in the Static Node Management Addresses folder

Paragraph: Describes how to assign the default contract to the OOB management EPG

Paragraph: Outlines what external management network instance profiles are and how they can be used to define external subnets that should be allowed to communicate with ACI from a management perspective

List: Recaps the process of assigning an open contract to the out-of-band network

Paragraph: Describes the process of setting a default firmware version to enforce code upgrades for new switches that are introduced into the fabric

Paragraph: Explains how all switches are by default placed into the default pod

Paragraph: Explains pod profiles, pod policy groups, and pod selectors