Data Center Infrastructure
CT109-3-2, Version VD1
Data Center Topologies and Architectures
Topic & Structure of the Discussion
• Data Center Topologies and Architectures
Module Code & Module Title Slide Title SLIDE 2
Learning Outcomes
• At the end of this topic, you should be able to:
• Understand data center topology and architecture design.
Key Terms You Must Be Able To Use
• If you have mastered this topic, you should be able to use the following
terms correctly in your assignments and exams:
– Centralized
– Zoned
– Top of Rack
– Mesh Network
– Three-tier or Multi-Tier
– Mesh Point of Delivery
– Super Spine
Topologies
• There are three main data center topologies in use today, and
each has its advantages and trade-offs. Some larger data
centers deploy two or even all three of these topologies in the
same facility.
Centralized Zoned Top of Rack
Centralized
A centralized topology is appropriate for smaller
data centers (under 5,000 square feet). In this
design, there are separate local area network
(LAN) and storage area network (SAN)
environments, each with home-run cabling to
every server cabinet and zone. Each server is
effectively cabled back to the core switches, which
are centralized in the main distribution area.
This provides very efficient utilization of switch
ports and makes it easier to manage and add
components.
The centralized topology works well for smaller
data centers but does not scale up well, which
makes it difficult to support expansions.
In larger data centers, the high number of
extended-length cable runs required causes
congestion in the cable pathways and cabinets and
increases cost.
While some larger data centers use zoned or top-
of-rack topologies for LAN traffic, they may still
use a centralized topology for the SAN
environment. This is especially true where the
cost of SAN switch ports is high and port
utilization is important.
Zoned
A zoned topology consists of distributed switching
resources. The switches can be distributed among
end-of-row (EoR) or middle-of-row (MoR)
locations, with chassis-based switches typically
used to support multiple server cabinets.
This design is recommended by the ANSI/TIA-942
data center standard and is very scalable,
repeatable, and predictable.
Zoned architecture is usually the most cost-
effective design, providing the highest level of
switch and port utilization while minimizing
cabling costs.
In certain scenarios, end-of-row switching provides
performance advantages. For example, the local
area network (LAN) ports of two servers (that
exchange large volumes of information) can be
placed on the same end-of-row switch, for low-
latency port-to-port switching.
A potential disadvantage of end-of-row switching
is the need to run cable back to the end-of-row
switch.
Assuming every server is connected to redundant
switches, this cabling can exceed what is required
in a top-of-rack architecture.
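The cabling trade-off between zoned (EoR) and top-of-rack designs can be made concrete with a rough count of cable runs that leave each rack. A minimal sketch, assuming illustrative rack counts, server densities, and redundancy (none of these figures come from the slides):

```python
def eor_cable_runs(racks: int, servers_per_rack: int, redundant: bool = True) -> int:
    """EoR/zoned: every server NIC is cabled back to the row switch(es)."""
    links_per_server = 2 if redundant else 1
    return racks * servers_per_rack * links_per_server

def tor_cable_runs(racks: int, uplinks_per_switch: int, switches_per_rack: int = 2) -> int:
    """ToR: only switch uplinks leave the rack; server cabling stays in-rack."""
    return racks * switches_per_rack * uplinks_per_switch

# Example row: 10 racks of 40 servers, redundantly cabled
print(eor_cable_runs(racks=10, servers_per_rack=40))   # 800 runs leave the racks
print(tor_cable_runs(racks=10, uplinks_per_switch=4))  # 80 runs leave the racks
```

With these assumed numbers, the zoned design carries an order of magnitude more horizontal cabling out of the racks, which is exactly the disadvantage the slide describes.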
Top of Rack
Top-of-rack (ToR) switching typically consists of
two or more switches placed at the top of the rack
in each server cabinet. This topology can be a
good choice for dense one-rack-unit (1RU) server
environments. All servers in the rack are cabled to
both switches for redundancy.
The top-of-rack switches have uplinks to the next
layer of switching.
Top of rack significantly simplifies cable
management and minimizes cable containment
requirements.
This approach also provides fast port-to-port
switching for servers within the rack and
predictable oversubscription of the uplink.
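The "predictable oversubscription" of the uplink can be estimated directly from the port counts and speeds. A minimal sketch, where the server counts and link speeds are assumed example values, not figures from the slides:

```python
def oversubscription_ratio(servers: int, nic_gbps: float,
                           uplinks: int, uplink_gbps: float) -> float:
    """Ratio of worst-case server-facing demand to uplink capacity."""
    downstream = servers * nic_gbps    # total bandwidth toward the servers
    upstream = uplinks * uplink_gbps   # total bandwidth toward the spine
    return downstream / upstream

# Example: 48 x 10G servers behind 4 x 40G uplinks
ratio = oversubscription_ratio(servers=48, nic_gbps=10, uplinks=4, uplink_gbps=40)
print(f"{ratio:.1f}:1 oversubscription")  # 3.0:1
```

Because every rack repeats the same switch and uplink configuration, the ratio is the same across racks, which is what makes the oversubscription predictable.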
A top-of-rack design utilizes cabling more
efficiently.
The trade-offs are often higher switch costs and
the cost of under-utilized ports.
Top-of-rack switching may be difficult to manage in
large deployments, and there is also the potential
for overheating of local area network (LAN) switch
gear in server racks. As a result, some data centers
deploy top-of-rack switches in a middle-of-row or
end-of-row architecture to better utilize switch
ports and reduce the overall number of switches
used.
Architectures
• There are four main network architectures:
Mesh Network
Three-Tier or Multi-Tier
Mesh Point of Delivery (PoD)
Super Spine
Mesh Network
The mesh network architecture, often
referred to as a “network fabric” or
leaf-spine, consists of meshed
connections between leaf and spine
switches.
The mesh of network links enables
any-to-any connectivity, with
predictable capacity and lower
latency—making this architecture
well suited for supporting universal
“cloud services.” With multiple
switching resources spread across
the data center, the mesh network is
inherently redundant for better
application availability.
These distributed network designs
can be much more cost-effective to
deploy and scale when compared to
very large, traditional centralized
switching platforms.
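The any-to-any property of a leaf-spine mesh can be illustrated by counting links and hops. A minimal sketch, with assumed switch counts (the slides do not specify fabric sizes):

```python
def fabric_links(leaves: int, spines: int) -> int:
    """Full mesh between tiers: every leaf connects to every spine."""
    return leaves * spines

def leaf_to_leaf_hops(src_leaf: int, dst_leaf: int) -> int:
    """Any two distinct leaves are exactly two switch hops apart (via any spine)."""
    return 0 if src_leaf == dst_leaf else 2

print(fabric_links(leaves=8, spines=4))  # 32 mesh links
print(leaf_to_leaf_hops(0, 5))           # 2, regardless of which leaves are chosen
```

The constant two-hop path length between any pair of leaves is what gives the fabric its predictable capacity and latency, and the multiple equal-cost paths through the spines provide the inherent redundancy mentioned above.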
Mesh Network
Internet cloud
↓
Carrier – ISP Switches/Routers
↓
Border Leaf Tier (Layer 2 or Layer 3 switches that
support routing protocols to exchange routes
with external routers).
↓
Spine Switch Tier (The spine layer consists of
switches that perform routing and work as the
core of the network).
↓
Leaf Switch Tier (The leaf layer involves access
switches that connect to servers, storage devices,
and other end-users).
↓
Servers/End-user
Three-Tier or Multi-Tier
The multi-tier architecture has been
the most commonly deployed model
used in the enterprise data center.
This design consists primarily of web,
application, and database server
tiers running on various platforms,
including blade servers, 1RU servers,
and mainframes.
Three-Tier or Multi-Tier
Internet cloud
↓
Carrier – ISP Switches/Routers
↓
Edge/core Layer (Edge routers work to secure the network edge and protect
the core by characterizing and securing IP traffic from other edge routers as
well as core routers). (Core routers forward packets between routers to
manage traffic and prevent packet loss, often using multiplexing).
↓
Aggregation Layer (aggregates the uplinks from the access layer to the data
center core. This layer is the critical point for control and application services).
↓
Access Layer (The access layer is the last layer of the three-tier architecture of
a data center; the actual servers connect to this layer. The access layer
communicates with the layer above through Layer 2 and Layer 3 switches,
generally using uplink bandwidth of up to 10 GE).
↓
Servers
↓
SAN Director Layer (SANs use block-based storage and a high-speed architecture
to connect servers to logical unit numbers (LUNs): ranges of block storage from a
pool of shared storage that appear to the server as logical disks. A SAN
comprises three distinct layers: host, fabric, and storage).
↓
Disk Arrays (Disk arrays were designed to separate storage from servers so
systems could be built into large, monolithic configurations for block- or file-
based storage. A disk array enables storage capacity to scale and be managed
far more efficiently than capacity from a collection of servers).
Mesh Point of Delivery (PoD)
The mesh point of delivery (PoD)
architecture features multiple leaf
switches interconnected within the
PoDs, with spine switches typically
aggregated in a central main
distribution area (MDA).
Among other advantages, this
architecture enables multiple PoDs
to connect efficiently to a super-
spine tier.
Data center managers can easily add
new infrastructure to their existing
three-tier topology to support the
low-latency east-west data flow of
new cloud applications.
Mesh PoD networks can provide a
pool of low-latency compute and
storage for these applications that
can be added without disrupting the
existing environment.
Mesh Point of Delivery (PoD)
Internet cloud
↓
Carrier – ISP Switches/Routers
↓
Edge/core Layer (Edge routers work to secure
the network edge and protect the core by
characterizing and securing IP traffic from other
edge routers as well as core routers). (Core
routers forward packets between routers to
manage traffic and prevent packet loss, often
using multiplexing).
↓
Super Spine Tier (Switches that interconnect the
spine/leaf meshes of multiple PoDs and act as
the core of the network).
↓
Leaf Mesh Tier (The leaf layer involves access
switches that connect to servers, storage
devices, and other end-users).
↓
Servers/End-user
Super Spine
Super spine architecture is
commonly deployed by hyperscale
organizations deploying large-scale
data center infrastructures or
campus-style data centers.
This type of architecture services
huge amounts of data passing east
to west across data halls.
Super Spine
Internet cloud
↓
Carrier – ISP Switches/Routers
↓
Edge/core Layer (Edge routers work to secure the network edge and protect
the core by characterizing and securing IP traffic from other edge routers as
well as core routers). (Core routers forward packets between routers to
manage traffic and prevent packet loss, often using multiplexing).
↓
Super Spine Tier (Switches that interconnect the spine tiers of multiple pods
or data halls and act as the core of the network).
↓
Spine Switches (All spine switches can handle Layer 3 (L3) with high port
density, which allows for scalability. In a software-defined network (SDN), the
spine switch is directly connected to a network control system with a virtual
Layer 2 switch on top of the leaf-spine system).
↓
Leaf Switches (Leaf switches mesh into the spine, forming the access layer
that delivers network connection points for servers. Servers and storage
connect to the leaf switches, which aggregate their traffic and connect
directly to the spine).
↓
Compute/Storage Pods (A pod is a repeatable, modular unit of compute and
storage infrastructure. Grouping servers and storage into pods lets capacity
be added as cohesive units without redesigning the rest of the fabric).
Mesh Network
Three-Tier or Multi-Tier
Mesh Point of Delivery (PoD)
Super Spine
Equipment Connection Methods
Equipment Connection Methods
Cross-connect
A cross-connect uses patch cords or jumpers to
connect cabling runs, subsystems, and equipment to
connecting hardware at each end. It enables
connections to be made without disturbing the
electronic ports or backbone cabling.
A cross-connect provides excellent cable
management and design flexibility to support future
growth. Designed for “any-to-any” connectivity, this
model enables any piece of equipment in the data
center to connect to any other, regardless of location.
A cross-connect also offers operational advantages,
as all connections for moves, adds, and changes are
managed from one location.
The major disadvantage is higher implementation
cost due to increased cabling requirements.
Interconnect
An interconnect uses patch cords to connect
equipment ports directly to the backbone cabling.
This solution requires fewer components and is,
therefore, less expensive.
However, it reduces flexibility and introduces
additional risk, as users must directly access the
electronics ports in order to make the connection.
Therefore, CommScope generally recommends
utilizing cross-connects for maximum flexibility and
operational efficiency in the data center.
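The component difference between the two connection methods can be tallied per link. A hedged sketch: the component counts below reflect the typical case described above (an interconnect uses one cord straight to the backbone panel; a cross-connect adds a second cord and a dedicated patch field), and exact counts vary by design:

```python
def components_per_link(method: str) -> dict:
    """Rough component bill for one equipment-to-backbone connection."""
    if method == "interconnect":
        # One patch cord from the equipment port straight to the backbone panel.
        return {"patch_cords": 1, "panel_ports": 1}
    if method == "cross-connect":
        # Two cords meeting at a dedicated cross-connect field.
        return {"patch_cords": 2, "panel_ports": 2}
    raise ValueError(f"unknown method: {method}")

print(components_per_link("interconnect"))   # fewer parts, lower cost
print(components_per_link("cross-connect"))  # more parts, more flexibility
```

Multiplying these per-link counts by the number of connections shows why the cross-connect's flexibility comes at a measurably higher implementation cost.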
Data Center Virtualization
Examples of Data Center Virtualization
VMware ESXi is a robust, bare-metal hypervisor that installs directly onto a physical server.
With direct access to and control of underlying resources, ESXi effectively partitions
hardware to consolidate applications and cut costs.
Xen is a hypervisor that enables the simultaneous creation, execution, and management of multiple
virtual machines on one physical computer.
Kernel-based Virtual Machine (KVM) is an open-source virtualization technology built into Linux®.
Specifically, KVM lets you turn Linux into a hypervisor that allows a host machine to run multiple,
isolated virtual environments called guests or virtual machines (VMs).
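The resource partitioning that all three hypervisors perform can be sketched as simple bookkeeping: the host's CPU and memory are carved up among guests, and requests that exceed the remaining capacity are refused. This is a hypothetical toy model, not any vendor's actual API, and the capacity figures are invented for illustration:

```python
class Hypervisor:
    """Toy model of a host partitioning CPU and memory among guest VMs."""

    def __init__(self, vcpus: int, mem_gb: int):
        self.free_vcpus, self.free_mem = vcpus, mem_gb
        self.guests = {}

    def create_vm(self, name: str, vcpus: int, mem_gb: int) -> bool:
        # Refuse the guest if the host cannot satisfy the request.
        if vcpus > self.free_vcpus or mem_gb > self.free_mem:
            return False
        self.free_vcpus -= vcpus
        self.free_mem -= mem_gb
        self.guests[name] = (vcpus, mem_gb)
        return True

host = Hypervisor(vcpus=32, mem_gb=128)
print(host.create_vm("web01", 8, 32))   # True
print(host.create_vm("db01", 16, 64))   # True
print(host.create_vm("big01", 16, 64))  # False: only 8 vCPUs / 32 GB left
```

Real hypervisors add isolation, scheduling, and often overcommitment on top of this accounting, but the consolidation benefit the slide describes comes from exactly this kind of partitioning of one physical server among many guests.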
Summary of Main Teaching Points
• Data Center Topologies and Architectures
Question and Answer Session
Q&A
What we will cover next
• Data Center Commission & Handover
Thank You