Best Practices in EE in Indian Data Centers

The document introduces a project by the Bureau of Energy Efficiency and the Confederation of Indian Industry to develop a manual on best practices for energy efficiency in Indian data centers. It notes that data centers have high energy consumption and operational costs. The project formed a steering committee of stakeholders to focus on the four major areas of data centers and identify best practices through site visits, in order to help set common performance standards and design guidelines for energy efficiency.

Disclaimer

© 2009, Bureau of Energy Efficiency (BEE), Ministry of Power, Government of India

All rights reserved. No part of this publication may be reproduced, stored in a retrieval system, or transmitted, in any form or by any means, electronic, mechanical, photocopying, recording or otherwise, without prior written permission from the Bureau of Energy Efficiency (BEE), Ministry of Power, Government of India.

While every care has been taken in compiling this Manual, neither BEE nor CII accepts any claim for compensation if any entry is wrong, abbreviated, cancelled, omitted or inserted incorrectly, either as to the wording, space or position in the Manual. The Manual is only an attempt to create awareness of energy conservation and to share best practices being adopted in India as well as abroad.

“The Manual on Best Practices in Indian Data Centers” was supported by a grant from the Bureau of Energy Efficiency (BEE), Ministry of Power, Government of India, to promote energy efficiency initiatives in Indian data centers. The views and information contained herein are those of the Bureau of Energy Efficiency (BEE), Ministry of Power, Government of India, and not necessarily of CII. CII assumes no liability for the contents of this manual by virtue of the support given.

Published by Bureau of Energy Efficiency (BEE), Ministry of Power, Government of India.

FOREWORD

Contents

Foreword
1  Executive Summary
2  Introduction about the Project
3  How to Use the Manual
4  Data Center Overview
5  Data Center Electrical System
6  Data Center Cooling
7  Data Center IT Systems
8  Operation and Maintenance
9  Management Approach for Energy Efficiency in Data Centers
Annexure-1: Comparison of Four Tier Levels of Data Centers
Annexure-2: Data Center Benchmarking Guide
Annexure-3: List of Energy Saving Opportunities in Data Centers
References
Glossary (Terms and Definitions)
CHAPTER 1
EXECUTIVE SUMMARY
The services sector has been experiencing significant growth in India, and a major part of this is attributable to the IT sector. High-tech facilities are making it one of the fastest growing energy-use sectors. The worldwide explosion of data in electronic form has resulted in the establishment of mega data centers.

Aligned with the trend in other areas of outsourcing, there is increasing interest in outsourcing data center activity to developing countries such as China and India. It has been revealed that the initial cost of setting up a data center is only 5% of its total life-cycle cost (Total Cost of Ownership) over a span of 15-20 years; the rest is largely made up of energy bills. It has been reported that better design and adequate energy efficiency measures may reduce the energy requirement by 30% with suitable business propositions.

Data centers consume 15-20 times more energy than conventional commercial buildings. With the rapid increase in the handling and storage of data across all sectors, India has been witnessing significant growth in the data center business over the last few years. The existing low level of awareness and competence in the design and operation of data centers leads to decisions that are less than optimal, resulting in energy inefficiency. Therefore this area requires special attention and inclusion in the existing Energy Conservation Building Code (ECBC).

Confederation of Indian Industry (CII), under the guidance of the Bureau of Energy Efficiency (BEE), Ministry of Power, Govt. of India, and with the active participation of key stakeholders, carried out a study to capture the most relevant and business-appropriate “Best Practices” in this domain. The study also successfully captured various energy saving opportunities and best practices in specific areas of the data center.

The document details four major areas of focus in a typical data center for implementing energy efficiency improvement measures: the electrical system, the critical cooling system, IT peripherals, and operation & maintenance.

This “Best Practices Manual” will help in setting common performance standards for the operation of Indian data centers. It should benefit readers and facilitate the dissemination of best practices for energy conservation in data centers.

CHAPTER 2
Introduction about the project
India being the hub of IT activities, the outsourcing of IT services to India has resulted in phenomenal growth of data centers in the country. The Indian data centre business is recording an annual growth rate of 25-30%. This growth has been driven by increasing storage demand from domestic and international users in sectors such as financial institutions, telecom operators, manufacturing and services.

The operation of data centers is highly energy intensive, which imposes tremendous pressure on data center developers to design energy efficient facilities. The growth of data centers in India also confronts obstacles such as power shortages and high operational costs due to high power prices. This forces data center users/operators to run their data centers in an energy efficient manner.

With this background, the Bureau of Energy Efficiency (BEE) - an independent body working under the Ministry of Power, Government of India - under the leadership of Dr Ajay Mathur, has taken an initiative to bring out best operating practices that would result in energy efficiency, along with design guidelines for upcoming data centers. BEE earlier introduced the Energy Conservation Building Code (ECBC) to promote energy efficiency in the commercial building sector, and the code has been well received across the country. Subsequent to ECBC, BEE brings out this ‘Manual on Best Practices in Indian Data Centers’, which will be followed by a code on energy efficiency guidelines for new/upcoming data centers.

Energy efficiency in data centers offers threefold benefits:

1. Increased national availability of energy
2. Reduction in operating costs
3. Enhanced efficiency in data center design & operation, leading to climate change mitigation
This manual has been developed by the Confederation of Indian Industry, CII - Sohrabji Godrej Green Business Centre, under the guidance of BEE and the steering committee. The steering committee was formed under the chairmanship of Dr Ajay Mathur to assist CII in executing the project.

Steering committee
A steering committee was formed comprising various stakeholders such as data center designers, IT specialists, service providers, HVAC & electrical equipment suppliers, consultants, users and experts. Core groups were formed to focus on the four major areas of the data center; their chairmen are listed below.

Core group and Chairman

1. Electrical accessories: Mr Dhiman Ghosh, Emerson Network Power
2. HVAC: Mr Ashish Rakheja, Spectral Services Consultants Pvt. Ltd.
3. IT Peripherals: Mr Chandrashekar Appanna, Cisco Systems, Inc.
4. Operations & Maintenance: Mr Sudeep Palanna, Texas Instruments

The CII technical secretariat, with the support of the core group members, visited more than 10 data centers to study and collate the best practices adopted. All four core groups contributed extensively by sharing knowledge and information for the successful completion of the manual. Together, the core group members identified 20 best practices for energy efficiency improvement in data centers. CII thanks all the steering committee and core group members for their excellent support and valuable inputs.

Best Practices Manual

All the best practices identified by the core groups have been reviewed by the steering committee members, and all their inputs and suggestions have been included in the manual.

The Best Practices Manual brings out some of the best practices followed in Indian and international data centers. The case studies presented in this manual cover electrical power distribution systems, data center cooling, IT peripherals and systems, and operation & maintenance.

We are sure that the data centers in India will make use of this opportunity, improve their energy
efficiency and move towards greener data centers and healthier environment.

CHAPTER 3

HOW TO USE THE MANUAL


 The objective of this manual is to act as a catalyst to promote energy conservation activities in the IT & ITES industry, continuously improving the performance of data centers and achieving higher levels of energy efficiency

 This manual contains the following:

 The latest trends & technologies in data centers and the associated systems

 The best practices adopted in various data centers for improving energy efficiency

 Case studies indicating technical details and cost-benefit analysis

 The manual also discusses methods for assessing the performance of the existing systems in the data center, as well as setting section-wise targets for reducing energy consumption in the data centers

 The performance of datacentres can be improved by adopting the best practices suggested in the manual. These best practices may be considered for implementation after suitable fine-tuning to meet the requirements of individual datacenters

 Implementation of the latest technologies may also be considered for existing datacentres and in the design of future projects. A detailed study needs to be taken up on the suitability of these technologies for individual projects.

 The Indian IT & ITES Industry should view this manual positively and utilize the opportunity to
improve the performance of their data centers and reduce energy consumption.

CHAPTER 4

DATA CENTER OVERVIEW


4.0 Introduction
As the trend shifts from paper-based to digital information management, data centers have become common and essential to the functioning of business systems. A data center is a facility that houses a concentrated depository of equipment such as servers, data storage devices and network devices. Collectively, this equipment processes, stores, and transmits digital information and is known as “information technology” (IT) equipment.

At its most basic, the data center is a physical place that houses a computer network’s most
critical systems, including backup power supplies, air conditioning, and security applications.

4.1 Data Center Growth Trend


The data center industry is in the midst of a major growth period. The increasing reliance on digital data is driving a rapid increase in the number and size of data centers. This growth is the result of several factors, including growth in the use of internet media and communications and the need to store enormous volumes of digital data. For example, internet usage is increasing at approximately 10 percent per year worldwide (comScore Networks 2007) and has directly fuelled the growth of data centers.

From simple data storage to complicated global networking, data centers play a significant role and have become an integral part of the IT spectrum. Despite these advantages, data centers have come under the hammer for their high energy consumption.

From 2000 to 2006, the energy used by data centers and the power and cooling infrastructure that supports them doubled. With such high energy consumption, energy efficiency in data centers has become a focal point for data center designers, whereas until recently it had very little or no focus at all.

4.2 Present Scenario and Future Growth of Data Center in India


India being the nerve center for IT activities, outsourcing work that demands high storage has resulted in phenomenal growth of data centers in the country. With the increase in such business volume, the Indian data centre services market is poised to witness rapid growth in the coming years.

The total data center capacity in India is expected to grow from 2.3 million square feet in 2008 to 5.1 million square feet by 2012. Storage demand, which increased from one petabyte in 2001 to 34 petabytes in 2007 (Source: Gartner, Inc.), has resulted in existing data center capacity being fully utilized; consequently, the need has arisen to build more capacity.

The growth of data centers in India also means more energy consumption and higher energy cost. Hence datacentres need to focus on the latest and innovative technologies for reducing their energy consumption.

4.3 Sources of Data Center Power Consumption


Power usage distribution chart of a typical data center is shown in Figure 4.1.

Figure 4.1: Typical Power Usage Distribution chart of a Data Center

[Source: EYP Mission Critical Facilities Inc. New York]

From the power usage distribution chart, we understand that the IT equipment and its cooling system consume a major chunk of the power in a data center. Also, the cooling requirement in a data center is driven by the energy intensity of the IT load. Therefore, energy savings in the IT load have a direct impact on the loading of most of the support systems, such as the cooling system, UPS system and power distribution units, and thereby affect the overall energy performance of the data center.

Typically the cooling system consumes 35-40% of the total data center electricity use. Demands on cooling systems have increased substantially with the introduction of high density servers. As a result, the cooling system represents the second highest energy consumer, next to the IT load.
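To illustrate this cascade effect, the following minimal sketch (in Python, with purely illustrative coefficients that are our assumptions, not figures from this manual) shows how a saving at the IT load removes a proportionally larger amount of power at the facility level:

# Minimal sketch of the cascade effect of saving energy at the IT load.
# All coefficients are illustrative assumptions, not figures from the manual.

def total_power(it_kw, fixed_kw=50.0):
    cooling_kw = 0.60 * it_kw     # cooling power scales with the IT heat load (assumed ratio)
    dist_loss_kw = 0.08 * it_kw   # UPS + PDU conversion losses (assumed ratio)
    return it_kw + cooling_kw + dist_loss_kw + fixed_kw   # fixed_kw: lighting etc.

before = total_power(1000.0)
after = total_power(900.0)                 # a 100 kW (10%) saving at the IT load...
print(before, after, before - after)       # ...removes about 168 kW at the facility level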

4.4 Classification of Data Center


Data centers can be of two types, namely Internet Data Centers (IDC) and Enterprise Data Centers (EDC).

4.4.1 Internet Data Center


Internet Data Centers (IDC), also referred to as co-location & managed data centers, are built and operated by service providers. However, IDCs are also built and maintained by enterprises whose business model is based on internet commerce. The service provider makes a service agreement with its customers to provide functional support to the customers’ IT equipment. The service provider’s IDC architecture is similar to that of an enterprise IDC. However, the scalability requirement of an enterprise IDC is lower because of a smaller user base and fewer services provided.

4.4.2 Enterprise Data Center


Enterprise Data Centers support many different functions that enable various business models. Enterprise Data Centers are evolving, and this evolution is partly a result of new trends in application environments, such as n-tier, web services, and grid computing.

Redundancy is one of the most important factors in the design stage of a data center. To maintain high availability levels, data center systems are always designed with redundancy. Based on the redundancy levels maintained, data centers can be categorised into four Tier levels.

4.5 Data Center Tiers


(Source: Uptime Institute)
Data center tier standards define the availability factor of a data center facility. The tier system provides a simple and effective means of identifying different data center site infrastructure design topologies. The standards comprise a four-tier scale, with Tier 4 being the most robust. A comparison of the four Tier levels is given in Annexure 1.

A Tier I data center has no redundant capacity components and a single non-redundant power distribution path serving the IT equipment. A typical example would be a computer room with a single UPS, generator and Heating, Ventilation and Air Conditioning (HVAC) cooling system.

A Tier II data center has redundant capacity components but a single non-redundant power distribution path serving the IT equipment. A typical example would be a computer room with a single UPS and generator, but a redundant cooling system.

A Tier III data center has redundant capacity components and multiple distribution paths serving the IT equipment. Generally, only one distribution path serves the computer equipment at any time. All IT equipment is dual-powered and fully compatible with the topology of the site’s power distribution architecture. A typical example would be a computer room with a single UPS that has a maintenance bypass switch wrapped around the system, a generator, and redundant cooling systems.

A Tier IV data center has redundant capacity systems and multiple distribution paths simultaneously serving the IT equipment. The facility is fully fault-tolerant through its electrical, storage and distribution networks. All cooling equipment is independently dual-powered, including chillers and HVAC systems. An example of this configuration would be multiple UPS units serving the IT equipment through multiple paths with no single point of failure, backed up by generators that are themselves redundant with no single point of failure.

4.6 Classification of Data Center based on size

The size of a data center is defined based on the maximum IT load (kW) in the data center. The classification of data center size based on the maximum IT load is given in Table 4.1.

Table 4.1: Classification of Data Center Size Based on the Maximum IT Load

Category                       Small         Medium        Large                Large
Site description               Mixed-use     Mixed-use     Mixed-use or         Mixed-use or
                               building      building      dedicated building   dedicated building
Average size (sq. ft)          125 - 1000    1000 - 5000   5000 - 25000         > 25000
Average number of IT racks     5 - 40        41 - 200      200 - 800            > 800
Typical number of servers      30 - 250      250 - 1300    1300 - 4000          > 4000
Maximum design IT load (kW)    20 - 160      160 - 800     800 - 2500           > 2500

Source: APC

4.7 TIA-942 Standards
Since the operation and maintenance of data centers is critical, the Telecommunications Industry Association (TIA) has formulated operating standards to address the requirements of data center infrastructure.

TIA-942 is a standard developed by the TIA which defines guidelines for planning and building data centers, particularly with regard to cabling systems and network design. The guidelines are intended for use by data center designers early in the building development process.

TIA-942 covers the following:


 Site space and layout
 Cabling infrastructure
 Reliability
 Environmental considerations
The principal advantages of designing data centers in accordance with TIA-942 include:
 Standard nomenclature
 Failsafe operation
 Robust protection against natural or human-made disasters
 Long-term reliability, expandability and scalability

4.8 Data Center Architecture

Figure 4.2: Power Flow in a Data Center

The data center power delivery system provides backup power, regulates voltage, and makes the necessary alternating current/direct current (AC/DC) conversions. The power from the transformer is first supplied to an uninterruptible power supply (UPS) unit. The UPS acts as a battery backup to prevent the IT equipment from experiencing power disruptions; a momentary disruption in power could cause a huge loss to the company. In the UPS, power is converted from AC to DC to charge the batteries, then reconverted from DC to AC before leaving the UPS. Power leaving the UPS enters a power distribution unit (PDU), which sends power directly to the IT equipment in the racks.

The continuous operation of IT equipment generates a substantial amount of heat that must be removed from the data center for the equipment to operate properly. Precision air conditioners (PAC) are used to remove the heat generated within the data center to the outside atmosphere. The two most important parameters that the PACs should maintain in the data center space are temperature and humidity. The conditioned air from the PAC is supplied to the IT equipment through a raised floor plenum.

Data centers use a significant amount of energy across three key components: IT equipment, cooling, and the electrical system. These three components are covered individually in the chapters that follow.

CHAPTER 5
DATA CENTER ELECTRICAL SYSTEM
5.1 Electrical system
The electrical system in a data center consists of transformers, UPS, Power Distribution Units (PDU) and the transmission medium. The schematic diagram of the power distribution in a data center is shown in the figure below.

Schematic representation of power distribution in a Data Center

5.2 Transformers
The power from the substation is fed to the data center through distribution transformers. The distribution transformer steps down the incoming High Tension (HT) voltage to Low Tension (LT) voltage for data center use.

The use of k-factor distribution transformers has become a popular means of addressing
harmonic related overheating problems where electronic ballasts, drives, personal computers,
telecommunications equipment, broadcasting equipment and other similar power electronics
are found in high concentrations. These non-linear loads generate harmonic currents which
can substantially increase transformer losses. The k-rated transformer has a more rugged design
intended to prevent failure due to overheating.

K-factor is defined as the ratio between the additional losses due to harmonics and the eddy current losses at 50 Hz.

K-factor is used to specify transformers for non-linear loads. The K-factor is a value used to determine how much harmonic current a transformer can handle without exceeding its maximum temperature rise level. When a K-factor value is specified for a transformer, it is said to be K-rated.
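As a worked illustration, the commonly used formulation of the K-factor (per UL 1561 / IEEE C57.110) computes K from the measured harmonic current spectrum as K = sum over h of (Ih/Irms)^2 x h^2, where Ih is the rms current at harmonic order h. The sketch below uses an assumed spectrum for illustration, not measured data:

import math

# K-factor from a harmonic current spectrum (UL 1561 / IEEE C57.110 style):
# K = sum((Ih / Irms)^2 * h^2), with Ih the rms current at harmonic order h.
# The spectrum below is an assumed example for illustration only.

spectrum = {1: 100.0, 3: 33.0, 5: 20.0, 7: 14.0, 9: 11.0}   # order -> amps

i_rms = math.sqrt(sum(i ** 2 for i in spectrum.values()))
k = sum((i / i_rms) ** 2 * h ** 2 for h, i in spectrum.items())
print(f"Irms = {i_rms:.1f} A, K = {k:.1f}")   # ~4.2 here; specify the next standard
                                              # K-rating above it (K-4, K-9, K-13, ...)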
Traditionally, the selection of transformers focused entirely on system reliability, and energy efficiency was all but overlooked. However, with advances in transformer technology, the scenario has changed. The United States has made it mandatory for data centers to use only energy efficient dry-type transformers. Data centers in India have also started to optimize the loading of their transformers and to explore the possibility of using dry-type transformers.

5.3 Uninterruptible power supply (UPS)


Power is then distributed through the UPS. An uninterruptible power supply (UPS), also known as a battery back-up, provides emergency power and, depending on the topology, line regulation as well, to connected equipment by supplying power from a separate source when utility power is not available. It differs from an auxiliary or emergency power system or standby generator, which does not provide instant protection from a momentary power interruption. A UPS can be used to provide uninterrupted power to equipment until an auxiliary power supply can be turned on, utility power is restored, or the equipment is safely shut down.

The UPS is one of the most important pieces of equipment in a data center. It is used to protect data centers where an unexpected power disruption could cause data loss and consequently huge loss to the company. The UPS systems used in data centers are on-line UPS systems. The on-line UPS is ideal for environments where electrical isolation is necessary or for equipment that is very sensitive to power fluctuations.
The number of UPS systems for a particular data center depends on the Tier level at which the data center operates and on the criticality of the load. The traditional approach to the specification and selection of UPS systems focused almost solely on system reliability. However, with increased energy costs and energy shortages, the focus has shifted to the efficiency of the UPS system.

Typical curve of Loading (vs.) Efficiency of a UPS

The efficiency varies with the loading on the UPS system. As the number of UPS systems for a data center increases to provide higher redundancy, the loading on each UPS decreases, which reduces the efficiency of the UPS system. The loading vs. efficiency curve is shown in the figure above.

Various techniques are available to optimize the loading of the UPS system and to optimally share the load across all UPS units. Modularity is one such method of improving the efficiency of the UPS system. Modularity allows users to size the UPS system as closely to the load as practicable (in other words, it allows the UPS to operate as far to the right on the curve as possible). UPS technologies continue to evolve toward greater electrical efficiency, and newer technologies will yield greater benefits.

5.4 Power Distribution Units


In a data center, the main power is distributed to the Power Distribution Units (PDUs). The power distribution units may contain large power transformers to convert voltage or provide power conditioning.

The power distribution units in turn distribute power over a number of branch circuits to the IT equipment. Each IT enclosure uses one or more branch circuits. The wiring of the IT enclosure is usually required to be in flexible or rigid conduit, typically located beneath the raised floor. The single line diagram of the power distribution from the UPS to the IT load through the PDU is shown in the figure below, followed by the typical dual bus configuration for power distribution in a data center.

Single line diagram of the power distribution from the UPS to the IT load through the PDU

Wiring of Data Center Dual Bus Power Distribution System

The traditional PDU architecture severely limited the ability to plug new IT servers into a rack, or for that matter to add a new IT rack. In order to optimize the power distribution system, present-day data centers use modular power distribution units. Modular power distribution units have certain inherent properties which make them superior to conventional PDUs.

5.5 Modular Power Distribution Unit (PDU)


A modular power distribution unit (PDU) is used to supply electric power to IT equipment in
data centers, computer rooms, and communication centers. The power requirement for
equipment may vary.

The power distribution unit includes a frame and one or more user-replaceable power modules that fit into slots in the frame. Each power module provides one or more plug receptacles to power the connected equipment. The power modules are available in a variety of receptacle types, receptacle counts, and power rating configurations to accommodate the equipment in a particular environment as needed.

The frame includes an internal connector panel for distributing power from a power source to
the power modules when they are inserted in the frame. The power modules may be removed,
installed, and interchanged in the frame without interrupting power to other power modules
or to the power distribution unit.

An ideal power distribution system would have the following attributes:


 New circuits can be added or changed on a live system
 No under floor cables needed
 All circuits can be monitored
 Capacity and redundancy can be managed on every circuit
 High efficiency

Challenges faced with traditional PDUs

 A requirement for many more receptacles, as the proposed data center would have a large number of plug-in devices with separate power cords

 Modern high density IT servers changed the power or receptacle requirements at the rack location

 The changing power requirements meant that new power circuits had to be added to the live data center without disturbing nearby existing IT loads

 Power cords in the under-floor air plenum resulted in blockage of airflow to the IT equipment

The power distribution system with modular power distribution unit in one of the datacentres
is shown in the figure below: [Source: APC]

Schematic diagram of modular power distribution unit

Benefits of the modular architecture


The modular distribution system is particularly well suited for retrofit projects, because installation is much less disruptive than installation of a traditional PDU.

Air blockage in the under-floor air plenum is minimized by the adoption of a modular distribution system, as the distribution system uses only suspended cable trays.

The modular distribution system architecture facilitates easy modification of the power distribution and helps in reconfiguring rack power quickly.

5.6 Energy Saving Opportunities in Electrical Systems
This section briefly discusses the various energy saving opportunities in data center electrical systems. Detailed case studies/write-ups on these energy saving opportunities are provided later in this chapter.

5.6.1 Optimize the loading on UPS and PDU systems

Data center facility managers maintain redundancy levels to ensure the site availability required by the process. The level of redundancy for each tier is given in Annexure 1. However, some data centers are operated with more redundant systems than the recommended level. An increase in redundancy means a compromise on the operating efficiency of the system.

There is good potential to control the operation of the modules so that the units run at a higher load factor, which results in better operational efficiency.

The efficiency of the UPS system varies with loading. Typically, a UPS system has maximum efficiency at 75-80% loading, and the efficiency reduces gradually as the loading moves away from this point in either direction. At loading below 40-45%, the efficiency drops drastically. Hence it is recommended to maintain UPS loading of more than 40%, as illustrated in the sketch below.
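The effect of redundancy on per-unit loading can be checked with simple arithmetic. The sketch below interpolates on the 200 kVA UPS efficiency curve shown in the case study later in this chapter; the 160 kVA load is an assumed example:

# Per-unit UPS loading and efficiency under parallel redundancy. Efficiency
# points follow the 200 kVA UPS curve in the case study later in this chapter;
# intermediate values are linearly interpolated.

curve = [(10, 80.0), (20, 87.0), (30, 91.0), (40, 91.95), (50, 92.8),
         (60, 93.0), (70, 93.4), (80, 93.46), (90, 93.15), (100, 93.0)]

def efficiency(pct):
    """Linear interpolation on the loading-vs-efficiency curve."""
    if pct <= curve[0][0]:
        return curve[0][1]
    for (x0, y0), (x1, y1) in zip(curve, curve[1:]):
        if x0 <= pct <= x1:
            return y0 + (y1 - y0) * (pct - x0) / (x1 - x0)
    return curve[-1][1]

load_kva, unit_kva = 160.0, 200.0
for n in (1, 2, 4):   # more parallel units means lower loading on each
    pct = 100.0 * load_kva / (n * unit_kva)
    print(f"{n} unit(s) online: {pct:.0f}% loading, ~{efficiency(pct):.2f}% efficiency")

Running this shows the trade-off: one unit online runs at 80% loading (~93.5% efficient), two at 40% (~92%), and four at 20% (only ~87%), which is why over-provisioned redundancy costs efficiency.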

5.6.2 Use of energy efficient transformers

The transformer is the heart of an electrical distribution system. The power to IT systems is distributed through a transformer, so the efficiency of the transformer plays a major role in efficient power distribution.

Energy efficient transformers have better efficiency levels than conventional transformers, which have higher inherent losses.

The inherent loss of a transformer depends on the type of core used. The latest energy efficient transformers have a maximum efficiency of 99.1%. The magnetic core of these transformers is built with superior material, which helps to save energy right from the time of installation.

5.6.3 Total harmonic distortion

Nonlinear loads cause harmonics to flow in the power lines. Harmonics are unwanted currents that are multiples of the fundamental line frequency. Excessive harmonic currents can overload wiring and transformers, creating heat and fire risk. Therefore, it is necessary to maintain harmonic levels in the electrical system as recommended in the IEEE 519 standard.

The permissible limit for voltage harmonics varies with the voltage level of operation. Table 5.6 shows the permissible voltage harmonic limits in percentage for various distribution voltage levels.

Table 5.6: Voltage harmonic limits for various distribution voltage levels

Supply system voltage at point    Total harmonic voltage    Individual harmonic voltage
of common coupling (kV)           distortion, VT (%)        distortion (%)
                                                            Odd       Even
0.415                             5                         4         2
6.6 and 11                        4                         3         1.75
33 and 66                         3                         2         1
132                               1.5                       1         0.5

The permissible limits for current harmonics specified by the IEEE standard are given in Table 5.7 (current distortion limits for general distribution systems, end-user limits, 240 V to 69 kV).

Table 5.7: Allowable current harmonic distortion for various Isc/IL ratios

Isc/IL        h < 11    11 <= h < 17    17 <= h < 23    ITHD (%)
< 20          4.0       2.0             1.5             5.0
20 - 50       7.0       3.5             2.5             8.0
50 - 100      10.0      4.0             4.0             12.0
100 - 1000    12.0      5.5             5.0             15.0
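For reference, total harmonic distortion is the rms of the harmonic components expressed relative to the fundamental: THD = sqrt(sum of Vh^2 for h > 1) / V1. A minimal sketch checking an assumed voltage spectrum against the 415 V limit in Table 5.6 (the spectrum values are illustrative, not measurements):

import math

# Voltage THD check against the Table 5.6 limit for a 0.415 kV system.
# The harmonic spectrum below is an assumed example, not a measurement.

v1 = 240.0                                      # fundamental phase voltage, volts
harmonics = {3: 4.6, 5: 7.2, 7: 3.1, 11: 1.9}   # order -> rms volts

thd = 100.0 * math.sqrt(sum(v ** 2 for v in harmonics.values())) / v1
print(f"Voltage THD = {thd:.1f}% (limit at 0.415 kV: 5%)")   # ~3.9%, within limit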

5.6.4 Power factor improvement

The power factor is defined as the ratio of active power (kW) to apparent power (kVA). The active power requirement depends on the load.

For any active power (kW) requirement, the kVA demand depends on the operating power factor (PF): the lower the PF, the higher the kVA demand for the same kW load, and vice versa.

Therefore, it is recommended to maintain the power factor above 0.95 to reduce the kVA demand.
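The arithmetic behind this recommendation is simply kVA = kW / PF. A quick sketch, using the average load figure from the harmonic filter case study later in this chapter:

# kVA demand for a fixed kW load at different power factors: kVA = kW / PF.
load_kw = 1030.0                 # average load from the case study in this chapter
for pf in (0.88, 0.95, 0.97):
    print(f"PF {pf:.2f}: demand = {load_kw / pf:.0f} kVA")
# PF 0.88 -> 1170 kVA; PF 0.97 -> 1062 kVA: roughly 108 kVA less for the same load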

5.6.5 Monitoring of critical components

Continuous monitoring of the key parameters of all components in a data center is important to maintain efficient performance of the electrical distribution system.

Parameters such as voltage, efficiency, power factor, and total harmonic distortion of both voltage and current have to be monitored on a regular basis. Whenever a parameter deviates, the necessary action has to be taken to bring it back to the desired level.

5.6.6 Use of energy efficient power supplies

Voltage is stepped down in the incoming transformer, and power conversion takes place in the UPS switching circuit. These power conversions lead to certain losses in the equipment. The latest PDUs and UPS systems use low-loss switching devices to minimize these losses and improve system efficiency.

5.6.7 Regular maintenance and testing of the electrical system

Scheduled maintenance, periodic testing and regular upkeep of the system, keeping all parameters within their control limits, helps sustain high efficiency. A case study on this aspect is included later in this chapter.

5.6.8 Regular maintenance of capacitors in UPS

The DC capacitors used in UPS systems wear out after a certain number of working hours. If a worn-out capacitor is left operating in the UPS system, it can result in the inverter failing to operate under load and can increase the ripple current in the batteries, leading to inefficient operation of the equipment. Therefore, it is necessary to monitor the capacitors at regular intervals to ensure their proper working.

CASE STUDY

POWER QUALITY IMPROVEMENT IN DATA CENTER BY INSTALLING HARMONIC FILTERS

Background
Power factor is one of the major parameters influencing the performance of an electrical system. The power factor is influenced both by the reactive power requirement of the load and by the harmonics present in the system.

Harmonics are generated by the presence of non-linear switching loads such as UPS, display units, PAC, and HVAC controls in the circuit. Harmonics exceeding the limits cause undesirable effects such as malfunctioning of protective relay equipment, de-rating of equipment capacity, and premature failure due to increased stress on the electrical system.

The presence of higher harmonics affects the power factor negatively and increases the kVA demand for any given kW load.

The IEEE 519 standard specifies limits for both voltage and current harmonics. The current harmonic limits depend on the ratio of the short circuit current (SCC) at the PCC to the average load current at maximum demand over one year; thus the current harmonic limit is a function of the system design. The voltage harmonic limits depend on the bus voltage. Typically, the voltage harmonic limit at a 415 V bus is 5%.

Project details
The company is an Indian telecom giant with exclusive data centers catering to its internal needs. The organization conducted an energy study to reduce cost through better energy management.

The energy cost has two components: one based on the actual energy consumption (kWh, or active power) and the other based on the maximum demand registered (kVA, or apparent power), which is affected by the system power factor.

The measured system power factor was 0.88 lagging for an average load of 1030 kW. The harmonic levels in the system were also measured with a power quality analyzer and the measurements were analyzed. The project team inferred that the power factor was low and could be substantially improved, and that the harmonic levels in the system could be reduced. It was decided to improve the power factor and mitigate the harmonics by using a combination of APFC and an active harmonic filter.

Note: Harmonic filter design is a site-specific exercise based on the operating conditions. The filter has to be selected based on requirements identified through a detailed study.

The organization procured and installed a 2 x 225 Amp Active Harmonic Filter along with 225 kVAR
APFC (Automatic Power Factor Correction) system in the Main Incomer panel of the building housing
the Data Center.

This improved the power factor from 0.88 lagging to 0.97 lagging, reducing the demand by 96 kVA.

With the server load remaining the same at 600 kW, there was a small increase in power consumption, equal to 19 kW, due to losses in the filters. There was a substantial reduction in demand from 1198 kVA to 1102 kVA, i.e. by 96 kVA. A summary of the audit before and after the filter installation is given in Table 1.

Benefits of the project


 Reduction in demand from 1198 kVA to 1102 kVA
 Harmonics level controlled within desirable limits
 Better power factor maintained

Table 1: Data summary of the facility audit, pre- and post-filter installation

                                    kW      kVA     PF
Total, with APFC (C/F) connected    1050    1198    0.88
After installing active filter      1069    1102    0.97

Financial analysis
The annual saving achieved was Rs 3.3 million against an investment of Rs 4 million, giving a payback period of about 15 months.

Cost benefit analysis

 Annual savings – Rs. 3.3 million
 Investment – Rs. 4 million
 Payback period – 15 months
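As a quick check of the stated figures:

# Simple payback from the figures reported in this case study.
annual_saving_rs = 3.3e6
investment_rs = 4.0e6
print(f"Payback = {12.0 * investment_rs / annual_saving_rs:.1f} months")   # ~14.5, quoted as ~15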

Installation of harmonic filters is recommended where harmonic levels are higher than the IEEE 519 limits. The project involves minor modification/retrofitting of filters in the electrical infrastructure, which requires external expertise. Being capital intensive, it has to be taken up as part of a business decision.

CASE STUDY

ENERGY EFFICIENCY IMPROVEMENT IN UPS SYSTEMS BY LOADING OPTIMISATION
Background

A data center is an environment in which uptime and availability are critical and need to be maintained at the maximum. The uptime of a data center is maintained by ensuring the availability of continuous power supply to its IT equipment.

Reliability of the power supply is achieved through the uninterruptible power supply (UPS) system. The UPS, used to provide continuous power supply to all IT equipment, is one of the major energy consumers in a data center, and its continuous operation is critical.

The efficiency of the UPS system varies with loading. Typically, a UPS system has maximum efficiency at 75-80% loading, and the efficiency reduces gradually as the loading moves away from this point in either direction. At loading below 40-45%, the efficiency drops drastically, as shown in Figure 1.

Efficiency data of a 200 kVA UPS system

% Loading    Efficiency
10%          80.00%
15%          84.00%
20%          87.00%
25%          89.16%
30%          91.00%
40%          91.95%
50%          92.80%
60%          93.00%
70%          93.40%
80%          93.46%
90%          93.15%
100%         93.00%

Figure 1: Loading vs. efficiency curve of the UPS

Generally, the UPS system is configured for parallel operation to maintain sufficient redundancy, which ensures high availability of UPS power for critical loads.

While operating in parallel mode here, the loading on each UPS is less than 25%, so that the total load does not exceed 50% loading under single-UPS operation.

At around 25% loading, the efficiency of the UPS is low, and it can be improved by increasing the loading on the UPS system. The loading level should be maintained such that it balances reliability (backup time) with better efficiency of the UPS system.

Project details
The organization is a well known software development company with an international clientele. It maintains a data center which caters to various clients abroad.

The company initiated an energy management programme and conducted a power quality and energy audit.

During the assessment of the UPS, the loading on the UPS system was found to be varying constantly. The changing loading pattern was due to:

a) Flexible operating hours of developers, resulting in randomness of the load
b) The number of software development projects being worked on

For a maximum load of 200 kVA, four modules of 200 kVA UPS were installed in a 4 x 200 kVA configuration, as shown in Figure 2. Thus, even at a load of 200 kVA, each UPS would be loaded to a maximum of 25% only.

In reality, the load was never actually 200 kVA but lower, and it varied steadily between a minimum and a maximum at different times of the day, resulting in varying percentage loading on the UPS systems. The loading pattern observed over a period of two days is shown in Figure 3.

Figure 2: Schematic diagram of UPS configuration

Figure-3: Loading pattern on UPS

The total load on the system varied from 100 kVA to 144 kVA, which corresponds to 12.5% to 18% loading on each individual UPS. From the efficiency chart supplied by the manufacturer, the project team observed that the efficiency of the UPS at these loadings is only around 83%.

It was therefore found beneficial to improve the loading on the UPS system. An automation system was installed to monitor the loading on each individual UPS and issue commands for the operation of all UPS units. The desired loading is maintained by keeping the other UPS units in standby mode and bringing them online whenever the load increases.

In the event of an increase in load or a malfunction in the on-load UPS, another UPS automatically takes up the load. This mechanism increased the loading of the UPS system and thereby increased the circuit efficiency. A sketch of the control logic follows below.

Care was taken to ensure that the circuit is fail-safe and highly reliable in its design. Since the circuit consists only of sensors and control logic, its own losses are negligible.
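The control logic described above can be sketched as follows. This is a simplified illustration of the approach, not the actual automation system installed; the capacity rule and the "one extra module for failover" assumption are ours:

import math

# Sketch of the loading-optimization logic: keep online only the modules the
# present load requires, plus one for redundancy; the rest stay in standby and
# are brought online as the load rises. Assumed rule, not the installed system.

UNIT_KVA = 200.0   # rating of each UPS module

def modules_online(load_kva, total_units=4):
    needed = math.ceil(load_kva / UNIT_KVA)   # modules required to carry the load
    return min(needed + 1, total_units)       # one extra kept online for failover

for load in (100.0, 127.0, 144.0):            # loads observed in this case study
    n = modules_online(load)
    pct = 100.0 * load / (n * UNIT_KVA)
    print(f"{load:.0f} kVA -> {n} of 4 modules online, {pct:.2f}% loading each")
# Consistent with Table 2 below: 400 kVA online (2 modules), 25-36% loading each.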

Benefits of the project

For the same load variations, reliable operation of the UPS system is maintained. Intelligent control logic regulates the operation of the inverters on the UPS load bus, increasing the loading on the operating UPS units.

The efficiency of the UPS system improved by 6.7%, and the demand was reduced by 20 kVA. The improvement in efficiency due to load optimization is shown in Tables 1 and 2.

Table 1: Summary data record & analysis, pre-implementation (before planned action)

                    Load (kVA)    UPS capacity online (kVA)    Loading on UPS (%)    UPS efficiency (%)
Loading status 1    127           800                          15.88                 84.6
Loading status 2    114           800                          14.25                 83.3
Loading status 3    100           800                          12.50                 82.3
Loading status 4    130           800                          16.25                 85.1
Loading status 5    144           800                          18.00                 86.4

Table 2: Summary data record & analysis, post-implementation (after planned action)

                    Load (kVA)    UPS capacity online (kVA)    Loading on UPS (%)    UPS efficiency (%)
Loading status 1    127           400                          31.75                 90.8
Loading status 2    114           400                          28.50                 90.3
Loading status 3    100           400                          25.00                 89.2
Loading status 4    130           400                          32.50                 91.1
Loading status 5    144           400                          36.00                 91.4

This loading optimization of the UPS system can be replicated wherever the loading of individual UPS units in a bank of UPS systems is low. The project involves minor modification/retrofitting of the power distribution infrastructure, which requires external expertise. Being capital intensive, it has to be taken up as part of a business decision.

CASE STUDY

ENERGY EFFICIENCY IMPROVEMENT IN LIGHTING SYSTEMS BY REPLACING FLUORESCENT LAMPS WITH LIGHT EMITTING DIODE (LED) LAMPS
The project demonstrates the use of Light Emitting Diode (LED) lamps in place of conventional Compact Fluorescent Lamps (CFL) to reduce electrical energy consumption.

Background

A light-emitting diode is an electronic light source. LEDs present many advantages over traditional light sources, including lower energy consumption, longer lifetime, improved robustness, smaller size and faster switching. However, they are relatively expensive and require more precise current and heat management than traditional light sources.

One of the key advantages of LED-based lighting is its high efficacy (lumens/watt), i.e. its light output per unit power input.

LEDs are available in different performance classes such as standard, mid and high power packages, in
various brightness levels and package sizes and in the complete color range including all shades of
white, RGB, and color on demand.

The comparison of LED and Fluorescent lamps are given in table-1.

Table-1: Comparison of LED and Fluorescent lamps

Parameter Fluorescent Lamps LED Lamps

Efficacy 60-65 lm/W 105-120 lm/W

Lifetime 10000 to 15000 hrs 35000 to 50000 hrs

Mechanical strength Fragile Robust

Environmental aspect Contains mercury No mercury content

Initial cost Medium High

Total ownership cost High Low

Voltage sensitivity Medium High

Temperature sensitivity Medium High

Project details
The company carried out a lighting study throughout the facility. The lighting energy and lux levels were recorded and analyzed. It was noticed that lighting contributed a considerable share of the overall operating cost.

The company decided to replace all compact fluorescent lamps with the latest energy efficient LED lamps. It replaced 850 nos. of 18 W CFL with 2550 nos. of 1.8 W energy efficient LED lamps; each 18 W CFL was replaced with 3 nos. of 1.8 W LED lamps.
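A quick check of the connected lighting load from the counts and wattages stated above (note that the nameplate wattages alone give roughly a 70% reduction; the 73% figure reported below presumably also reflects the lighting controls described next):

# Connected lighting load before and after the retrofit, from the stated counts.
before_kw = 850 * 18.0 / 1000.0   # 850 CFLs at 18 W each
after_kw = 2550 * 1.8 / 1000.0    # 2550 LED lamps at 1.8 W each
print(f"{before_kw:.2f} kW -> {after_kw:.2f} kW "
      f"({100.0 * (1 - after_kw / before_kw):.0f}% lower)")   # 15.30 -> 4.59 kW, ~70%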

Retrofitting of the lamps was possible because no extra or special wiring was required for the LED lamps. The LED lamps were installed in utility rooms, laboratories and as emergency lights in the data center.

The company also segregated the lighting systems by area, based on illumination requirements and occupancy time. Lighting controls such as movement sensors were adopted for areas with intermittent occupancy.

Benefits of the project

The conventional CFLs were replaced with energy efficient LED lamps. The implementation has been rewarding and beneficial to the business of the organization. The project is also environmentally beneficial, as LED lamps contain no mercury.

The annual savings achieved was Rs 0.54 million. The investment made for this project was Rs 1 million, with an attractive payback period of 22 months.

Note: The project was feasible with an attractive payback period due to the high energy cost.

Benefits of the project

 Reduction in lighting power consumption by 73%, reducing total ownership cost (TOC)
 No mercury content in LED lighting (addresses environmental and disposal issues)
 Increase in lifetime up to 3 times

Cost benefit analysis

 Annual savings – Rs. 0.54 million
 Investment – Rs. 1 million
 Payback period – 22 months

The project involves minor modification/retrofitting of the lighting infrastructure, which requires external expertise. Being capital intensive, it has to be taken up as part of a business decision.

CHAPTER 6
DATA CENTER COOLING
6.0 Introduction
Heat removal is one of the most essential processes in the operation of a datacenter or IT room. The advent of high density equipment for performance improvement has resulted in concentrated heat generation, which imposes challenges on the cooling system.

In a typical data center, the cooling system accounts for around 35-40% of the total power consumption. Hence, maintaining the cooling system performance at an optimum level is essential for reducing the overall energy consumption of the data centre.

The operating temperature and humidity in the datacenter have to be maintained at recommended levels to ensure the desired performance of all IT equipment. Thus, reliable operation of the cooling system is absolutely necessary for the desired operation of the servers.

The cooling system comprises various components such as:

 Chiller units
 Chilled water pumping system
 Precision air conditioner units
 Air distribution system

The cooling of a datacenter can be accomplished both by a chilled water system and by direct expansion units.

Most data centers use both types of system, but the chilled water system, which forms an integral part of the overall building cooling, is predominantly used for data center cooling.

The precision air conditioner unit, housing a cooling coil, humidifier and heater, is used to condition the air circulated in the data center space for heat removal from the IT equipment.

The working medium in the cooling coil is chilled water in the case of a chilled-water-based system. In the case of a direct expansion (DX) system, refrigerant flows through the coil to cool the air flowing across it.

The humidifier is one of the important components in data center operation. A humidifier is used to increase the water content of the air to maintain the desired humidity level in the datacenter space. Distributing humidifiers across the data center is essential to achieve even humidification.

Computer Room Air Conditioners (CRAC)
The Computer Room Air Conditioner (CRAC) is one of the most common cooling systems installed in current generation data centers to remove heat energy from the air inside the data center and reject it outdoors. These systems typically take the form of a relatively small (30 tons sensible capacity) compressor system and evaporator coil installed in the data center space, and a condensing unit installed outdoors to reject the energy from the room. Figure 6.0 shows a CRAC based cooling system.

Figure 6.0: CRAC system layout [Source: Rittal corporation]

Cooling process in a datacenter

Ideally, the cooling process is a closed loop system: the cold air supplied by the CRAC flows through the IT equipment, and the hot air is recirculated to the CRAC unit, which removes the heat from the air.

Each rack draws cold air in at the front to remove heat from its hot servers; the hot air then exits the rack at the rear.

The cold air supply to the server racks is facilitated in two ways: the conventional method uses a room cooling technique, while the latest method makes use of the hot aisle/cold aisle containment technique.

In the conventional room cooling technique, the temperature distribution in the room is determined by the inlet and outlet temperatures of the different racks. The inlet temperature of a rack depends on the cold air supplied by the CRAC and the hot air recirculated from the outlets of other racks. The outlet temperature of a rack depends on its inlet air temperature and its power consumption.

Mixing of cold and hot air takes place, which raises the supply temperature at distribution and reduces the return temperature. The reduced return temperature increases the load on the CRAC units.

The hot aisle/cold aisle containment technique is now commonly used in most data centers. Figure 6.1 shows the typical layout of the hot aisle/cold aisle approach.

Figure 6.1: Datacenter raised floor with hot aisle/cold aisle setup

In Figure 6.1, the racks are arranged such that the hot sides and cold sides of the racks face each other, forming alternating cold and hot aisles.

The cold aisle consists of perforated floor tiles separating two rows of racks. Chilled air is supplied through the perforated floor tiles and drawn into the front of the racks, whose inlets face the cold aisle. This arrangement allows the hot air exhausting from the rear of the racks to return to the CRAC, minimizing recirculation of hot exhaust air back into the rack inlets.

Servers with critical duties will shut down and fail when temperature and humidity rise above the manufacturer-specified limits. In the absence of air conditioning, temperatures around high density equipment can rise to 40 degrees Celsius within 3 minutes. Therefore, failure-free operation of the CRAC units is absolutely necessary for the normal operation of the servers.

6.2 Energy Conservation Measures in the Air-conditioning System
6.2.1 Chiller system
 Increase the chilled water supply temperature (CHWST) set point
The chilled water supply temperature (CHWST) set point is specified during the design of the cooling system, and it has wide-ranging implications for the energy performance of the chiller plant.

It affects the operating efficiency of the chiller. As a rule of thumb, chiller efficiency improves by about 1% for every 1 deg F the evaporator leaving water temperature is raised, all other factors held equal; a worked sketch follows below. The scope for adjustment varies among chiller makes/models, and depends largely on the chiller loading.
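Applied as a first-order estimate, the rule of thumb translates directly into expected savings. The plant figures in the sketch below are assumed for illustration:

# First-order estimate from the ~1% per 1 deg F rule of thumb stated above.
# The chiller power figure is an assumption for illustration only.

chiller_kw = 400.0                      # assumed average chiller input power
chwst_old_f, chwst_new_f = 45.0, 50.0   # set point raised by 5 deg F
savings_pct = 1.0 * (chwst_new_f - chwst_old_f)   # ~1% per deg F raised
print(f"Raising CHWST {chwst_old_f:.0f} F -> {chwst_new_f:.0f} F: "
      f"~{savings_pct:.0f}% or ~{chiller_kw * savings_pct / 100.0:.0f} kW saved")
# The net benefit must still account for any added pump energy (see the scenarios below).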

A lower chilled water supply temperature causes greater air dehumidification at the cooling coil by condensing moisture out of the air.

A typical chilled water supply temperature set point for facilities with normal space humidity control requirements is about 45 deg F. This set point is typical even in facilities that have relaxed or even no humidity requirements, due to the persistence of design “rules of thumb”.

Evaluating the energy savings from a change in the chilled water supply temperature in an existing facility involves two basic scenarios.
Scenario-1:

The chilled water supply temperature set point is raised, but the chilled water return temperature must remain roughly the same as before (the chilled water delta-T decreases).

This is the more common scenario. The space or process being served must be maintained at a certain temperature, which limits the maximum possible chilled water return temperature. If the chilled water return temperature is already near its practical upper limit, the only way to keep it there when the chilled water supply temperature increases is to increase the chilled water flow rate.

For this action to be viable, there must not be an existing zone that has already reached its maximum valve opening position. If there is such a zone, permanently raising the chilled water supply temperature will cause it to overheat. If the zone overheats only intermittently, an automatic chilled water supply temperature reset may still be a viable option. If the chilled water supply temperature set point can be permanently raised in an existing facility while still meeting all humidity and load requirements, it will have the effect of saving chiller energy while increasing pump energy.

Scenario-2:

The chilled water supply temperature is raised, and the chilled water return
temperature can be allowed to rise by some amount (chilled water delta-T decreases
less than in Scenario 1, or even remains the same as before).

This is a rare scenario. In this case, the original chilled water supply temperature set point is
unnecessarily low for the load being served. All of the chilled water control valves are closed to
some degree, limiting the flow. The chilled water return temperature is significantly lower than
required to maintain the desired space or process temperature in all zones.

If the chilled water return temperature can be allowed to rise in addition to raising the chilled
water supply temperature, then the chilled water flow rate does not have to increase as much,
or even at all. This limits the increase in pump energy, while still allowing more efficient chiller
operation.

 Integrated Waterside Economizer to chilled water Plant

This case applies to chillers in water-cooled systems with cooling towers.

During periods of low wet bulb temperature (often at night), the cooling towers can produce
water temperatures low enough to pre-cool the chilled water returning from the facility,
effectively removing a portion of the load from the energy-intensive chillers.

During the lowest wet bulb periods, the towers may be able to cool the chilled water return all
the way down to the chilled water supply temperature set point, allowing the chillers to be shut
off entirely. The air handling units sense no change in chilled water supply temperature at all
times, allowing them to maintain the required temperature and humidity conditions. Free
cooling also offers an additional level of redundancy by providing a non-compressor cooling
solution for portions of the year.

 Implement Variable Condenser Cooling Water Flow

The standard operating procedure for water-cooled chillers is to have a constant condenser water
(CW) flow and a constant temperature of water entering the condenser, referred to as the condenser
cooling water temperature (CCWT).

Reducing the condenser water flow will save condenser water pump energy. However, reducing
the condenser water flow increases the chiller’s condensing temperature, causing it to run less
efficiently. If low condenser cooling water temperature can be produced by the cooling tower
then the chiller’s condensing temperature can be reduced again, restoring efficient chiller
operation and retaining the benefit of reduced cooling water pump energy. This must be
compared against the increased cooling tower fan energy needed to produce the lower
condenser cooling water temperature to determine if there are net energy savings.

Determine the possibility of reducing condenser water flow considering the following points:

 ASHRAE recommends a minimum condenser water flow velocity of 1 m/s to maintain
turbulent flow and prevent formation of deposits in the condenser
 The condenser water velocity is only a small factor in the overall heat transfer; the main
factor controlling refrigerant condensation is the condenser surface area
 Many chillers can operate at low condenser water flow velocities and high condenser water
delta-T without affecting the stable operation of the chiller

Decreasing the condenser water flow rate will provide condenser water pump savings that
may or may not outweigh the increased chiller energy use.
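The net effect can be screened with a quick calculation. The sketch below (Python) applies
the pump affinity relation (power varying roughly with the cube of flow) against an assumed
chiller penalty; every number in it, including the penalty per degree, is an assumption to be
replaced with manufacturer data.

    # Rough screening of variable condenser water flow. All values assumed.
    # Pump affinity laws: for a fixed system curve, power ~ flow^3.
    pump_kw_design = 30.0          # CW pump power at design flow (assumed)
    flow_fraction = 0.8            # proposed reduced flow, 80% of design

    pump_saving_kw = pump_kw_design * (1.0 - flow_fraction ** 3)

    # Assumed chiller penalty: ~1.5% more chiller power per deg F rise in
    # condensing temperature caused by the reduced flow (chiller-specific).
    chiller_kw = 300.0
    condensing_rise_deg_f = 1.0
    chiller_penalty_kw = chiller_kw * 0.015 * condensing_rise_deg_f

    # Any extra cooling tower fan power needed to restore a low CCWT
    # should be added to the penalty side before deciding.
    net_kw = pump_saving_kw - chiller_penalty_kw
    print(f"Pump saving {pump_saving_kw:.1f} kW, chiller penalty "
          f"{chiller_penalty_kw:.1f} kW, net {net_kw:.1f} kW")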

It is recommended to keep the condenser cooling water temperature (CCWT) as low as possible
to maintain high chiller efficiency.

 Install an Evaporative Cooled-condenser for chillers

Evaporative-cooled chillers are essentially water-cooled chillers in a package. The system has the
condenser, water sump, pump, etc., all as integral parts of the chiller, whereas a water-cooled
chiller requires a cooling tower, condenser water pump, and field-erected piping.

The hot gaseous refrigerant is condensed by water flowing over the condenser tubes and
evaporating. This drives the condensing temperature toward the ambient wet bulb temperature,
like a water-cooled chiller.

This improves the operating efficiency of the chiller significantly.

 Replace old inefficient chillers with latest efficient chiller

Chillers are the major energy consumer in the cooling system. Most of the time the chillers are
operated at part load conditions. Latest chillers can offer higher coefficient of performance
(COP), by design, even at part load conditions.

The latest chillers meet or exceed the minimum COP requirements presented in the table below.

For these reasons, it is recommended to assess the energy performance of all chillers periodically,
and to estimate the cost-effectiveness of replacing old, inefficient chillers with the latest efficient
chillers available in the market.

6.2.2 Chilled water distribution system


 Convert all 3-Way Chilled Water Valves to 2-Way

Generally, chilled water distribution systems are designed with 3-way valves at the cooling
coils. A constant flow of chilled water is delivered to each coil location. Each coil is equipped
with a bypass line, and each 3-way valve regulates the water through the coil as per the cooling
requirement. The excess chilled water bypasses the coil. This method uses more energy to
pump the additional water in the chilled water circuit.

The use of variable speed drives on the pump motors allows the bypasses to be eliminated and
the 3-way valves to be replaced with 2-way valves. The 2-way valves modulate as needed to serve
the cooling load, and the pump motor speed varies in response to the demand by maintaining a
constant pressure at the far end of the distribution loop.

In facilities that experience high load variation, it may be effective to program the control system
to vary the pressure set point in response to the position of the most-open 2-way valve.

 Reduce the Chilled Water Supply Pressure Set point

Standard control system design calls for the chilled water pump serving the chilled water
distribution system to maintain a constant pressure at a given location (usually at the most
remote cooling coil), regardless of the current cooling load.

The pressure set point is selected to ensure that adequate flow is delivered to every coil at the
peak load condition, when all the cooling coil valves are wide open. The set point may currently
be set higher than necessary. This can occur for several reasons – improper initial balancing;
overestimation of peak load; load growth projections were too aggressive; changes were made
to the distribution system but it wasn’t rebalanced; etc.

A pressure set point that is higher than necessary can cause the chilled water pump motor to
draw more power than is necessary. Optimizing the set point for current conditions can save
energy, particularly in systems where the CHW pump is continuously in operation.

The positions of the 2-way valves can be monitored, and the highest valve position used as an
input to a control loop that resets the chilled water loop pressure set point down until the
maximum valve position equals 85% - 90% open. This control approach continuously optimizes
the set point to reduce energy usage.
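A minimal sketch of such a valve-position reset is given below in Python. The function name,
tag values and step size are hypothetical; a real building management system would implement
this as a slow trim-and-respond or PID loop with deadbands and alarm handling.

    # Hypothetical valve-position-based reset of the CHW loop differential
    # pressure set point. Values and names are illustrative only.
    def reset_pressure_setpoint(setpoint_kpa, max_valve_open_pct,
                                step_kpa=2.0, low=85.0, high=90.0,
                                min_sp=50.0, max_sp=250.0):
        """Nudge the set point so the most-open 2-way valve settles
        between `low` and `high` percent open."""
        if max_valve_open_pct > high:      # a zone is starving: raise pressure
            setpoint_kpa += step_kpa
        elif max_valve_open_pct < low:     # all valves throttled: trim pressure
            setpoint_kpa -= step_kpa
        return min(max(setpoint_kpa, min_sp), max_sp)

    # One poll cycle: most-open valve at 78% -> set point trimmed down.
    sp = reset_pressure_setpoint(180.0, max_valve_open_pct=78.0)
    print(f"New set point: {sp:.0f} kPa")   # 178 kPa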

6.2.3 Auxiliaries
 Convert Primary/Secondary Chilled Water Pumping System to Primary-Only

The typical chilled water distribution system for data center facilities has a constant-volume primary
loop and a variable-flow secondary loop. This arrangement ensures a constant flow through the
chiller evaporator, while allowing the secondary loop to modulate according to demand.

In recent years, chillers have evolved to be more tolerant of variable chilled water flow through
the evaporator. As a result, primary-only variable flow chilled water pumping has become more
common. This arrangement eliminates the primary chilled water pumps (the pumps previously
designated as secondary become the primary pumps) and typically results in energy savings.

Chillers still have minimum allowable evaporator flow rates, so the control system must monitor
and ensure these rates. Even in facilities with relatively constant load such as data centers,
energy savings can be realized as the constant self-balancing of the primary-only control system
minimizes pump energy use.

 Install high efficiency pumps

These days, pumps are available in a wide range of efficiencies for any particular application.

Estimate the operating parameters, such as the maximum head and maximum flow requirement,
of the pumping system. Select a high-efficiency pump whose duty point matches the operating
parameter values. Centrifugal pumps are available with operating efficiencies in the range of
80 – 85%.

 Calibrate Chilled Water (CHW) Supply Temperature Sensors annually

Chiller efficiency highly depends on the temperature of the chilled water (CHW) it produces
with respect to the ambient operating conditions.

A low chilled water supply temperature typically results in lower chiller efficiency, all other
factors held constant. An error in the chilled water supply temperature sensor can cause a
chiller plant to produce unnecessarily cold chilled water and increase energy consumption. In
addition, too-cold chilled water can cause undesired dehumidification at the cooling coils,
placing an extra load on the cooling system and consuming additional energy.

It is recommended to calibrate the sensors annually.

 Calibrate the Condenser Water (CW) Supply Temperature Sensors annually

A water-cooled chiller’s efficiency is directly affected by the temperature of the condenser water
(CW) entering the condenser. A higher condenser water supply temperature typically results in
lower chiller efficiency, all other factors held constant.

An error in the condenser water supply temperature sensor can cause the cooling towers to
produce a warmer than desired condenser water temperature and in turn cause inefficient
operation of the chiller plant.

 Utilize entire area in cooling tower to improve the energy performance of cooling
towers

By operating as many cooling towers as possible at all times, the amount of water to be cooled
is distributed across a greater number of towers. This decreases the amount of heat rejection
required by each tower, which in turn reduces the required fan speed. This translates directly into
energy savings. Care must be taken that no tower is starved for flow.

 Install Cooling Tower with low Approach Temperature

Every cooling tower can produce a water temperature that approaches, but is never lower than,
the ambient wet bulb temperature. The difference between these two temperatures is called
the “approach” temperature.

During operation the approach temperature will vary as a result of several factors – the tower
water flow rate, the temperature of the water entering the tower, the current wet bulb
temperature, the cooling tower fan speed, etc. The manufacturers report the approach
temperature at a single specific operating condition. This nominal condition may not be the
same from one manufacturer to another.

A tower with a smaller approach temperature is more efficient and produces colder water. The
lower condenser water temperature, in turn, can improve chiller efficiency.

It is recommended to select and install a cooling tower with low approach temperature.

6.3 Energy Conservation Tips in Datacenter Cooling


 Measure the Return Temperature Index (RTI) and Rack Cooling Index (RCI)

An air temperature rise across the data center IT equipment that is outside the recommended
range clearly indicates inefficiency in air management.

A low return temperature is due to by-pass air and an elevated return temperature is due to
recirculation air. Estimating the Return Temperature Index (RTI) and the Rack Cooling Index
(RCI) will indicate if corrective, energy-saving actions are called for.

The Rack Cooling Index (RCI) is a measure of how well the system cools the electronics
within the manufacturers’ specifications, and the Return Temperature Index (RTI) is a measure
of the energy performance of the air-management system.
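A minimal sketch of the RTI calculation is shown below in Python, following the common
definition RTI = (AHU temperature rise / IT equipment temperature rise) x 100%; the
temperature readings are illustrative, not from any audited site.

    # Return Temperature Index (RTI) from four temperature measurements.
    # Readings below are illustrative assumptions.
    t_return, t_supply = 28.0, 18.0        # deg C at the CRAC/CRAH
    t_equip_out, t_equip_in = 35.0, 20.0   # deg C across the IT equipment

    rti = (t_return - t_supply) / (t_equip_out - t_equip_in) * 100.0
    print(f"RTI = {rti:.0f}%")
    # 100% is the balanced target: values below 100% indicate by-pass air,
    # values above 100% indicate recirculation.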

 Increase the Supply Air Temperature

A low supply temperature makes the chiller system less efficient and limits the utilization of
economizers. Enclosed architectures allow the highest supply temperatures since mixing of hot
and cold air is minimized.

In contrast, the supply temperature in open architectures is often dictated by the hottest intake
temperature.

Target the maximum recommended intake temperature from guidelines issued by ASHRAE
(77°F / 25°C), depending on the type of electronic equipment in the data or telecom center. If the air
distribution system can be modified to deliver air more effectively to the equipment, it may be
possible to raise the average intake temperature. This in turn will allow the cooling supply air
temperature to be raised, which typically results in more efficient cooling system operation.

 Provide Temperature and Humidity Sensors to Mimic the IT Equipment Air Intake Conditions

IT equipment manufacturers design their products to operate reliably within a given range of
intake temperature and humidity. The temperature and humidity limits imposed on the cooling
system that serves the data center are intended to match or exceed the IT equipment
specifications. However, the temperature and humidity sensors are often integral to the cooling
equipment and are not located at the IT equipment intakes. The condition of the air supplied by
the cooling system is often significantly different by the time it reaches the IT equipment intakes.
It is usually not practical to provide sensors at the intake of every piece of IT equipment, but a
few representative locations can be selected. Adjusting the cooling system sensor location in
order to provide the air condition that is needed at the IT equipment intake often results in
more efficient operation.

 Calibrate the Temperature and Humidity Sensors annually

Temperature sensors generally have good accuracy when they are properly calibrated (+/- a
fraction of a degree), but they tend to drift out of adjustment over time. In contrast, even the
best humidity sensors are intrinsically not very precise (+/- 5% RH is typically the best accuracy
that can be achieved at reasonable cost). Humidity sensors also drift out of calibration.

To ensure good cooling system performance, all temperature and humidity sensors used by the
control system should be treated as maintenance items and calibrated at least once a year.

After a regular calibration program has been in effect for a while, you can gauge how rapidly
your sensors drift and how frequent the calibrations should be. Calibrations can be performed
in-house with the proper equipment, or by a third-party service.

 Provide Personnel and Cable Grounding to Allow Lower IT Equipment Intake Humidity

Higher humidity levels result in increased energy consumption of the cooling system in a Data Center.
Conversely, the lower humidity limit in data centers is often set relatively high (40% RH at the IT
equipment intake is common) to guard against damage to the equipment due to electrostatic
discharge (ESD).

Energy can be saved if the allowed lower humidity limit can be lowered. ESD can be kept in
check by conductive flooring materials, good cable grounding methods, and providing
grounded wrist straps for technicians to use while working on equipment.

6.4 Energy Conservation Tips in Datacenter Air Management

 Ensure Adequate Match between Heat Load and Raised-Floor Plenum Height

The cooling capacity of a raised floor depends on its effective flow area, which can be increased
by removing cables and other obstructions that are not in use. Still, the heat density may need
to be reduced. Undersized and/or congested plenums often require an overall elevated static
pressure to deliver the required airflow. Providing the increased static pressure requires
additional fan energy.

 Provide Adequate Ceiling Supply/Return Plenum Height

The plenum height can be increased if the clear ceiling allows. A return plenum often means a
lower clear ceiling but allows placing the return grilles directly above the hot aisles. Such a
plenum needs to be maintained similar to a raised floor. A shallow plenum may result in high
pressure losses, poor pressure distributions, and high fan-energy costs.

 Remove Abandoned Cable and Other Obstructions

Under-floor and over-head obstructions often interfere with the distribution of cooling air.
Such interferences can significantly reduce the air handlers’ airflow as well as negatively affect
the air distribution. The cooling capacity of a raised floor depends on its effective height, which
can be increased by removing obstructions that are not in use.

 Implement Alternating Hot and Cold Aisles

This is the first step towards separating hot and cold air, which is a key to air management. Cold
air is supplied into the cold front aisles, the electronic gear moves the air from the front to the
rear and/or front to the top, and the hot exhaust air is returned to the CRAC from the hot rear
aisles. Some data centers are not suitable for hot/cold aisles, including those with non-optimal
gear (not moving air from front to rear/top).

 Provide Physical Separation of Hot and Cold Air

Physical barriers can successfully be used to avoid mixing the hot and cold air, allowing reduction
in airflow and fan energy as well as increase in supply/return temperatures and chiller efficiency.

There are four principal ways of providing physical separation:

 Semi-enclosed aisles such as aisle doors; allows some containment of the cold air. Also
blanking panels should be used to seal openings under and between equipment racks,
between equipment shelves in partially filled racks, or completely empty racks
 Flexible strip curtains to enclose aisles; allows good separation of hot and cold air
 Rigid enclosures to enclose aisles; allows excellent separation of hot and cold air
 In-rack ducted exhausts; allows effective containment of the hot exhaust air

 Provide perforated tile or diffuser in Cold Aisle

Perforated floor tiles or over-head supply diffusers should only be placed in the cold aisles to
match the “consumption” of air by the electronic equipment. Too little or too much supply air
results in poor overall thermal and/or energy conditions.

Note that the hot aisles are supposed to be hot, and supply tiles should not be placed in those
areas.

 Design the Return Air from Hot Aisle area

The thermal efficiency of the data center increases when the return temperature is maximized.
The closer the return is located to the heat source, the better. If a return plenum is used, the grilles
should be placed directly above the hot aisles.

 Provide Adequate Floor Plenum Pressure

A high static pressure often results in high floor leakage and by-pass air. A moderate static
pressure (0.05 in. of water) allows relatively high tile airflow rates with minimum floor
leakage.

In case a standard 25% perforated tile does not deliver enough airflow to cool the equipment at
the moderate pressure, it is better to increase the tile open area than to increase the
pressure.
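The square-root relationship between tile pressure drop and airflow explains this advice. The
orifice-equation sketch below (Python) uses assumed tile geometry and a typical discharge
coefficient; manufacturer flow curves should always take precedence.

    # Rough orifice-equation estimate of perforated tile airflow.
    # Tile geometry and discharge coefficient are assumptions.
    from math import sqrt

    static_pressure_pa = 12.4   # ~0.05 in. of water
    tile_area_m2 = 0.36         # 600 mm x 600 mm tile
    open_fraction = 0.25        # 25% perforated tile
    discharge_coeff = 0.65      # typical orifice discharge coefficient
    air_density = 1.2           # kg/m3

    flow_m3s = (discharge_coeff * tile_area_m2 * open_fraction
                * sqrt(2 * static_pressure_pa / air_density))
    print(f"~{flow_m3s * 3600:.0f} m3/h per tile")
    # Doubling the open area roughly doubles the flow, while doubling the
    # plenum pressure gains only ~41% (square-root law) at higher fan cost.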

 Balance the Air Distribution System

Over-head ducted systems can be adequately balanced using conventional methods, whereas
raised-floor systems are balanced by providing the required number of perforated tiles. The
amount of cold air required at each rack should be supplied by placing adequate tiles in front of
the racks.

 Remove Doors from IT Equipment Racks

The use of doors often obstructs the cooling airflow and may result in recirculation of cooling
air within the enclosed cabinet. This would further increase the equipment intake temperature.

If rack doors are necessary for security reasons, provide sufficient openings in the doors to
permit adequate cooling airflow through them.

 Switch off CRAC/CRAH Units

In case of low heat load on all CRAC units, which requires a lower airflow volume, some of the
CRAC units can be turned off.

This is not a precise way of controlling the air volume, but it can still yield acceptable results in
circumstances where variable speed fans are not adopted.

Some experimentation may be required to determine which units can be switched off without
compromising adequate cooling of the IT equipment.

 Control All Supply Fans in Parallel

If all the supply fans serving a given space are identical and equipped with variable speed
drives, fan energy is minimized by running all the fans (including redundant units) at the same
speed.
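The reason lies in the fan affinity laws: fan power scales roughly with the cube of speed. The
short sketch below (Python, with an assumed fan rating) compares two ways of delivering the
same total airflow.

    # Fan affinity illustration: power ~ speed^3 (idealized, values assumed).
    fan_rated_kw = 5.0

    # Option A: 2 of 4 fans at 100% speed
    power_a = 2 * fan_rated_kw * 1.0 ** 3

    # Option B: all 4 fans at 50% speed (same total airflow)
    power_b = 4 * fan_rated_kw * 0.5 ** 3

    print(f"2 fans @ 100%: {power_a:.1f} kW; 4 fans @ 50%: {power_b:.1f} kW")
    # 10.0 kW vs 2.5 kW: a 75% fan energy reduction in this idealized case.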

 Install CRAC units with EC fans

While replacing older units, insist on units that use EC fans for the air movement application.
EC fans provide the latest electronic controls to vary the air flow rate and maintain the required
temperature at the IT equipment side. They use minimal energy for their operation compared to
other air movement equipment available in the market.

CASE STUDY

COOLING SYSTEM ECONOMISER


Background

Blade servers generate more heat than traditional 1U servers, and they draw less air flow per watt of heat
absorbed by the cooling air that passes through them. This results in a greater temperature difference
(ΔT) between air entering and exiting the servers, and it produces warmer return air.

The warmer return air enables effective transfer of heat from air to water at temperatures close to wet
bulb temperatures. As the heat transfer is proportional to the temperature difference between cooling
water and return air, more heat transfer takes place in the AHU. The return air is pre-cooled with cooling
water at near wet bulb temperature, which reduces the heat load on the chilled water system.

The pre-cooled return air is again cooled to required temperature through chilled water system. As a
result, cooling capacity of the air handling coils increases dramatically.

The cooling water system acts as an economizer to reduce the heat load on chilled water system under
favorable weather conditions.

Project details

This project has been implemented in one of the data centers in US which uses high density blade
servers.

In a blade server data center, the total HVAC system air-side ΔT can reach 28°C. For each server at peak
utilization, air entering at 18°C exits at 46 to 52°C. As all the servers were not simultaneously operated at
peak utilization, the return air temperature was in the order of 43°C, which is still warmer compared to
many other data centers.

Condenser water is used to pre-cool the return air before it enters the chilled water coil. This is very
attractive during favorable climate conditions especially during winter in some parts of the country
where the temperatures are relatively lower.

An additional cooling coil called the economizer is placed at the air handler (AHU). The economizer utilizes
condenser water circulated through the cooling tower. As the return air temperature from the blade server
data center is very warm, in the order of 46°C, the economizer provides partial cooling by cooling it to a
temperature of 32°C. This pre-cooled air then enters the chilled water coil where it is cooled to an
acceptable temperature of 18°C.

Results of the project


There was a substantial reduction in the power consumption of the chiller. It was estimated that the
economizer, combined with the higher air handling system ΔT and lower air flow per kW of heat, would
increase the HVAC effectiveness from 3.57 to 7.29.

HVAC effectiveness is the ratio of IT equipment power to total cooling system power.
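Using that definition, the metric is a one-line calculation; the sketch below (Python, with
assumed loads) shows it, and why the reported jump from 3.57 to 7.29 roughly halves cooling
power per kW of IT load.

    # HVAC effectiveness = IT equipment power / total cooling system power.
    # Loads below are assumptions for illustration.
    it_load_kw = 1000.0
    cooling_power_kw = 280.0   # chillers + pumps + tower + AHU fans

    effectiveness = it_load_kw / cooling_power_kw
    print(f"HVAC effectiveness = {effectiveness:.2f}")   # ~3.57
    # Raising the metric to 7.29 means the same 1000 kW of IT load would
    # need only ~137 kW of cooling power.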

The cooling system economizer for pre-cooling the return air would find replication
in scenarios where the water temperature from the cooling tower is low enough to pre-cool the
return air of the Data Center. The project involves a major revamp of the cooling infrastructure,
which requires external expertise. The project, being capital intensive, has to be taken
up as part of a business decision.

CASE STUDY

ELECTRONICALLY COMMUTATED (EC) FANS

Background

Air management is one of the critical areas in Data Center cooling systems. The supply air requirement
depends on the heat load and varies with the operation of the data center.

Conventional centrifugal fans are poorly suited to such varying air movement duty in a data center.

The electronically commutated (EC) fan is the latest energy-efficient option for variable air movement
applications in the Data Center.

EC fans require 15% less power input than the conventional centrifugal fans due to better design
efficiency. The EC fans have high efficiency across a wide speed range, whereas the efficiency of the
centrifugal fans rapidly drops with decreasing speed. At partial load range, the benefit of the EC
technology gets more prominent.

The efficiency of the EC motor (typically > 90%) is higher than that of a traditional asynchronous AC
motor (typically < 80%), and it generates less heat, as there are no slip losses and lower copper and iron losses.

The EC motor is also more efficient than alternative speed control methods including:

 Inverter, AC frequency control
 Triac voltage control
 Multi-tapped transformer voltage control (steps)
 Star/Delta switch (two step)

EC fans offer on-demand, automatic and manually controllable variable speed capability and achieve
the benefits of improved efficiency, reliability and low running costs.

Project details

The company performed a study to replace 4 of its 24 conventional centrifugal CRAC units with
units using EC fans. During the study, the thermal load on the 4 CRAC units was recorded.
The power consumption of the compressors and the power consumption of the centrifugal fans were
also recorded.

The organization procured and installed 4 CRAC units with EC fans. After installation, the power
consumption of the CRAC units was recorded in real time, and significant savings in energy
consumption were observed.

Results of the project

Implementation of this project resulted in an annual saving of Rs 5.28 Lakhs.

Technical Benefits of the project
 Reduction in power consumption by 16.5 kW
 Improved efficiency & reliability

Cost Benefits of the project
 Annual benefits of Rs. 5.28 Lakhs

The power consumption of the centrifugal and EC fans is shown in the table below.

Comparison of Centrifugal fans and EC fans

Parameter                          CRAC with Centrifugal Fans   CRAC with EC Fans
Number of Units                    4                            4
Air Flow (m3/h)                    96,000                       96,000
Cooling capacity (total) (kW)      300                          300
Cooling capacity (sensible) (kW)   296                          296
Fan power consumption (kW)         40                           23.5
Savings (kW)                       -                            16.5
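As a back-of-envelope consistency check (the electricity tariff below is an assumption, not
stated in the case study), the tabulated 16.5 kW saving reproduces the reported annual benefit:

    # Verify the reported annual saving from the measured fan power saving.
    saving_kw = 16.5              # from the table above
    hours_per_year = 8760         # CRAC units run continuously
    tariff_rs_per_kwh = 3.65      # assumed electricity tariff

    annual_saving_rs = saving_kw * hours_per_year * tariff_rs_per_kwh
    print(f"Annual saving ~ Rs {annual_saving_rs / 1e5:.2f} Lakhs")  # ~5.28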

Replication potential

The project has high replication potential at the design stage.

Electronically commutated (EC) fans for CRAC units are applicable in all types of data centers.
The project has to be taken up during the design stage of the cooling infrastructure.

CASE STUDY

HOT AISLE / COLD AISLE CONTAINMENT


Background
Modern IT equipment takes in cold air through the front and exhausts hot air out of the back.
For example, if the front of the servers and the front of the rack share the same orientation, then the
user has achieved a consistent airflow direction throughout the row of racks.
However, if several parallel rows of racks are set up in the same orientation, a significant cooling problem
arises which affects the energy performance of the Data Center: the hot exhaust air from the first
row gets sucked into the “cool” air intakes of the servers in the second row of racks. With each successive
row, the air gets hotter and hotter as hot air is passed from one row of servers down to
the next, as represented in the figure.
To maintain the server inlet air temperature at farthest row, the overall supply air temperature has to be
maintained very low. Providing low temperature air increases the cooling requirement and thus increases
the operating cost of Data Center.
Apart from that, the condition is made even worse when significant hot and cold air mixing occurs in
the data center. Because the lower temperature air returning to the CRAC unit loses more moisture in
the cooling process than warmer, unmixed air would. This requires continuous humidification process
to maintain the humidity level inside the data center.
The continuous humidification and dehumidification process reduces the useful work done by the
CRAC/CRAH units.

To overcome the issue, the rows of racks should be oriented so that the fronts of the servers face each
other. In addition, the backs of the rows of racks should also be facing each other. Such orientation of
rows layout is known as “hot aisle / cold aisle” system. Such a layout, along with cold air containment
can greatly reduce energy losses and also prolong the life of the servers.

Various containment and segregation techniques can help minimize the mixing of hot and cold air.
Simple devices such as blanking panels (which fill spaces between rack-mounted equipment), air dams
(which seal the top, bottom, and sides of equipment), brush grommets (which fill open spaces
around floor cable cut-outs), and barriers or cabinet chimneys (which contain hot return air) can
contribute to better air management and improve the cooling efficiency.

Advanced approaches such as hot aisle or cold aisle containment can minimize the mixing of hot and
cold air to a large extent. Such strategies allow airflows to be more predictable.

As a result, a greater portion of the CRAC capacity can be utilized efficiently to achieve higher IT power
densities.

Project details

The facility is an R&D site of a leading electronic design company. The company consolidated nine
server rooms into a single Data Center space. The old server rooms employed the room cooling
technique, using 180 TR of cooling capacity for a total of 900 servers.

With the growth in design activity, the new Data Center houses more than 1500 servers catering to
the requirement. The Data Center employs the cold air containment technique to avoid mixing of hot
and cold air, as depicted in the figure below.

Results of the project

The implementation of cold air containment resulted in a reduction of the cooling requirement
from 180 TR to 120 TR, even with the increase in servers from 900 nos to 1500 nos.

The project would find replication in scenarios where the conventional room
cooling technique is implemented. The project involves a major revamp of the
cooling infrastructure, which requires external expertise. The project, being highly
capital intensive, has to be taken up as part of a business decision.

CASE STUDY

CABLE MANAGEMENT IN RAISED FLOOR SYSTEM

Background

In data centers, the space beneath the raised floor has been designed to act as a plenum. However, in
many cases, the space has become a dumping ground for excess cables and cords. This clutter interferes
with the ability of the cooling system to force cool air under the floor, through the perforated floor tiles,
and over to the server intakes. The cooling system has to work harder to achieve the same cooling
result and more energy is consumed to achieve the same task.

Solution

A greener solution would be to remove cable blockage and to migrate to overhead cable distribution
if possible. In addition, unused raised floor cutouts should be blocked to eliminate unwanted air leakage.
Perforated tiles (with a design of about 25% open area) should be used to ensure uniform and predictable
airflow distribution in lower density areas of the data center. For higher density racks, special
consideration should be given to the perforated tile manufacturer’s suggested air flow rates at specified
static pressure levels. In some cases where higher density racks are involved, the plenum may not be
adequate to deliver the needed cubic feet per minute (CFM) through a 25% perforated tile.

The cooling system can be optimized by


 Placing CRACs/CRAHs suction across hot aisles

 Installing the first tile for air supply at least 8 feet (2.4 meters) from CRAC/CRAH

 Running the data cables only under hot aisle area to minimize under floor airflow
obstructions to cold aisle area

In addition, air pressure sensors can be installed under the raised floor in order to slow down CRAC
speeds when a constant high pressure is not needed.

The project would find replication in scenarios where the cooling is done through a
raised floor system. The project involves minor modification of the existing cooling
system. The project does not require any capital investment. The project can be taken
up by the in-house team during regular maintenance activity.

CASE STUDY

OPTIMUM INDOOR CONDITIONS OF A DATACENTER

Background

Many data centers are operated at temperatures much lower than necessary for the IT equipment, as
specified by the manufacturer. Lower temperatures lead to higher energy consumption in Data Centers.

There are a number of reasons for lower temperatures in Data Centers.

 Generally the operation of Precision Air Conditioning (PAC) is controlled by return air
temperature. If the (return) set point temperature is maintained as per ASHRAE recommendations
between 68°F (20°C) and 77°F (25°C), the supply temperature to the IT equipment would be
much lower than the allowable high temperature limit.

 The room is being kept cold to achieve a longer ride-through time during a cooling outage. A
few degrees increase in set point can have a large impact on energy; however, it is unlikely that
lowering a room’s temperature by a few degrees will offer any meaningful increase in ride-
through time. For example, if the maximum server temperature is 95°F (35°C), running the room
at 68°F (20°C) versus 70°F (21°C) will offer very little extra time in the event of a cooling failure.

 There is a fear that higher IT equipment temperatures will affect reliability. Consider the graphs
shown, which compare the temperature of a typical server component and the wattage required
to operate the system fans with an increasing system ambient temperature. In the first graph,
where the system fans are held at high speed, component temperature tracks fairly closely to
inlet ambient temperature. However, the system fans are using a large amount of power; they
represent about 20% of the system’s power requirement.

As shown in the figure below, the fans are allowed to vary according to normal control algorithms.
System fan power is drastically lowered due to variable speed control. Component temperatures are
higher with variable speed fans; however, they remain fairly constant with respect to increasing ambient
temperature.

Typical component temperature response to constant-speed fan: increasing inlet ambient temperature

Typical component temperature response to variable-speed fan: increasing inlet ambient temperature

 The room is being kept colder to achieve marginal inlet temperatures in worst-case locations.
The majority of data centers have areas with poor airflow dynamics and experience higher
temperatures for a subset of their equipment. Any attempt to raise operating temperatures should
take these locations into account. A variety of actions can be taken to lessen the risk to these
areas, such as moving tiles to rebalance airflows, moving actual IT systems to cooler areas,
implementing supplemental cooling for specific hot spots, and establishing containment
strategies for cold or hot aisles to ensure appropriate airflow segregation.

 The room temperature is based on personnel comfort level. Obviously personnel comfort will
have to be weighed against the opportunities for energy savings. Some of the more aggressive
efforts to operate at high temperatures have been accompanied by separate air-conditioned

spaces adjacent to the data center, as well as service strategies that limit the amount of time
spent in the data center. Perforated tiles may even be used temporarily in a hot-aisle work area
during maintenance activities.

A higher operational temperature should be a consideration in the search for increased data center
efficiency. Coupled with PAC energy-saving options, an increase in operating temperatures offers the
opportunity of saving a large amount of energy: 5% or more at the facility level is possible.

For reasons discussed above, IT equipment fan power should be taken into account when considering
any increase in set points.

Temperatures in the data center should be measured via the supply temperature at the server intake
(as compared to temperature readings at the return). Many data centers have traditionally set their
temperatures as low as 18°C. However, data center equipment can safely operate at slightly higher
temperatures.

The American Society of Heating, Refrigerating and Air-Conditioning Engineers (ASHRAE) specifies an
allowable dry bulb temperature range of 59° to 90°F (15° to 32°C) and a new recommended range of
64.4° to 80.6°F (18° to 27°C) for environments that support critical enterprise server and storage
equipment.

Apart from temperature, humidification of the air must also be controlled. ASHRAE specifies a low-side
dew point value of about 5.5° C (41.9° F) in the data center.

ASHRAE Recommendation   Upper Limit                       Lower Limit
Temperature             27°C (80.6°F)                     18°C (64.4°F)
Moisture                60% RH & 15°C (59°F) dew point    5.5°C (42°F) dew point

The project would find replication in scenarios where the cooling temperature
maintained is not at the recommended level. The project involves fine tuning of the existing
cooling system. The project does not require any capital investment. The project can be
taken up by the in-house maintenance team.

CASE STUDY

OPTIMIZE HUMIDITY LEVEL IN A DATACENTER

Background

Humidity is an important parameter in the operation of a Data Center. The level of humidity in a Data
Center has major impact on successful operation of Data Center.

Lower humidity levels may result in operational mishaps such as Electro-Static Discharge (ESD)
and static spark problems.

Depending on the equipment manufacturer, 30-35% RH might be sufficient to overcome the issues
such as spark and ESD.

On the other side, maintaining higher humidity level increases the energy consumption in CRAC/CRAH
units due to natural dehumidification effect of cooling coils and forced humidification process to
maintain the set humidity levels.

The continuous process of dehumidification and humidification process results in increased wastage
of energy and reduced utilization of effective capacity of CRAC/CRAH units.

Measures

The first step to avoid the continuous process of dehumidification and humidification is to reduce the
superfluous dehumidification in the system. It can be done by raising the cooling coil apparatus dew
point by adopting following techniques discussed below.

 After verifying the requirements for environmental conditions of the equipment, lower the room
dew point by increasing temperature and lowering relative humidity to the recommended level

 Increasing the size of the cooling coil in CRAC/CRAH units to increase the average coil surface
temperature. Although not a standard offering currently, it is possible to request a DX CRAC unit
with a mismatched compressor / coil pair, for example, a 20 ton compressor used with a 25 ton
cooling coil. Increasing coil size is done at the design stage, and might increase the unit size and
initial cost.

 Adjusting controls for wider tolerance of “cut-in” and “cut-out” settings, allowing indoor humidity
levels to swing by 10% RH or more. This is possible by adjusting the dead band setting in the CRAC
units.

The savings from this measure come from reduced control overlap between adjacent CRAC units,
i.e. the controls of one machine calling for humidification while the controls of the neighboring
machine calling for de-humidification.

 Coordinating the unit controls of multiple independent CRAC units to act like a single large CRAC
unit. The savings from this measure are similar to widening the control settings, which reduce the
overlap of control between adjacent CRAC units.

Note: Depending upon IT hardware heat density, naturally there will be different conditions at different
CRAC units. Therefore the independent control of temperature at each CRAC unit would be appropriate.

 Increasing air flow to raise average coil surface temperature and air temperature.

Note: This measure would increase fan energy consumption sharply and may exceed the savings
achieved by controlling humidification.

Other sources of moisture come from envelope losses, especially in dry weather conditions. In wet
conditions, these may be nil, or even beneficial. Moisture losses from the envelope can be mitigated
by reducing air pressure differences between adjacent spaces, and by vestibules, gasketed doors, and
an effective air barrier / vapor barrier on the floor, walls, plenums, and ceiling.

The project involves fine tuning of the existing cooling system. The project does
not require any capital investment. The project can be taken up by the in-house
maintenance team.

CASE STUDY

AVOID CRAC UNITS DEMAND FIGHTING

Background

A large Data Center equipped with multiple CRAC units has an additional problem with the
controllability of humidifiers. It is common in such cases for two CRAC units to be fighting each other
to control humidity.

The problem can occur for the following reasons.

 If the return air to the two CRAC units is at slightly different temperatures

 If the calibrations of the two humidity sensors disagree

 If the CRAC units are set to different humidity settings.

One CRAC unit would dehumidify the air while the other unit might humidify the air. This mode of
operation is extremely wasteful, yet is not readily noticeable to the data center operators.

Measures

The problem of demand fighting can be avoided by adopting any of the following techniques.

Centralized humidity control

The feedback for control of humidity levels can be made centralized. All the humidifiers in the CRAC
units can be made to operate in common mode to control the humidity levels inside the Data Center.

Coordinated control among the CRAC units

Coordinate or synchronize the unit controls of multiple independent CRAC units to act like a single
large CRAC unit. The coordinated control would reduce the control overlap between adjacent CRAC
units and avoids unnecessary process of dehumidifying and humidifying the air.

Turning off the humidifiers in the CRACs

In a large Data Center, the control of a large number of humidifiers becomes complex and might
result in unnecessary operation of humidifiers which may not be apparent to the operating personnel.

Turning off some of the humidifiers would result in easy and better controllability of the system.

Higher dead band settings

Setting the CRAC units to operate at higher dead band ranges can avoid the operational overlap of
humidifiers in adjacent CRAC units. The dead band humidity setting should be at least +/-5% RH.

Each of these techniques has advantages, which are not discussed in detail in this manual. The best
way to fix the problem is to verify that the cooling systems are set to the same settings and are
properly calibrated.

The project involves fine tuning of the existing cooling system. The project does
not require any capital investment. The project can be taken up by the in-house
maintenance team.

CASE STUDY

OPERATE IT EQUIPMENT COOLING SYSTEM AT A HIGHER TEMPERATURE DIFFERENCE

The difference in temperature between two measured points is termed Delta-T (ΔT). The term ΔT is
used either to describe the heating of air as it passes through IT equipment or the cooling of air as it
passes through cooling equipment.

Until very recently, all IT equipment operated with constant-speed fans to accommodate worst-case
inlet temperatures. These IT systems were designed in an era when peak cooling and energy efficiency
were lower priorities. Presently, we know that constant-speed fans are very inefficient for IT equipment
whose inlet air temperatures are in a lower, more typical temperature range. For applications that
require high airflow rates to cool components, a constant-speed fan is inefficient; it basically over-
cools in a scenario with low room temperatures or low IT power dissipation. The Figure below shows
system fan power for three servers as it varies with ambient temperature. The exponential power increase
from low to high speed is the result of a much smaller linear increase in the airflow rate. The result is
significant power savings because of the ability to throttle down the airflow when it is not needed.

DC FAN POWER VERSUS AMBIENT TEMPERATURES FOR THREE DIFFERENT SERVERS

The delta-T is inversely proportional to the amount of airflow through the IT equipment. For example,
if the flow rate is decreased by half, the delta-T is doubled. Thus, one advantage of using IT equipment
with higher delta-T is the power savings associated with the fans running at a lower speed.
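This inverse relationship follows from the air-side heat balance, P = m_dot x c_p x ΔT. The
short sketch below (Python, with illustrative load and airflow values) makes the halving and
doubling explicit.

    # Air-side heat balance: P (kW) = m_dot (kg/s) * c_p (kJ/kg.K) * delta_T (K).
    # For a fixed IT load, halving the airflow doubles delta-T. Values assumed.
    p_kw = 10.0          # heat dissipated by the IT equipment
    c_p = 1.005          # specific heat of air, kJ/kg.K
    air_density = 1.2    # kg/m3

    for flow_m3s in (0.8, 0.4):              # full flow, then half flow
        m_dot = air_density * flow_m3s
        delta_t = p_kw / (m_dot * c_p)
        print(f"flow {flow_m3s} m3/s -> delta-T {delta_t:.1f} K")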

Another advantage of a higher equipment delta-T is a higher return temperature at the PAC, allowing
the PAC to operate closer to the rated cooling capacity. This can then lead to fewer PACs needed to cool
the same load and, in turn, less energy required for fans in the PACs. The warmer the air passing through
the PAC coil, the less likely the entire coil surface area will experience a temperature below the dew
point. If more of the coil surface is positioned above the dew point, condensation is decreased. Thus
energy wastage is avoided which otherwise occurs in both the condensation and the humidification
process.

CASE STUDY

THERMAL STORAGE SYSTEM FOR EMERGENCY COOLING REQUIREMENTS

Background
A power sag or complete outage can cause the cooling system to shut down temporarily. In data centers
with high power and heat densities, a power outage can cause a rapid increase in temperature. This is
due to the temporary shutdown of the cooling system, while the servers keep producing heat as they are
supplied from the UPS. A rapid increase in temperature may cause severe damage to IT equipment.
Calculations show that if cooling is interrupted, a high density data center may take only 18 seconds to
reach 40°C and 35 seconds to reach 57°C.
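Numbers of that order can be reproduced with a simple lumped air-mass estimate. The sketch
below (Python) ignores the thermal mass of racks and walls, so it is a worst-case screening;
the room volume and IT load are assumptions.

    # Order-of-magnitude time for room air to heat up when cooling stops.
    # Lumped air mass only; racks and walls ignored (worst case). Values assumed.
    room_volume_m3 = 500.0
    air_density = 1.2            # kg/m3
    c_p = 1005.0                 # J/kg.K
    it_load_w = 400_000.0        # 400 kW high-density load

    temp_rise_k = 40.0 - 22.0    # from 22 deg C to 40 deg C
    thermal_mass = room_volume_m3 * air_density * c_p   # J/K
    time_s = thermal_mass * temp_rise_k / it_load_w
    print(f"~{time_s:.0f} s to reach 40 deg C")         # tens of seconds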

This necessitated implementing a low cost thermal storage system that maintains cooling at a high
density data center during an electrical power outage. The system enables the data center to operate
even during the power outage without affecting the IT process. The system has an auxiliary thermal
storage tank that feeds water into the chilled water supply lines if the main chillers stop working due
to a power outage.

There are several methods for increasing the resilience of the data center cooling system to power
disturbances. Some data centers, requiring high availability, use standby generators for chillers. However,
these significantly add to the data center cost. Also, generators take several seconds to start up,
after which it takes several minutes to restart the chillers. These delays may be acceptable in low-
density data centers, because temperature increases slowly, whereas in high-density data centers even
a few seconds of delay can cause problems due to rapid temperature rise.

Thermal storage methods were found to offer an alternative solution, providing varying degrees of
flexibility to power failures at much reduced cost with less complexity. Thermal storage can extend the
ability to cool data center IT equipment in the event of a power failure by using thermal reserves to
provide temporary cooling during power outage.

Operation of Thermal Storage System


During normal operation, the centrifugal chillers supply chilled water at 13°C to the CRAHs that cool
the IT equipment. Meanwhile, the low temperature scroll chillers maintain a trickle of cold water to the
thermal storage tanks, which keeps the tanks at about 6°C. Storing water at this temperature reduces
the cost and size of the thermal storage tanks. Also, in the event of a power outage, the cold water from
the tanks passes through the system for a longer time before it becomes too hot.

During normal operation, the thermal storage tank valves are closed, isolating the 6°C water in the
tanks from the main 13°C chilled water system.

In the event of a power outage, the centrifugal chillers stop. The thermal storage tank valves open and
add water from the tanks at 6°C into the main 13°C supply feeder line, helping to keep the main chilled
water supply within the required temperature limit.

Project details
A thermal storage system has been installed at the data center facility of a leading company. The
facility uses both centrifugal and scroll chillers. The three 1200 TR centrifugal chillers, which are more
efficient in terms of kW/TR, supply the main cooling system with chilled water at 13°C. This is used for
sensible cooling of the areas housing IT equipment and power supplies. There are two small 175 TR
scroll chillers supplying a smaller capacity system with chilled water at 6°C. This is used for latent
cooling and non-critical loads. The 6°C system also cools the water in the thermal reserve tanks.

During a utility power outage, the chiller plant shuts down and then takes several minutes to resume
normal cooling. The UPS continues to power the IT equipment in the data center, which keeps producing
heat. Under such conditions, the servers in the data center will suffer thermal damage and shut down
due to high ambient temperature. In some cases the damaged servers would cost a huge investment
to replace.

The solution was to install a large supplemental thermal reserve system. The system was based on two
100 m3 cold water tanks. The tanks were sized to provide enough capacity to cool the data center for 7
minutes longer than the UPS battery life.
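The usable cooling energy in such tanks is a straightforward sensible-heat calculation. The
sketch below (Python) uses the two 100 m3 tanks and the 6°C/13°C temperatures from this case
study; the data center cooling load is an assumed figure, so the resulting ride-through time
is only indicative.

    # Ride-through estimate for the thermal reserve tanks.
    # Usable energy = V * rho * c_p * (T_supply - T_tank). Load is assumed.
    tank_volume_m3 = 2 * 100.0     # two 100 m3 tanks (from the case study)
    rho, c_p = 1000.0, 4186.0      # water properties: kg/m3, J/kg.K
    t_tank, t_supply = 6.0, 13.0   # tank water vs. chilled water supply, deg C

    usable_j = tank_volume_m3 * rho * c_p * (t_supply - t_tank)

    cooling_load_w = 7_000_000.0   # assumed 7 MW data center cooling load
    ride_through_min = usable_j / cooling_load_w / 60.0
    print(f"Ride-through ~ {ride_through_min:.0f} minutes at this load")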

The chilled water pumps are on a separate, backed-up facility; therefore, they keep the cold water
moving through the CRAH cooling coils. The CRAH fans are also on the UPS, so they continue to move
air through the cooling coils and deliver cold air into the data center space.

Benefits of the project


Thermal storage system enables uninterrupted cooling operation to facilitate continuous IT process
even during power outages.

The thermal storage would find replication in the scenario where the IT process involves
mission critical applications. The project involves major revamp of the cooling infrastructure
which requires external expertise. The project being highly capital intensive, has to be taken
up as part of business decision.

CASE STUDY

EXPANDING DATA CENTER CAPACITY WITH WATER-COOLED CABINETS

Background
The dynamic growth in the number of projects led to an unplanned demand for additional computing
capacity at one of the older data centers of an IT firm.

There was a great need to install new servers to support rapid growth in design computing needs.
Installation of new servers in the existing space would result in increased server density, which
increases the heat load in the data center. The data center lacked the cooling capacity to handle
the increased heat load.

The difficulty was addressed by installing 26 water-cooled cabinets to contain a total of 2184 servers.
This doubled the capacity of the existing room while adding only 600 kW to the total power load.
Each cabinet contained about 84 blades with a power load of 23 kW/cabinet.

Project Details
The water-cooled cabinet system was implemented on a trial basis to determine the cooling performance,
reliability and redundancy. The sealed cabinets used the facility’s existing chilled water supply to cool
air that re-circulated within the cabinets to cool the servers. It was observed that the cabinets effectively
cooled all servers even under full load condition.

After observing the satisfactory operation, the water-cooled cabinet systems were installed in a 5000
sq ft server room. Chilled water was supplied to the cabinets using pipes beneath the raised floor. The
room already contained about 2000 servers in conventional air cooled racks, but there was enough
space to accommodate additional cabinets.

A total of 26 water-cooled cabinets were installed. Each cabinet contained 6 blade chassis with up to 14
blades per chassis, for a total of up to 84 blades per cabinet. This created an average load of
approximately 21 kW per cabinet. The 2184 additional servers in water-cooled cabinets increased the
total number of servers in the room to more than 4000, which is double the previous number of servers.

The sealed, water-cooled cabinets produced very little external heat, up to 1 kW per cabinet. Therefore,
their effect on the room’s ambient air temperature was found to be negligible. However, the cabinets
placed an increased load on the data center chilled water system, requiring the installation of additional
chillers and piping.

Benefits of the project

Using water-cooled cabinets, which provide highly power-efficient cooling, the firm was able to quickly
add server capacity in the existing space to support business requirements.

Water cooled cabinets can be useful for increasing capacity of older data centers.

Pros
 Used for high density cooling solutions

 Removes heat at source and reduces load on CRAC units

 Modular and flexible system that can be expanded

 Flexible hose connectors and under-floor manifolds allow reconfiguration of racks in the future

 Increased compute power capacity up to 8X compared to air cooled facilities

 Can be retrofitted for existing installation

Cons
 Water-cooled cabinets are more expensive than air-cooled cabinets

 They introduce complexity, requiring additional monitoring

The water-cooled cabinet would find replication in scenarios of capacity up-
gradation of older Data Centers. The project involves major modification of the cooling
infrastructure, which requires external expertise. The project, being highly capital
intensive, has to be taken up as part of a business decision.

CHAPTER 7

DATA CENTER IT SYSTEMS


7.1 Introduction to Server technology
A server is a heavy-duty computer, designed to be the core of a network. Hardware requirements
for servers vary, depending on the server application. Servers are often rack-mounted and
situated in server rooms for convenience, with physical access restricted for security reasons.
The increasing business growth demands expansion of both computing and storage resources
in data centers. The server market has witnessed a noticeable transition in server technology.
The latest step in the evolution of server technology is the high density server known as the blade.

A server rack is a metal enclosure deployed in data centers and server rooms to securely house rack
mountable servers. In addition to housing servers, data center managers often mount other IT and
networking equipment in these racks. Racks have gone by many names including server cabinets,
19" racks, rack mount enclosures, network enclosures, 19 inch rack enclosures, data racks, etc.

7.1.1 Server Power


Servers operate over a range of DC voltages, but utilities deliver power as AC, and at higher
voltages than required by the computers. Converting this current requires one or more power
distribution units (PDUs). To ensure that the failure of one power source does not affect the
operation of the computer, even entry-level servers have redundant power supply sources. Figure
7.1 shows the typical breakup of power consumption in a server.
Figure 7.1: Consumption for a Typical Server

(Source: USEPA report)

7.1.2 Cooling Of Servers
The continuous operation of IT equipment generates a substantial amount of heat that must
be removed from the data center for proper operation of the IT equipment. The chilled air from
the under-floor plenum is drawn through the servers by the fans in the enclosure, which carry
the heat away from the servers.

Blade servers are servers with a modular design optimized to minimize the number of
conventional servers and the physical space. A server blade is an entire server (processor, system
memory, network connection, and associated electronics) on a single motherboard which slides
into an enclosure that can contain a number of blades.

Blade servers are suitable for specific purposes such as web hosting and cluster computing.
Blades are considered to be hot-swappable, which means one can add new blades or remove
existing ones while the system is powered on.

The advancement of server technology and the increase in demand for higher computational
capacity have severely challenged the traditional air conditioning and power distribution systems
of a data center. As computational power increased, the power consumption of the servers
also increased: from a mere 2 kW/rack, consumption has increased to 40 kW/rack, creating
hot zones in the data center space. This challenges the traditional power distribution systems,
demanding modification and dynamic operation.

IT equipment and the precision air conditioning units consume the major share of power in a
data center. Any saving in IT load has a direct impact on the loading of most of the support
systems, such as the cooling system, UPS system, and power distribution units, and thereby on
the overall energy performance of the data center.

7.2 Energy Conservation Tips in IT Equipment


This section briefly discusses various energy saving possibilities in IT and IT peripheral systems.
The following energy saving techniques are supported by the case studies and write-ups provided
later in this section.

7.2.1 Monitor Utilization of Servers, Storage, and Network devices


IT devices are sized for the present maximum resource requirement, with an allowance for
predicted future requirements.

The capacity utilization of IT devices depends on the diversity factor, which in turn depends on the
kind of operation. IT systems are often under-utilized, which generally leads to poor operational efficiency.

The key causes of poor energy performance in a data center are:


 Utilization of servers at a fraction of their processing capability
 Over-sizing and infrequent access of the data storage system
 Low data transfer rates in the network
Continuous monitoring of utilization rates will allow data center personnel to plan the necessary
actions to optimize the performance of IT devices. A monitoring sketch is given below.
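As an illustration, the following is a minimal sketch of host-level utilization sampling, assuming the
third-party psutil package is installed; a production data center would normally rely on an agent-based
monitoring suite instead.

```python
import psutil  # third-party package, assumed installed: pip install psutil

def sample_utilization(interval_s: float = 5.0, samples: int = 3) -> None:
    """Print CPU, memory, and disk utilization at a fixed interval."""
    for _ in range(samples):
        cpu = psutil.cpu_percent(interval=interval_s)  # % CPU over the interval
        mem = psutil.virtual_memory().percent          # % RAM in use
        disk = psutil.disk_usage("/").percent          # % of root volume used
        print(f"cpu={cpu:.1f}%  mem={mem:.1f}%  disk={disk:.1f}%")

if __name__ == "__main__":
    sample_utilization()
```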

7.2.2 Server consolidation


A data center consists of various types of servers based on the application. The utilization rate of
the servers depends on the criticality of the operation or mission of the data center.

A low utilization rate of servers results in poor energy performance. The performance of a
datacenter can be improved by server consolidation and virtualization technology, coupled with
redefining the utility (HVAC and power) requirements.

Virtualization technology improves data center performance through the integration of


multiple servers into a single high density server. It uses shared resources to support the operation,
which typically reduces the space requirement by up to 50% and, accordingly, the cabling
requirement to one-eighth. A rough sizing sketch follows.
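A minimal sizing sketch under hypothetical utilization figures (none of the numbers below come from
this manual) shows how the consolidation ratio falls out of average utilization and per-host capacity.

```python
# All figures below are illustrative assumptions, not data from this manual.
old_servers = 300
avg_utilization = 0.10        # typical under-utilized legacy server
target_utilization = 0.60     # safe ceiling on the virtualized host
host_capacity_ratio = 4.0     # one new host ~ 4x an old server's capacity

total_work = old_servers * avg_utilization               # in old-server units
hosts_needed = total_work / (target_utilization * host_capacity_ratio)
print(f"{old_servers} old servers -> ~{hosts_needed:.0f} virtualized hosts "
      f"(consolidation ratio ~{old_servers / hosts_needed:.0f}:1)")
```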

7.2.3 Network-Attached Storage (NAS or SAN)


The storage disk drives in servers consume a significant percentage of the total energy used by servers.
These drives have a low utilization rate, depending on the mission and application.

The standard practice is to continuously monitor the utilization rate of the disk drives. The
utilization rate varies widely when the storage serves certain applications such as engineering
services, laboratory services, etc.

The storage disk drives in servers consume almost constant power irrespective of the task performed.
Therefore, intermittent usage of the storage degrades the energy performance of the storage system.

Consolidation of on-board disk drives to a network-attached (NAS or SAN) data storage device
is an effective energy performance improvement technique, which allocates and dynamically
uses the necessary storage space from the network of storage systems.

7.2.4 Allocate active Storage for critical data handling
It is common to have more storage allocated to processing tasks than is needed, and to have that
storage accessed infrequently. Allocation of excess storage space can result in poor energy
performance, as storage devices draw energy whether they are in active use or not.

Explore the possibility of allocating active data storage systems only for critical data, moving
less performance-sensitive data to higher-capacity, more efficient passive media.

7.2.5 Computing Performance Metrics for New IT Equipment


Performance metrics that use computational performance to define the energy performance
of IT equipment are an important procurement tool.

Such a metric allows comparison of overall computing efficiency and accounts for concerns
such as processor efficiency, hardware/software compatibility, memory efficiency, etc.

For example, the SPECjbb benchmark is one such performance comparison metric used
universally for servers.

7.2.6 Virtualization of Network devices


Network virtualization technologies enable increased utilization of network resources and
more control over the allocation of resources, adding operational flexibility and scalability.
They also reduce the cost and complexity of managing the network infrastructure by
reducing the number of physical devices in the network.

CASE STUDY

ACCELERATED SERVER REFRESH STRATEGY

The project involved a design-computing server refresh strategy that takes advantage of
increasing server performance and energy efficiency to reduce operational cost in the data center.

Background
The server is one of the primary elements of a data center. It provides a particular and specific service
to the other machines connected to it. The power consumption of a server is largely determined by the
computational capacity and the architecture of its processor.

Advancements in processor architecture have improved the computational performance of servers.
Performance improved drastically with the introduction of "multiple core" architecture processors.
These processors support parallel processing and multi-threading techniques, which enable the
processing of multiple programs simultaneously in high density computational systems.

The use of high density computational servers enables virtualization and consolidation of conventional
servers and reduces the consumption of resources such as power, space, power supply units, and the
cooling system.

The consolidation ratio depends on processor performance and the type of application processed. The
latest servers have high consolidation ratios, reducing the need for facility expansion even as
computational capacity requirements increase, thus avoiding the cost of construction.

Project details
The case is an initiative of a leading semiconductor design company with increasingly complex
design computing requirements. The number of design computing servers had increased from 1000 in
1996 to 68000 in 2007.

As the number of servers in the data center grew over this span with increasing computational
requirements from the business, the company faced the challenge of accommodating the growth
within existing space, cooling, and power, which would otherwise lead to expansion of its
data center facility.

It had become extremely expensive to build a new data center and also to maintain and operate the
growing population of old, less-efficient servers.

To tackle this problem, the company initiated an enterprise-wide data center energy management
program. The company explored an alternative server refresh strategy that takes advantage of
increasing server performance and energy efficiency to reduce costs.

The company performed an extensive analysis to determine the benefits of accelerated server refresh
through reduced data center construction. It analyzed the ROI that could be delivered by adopting
different refresh cadences ranging from one to seven years. For example, with a six-year cadence,
the company would consolidate and replace all design servers more than six years old.

The analysis examined total costs over eight years, assuming that the cost of each new server
would remain stable over the evaluation period and that computing requirements would continue to
increase at 15 percent per year. The analysis also accounted for region-wise cost variation in construction
and utilities. Software cost was not included, since it was already considered as part of the broader
data center efficiency program.

The company found that a four-year refresh cycle delivered the greatest ROI.
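A minimal sketch of this kind of cadence comparison is given below. Only the 15 percent annual
demand growth and the stable server price come from the case study; the per-year performance gain
and the cost coefficients are hypothetical placeholders, so the "best" cadence it prints depends entirely
on those assumptions.

```python
def total_cost(cadence: int, horizon: int = 8, growth: float = 0.15,
               perf_improve: float = 1.2, capex_per_server: float = 1.0,
               opex_per_server: float = 0.25) -> float:
    """Rough total cost of ownership for a given refresh cadence (years)."""
    demand, fleet_perf, cost = 1.0, 1.0, 0.0
    for year in range(horizon):
        if year % cadence == 0:
            fleet_perf = perf_improve ** year          # refresh to current-gen servers
            cost += (demand / fleet_perf) * capex_per_server
        cost += (demand / fleet_perf) * opex_per_server  # power, cooling, space
        demand *= 1 + growth                           # 15%/yr growth (case study)
    return cost

best = min(range(1, 8), key=total_cost)
print(f"Cheapest refresh cadence under these assumptions: every {best} years")
```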

Old server specifications               New server specifications

One rack                                One rack
21 servers                              21 servers
0.85 million business operations        6.30 million business operations
per second (SPECjbb2005 benchmark)      per second (SPECjbb2005 benchmark)
40 square feet                          40 square feet
8 kilowatts                             7.5 kilowatts

Results of the project


Computational capacity increased for the same space and utilities. The new server performance also
boosted productivity across the design services.

With the four-year refresh cycle program, the company achieved savings of nearly Rs 12,500 million
through the best combination of construction avoidance, server refresh costs, and utility savings.

“Computing capacity has increased 7 times in the same space using less power”

Benefits of the project
 Reduction in energy consumption of 700 kW for consolidation of every 500
older servers with quad-core processor servers
 Increase in computational capacity for the same space and utilities
 Improved performance increases productivity

Key factors that affect ROI
 Server cost
 Construction cost
 Utility cost
 IT network infrastructure cost

The project involves a major revamp of the IT infrastructure, which requires
external expertise. Being highly capital intensive, it has to be taken up as part
of a business decision.

CASE STUDY

CONSOLIDATION OF DATA CENTERS FOR IMPROVING ENTERPRISE COMPUTING PERFORMANCE

The project demonstrates the adoption of high density computing equipment for consolidation of
data centers at various sites, improving enterprise computing performance.

Background
The case depicts the success story of a leading company in the development of technologies that
supply the world's most important markets. The company owns various laboratories for R&D services
and production, with a dedicated data center catering to these laboratories.

The consolidation of various site laboratories serving R&D groups, together with expanding
operations, created computational speed issues: the conventional servers consumed more space and
more power, and consequently required more cooling. This demanded additional data center
infrastructure to meet the requirement.

In addition, the company needed to contend with costly power outages and brownouts that were
impacting the availability and performance of its R&D applications. There was a constraint in meeting
the additional power requirement with the existing infrastructure and resources.

To cater to the growing needs of the business, the company recognized the need for newer, high-density
equipment that would occupy less space and use less electric power.

Project details
The company used a combination of its own technology and advanced power and cooling systems to
operate a state-of-the-art laboratory datacenter in Bangalore. The datacenter surmounts obstacles
related to electric power and sets high standards for eco-responsible computing and overall operational
efficiency.

The organization spearheaded the consolidation project, applying datacenter design models and services
such as custom Consolidation Architecture, Design and Migration Services that have proven their ability
to reduce operating costs and improve performance and availability at facilities worldwide. The
consolidation required the migration of applications from approximately 300 older servers to 100 newer
servers. The new high density servers imposed a challenge on the conventional low density cooling design.

Applying advanced cooling technology, the datacenter design utilized a hot aisle containment
technique based on Row Cooling (RC) technology. The RC devices trap and neutralize the heat generated
by the equipment, eliminating the mixing of hot and cold air in the room.

The units sense temperatures and speed up or slow down the cooling fans as required, making for
a very efficient solution.

The datacenter used standard racks that provide a consistent footprint for all users and allow dense
cable configuration, saving space.

Results of the project


The new datacenter increased computational capacity by 154% while reducing electric power
consumption by 17%. It was estimated that the company could increase the server count in the 3,000
square foot datacenter to 3,000 servers or more through further consolidation onto newer server systems.
While helping to reduce costs and power consumption, the new datacenter design also boosted
productivity across the R&D services.

The consolidation allowed the company to offer more tools to engineers with greater availability, better
performance, and a consistent methodology for accessing the resources needed.

The project resulted in cumulative five-year savings of Rs 3.4 million on an investment of Rs 0.93 million.

Benefits of the project
 Reduction in power consumption by 17% and space by 15%
 Increase in computing capacity by 154%

Cost benefit analysis
 Cumulative five-year savings: Rs 34 lakhs
 Investment: Rs 9.25 lakhs
 Payback: 10 months
 Annual ROI: 74%

The project involves consolidation of IT infrastructure at the enterprise level, which
requires external expertise. Being highly capital intensive, it has to be taken up as
part of a business decision.

CASE STUDY

NETWORK VIRTUALISATION FOR PERFORMANCE IMPROVEMENT

The project discusses virtualization of campus-wide networks, which reduces complexity and
increases the availability, manageability, security, scalability, and energy performance of the network.

Background
Data communication is one of the major areas of IT operations. Network systems comprise
various network equipment integrated to enable data transfer in a secure way, and are an integral
part of IT operations.

Increasing business demand causes networks to grow into complex systems, increasing the need
for scalable solutions to segregate groups of network users and their resources.

The increase in network size imposes challenges on prominent features like data security, access control,
resource/service sharing, scalability, and energy performance.

Virtualization of a large network using various networking technologies brings a simple and exclusive
solution to the challenges arising from its complexity.

Network virtualization enables a single physical device or resource to act as multiple logical
versions of itself, shared across the network. Virtualization technology increases the utilization of
networked resources such as servers and storage-area networks (SANs). Approaches such as centralized
policy management, load balancing, dynamic allocation, and virtual firewalls enhance agility, improve
network efficiency, and optimize resources, reducing both capital and operational expenses.

Layer-3 technologies such as GRE (Generic Routing Encapsulation) tunnels and MPLS VPN methods,
which harness the power of Multiprotocol Label Switching (MPLS) to create Virtual Private Networks
(VPNs), enable a simple and effective approach to creating closed user groups on large campus-wide
networks.

Authentication and access-layer security are used for access control, to mitigate threats at the edge and
remove harmful traffic before it reaches the distribution or core layers.

Challenges
Access control: To ensure legitimate users and devices are recognized, classified, and authorized
entry to their assigned portions of the network

Path isolation: To ensure that the authenticated user or device is mapped to the correct secure set of
available resources—effectively, the right VPN

Services edge: To ensure that the right services are accessible to the legitimate set or sets of users and
devices, with centralized policy enforcement

Benefits of the project


 Consolidates multiple networks into one highly available network
 Provides security by keeping customer networks logically separated
 Helps ensure flexibility of network connectivity across the campus
 Establishes a scalable foundation to accommodate future growth needs

Consolidating multiple physical networks reduces operational cost and
makes use of a single, scalable, easy-to-manage platform.

The project finds replication potential wherever there are multiple large independent
networks. It involves a major revamp of the networking infrastructure, which requires
external expertise. Being highly capital intensive, it has to be taken up as part of a
business decision.

CASE STUDY

EFFECTIVE STORAGE UTILIZATION OPTIMIZES DATA CENTER OPERATIONAL COST

The project discusses improving the storage utilization rate and reducing the operational cost of
storage systems.

Background
The company has a large, rapidly growing, and increasingly complex storage environment. Business
growth increased enterprise transactions and applications, driving the expansion of the storage
system. However, the study found that a significant amount of capacity growth resulted from avoidable
factors such as underutilizing existing capacity, storing duplicate copies of existing data, and retaining
data that is no longer required.

At the end of 2007, the company managed 20 petabytes of primary and backup storage infrastructure,
constituting 7 percent of the company’s Total Cost of Ownership (TCO), with storage capacity growing
at 35 to 40 percent per year—a rate that would lead to 90 petabytes of storage capacity by 2012 and
double the storage TCO.

Project details
The company adopted three key strategies for storage optimization.

Strategy 1: Virtualization, Tiering, and Application Alignment


Storage virtualization allowed multiple systems to share a single storage device. The storage
infrastructure includes both SAN and NAS environments.

Virtualized storage environments make it practical to re-tier storage and migrate data among
virtual storage machines with relative ease.

Tiering and application alignment reduced costs by increasing utilization and scalability, enabling
multi-vendor sourcing, and simplifying management. Tiering strategies reduced the overall TCO of storage.

Storage-medium allocation was improved through NAS/SAN virtualization, system-to-application
mapping and alignment, and data migration.

Strategy 2: Capacity Management
Capacity management offered opportunities to significantly improve the utilization of storage devices
through techniques such as thin provisioning, fabric unification, storage reclamation, and capacity
management reports and metrics.

Strategy 3: Data Management


Data management technologies reduced the volume of data to be stored and curbed capacity growth
through techniques such as writable snapshots, de-duplication, next-generation disk-to-disk (D2D)
backup and recovery, etc.

Key issues faced


The company experienced difficulties in:

 Demand forecasting

There was no specified requirement from the client/user, nor any effective tool or method for forecasting

 Capacity management

Inadequate reporting on storage capacity and performance led to inefficiencies in planning,
provisioning, tiering, and purchasing. Here again, inadequate or nonexistent tools made it difficult to
forecast and maintain optimal free capacity

 Alignment between data value and storage technologies

Current tools limited the operations group's ability to efficiently classify data and match it to the
appropriate storage infrastructure and services based on its value. This resulted in over-provisioning of
services and higher expenses

 Storage services management

Processes, roles, and responsibilities were often duplicated. There were wide variations in approaches
and technologies, which led to misinformation and inaccurate planning and made it harder to manage
headcount growth

 Archive and purge policies

The default methodology was to retain almost all data, with few broad policies or enforcement
regarding retention and deletion. Data with little or no value was often retained indefinitely in costly
primary-storage infrastructure, with multiple copies for operational and disaster recovery purposes

Benefits of the project
The estimated TCO in the full-potential scenario would be just 13 percent higher in 2014 than in 2007,
while the estimated baseline TCO would increase by 279 percent over the same period. Those
numbers are a strong confirmation of the benefits of a holistic approach to storage optimization, and
are reinforced by equally impressive results in controlling power consumption.

The project has high replication potential wherever storage cost is significant in the
company's total operational cost. It involves a major revamp of the storage
infrastructure, which requires external expertise. Being capital intensive, it has to be
taken up as part of a business decision.

CASE STUDY

INFRASTRUCTURE EFFICIENCY IMPROVEMENT IN A VIRTUALIZED ENVIRONMENT

Background
Virtualization of servers reduces the server population and the power and cooling requirement,
resulting in reduced IT power and overall data center power consumption. The key characteristic of
virtualization is high density operation of servers. It also introduces new operational challenges in the
datacenter environment.

A virtualized environment imposes three key challenges on datacenter personnel.

 Increased server criticality


Virtualization brings ever higher processor utilization, increasing the business importance
of each physical server, which makes effective power and cooling even more critical in
maintaining availability

 Dynamic and migrating high-density loads


With virtualization, applications can be dynamically started and stopped, resulting in loads that
change both over time and in physical location. This adds a new challenge to the architecture and
management of power and cooling

 Under-loading of power & cooling systems


If power & cooling capacity is not optimized to the new, lower IT load, data center infrastructure
efficiency (DCiE) will go down after virtualization. Scaling down the infrastructure to match
the IT load therefore further reduces the power consumption of the datacenter

DYNAMIC AND MIGRATING HIGH-DENSITY LOADS
Virtualization enables dynamic load allocation on servers and high processor utilization, inducing
localized high-density hot zones in the data center.

The dynamic loading of servers shifts the thermal profile of the room with no visible
physical changes in equipment. The schematic condition is shown in figure 1.

Figure 1: Migration of load over a period of time in a virtualized environment

Under such conditions, conventional room-based cooling sometimes becomes ineffective
even with a cooling capacity more than twice the actual requirement.

Rack-based or targeted cooling located close to the load acts as a supplement and provides an
effective solution to remove the high density heat load efficiently. Figure 2 shows the schematic
of a row/rack based cooling system.

Figure 2: Row based cooling or targeted cooling system

The key characteristics of a row based cooling system are:

 Short air path between cooling and load

 Dynamic response to load changes

A row based cooling system increases cooling efficiency and availability through:

 Reduced mixing of cold supply air with hot return air

 Increased return temperature (which increases the rate of heat transfer to the coil)

 Targeted cooling that easily responds to localized demand

 Conservation of fan power

 Reduced – often eliminated – need for make-up humidification (to restore the moisture removed
by condensation on a too-cold coil resulting from a too-low set point)

UNDER-LOADING OF POWER & COOLING SYSTEMS


The capacity of the power & cooling system is designed with high redundancy to maintain maximum
uptime of the data center. Operating with high redundancy levels tends to under-load all the
equipment, resulting in inefficient operation. It is difficult to improve the efficiency of systems
built on a conventional monolithic architecture, where efficiency can be varied only by varying
the load. A scalable and modular architecture provides an ideal platform to overcome this difficulty:
the number of operating modules is controlled to maintain the loading on each module at an
optimum level.

Effect of under-utilization
All power & cooling devices have electrical losses (inefficiency) dissipated as heat. A portion of this loss
is fixed loss – power consumed irrespective of the load. At no load (idle), the power consumed by the
device does no useful work.

As load increases, the device's fixed loss stays the same while other losses increase in proportion to the
load; these are called proportional losses.

As load increases, fixed loss becomes a smaller and smaller portion of the total energy used; as load
decreases, fixed loss becomes a larger portion of the total energy used. The sketch below illustrates this.
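A minimal sketch of this fixed-plus-proportional loss model; the loss coefficients below are
hypothetical, and real values would come from device datasheets.

```python
def efficiency(load_kw: float, fixed_loss_kw: float = 3.0,
               prop_loss_frac: float = 0.05) -> float:
    """Efficiency = useful output / total input for a power or cooling device."""
    losses = fixed_loss_kw + prop_loss_frac * load_kw
    return load_kw / (load_kw + losses)

for load in (10, 25, 50, 100):  # fixed loss dominates at low load
    print(f"{load:>3} kW load -> {efficiency(load):.1%} efficient")
```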

Virtualization improves IT system energy performance and reduces the load on the power & cooling
systems. The reduced load lowers the loading on the power & cooling equipment, resulting in
inefficient operation. Power & cooling devices that can scale in capacity will reduce fixed losses and
increase efficiency. A scalable architecture facilitates not only downsizing to follow IT consolidation,
but also subsequent growth to follow expansion of the virtualized IT load, as shown in figure 3.

Figure 3: Scalable power and cooling to minimize the inefficiency of unused capacity
during consolidation and growth

Data center infrastructure efficiency (DCiE) will go down after virtualization, due to fixed losses
in unused power & cooling capacity. With power and cooling optimized to minimize unused capacity,
efficiency (DCiE) can be brought back to nearly pre-virtualization levels – sometimes even better,
depending upon the nature of improvements to the cooling architecture.
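A minimal sketch of that effect, using the fixed-plus-proportional loss model with hypothetical kW
figures (none of these numbers come from the manual):

```python
def dcie(it_kw: float, fixed_overhead_kw: float, prop_overhead_frac: float) -> float:
    """DCiE (%) = IT power / total facility power."""
    total_kw = it_kw + fixed_overhead_kw + prop_overhead_frac * it_kw
    return it_kw / total_kw * 100

print(f"Before virtualization:             {dcie(500, 150, 0.4):.0f}%")
print(f"After, infrastructure unchanged:   {dcie(250, 150, 0.4):.0f}%")
print(f"After, infrastructure right-sized: {dcie(250, 75, 0.4):.0f}%")
```

With the IT load halved but the infrastructure unchanged, the fixed overhead becomes a larger share
of the total and DCiE falls; right-sizing the fixed overhead restores it.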

Datacenter efficiency as a function of IT load


Datacenter efficiency is always higher at high IT loads; it is therefore a function of IT load.
A typical datacenter infrastructure efficiency curve is shown in figure 4.

Every data center will have a higher or lower infrastructure efficiency curve depending upon the
efficiency of its individual devices and of its system configuration. The curve always starts at zero
and follows the shape shown in figure 4.

Figure 4: Typical data center infrastructure efficiency curve

Virtualization will always reduce power consumption due to the optimization and consolidation of
computing devices. However, if no concurrent downsizing or efficiency improvement is made to the
power and cooling infrastructure, the infrastructure efficiency (DCiE) will move down the curve
because of the reduced IT load, as shown in figure 5.

Figure 5: Consolidation reduces infrastructure efficiency if the data center's
infrastructure remains unchanged

The data center's infrastructure efficiency curve must be raised by optimizing the power & cooling
systems to reduce fixed losses and by matching infrastructure capacity to the new IT load. Optimizing
infrastructure capacity changes the infrastructure efficiency curve and improves the post-virtualization
DCiE, as shown in figure 6.

The greatest impact on the efficiency curve can be made by going from room-based to row-based
cooling and by "right-sizing" the power & cooling systems. In addition to improving efficiency,
optimization of power & cooling directly reduces power consumption.

Figure 6: Optimized power & cooling raises the efficiency curve and improves DCiE

To realize the full energy-saving benefits of virtualization, the following design elements can be
incorporated:

 Power & cooling capacity scaled down to match the load
 VFD fans and pumps that slow down when demand goes down
 Equipment with better device efficiency, to consume less power
 Cooling architecture with shorter air paths (e.g. changing from room-based to row-based)
 A capacity management system, to balance capacity with demand
 Blanking panels to reduce in-rack air mixing

The project has high replication potential in a virtualized environment.
It involves minor modification of the power and cooling infrastructure,
which requires external expertise. Being capital intensive, it has to be
taken up as part of a business decision.

CHAPTER – 8
OPERATION AND MAINTENANCE
8.1 NECESSITY OF O&M
Operation & maintenance of critical facilities like data centers is recognized as an important
tool for achieving operational excellence in the facility.

Operational excellence translates to reliable, safe, secure, and efficient operation of all
business systems in a data center.

As the robustness and associated complexity of critical infrastructures have increased, the
importance of establishing equally robust O&M practices to manage these facilities has become
apparent. This activity includes routine switching and reconfiguration of critical systems and
maintenance tasks. Once a datacenter has been designed and put into operation, it is the
responsibility of the operations and maintenance personnel to operate, monitor, report on,
and maintain the infrastructure.

One of the most important roles of an O&M manager is to get the attention of the management
through regular performance reports.

Every site needs to define key performance indices and monitor, report, and drive improvement in
them. This can be facilitated by proper auditing, logging, and analysis of the key parameters.
MIS is a powerful tool with the potential to drive changes and lower energy and operating costs.
A typical MIS format is included in this chapter and can be referred to for monitoring the
performance metrics of a datacenter.

Outlined below in this section are a few typical indices practiced in the industry for energy
performance monitoring.

8.2 TECHNICAL DOCUMENTATION & INTERNAL TRAINING


The technical documentation shall comprise the following:

 Design intent with system description


 All testing and commissioning forms, completed and approved
 All non-conformance (NCR) items and snag lists, completed and approved
 Training completion details for operating and maintenance personnel
 Warranty information
 Operation and maintenance (O&M) manual customized to the installed system
 Standard O&M manuals of all individual equipment
 As-built drawings in both soft (*.dwg format) and hard copies
 Single line diagrams/schematics of all systems (MEP, Monitoring & Controls, Fire & Safety)
in both soft (*.dwg format) and hard copies
 List of spare parts and special tools to be used

8.3 MANPOWER AND TRAINING


The importance of a properly defined and standardized operating and maintenance process is
a well-accepted fact in any industry. The requisite staff and processes necessary to support
continuous operations must be in place on the first day the site goes live and must continue
through to the final day that critical operations occur.

In addition to regular training on O&M procedures, it is recommended that operators be
sensitized to the energy and environmental impact of their actions. Lower energy and operating
costs can be realized when every individual realizes the necessity of their contribution to
overall energy conservation.

8.4 HOUSEKEEPING
Maintaining a clean and dust-free environment yields better air movement through
CRAC units and IT equipment, apart from minimizing the equipment failures caused by dust
and other particulate matter. The site should have a properly defined and regular cleaning
process.

The 5S methodology emphasizes self-discipline and orderliness. It is recommended to adopt 5S


practices to organize and keep up the datacenter.

8.5 PREVENTIVE MAINTENANCE


Most facilities have some level of planned maintenance. Routine tasks based on time intervals,
or frequency, are referred to as preventive maintenance. The limitation here is that the tasks
occur regardless of actual operating condition and generally involve a shutdown of the
equipment, which may be difficult in datacenters. Monitoring metrics like mean time between
failures (MTBF) can indicate the effectiveness of a preventive maintenance program.

8.6 BREAKDOWN MAINTENANCE
This generally refers to repairs carried out after equipment has failed. In a datacenter
environment this can prove costly. There has to be a proper response mechanism for failures,
and proper root cause analysis is necessary to prevent future recurrence. Monitoring indices
like mean time to repair (MTTR) is beneficial; a small sketch of both metrics follows.
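A minimal sketch of both metrics, computed from a hypothetical incident log (the durations below
are illustrative, not data from this manual):

```python
from datetime import timedelta

# (uptime before failure, time to repair) per incident -- illustrative values
incidents = [
    (timedelta(days=90),  timedelta(hours=4)),
    (timedelta(days=120), timedelta(hours=2)),
    (timedelta(days=75),  timedelta(hours=6)),
]

mtbf = sum((up for up, _ in incidents), timedelta()) / len(incidents)
mttr = sum((rep for _, rep in incidents), timedelta()) / len(incidents)
print(f"MTBF = {mtbf}  MTTR = {mttr}")
```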

8.7 CONDITION BASED MONITORING


Condition based monitoring uses technologies that allow maintenance to occur based on actual
operating conditions. A simple example is using a differential pressure sensor to monitor filter
condition: when the filter loads up, the delta-P increases, and the filter is replaced when appropriate.

Condition-monitoring technologies can be used to trend real-time data and predict the
maintenance actions required in advance. This is called predictive maintenance. Thresholds can
be assigned for alert and alarm conditions, and by analyzing the trends, one can predict when
thresholds will be exceeded and even predict failures; a trend-extrapolation sketch is given at
the end of this section.

Some examples of operating condition monitoring technologies include vibration analysis


and infrared thermal scans. These technologies are used to analyze the operating condition of
equipment while it is online, without requiring shutdowns or maintenance outages.

8.8 CAPACITY VS UTILIZATION


Monitoring capacity versus utilization is very important and can reveal opportunities to
improve energy efficiency. Continuous monitoring gives the facilities manager as well as the
IT manager actionable items for optimization, in return yielding reductions in wasted energy
in CPUs, storage, UPS, etc. Care has to be taken to ensure a proper balance between energy
efficiency and reliability.

8.9 METERING & CALIBRATION


Metering device locations should be selected in accordance with the requirements of the PUE
calculation. All electrical, cooling, and lighting energy should be measurable at both input
and output.

Periodic calibration of all sensors and metering devices is crucial and significantly affects all
monitoring and control systems. Every site should have a calibration program through a reputed
calibration agency.

8.10 ROUTINE AUDITS
Due to the dynamic nature of operations, the conditions inside the datacenter can vary
considerably from the original design intent.

These audits can be done manually in the absence of automated data collection
instrumentation.

They help generate information for managing change and for identifying any
potential issues.

Examples of some of the recommended audits are:

 Capacity audit
 Availability audit
 Reliability audit
 Asset audit
 Security audit
 Safety audit
8.11 CHANGE MANAGEMENT
IT downtime on any scale has a negative impact on business value and growth. Maintenance
professionals face a continuous challenge in minimizing the downtime of a datacenter.

Also, any change made to improve the infrastructure involves a certain amount of downtime,
depending on the changes proposed and the procedure adopted.

To address the challenge of minimizing the downtime involved in making changes to
infrastructure, the operations team approaches a change in steps, referred to as the change
management process. The change management process provides for smooth implementation
of patches, upgrades, and other changes in the infrastructure.

The change management process is a set of procedures that provides a platform to assess the
risk to uptime and take the necessary precautions to minimize downtime during the change.
The process therefore significantly reduces the risk to uptime and keeps downtime low.

A properly managed change utilizes the information gathered from the various audits and
proposes changes in infrastructure to ensure the overall optimum performance of the datacenter.

The information collection primarily focuses on:

 Availability of space
 Availability of power and cooling
 Energy performance of the proposed change

User demand for 100% uptime of services and networks pressures IT operations into deploying
changes that aren't adequately tested against the entire infrastructure.

Presently, implementation of changes has become one of the major causes of downtime, with
10% of changes on average rolled back due to production problems. A mature change management
process controls IT changes through testing and change impact analysis, reducing downtime risk.

IT operations often attempt to deploy and test changes in a representative end-to-end pre-
production staging and testing environment. Creating a dedicated staging environment solely
for IT testing generates significant measurable benefits. The completeness of testing is the final
factor in change management maturity. Testing every change is a decision call that the IT
organization makes, balanced against other priorities.

Typical Change Control flow chart

Adopting best practices helps IT operations establish the environment and approach to
concentrate on understanding the impact of a change on the system, rather than only prioritizing
which changes to test.

An organization can establish a mature change management process by adhering to


fundamental change disciplines and by implementing best practices and standards from similar
organizations.

A mature change management process minimizes the operational downtime of the facility,
providing excellent business value.

8.12 DATA CENTER PERFORMANCE METRICS


The energy performance of individual equipment can be assessed using standard metrics available
in practice. The assessment becomes complex when a datacenter is assessed for overall energy
performance. The complexity arises from differing opinions on how to account for the components
of energy use within the datacenter premises for performance evaluation.

BEE recommends the following simple metrics and tools to analyze the energy performance of a
datacenter, to help data center professionals understand the overall system better and improve
the energy efficiency of existing datacenters. They also provide a common platform for comparing
results with other datacenters.

POWER USAGE EFFECTIVENESS (PUE) AND DATACENTER INFRASTRUCTURE EFFICIENCY (DCIE)

PUE = Total Facility Power / IT Equipment Power

DCiE = (IT Equipment Power / Total Facility Power) × 100%

In the above equations, the Total Facility Power is defined as the power measured at the utility
meter — the power dedicated solely to the datacenter (this is important in mixed-use buildings
that house datacenters as one of a number of consumers of power).

The IT Equipment Power is defined as the power drawn by the equipment that is used to manage,
process, store, or route data within the data center. It is important to understand the components
of the loads in these metrics, which can be described as follows:

1. IT EQUIPMENT POWER. This includes the load associated with all of the IT equipment, such as
compute, storage, and network equipment, along with supplemental equipment such as KVM
switches, monitors, and workstations/laptops used to monitor or otherwise control the datacenter.

2. TOTAL FACILITY POWER. This includes everything that supports the IT equipment load, such as:

 Power delivery components such as UPS, switchgear, generators, PDUs, batteries, and
distribution losses external to the IT equipment
 Cooling system components such as chillers, computer room air conditioning units
(CRACs), direct expansion air handler (DX) units, pumps and cooling towers
 Compute, network, and storage nodes
 Other miscellaneous component loads such as datacenter lighting

The PUE or DCiE metric is used to:
 Identify opportunities to improve a datacenter's operational efficiency
 Compare with competitive datacenters
 Track the energy saving measures, designs, and processes implemented
 Identify opportunities to repurpose energy for additional IT equipment

Both metrics convey the same information; they illustrate the energy allocation in the datacenter
in different ways.

For example, if the PUE is determined to be 2.0, this indicates that the datacenter demand is two times
greater than the energy necessary to power the IT equipment. In addition, the ratio can be used as a
multiplier for calculating the real impact of a system's power demands.

For example, if a server demands 400 watts and the PUE for the datacenter is 2.0, then the power drawn
from the utility grid to deliver 400 watts to the server is 800 watts. DCiE is quite useful as well: a DCiE
value of 50% (equivalent to a PUE of 2.0) indicates that the IT equipment consumes 50% of the power
in the datacenter. These definitions translate directly to a calculation, sketched below.

Utility metering in a multi-tenant building should include an exclusive meter for datacenter operation,
since power not consumed within the datacenter would result in faulty PUE and DCiE metrics. For
example, for a datacenter located in an office building, the total power drawn from the utility will be
the sum of the Total Facility Power for the datacenter and the total power consumed by the
non-datacenter offices. In this case the datacenter administrator would have to measure or estimate
the amount of power consumed by the non-datacenter offices (an estimate will obviously introduce
some error into the calculations).

IT Equipment Power should be measured after all power conversion, switching, and conditioning is
completed and before the IT equipment itself. The most likely measurement point is at the output of
the computer room power distribution units (PDUs). This measurement should represent the total
power delivered to the compute equipment racks in the datacenter.

The PUE can range from 1.0 to infinity. Ideally, a PUE value approaching 1.0 would indicate 100%
efficiency (i.e. all power being used by IT equipment only).

8.13 RACK COOLING PERFORMANCE INDEX


RECOMMENDED THERMAL CONDITIONS:
The thermal conditions that may occur in a data center are depicted in the figure. First, facilities
should be designed and operated to target the recommended range. Second, electronic equipment
should be designed to operate within the extremes of the allowable operating environment.
Prolonged exposure to temperatures outside the recommended range can result in decreased
equipment reliability and longevity; exposure to temperatures outside the allowable range may
lead to catastrophic equipment failures. The recommended range and the allowable range vary
with the guideline or standard used. For the recommended temperatures, the ASHRAE thermal
guideline lists 68°–77°F (20°–25°C) for a "Class 1" environment.

RACK COOLING INDICES (RCI):
The RCI is a metric used to measure the thermal condition of the electronic equipment. Specifically,
RCIHI is a measure of the absence of over-temperatures (under-cooled conditions): 100% means that
no over-temperatures exist, and the lower the percentage, the greater the probability that equipment
experiences excessive intake temperatures. RCI values below 80% are generally considered "poor".
Rack cooling indices are unit-independent indications of cooling in a datacenter.
There are four types of temperatures defined for a rack or server:
 Maximum allowable temperature
 Maximum recommended temperature
 Minimum recommended temperature
 Minimum allowable temperature
Based on these temperatures, the unit-independent rack cooling indices RCIHI and RCILO are
formulated as below.

\[
\mathrm{RCI}_{HI} = \left[\, 1 - \frac{\sum_{T_x > T_{max\text{-}rec}} \left( T_x - T_{max\text{-}rec} \right)}{\left( T_{max\text{-}all} - T_{max\text{-}rec} \right)\, n} \,\right] \times 100\,\%
\]

where
  Tx         Mean temperature at intake x [°F or °C]
  n          Total number of intakes
  Tmax-rec   Maximum recommended temperature per some guideline or standard [°F or °C]
  Tmax-all   Maximum allowable temperature per some guideline or standard [°F or °C]

\[
\mathrm{RCI}_{LO} = \left[\, 1 - \frac{\sum_{T_x < T_{min\text{-}rec}} \left( T_{min\text{-}rec} - T_x \right)}{\left( T_{min\text{-}rec} - T_{min\text{-}all} \right)\, n} \,\right] \times 100\,\%
\]

where
  Tmin-rec   Minimum recommended temperature per some guideline or standard [°F or °C]
  Tmin-all   Minimum allowable temperature per some guideline or standard [°F or °C]

The interpretation of the indices is as follows:

RCIHI = 100%   All intake temperatures ≤ max recommended temperature

RCIHI < 100%   At least one intake temperature > max recommended temperature

RCILO = 100%   All intake temperatures ≥ min recommended temperature

RCILO < 100%   At least one intake temperature < min recommended temperature

The index is used to estimate the cooling level, which can be compared with the standard level
specified by the equipment manufacturer.
After calculating the index, the maintenance team can use the inference to formulate specific
steps for performance improvement. A small computational sketch follows.
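A minimal sketch implementing the two indices defined above, with illustrative intake temperatures
and ASHRAE-style limits (in °C) as assumptions:

```python
def rci_hi(temps, t_max_rec=25.0, t_max_all=32.0):
    """RCI_HI (%): 100 means no intake exceeds the max recommended temperature."""
    over = sum(t - t_max_rec for t in temps if t > t_max_rec)
    return (1 - over / ((t_max_all - t_max_rec) * len(temps))) * 100

def rci_lo(temps, t_min_rec=20.0, t_min_all=15.0):
    """RCI_LO (%): 100 means no intake falls below the min recommended temperature."""
    under = sum(t_min_rec - t for t in temps if t < t_min_rec)
    return (1 - under / ((t_min_rec - t_min_all) * len(temps))) * 100

intakes = [21.5, 23.0, 26.5, 24.0, 27.5]  # mean intake temperature per rack
print(f"RCI_HI = {rci_hi(intakes):.1f}%")  # < 100%: some over-temperature exists
print(f"RCI_LO = {rci_lo(intakes):.1f}%")  # 100%: no under-temperature
```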

Case Study

Metering and Monitoring

Background
The case represents the development of a comprehensive approach to metering power usage in a
data center and the adoption of appropriate instrumentation to continuously monitor power usage
effectiveness (PUE), the key metric of data center energy efficiency.

The primary goal of the initiative was to identify current operating costs, set baseline measurements,
and implement improvement measures based on the information gathered on a continuous basis.

The facility is a five year old data center in Bangalore, India. The data center is a 5,500-square-foot
facility of conventional design, with a 24-inch raised floor, a 10-foot-high false ceiling, and room-based
cooling. The datacenter has a power density of 110 watts per square foot (WPSF), a 2(N+1) UPS
power redundancy configuration, and ductless chilled-water-based precision air conditioning (PAC)
units in an N+1 cooling redundancy configuration.

The datacenter improved energy efficiency through metering and continuous monitoring of electrical
energy consumption. Continuous monitoring enabled continuous tracking of PUE, the key data center
efficiency metric, achieved by implementing instrumentation at a very granular level.

Metering the facility's energy utilization in IT and cooling, and estimating losses at key points in the
power distribution, provided useful information for planning and implementing efficiency improvements.

Issues faced in establishing baseline measurement


The datacenter maintenance team faced difficulty in:

 Isolating data center power and cooling loads from the rest of the building loads

 Metering IT power and cooling power at the right points in the power distribution chain to
facilitate collection of useful information for efficiency improvements

 Identifying the optimum level of granularity for energy metering

Project details
The metering was done at three different levels to measure total facility power, IT equipment
power, and cooling system power. Table 1 describes the location of meters in the system.

Table 1: Location of Data Center power and cooling meters

Total facility power


Total facility power is the power measured at the utility meter that is dedicated solely to the data center.

Data center total facility power includes everything that supports the IT equipment load, such as:

 Power delivery components including UPS, switchgear, generators, power distribution units (PDUs),
batteries, and distribution losses external to the IT equipment

 Cooling system components such as chillers, computer room air conditioning (CRAC) units, direct
expansion (DX) air handler units, pumps, cooling towers, and automation

 Compute, network, and storage nodes

 Other miscellaneous component loads, such as data center lighting, the fire protection system, and
so on

Figure 1: Total facility power measurement

IT equipment power
IT equipment power is defined as the effective power used by the equipment that manages, processes,
stores, or routes data within the raised floor space.

The IT equipment load includes:

 The load associated with all of the IT equipment, such as compute, storage, and network equipment

 Supplemental equipment such as keyboards, mice, switches, monitors, workstations, and laptops
used to monitor or otherwise control the data center

Continuous monitoring of row-level energy consumption was found to be the more effective approach
to establishing the baseline. Figure 2 shows the layout of energy meters in the UPS system to measure
total IT power.

Figure 2: UPS metering layout to measure Total IT power

To estimate the total IT power, energy meters were installed to measure energy consumption at row
level in the data center. This enabled metering and monitoring of the facility's total power utilization
at a very granular level, differentiating between energy consumption by the IT equipment and by the
rest of the building facilities.

Cooling system power


The chilled water plant was shared between the data center, labs, and office space. Therefore, flow
meters and temperature sensors were installed to measure the total cooling load of the building and,
in particular, the data center cooling load. All of the meters were integrated into the building
management system (BMS) for continuous availability of measurements. Figure 3 shows the locations
of the energy meters measuring cooling.

Figure 3 : Cooling power metering isolated data center power from the
rest of the building facilities
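A minimal sketch of how a chilled-water cooling load can be derived from such flow and temperature
measurements (Q = m_dot × Cp × ΔT); the readings below are hypothetical:

```python
def cooling_load_kw(flow_lps: float, t_return_c: float, t_supply_c: float) -> float:
    """Chilled-water cooling load in kW from flow (litres/sec) and delta-T."""
    cp_kj_per_kg_k = 4.186   # specific heat of water
    density_kg_per_l = 1.0   # approximately, for chilled water
    m_dot = flow_lps * density_kg_per_l            # mass flow, kg/s
    return m_dot * cp_kj_per_kg_k * (t_return_c - t_supply_c)

# 10 L/s of chilled water with a 6 K rise across the load -> ~251 kW removed
print(f"{cooling_load_kw(10.0, 13.0, 7.0):.0f} kW")
```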
The continuous monitoring and analysis facilitated the implementation of the following energy
saving measures:
 Paralleling the UPS units to increase utilization levels, thereby increasing efficiency and reducing
distribution losses
 Increasing the PAC temperature set point from 19 to 23 degrees Celsius
 Fine-tuning humidity level controls based on the requirement
 Managing airflow inside the data center
 Managing load across the data center floor
 Use of LEDs for emergency lighting inside the data center
 Managing standard lighting to reduce energy consumption
Results of the project
The initiative demonstrated a method to continuously measure and manage PUE in the data center.
It resulted in an annual operational power cost saving of Rs 3.85 million, with more than 10 percent
improvement in overall operational data center efficiency in 2008. The PUE improved from 1.99 to
1.81. Despite an increase in the overall IT load, there was a reduction in total facilities load. Table 2
and Figure 4 summarize the results achieved.

Table 2: Power Cost Savings Yielded by Improvements in Power Usage Effectiveness


(PUE) Efficiency and Projected Potential Savings

Figure 4 : Total facilities load reduction

The project is essential and has to be customized to the facility infrastructure.
It involves minor modification/retrofit of the existing system. Being capital
intensive, it has to be taken up as part of a business decision.

CHAPTER 9

MANAGEMENT APPROACH FOR ENERGY


EFFICIENCY IN DATACENTER
9.0 Challenges in a Data centre
The datacenter is one of the critical areas of operation in any organization, where vital information
in electronic format is stored and processed. The design of the datacenter confronts two challenges.

The first and primary challenge is to maintain availability near 100%, which is critical. The
availability of the datacenter is achieved by integrating multiple redundant systems by design.

Providing multiple redundancies has a major effect on the energy performance of the overall system.
Availability should not come at the cost of energy efficiency; the two should go hand in hand.

The next focus is therefore to maintain the energy performance of the equipment, which impacts
the operating cost. Wherever conditions allow, energy efficient equipment and systems are adopted
and operated.

Ultimately, the degree of energy performance in a datacenter is influenced by the operating


condition of its equipment and systems. Operation and maintenance, in turn, is directly related
to the knowledge level and involvement of the maintenance personnel. Therefore the success of
achieving and sustaining energy savings depends on the knowledge level of the technical
(maintenance) personnel and their involvement towards the goal.
The management has to create a platform for employees to continuously improve their
knowledge levels and encourage them to participate in various forums and discussion groups.

Management therefore plays a crucial role in maintaining the efficient operation of a


datacenter.

The following are some suggestions on the management aspects of datacenter operation.

9.1 Energy management cell and Energy Manager


Successful energy management requires a dedicated team of personnel working towards a
common objective.

The energy management function, whether vested in one 'energy manager or coordinator' or
distributed among a number of middle managers, usually resides somewhere between senior
management and those who control the end-use of energy.

The Energy Manager would lead, coordinate, and champion the energy conservation activities
in an organization. A typical structure is:

Top Management
 ├─ Energy Manager ── Energy management staff
 ├─ Operations Head ── Operations staff
 └─ Maintenance Head ── Maintenance staff

9.2 Awareness programs and campaigns

In general, datacenter maintenance personnel are primarily concerned with maintaining
uptime and with capacity addition for future growth.

Energy efficiency generally finds a place only towards the end of their list of priorities.

The management therefore has to conduct awareness programs to involve and motivate the
personnel responsible for the operation of datacenters, and should undertake the following
employee motivation measures to sustain energy conservation activities:

 Send operating/maintenance personnel to training programs in specific areas like servers,
storage, air management, power distribution, chiller design and operation, etc.

 Organize regular meetings between executives and technology suppliers to learn about the
latest developments in datacenter infrastructure

 Present papers on energy conservation activities in seminars

9.3 Energy Management Plan
Energy management planning is a strategy to identify, implement, and evaluate energy conservation
schemes in an organization. It acts as a reminder of the activities to be performed.

From time to time, review the strategy and action plan and revise them in light of new information
and feedback.

The energy management plan should address:

 The process of identification, implementation, and evaluation of energy saving schemes

 The plan for training and awareness of personnel

 The strategy and commitment to improve efficiency levels, with specific targets and time periods

 The strategy to involve management, middle level managers, and operation and maintenance


personnel, to make energy conservation a sustainable activity

9.4 Metering, Sub-metering and logging


Energy monitoring and logging of all key components in a datacenter informs decisions on
modifications and performance improvements. It eases the process of maintenance by indicating
any abnormalities in operation and improves the reliability and energy performance of the system.

Install sub-meters at key locations to measure the energy distribution among the IT equipment
and support systems. Sub-metering is a powerful tool to accurately measure energy usage and to
prepare the energy balance of the entire system. It enables monitoring of system performance over
time and provides evidence of degradation and improvement.

What is being measured can be managed.

9.5 Instrument Calibration Program


Measurement plays a vital role in the monitoring and control of energy in a system. It is necessary
to ensure the accuracy of measurement so that the management's efforts yield meaningful results.

Initiate a program to verify and calibrate the accuracy of sensors on a regular basis. The program
involves verifying and determining the accuracy of each measuring instrument and calibrating it
against the measurement standard.

What is being measured well can be managed well.

Identify the key parameters that affect the performance of the systems. Provide instrumentation
and monitor all those parameters on a continuous basis. Also maintain calibration records for all
equipment.

9.6 Energy Audit


An energy audit is a systematic checkup that offers a real-time profile and model of the data center's
energy use, making it possible to identify areas of high energy use and establish a baseline for
further improvement activities.

An energy audit involving experts from an external agency can fine-tune the internal process or
approach and help the internal team identify new areas of scope for efficiency improvement.

Based on the audit, several energy saving activities, such as capacity utilization, fine-tuning of
processes, and adopting the latest energy saving technologies, can be initiated.

9.7 Overall performance assessment


The energy consumed in a datacenter is utilized by both IT and its support systems. In general,
the efficient operation of individual equipment is assessed and maintained. This helps in reducing
the energy consumption levels in a datacenter.

The performance of a datacenter indicates the effective ratio of energy utilization between IT and
support systems. The overall performance can be estimated by adopting various metrics:

 PUE (Power Usage Effectiveness)

 DCiE (Data Center Infrastructure Efficiency)

Both of these measurements indicate how much energy the support systems use in comparison
to the IT equipment itself.

The energy utilized in support systems has no business value but has an impact on the cost of
operation. The less energy the support systems use for a given IT load, the more efficiently the
facility operates. Hence, it is necessary to reduce the energy use in support systems to improve
the overall performance of a datacenter. Continuously monitoring this ratio is a good way to
keep track of the performance of the whole data center.

9.8 Top management commitment


Demonstration of top management's participation in energy management, and its encouragement
of employees, is very important to operation and maintenance professionals.

Top management commitment could take the form of framing an energy management policy/
plan, regular review of energy management projects, and motivating employees on matters
pertaining to energy conservation.

9.9 Budget and implementation


Rational budgeting is a must for implementing energy saving measures in the facility. Funds
should be made available under the guiding policies of the company.

However, each investment should be evaluated thoroughly for its technical feasibility and
economic viability through ROI calculations such as simple payback period, IRR, etc. Budget
allocation should be done on a yearly basis and should be made known to the energy management
cell at the start of the year, for smooth execution of such activities. Top management can retain
the power to sanction larger investments, but decisions on marginal investments should be left
to lower or middle management.

Management may choose not to fund a project based on its capital expenditure. The energy
management team has to clearly present the return on investment (ROI) analysis of the
proposed action, which will help management compare it with alternative investment opportunities.
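For instance, a simple payback and ROI screen for a proposed measure can be computed as below (a minimal sketch with hypothetical figures; a full proposal would also consider IRR and equipment life):

    # Hypothetical energy saving proposal
    investment = 1_500_000     # capital cost (Rs.)
    annual_saving = 600_000    # energy cost saving per year (Rs.)

    simple_payback_years = investment / annual_saving      # 2.5 years
    annual_roi_percent = 100.0 * annual_saving / investment

    print(f"Simple payback: {simple_payback_years:.1f} years")
    print(f"Annual ROI: {annual_roi_percent:.0f}%")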

Annexure 1
Comparison of Four Data Center Size Categories

Category                       Small                Medium               Large                    X Large

Site description               Mixed use building   Mixed use building   Mixed use or             Mixed use or
                                                                         dedicated building       dedicated building

Average size (sq. ft)          125 - 1000           1000 - 5000          5000 - 25000             > 25000

Average number of IT racks     5 - 40               41 - 200             200 - 800                > 800

Typical number of servers      30 - 250             250 - 1300           1300 - 4000              > 4000

Maximum design IT load (kW)    20 - 160             160 - 800            800 - 2500               > 2500

Source: APC

Annexure 2
Data Center Benchmarking Guide
I. Overall Data Center Performance Metrics

ID   Name                                      Priority
B1   Data Center Infrastructure Efficiency     1
B2   Power Usage Effectiveness                 1
B3   HVAC System Effectiveness                 1

B1: Data Center Infrastructure Efficiency (DCiE)


Description:
This metric is the ratio of the IT equipment energy to the total data center energy use. The total
data center energy use is the sum of the electrical energy for IT, HVAC system, power
distribution, lighting, and any other form of energy use, like steam or chilled water. All the
energy data values in the ratio are converted to common units.
Units: Dimensionless
B1 = dE2 ÷ (dE1 + dE4 + dE5 + dE6)
where:
dE1: Total Electrical Energy Use (kWh)
dE2: IT Electrical Energy Use (kWh)
dE4: Total Fuel Energy Use (kWh)
dE5: Total District Steam Energy Use (kWh)
dE6: Total District Chilled Water Energy Use (kWh)
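A minimal sketch of the B1 calculation (Python; the dE inputs are assumed to be annual figures already converted to kWh):

    def dcie(dE1, dE2, dE4=0.0, dE5=0.0, dE6=0.0):
        """B1: IT electrical energy over total data center energy (all kWh)."""
        return dE2 / (dE1 + dE4 + dE5 + dE6)

    # Hypothetical all-electric facility: 4,000 MWh total, 2,500 MWh to IT
    print(dcie(dE1=4_000_000, dE2=2_500_000))  # 0.625, i.e. a DCiE of 62.5%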

B2: Power Usage Effectiveness (PUE)


Description:
PUE is the inverse of the DCiE metric. This metric is the ratio of the total data center energy use
to total IT energy use. The total data center energy use is the sum of the electrical energy for the
servers, HVAC system, power distribution, lighting, and any other form of energy use, like
steam or chilled water. All the energy data values in the ratio are converted to common units.
Units: Dimensionless

B2 = (dE1 + dE4 + dE5 + dE6) ÷ dE2


where:
dE1: Total Electrical Energy Use (kWh)
dE2: IT Electrical Energy Use (kWh)
dE4: Total Fuel Energy Use (kWh)
dE5: Total District Steam Energy Use (kWh)
dE6: Total District Chilled Water Energy Use (kWh)
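Because B2 is the reciprocal of B1, the same hypothetical inputs yield PUE directly:

    def pue(dE1, dE2, dE4=0.0, dE5=0.0, dE6=0.0):
        """B2: total data center energy over IT electrical energy (all kWh)."""
        return (dE1 + dE4 + dE5 + dE6) / dE2

    print(pue(dE1=4_000_000, dE2=2_500_000))  # 1.6 = 1 / 0.625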

B3: HVAC System Effectiveness
Description:
This metric is the ratio of the IT equipment energy to the HVAC system energy. The HVAC
system energy is the sum of the electrical energy for cooling, fan movement, and any other
HVAC energy use like steam or chilled water.
Units: Dimensionless
B3 = dE2 ÷ (dE3 + dE4 + dE5 + dE6)
where:
dE2: IT Electrical Energy Use (kWh)
dE3: HVAC Electrical Energy Use (kWh)
dE4: Total Fuel Energy Use (kWh)
dE5: Total District Steam Energy Use (kWh)
dE6: Total District Chilled Water Energy Use (kWh)
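B3 follows the same pattern, with the denominator restricted to HVAC energy (again a sketch; inputs in kWh):

    def hvac_effectiveness(dE2, dE3, dE4=0.0, dE5=0.0, dE6=0.0):
        """B3: IT electrical energy over total HVAC energy (all kWh)."""
        return dE2 / (dE3 + dE4 + dE5 + dE6)

    print(hvac_effectiveness(dE2=2_500_000, dE3=1_100_000))  # about 2.3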

II. Air Management Metrics


ID   Name                        Priority
A1   Temperature Range           1
A2   Humidity Range              1
A3   Return Temperature Index    1
A4   Airflow Efficiency          1

A1: Temperature: Supply and Return


Description:
This metric is the difference between the supply and return air temperature from the IT
equipment in the data center.
Units: °C
A1 = dA2 – dA1
where:
dA1: Supply air temperature
dA2: Return air temperature

A2: Relative Humidity: Supply and Return


Description:
This metric is the difference of the return and supply air relative humidity from the IT equipment
in the data center.
Units: % RH
A2 = dA4 – dA3
where:
dA3: Supply air relative humidity
dA4: Return air relative humidity
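A1 and A2 are simple differences; for instance, with hypothetical readings:

    dA1, dA2 = 18.0, 29.0   # supply / return air temperature (deg C)
    dA3, dA4 = 55.0, 42.0   # supply / return relative humidity (% RH)

    A1 = dA2 - dA1          # 11.0 deg C rise across the IT equipment
    A2 = dA4 - dA3          # -13.0 % RH (return air is drier than supply)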

A3: Return Temperature Index
Description:
This metric is a measure of the energy performance of the air management system. The primary purpose
of improving air management is to isolate hot and cold airstreams. This allows elevating both the
supply and return temperatures and maximizes the difference between them while keeping the
inlet temperatures within ASHRAE recommendations. It also allows reduction of the system air
flow rate. This strategy allows the HVAC equipment to operate more efficiently. The return
temperature index (RTI) is ideal at 100% wherein the return air temperature is the same as the
temperature leaving the IT equipment.
Units: %
A3 = ((dA2 – dA1) / (dA6 – dA5)) × 100
where:
dA1: Supply air temperature
dA2: Return air temperature
dA5: Rack inlet mean temperature
dA6: Rack outlet mean temperature
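A sketch of the RTI calculation with hypothetical temperatures; values below 100% indicate cold supply air bypassing the racks, while values above 100% indicate recirculation of hot exhaust air:

    def rti(dA1, dA2, dA5, dA6):
        """A3: Return Temperature Index in % (temperatures in deg C)."""
        return (dA2 - dA1) / (dA6 - dA5) * 100.0

    # Hypothetical: supply 18, return 27, rack inlet 20, rack outlet 32
    print(rti(18.0, 27.0, 20.0, 32.0))  # 75% -> supply air is bypassing the racks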

A4: Airflow Efficiency


Description:
This metric characterizes overall airflow efficiency in terms of the total fan power required per
unit of airflow. This metric provides an overall measure of how efficiently air is moved through
the data center, from the supply to the return, and takes into account low pressure drop design as
well as fan system efficiency.
Units: W/(m3/s)
A4 = dA7 ÷ dA8
where:
dA7: Total fan power (supply and return) (W)
dA8: Total fan airflow (supply and exhaust) (m3/s)
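For example, with hypothetical fan data:

    def airflow_efficiency(dA7, dA8):
        """A4: total fan power (W) per unit of airflow (m3/s)."""
        return dA7 / dA8

    # Hypothetical: 40 kW of total fan power moving 50 m3/s of air
    print(airflow_efficiency(40_000.0, 50.0))  # 800 W per m3/s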

III. Cooling Metrics


ID Name Priority
C1 Data Center Cooling System Efficiency 1
C2 Data Center Cooling System Sizing Factor 1
C3 Air Economizer Utilization Factor 1
C4 Water Economizer Utilization Factor 1

C1: Data Center Cooling System Efficiency
Description:
This metric characterizes the overall efficiency of the cooling system (including chillers, pumps,
and cooling towers) in terms of energy input per unit of cooling output. It is an average value
depicting average power of the cooling system with respect to the cooling load in the data center.
Units: kW/ton
C1 = dC1 ÷ dC2
where:
dC1: Average cooling system power usage (kW)
dC2: Average cooling load in the data center (tons)
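For example, with hypothetical plant data:

    def cooling_system_efficiency(dC1, dC2):
        """C1: average cooling system power (kW) per ton of cooling load."""
        return dC1 / dC2

    # Hypothetical: 450 kW of total cooling plant power serving a 600-ton load
    print(cooling_system_efficiency(450.0, 600.0))  # 0.75 kW/ton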

C2: Cooling System Sizing Factor


Description:
This metric is the ratio of the installed cooling capacity to the peak cooling load.
Units: Dimensionless
C2 = dC3 ÷ dC4
where:
dC3: Installed chiller capacity (w/o backup) (tons)
dC4: Peak chiller load (tons)

C3: Air Economizer Utilization Factor


Description:
This metric characterizes the extent to which an air-side economizer system is being used to
provide “free” cooling. It is defined as the percentage of hours in a year that the economizer
system can be in full operation (i.e., without any cooling being provided by the chiller plant).
Units: %
C3 = (dC5 ÷ 8760) × 100
where:
dC5: Air economizer hours (full cooling)

C4: Water Economizer Utilization Factor
Description:
This metric is the percentage of hours in a year that the water-side economizer system meets the
entire cooling load of the data center.
Units: %
C4 = (dC6 ÷ 8760) × 100
where:
dC6: Water economizer hours (full cooling)
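Both economizer factors normalize full-cooling hours against the 8,760 hours in a year; a combined sketch with hypothetical hours:

    HOURS_PER_YEAR = 8760

    def economizer_utilization(full_cooling_hours):
        """C3/C4: percentage of the year the economizer alone meets the load."""
        return full_cooling_hours / HOURS_PER_YEAR * 100.0

    print(economizer_utilization(2628))  # 30.0% (e.g. air-side hours, dC5)
    print(economizer_utilization(1314))  # 15.0% (e.g. water-side hours, dC6)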

IV. Electrical Power Chain Metrics


ID   Name                                    Priority
P1   UPS Load Factor                         1
P2   UPS System Efficiency                   1
P3   IT or Server Equipment Load Density     1
P4   Lighting Density                        3

P1: UPS Load Factor


Description:
This metric is the ratio of the load of the uninterruptible power supply (UPS) to the design value
of its capacity. This provides a measure of the UPS system over-sizing and redundancy.
Units: Dimensionless
P1 = dP1 ÷ dP2
where:
dP1: UPS peak load (kW)
dP2: UPS load capacity (kW)

P2: Data Center UPS System Efficiency


Description:
This metric is the ratio of the UPS output power to the UPS input power. The UPS efficiency
varies depending on its load factor.
Units: %
P2 = (dP4 ÷ dP3) × 100
where:
dP3: UPS input power (kW)

dP4: UPS output power (kW)
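A combined sketch of P1 and P2 with hypothetical UPS data; lightly loaded UPS modules generally show poorer efficiency, which is why the two metrics are best read together:

    def ups_load_factor(dP1, dP2):
        """P1: UPS peak load over UPS load capacity (both kW)."""
        return dP1 / dP2

    def ups_efficiency(dP3, dP4):
        """P2: UPS output power over input power, in % (both kW)."""
        return dP4 / dP3 * 100.0

    # Hypothetical 500 kW UPS loaded to 175 kW, drawing 195 kW from the mains
    print(ups_load_factor(175.0, 500.0))  # 0.35 -> heavily oversized
    print(ups_efficiency(195.0, 175.0))   # about 89.7%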

P3: IT or Server Equipment Load Density


Description:
This metric is the ratio of the average IT or server power to the electrically active data center
area. This metric provides a measure of the power consumed by the servers.
Units: W/m2
P3 = (dP5 × 1000) ÷ dB1
where:
dP5: Average IT or server power (kW)
dB1: Electrically active area of the data center (m2)

P4: Data Center Lighting Density


Description:
This metric is the ratio of the data center lighting power consumption to the data center area.
Units: W/m2
P4 = (dP6 × 1000) ÷ dB1
where:
dP6: Data center lighting power (kW)
dB1: Data center area (m2)
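Both density metrics divide a power reading (converted from kW to W) by the relevant floor area; a combined sketch with hypothetical values:

    def power_density(power_kw, area_m2):
        """P3/P4: power density in W/m2 from a kW reading and a floor area."""
        return power_kw * 1000.0 / area_m2

    print(power_density(320.0, 400.0))  # P3: 800 W/m2 of IT load (dP5, dB1)
    print(power_density(6.0, 400.0))    # P4: 15 W/m2 of lighting (dP6, dB1)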

Data Required for Performance Metrics

ID Data Item Measurement/Calculation Guidance


General Data Center Data
dB1 Data Center Area (electrically active)
dB2 Data Center Location
dB3 Data Center Type
dB4 Year of Construction (or major renovation)
Data Center Energy Data
dE1 Total Electrical Energy Use
dE2 IT Electrical Energy Use
dE3 HVAC Electrical Energy Use
dE4 Total Fuel Energy Use
dE5 Total District Steam Energy Use
dE6 Total District Chilled Water Energy Use
Air Management
dA1 Supply Air Temperature
dA2 Return Air Temperature
dA3 Supply Air Relative Humidity
dA4 Return Air Relative Humidity
dA5 Rack Inlet Mean Temperature
dA6 Rack Outlet Mean Temperature
dA7 Total Fan Power (Supply and Return)
dA8 Total Fan Airflow rate (Supply and Return)
Cooling
dC1 Average Cooling System Power Consumption
dC2 Average Cooling Load
dC3 Installed Chiller Capacity (w/o backup)
dC4 Peak Chiller Load
dC5 Air Economizer Hours (full cooling)
dC6 Water Economizer Hours (full cooling)
Electrical Power Chain
dP1 UPS Peak Load
dP2 UPS Load Capacity
dP3 UPS Input Power
dP4 UPS Output Power
dP5 Average IT or Server Power
dP6 Average Lighting Power

Annexure 3
LIST OF ENERGY SAVING OPPORTUNITIES IN DATA CENTER
1. Power quality improvement in data center by installing harmonic filters
2. Energy efficiency improvement in UPS systems by loading optimization
3. Energy efficiency improvement in lighting system by replacing fluorescent lamps with
light emitting diode (LED) lamps
4. Efficient power distribution using modular design systems
5. Optimization of chilled water supply temperature
6. Cooling system Economizer
7. Electronically commuted (EC) fans
8. Hot aisle / cold aisle containment
9. Cable management in raised floor system
10. Optimum indoor conditions of a datacenter
11. Humidity level optimization in a datacenter
12. Avoid demand fighting among CRAC units
13. Operate IT equipment cooling systems at a higher temperature difference
14. Thermal storage system for emergency cooling requirements
15. Expanding data center capacity with water-cooled cabinets
16. Accelerated server refresh strategy for IT performance improvement
17. Consolidation of data centers for improving enterprise computing performance
18. Network virtualization for IT system performance improvement
19. Effective storage utilization optimizes Data center operational cost
20. Metering and monitoring system for Datacenter performance optimization
21. Optimization of condenser cooling water temperature and flow rate in water cooled chillers
22. Optimizing Return Temperature Index (RTI) and Rack Cooling Index (RCI) for improving
rack cooling performance
23. Installation of energy efficient amorphous transformers
24. Installation of Evaporative condenser for chillers

25. Replacement of old energy inefficient chillers with latest energy efficient chillers
26. Conversion of 3-way chilled water valves to a 2-way valve system
27. Performance improvement through effective utilization of area in cooling tower
28. Conversion of a primary/secondary chilled water system to a primary-only system

References
 Sun Microsystems white paper on “Energy efficient datacenters - The role of modularity in datacenter
design”
 Pacific Gas & Electric company’s design guidelines source book on “High performance Datacenters”
 Sun Microsystems white paper on “Energy efficient datacenters - Electrical design”
 Lawrence Berkeley National Laboratory’s report on “DC Power for Improved Data Center Efficiency”
 Schneider electric’s white paper on “DC Power for Improved Data Center Efficiency”
 University of Southern California white paper on “Minimizing Data Center Cooling and Server Power
Costs” by Ehsan Pakbaznia and Massoud Pedram
 Green Grid white paper on “Green grid data center power efficiency metrics: PUE and DCiE”
 Green Grid white paper on “Seven strategies to improve data center cooling efficiency”
 Green Grid white paper on “The Green Grid metrics: Data center infrastructure efficiency (DCiE) detailed analysis”
 Green Grid white paper on “Guidelines for Energy-Efficient Datacenters”
 Intel’s white paper on “Air-Cooled High-Performance Data Centers: Case Studies and Best Methods”
 http://www.searchdatacenter.com – white paper on “Green Data Center Efficiency Developments” by
Brocade
 IBM’s white paper on “Championing Energy Efficiency in the Data Center: An IBM Server Consolidation
Analysis” April 20, 2007
 Intel’s white paper on “Intel Eco-Rack Version 1.5”
 Intel’s white paper on “Turning Challenges into Opportunities in the Data Center”
 Sun Microsystems white paper on “Sun Professional Services Sample Case Studies”
 APC white paper on “Data Center Projects: Establishing a Floor Plan” By Neil Rasmussen & Wendy Torell
 CARYAIRE bulletin on “Air distribution products”
 APC white paper on “Rack Powering Options for High Density in 230VAC Countries”
 Intel white paper on “Reducing Data Center Energy Consumption with Wet Side Economizers” May 2007
 Intel white paper on “Reducing Data Center Cost with an Air Economizer” August 2008
 APC white paper on “A scalable, reconfigurable, and efficient data center power distribution architecture”
 APC white paper on “Reliability Models for Electric Power Systems”
 APC white paper on “Comparing UPS System Design Configurations”
 ACEEE report on “Best Practices for Data Centers: Lessons Learned from Benchmarking 22 Data Centers”
 Cisco white paper on “Data centers: Best practices for security and performance”
 Green Grid white paper on “Fundamentals of data center power and cooling efficiency zones”
 NetApp white paper on “Case Study: NetApp Air Side Economizer”

 US Environmental Protection Agency report to Congress on Server and Data Center Energy Efficiency,
August 2007
 Colorado Springs Utilities white paper on “Energy Efficiency in Computer Data Centers”
 PTS Data center solutions white paper on “Data Center Cooling Best Practices”
 Cadence project report on “PC Quest best implementations of the year 2009”
 Cisco systems white paper on “Airport Uses Network Virtualization to Consolidate and Scale Operations”
 Cisco systems white paper on “Cisco slashes storage costs with storage recovery program”
 Cisco best practices manual on “Storage Utilization management”
 Cisco systems white paper on “Network Virtualization for the campus”
 CtrlS manual on “Datacenter design philosophy”
 Emerson Network power write-up on “Best data center design practices”
 Hewlett Packard white paper on “Storage works virtualization”
 Hewlett Packard white paper on “Virtualization in business terms”
 Intel white paper on “Accelerated server refresh reduces datacenter cost”
 Intel white paper on “Building an enterprise data warehouse and business intelligence solution”
 Intel white paper on “Expanding Datacenter capacity using water cooled cabinets”
 Intel white paper on “Implementing Virtualization in global business-computing environment”
 Intel white paper on “Increasing datacenter efficiency through metering and monitoring power usage”
 Intel white paper on “Reducing datacenter cost with an air economizer”
 Intel white paper on “Energy efficient performance for the datacenter”
 Intel white paper on “Reducing storage growth and cost – A comprehensive approach to storage
optimization”
 Intel white paper on “Thermal storage system provides emergency datacenter cooling”
 International Resources Group ECO-III Project report on “Datacenter Benchmarking Guide”
 BMC software education services solution guide on “Managing Datacenter Virtualization”
 Cisco design guide on “Datacenter infrastructure”
 DELL & Intel white paper on “Datacenter Operation and Maintenance best practices for critical facilities”
 Texas Instrument presentation on “Technical audit report”
 Emerson Network power white paper on “Five Strategies for Cutting Data Center Energy Costs Through
Enhanced Cooling Efficiency”
 Eaton white paper on “Economic and electrical benefits of harmonic reduction methods in commercial
facilities”
 Emerson Network power report on “Technical Note: Using EC Plug Fans to Improve Energy Efficiency of
Chilled Water Cooling Systems in Large Data Centers”
 APC white paper on “Virtualization: Optimized Power and Cooling to Maximize Benefits”

GLOSSARY (TERMS AND DEFINITIONS)

Chiller - A heat exchanger using air, refrigerant, water and evaporation to transfer heat to produce air
conditioning. A chiller comprises an evaporator, condenser and compressor system.


Dehumidifier - A device that removes moisture from air.


Free standing equipment - Equipment that resides outside of data center racks.

Generator - A machine, often powered by natural gas or diesel fuel, in which mechanical energy is
converted to electrical energy


Humidifier - A device which adds moisture to the air.

Load - In data centers, load represents the total power requirement of all data center equipment
(typically servers and storage devices, and physical infrastructure).

PDU (Power Distribution Unit) – A floor or rack mounted enclosure for distributing branch circuit
electrical power via cables, either overhead or under a raised floor, to multiple racks or enclosures of IT
equipment. The main function of a PDU is to house circuit breakers that are used to create multiple
branch circuits from a single feeder circuit. A secondary function of some PDUs is to convert voltage. A
data center typically has multiple PDUs.


rPDU – (Rack-mount Power Distribution Unit) – A device designed to mount in IT equipment racks or
cabinets, into which units in the rack are plugged to receive electrical power

Transformer - A device used to transfer an alternating current or voltage from one circuit to another by
means of electromagnetic induction.

UPS (Uninterruptible Power Supply) fixed - Typically uses batteries as an emergency power source to
provide power to data center facilities until emergency generators come on line. Fixed implies a
standalone unit hard wired to the building.

UPS (Uninterruptible Power Supply) modular/scalable - Typically uses batteries as an emergency power
source to provide power to data center facilities until emergency generators come on line. Modular/
scalable implies units installed in racks with factory-installed whips allowing for physical mobility and
flexibility.

Air intake - Device that allows fresh air to enter into the building.

ACH
Air changes per hour, typically referring to outdoor air changes per hour.

Acoustics
Generally, a measure of the noise level in an environment or from a sound source. For a point in an
environment, the quantity is sound pressure level in decibels (dB). For a sound source, the quantity is
sound power level in either decibels (dB) or bels (B). Either of these quantities may be stated in terms of
individual frequency bands or as an overall A-weighted value. Sound output typically is quantified by
sound pressure (dBA) or sound power (dB). Densely populated data and communications equipment
centers may cause annoyance, affect performance, interfere with communications, or even run the risk
of exceeding noise limits (and thus potentially causing hearing damage), and reference should be
made to the appropriate regulations and guidelines.

Agile Device
A device that supports automatic switching between multiple Physical Layer technologies. (See IEEE
802.3, Clause 28.).

AHU
Air-handling unit is a device used to condition and circulate air as part of a heating, ventilating, and air-
conditioning (HVAC) system.

Air and Liquid Cooling


• Cooling: removal of heat.
• Air cooling: direct removal of heat at its source using air.
• Liquid cooling: direct removal of heat at its source using a liquid (usually water, water/glycol mixture or refrigerant).
• Air-cooled rack or cabinet: system conditioned by removal of heat using air.
• Liquid-cooled rack or cabinet: system conditioned by removal of heat using a liquid.
• Air-cooled equipment: equipment conditioned by removal of heat using air.
• Liquid-cooled equipment: equipment conditioned by removal of heat using a liquid.
• Air-cooled server: server conditioned by removal of heat using air.
• Liquid-cooled server: server conditioned by removal of heat using a liquid.
• Air-cooled blade: blade conditioned by removal of heat using air.
• Liquid-cooled blade: blade conditioned by removal of heat using a liquid.
• Air-cooled board: circuit board conditioned by removal of heat using air.
• Liquid-cooled board: circuit board conditioned by removal of heat using a liquid.
• Air-cooled chip: chip conditioned by removal of heat from the chip using air.
• Liquid-cooled chip: chip conditioned by removal of heat using a liquid.
• Asterisk denotes definitions from ASHRAE Terminology of Heating, Ventilation, Air Conditioning, & Refrigeration.

Air Inlet Temperature


The temperature measured at the inlet at which air is drawn into a piece of equipment for the purpose
of conditioning its components.

Air Outlet Temperature


The temperature measured at the outlet at which air is discharged from a piece of equipment.

Air Short-Cycling
Air conditioners are most efficient when the warmest possible air is returned to them; when cooler-
than-expected air is returned to the air conditioner, it may mistakenly read that as the space
temperature being satisfied. This air short-cycling occurs because the air is not picking up the heat
from the space before returning to the air conditioner.

Air Space
The air space below a raised floor or above a suspended ceiling is used to recirculate the air in information
technology equipment room/information technology equipment area environment.

Air, bypass
Air diverted around a cooling coil in a controlled manner for the purpose of avoiding saturated discharge
air. On an equipment room scale, bypass air can also refer to the supply air that “short-cycles” around the
load and returns to the air handler without producing effective cooling at the load.

Air, cabinet
Air (typically for the purposes of cooling) that passes through a cabinet housing IT Peripheral
equipment.

Air, conditioned
Air treated to control its temperature, relative humidity, purity, pressure, and movement.

Air, equipment
Airflow that passes through the IT or IT Peripheral equipment

Air, return (RA)


Air extracted from a space and totally or partially returned to an air conditioner

Air, supply
Air entering a space from an air-conditioning system

Air-Cooled Data Center


Facility cooled by forced air transmitted by raised floor, overhead ducting, or some other method.

Air-Cooled System
Conditioned air is supplied to the inlets of the rack/cabinet for convective cooling of the heat rejected
by the components of the electronic equipment within the rack. It is understood that within the rack,

the transport of heat from the actual source component (e.g., CPU) within the rack itself can be either
liquid or air based, but the heat rejection media from the rack to the terminal cooling device outside of
the rack is air.

Annunciator
The portion of a fire alarm control panel, or a remote device attached to the fire alarm control panel
that displays the information associated with a notification. Notifications may include alarm or trouble
conditions.

Availability
A percentage value representing the degree to which a system or component is operational and
accessible when required for use.

Backplane
A printed circuit board with connectors where other cards are plugged. A backplane does not usually
have many active components on it in contrast to a system board.

Bandwidth
Data traffic through a device usually measured in bits-per-second.

BAS:
Building automation system.

Baseline
“Baseline” refers to a configuration that is more general and hopefully simpler than one tuned for a
specific benchmark. Usually a “baseline” configuration needs to be effective across a variety of
workloads, and there may be further restrictions such as requirements about the ease-of-use for any
features utilized. Commonly “baseline” is the alternative to a “peak” configuration.

Basis-of-Design
A document that captures the relevant physical aspects of the facility to achieve the performance
requirements in support of the mission.

Baud (Bd)
A unit of signaling speed, expressed as the number of times per second the signal can change the
electrical state of the transmission line or other medium.

Bay
• A frame containing electronic equipment.
• A space in a rack into which a piece of electronic equipment of a certain size can be physically
mounted and connected to power and other input/output devices.

Benchmark
A “benchmark” is a test, or set of tests, designed to compare the performance of one computer system
against the performance of others. Note: a benchmark is not necessarily a capacity planning tool. That
is, benchmarks may not be useful in attempting to guess the correct size of a system required for a
particular use. In order to be effective in capacity planning, it is necessary for the test to be easily
configurable to match the targeted use. In order to be effective as a benchmark, it is necessary for the
test to be rigidly specified so that all systems tested perform comparable work. These two goals are
often at direct odds with one another, with the result that benchmarks are usually useful for comparing
systems against each other, but some other test is often required to establish what kind of system is
appropriate for an individual’s needs. Every benchmark code of SPEC has a technical advisor who is
knowledgeable about the code and the scientific/engineering problem.

BIOS
Basic Input / Output System. The BIOS gives the computer a built-in set of software instructions to run
additional system software during computer boot up.

Bipolar Semiconductor Technology


This technology was popular for digital applications until the CMOS semiconductor technology was
developed. CMOS drew considerably less power in standby mode and so it replaced many of the
bipolar applications around the early 1990s.

Bit Error Ratio (BER)


The ratio of the number of bits received in error to the total number of bits received.

Bit Rate (BR)


The total number of bits per second transferred to or from the Media Access Control (MAC). For example,
100BASE-T has a bit rate of one hundred million bits per second (10^8 bps).

Blade Server
A modular electronic circuit board, containing one, two, or more microprocessors and memory, that is
intended for a single, dedicated application and that can be easily inserted into a space-saving rack
with many similar servers. Blade servers, which share a common high-speed bus, are designed to create
less heat and thus save energy costs as well as space.

Blanking Panels

Panels typically placed in unallocated portions of enclosed IT equipment racks to prevent internal
recirculation of air from the rear to the front of the rack.

Blower
An air-moving device.

BTU
Abbreviation for British thermal units; the amount of heat required to raise one pound of water one
degree Fahrenheit, a common measure of the quantity of heat.

Building Automation System (BAS)


Centralized building control typically for the purpose of monitoring and controlling environment,
lighting, power, security, fire/life safety, and elevators.

Bus, Power (or Electrical Bus)


A physical electrical interface where many devices share the same electric connection, which allows
signals to be transferred between devices, allowing information or power to be shared.

Cabinet
Frame for housing electronic equipment that is enclosed by doors and is stand-alone; this is generally
found with high-end servers.

CAV
Constant air volume

CFD
Computational fluid dynamics. A computational technology that enables you to study the dynamics
of fluid flow and heat transfer numerically.

CFM
The abbreviation for cubic feet per minute commonly used to measure the rate of air flow in systems
that move air.

Chassis
The physical framework of the computer system that houses all electronic components, their
interconnections, internal cooling hardware, and power supplies.

Chilled Water System
A type of air-conditioning system that has no refrigerant in the unit itself. The refrigerant is contained
in a chiller, which is located remotely. The chiller cools water, which is piped to the air conditioner to
cool the space. An air or process conditioning system containing chiller(s), water pump(s), a water
piping distribution system, chilled-water cooling coil(s), and associated controls. The refrigerant cycle
is contained in a remotely located water chiller. The chiller cools the water, which is pumped through
the piping system to the cooling coils.

Chip
The term “chip” identifies the actual microprocessor, the physical package containing one or more
“cores”.

Classes of Fires
Class A: fires involving ordinary combustibles such as paper, wood, or cloth

Class B: fires involving burning liquids

Class C: fires involving any fuel and occurring in or on energized electrical equipment

Class D: fires involving combustible metals (such as magnesium)

Client
A server system that can operate independently but has some interdependence with another server
system.

Cluster
Two or more interconnected servers that can access a common storage pool. Clustering prevents the
failure of a single file server from denying access to data and adds computing power to the network for
large numbers of users.

CMOS Electronic Technology


This technology draws considerably less power than bipolar semiconductor technology in standby
mode and so it replaced many of the digital bipolar applications around the early 1990s.

Coefficient of Performance (COP) - Cooling


The ratio of the rate of heat removal to the rate of energy input, in consistent units, for a complete
cooling system or factory-assembled equipment, as tested under a nationally recognized standard or
designated operating conditions.

Cold Plate
Cold plates are typically aluminum or copper plates of metal that are mounted to electronic components.
Cold plates can have various liquids circulating within their channels. Typically, a plate with cooling
passages through which liquid flows to remove the heat from the electronic component to which it is
attached.

Commissioning Levels
• Factory acceptance tests (Level 1 commissioning): the testing of products prior to leaving their
place of manufacture

• Field component verification (Level 2 commissioning): the inspection and verification of products
upon receipt

• System construction verification (Level 3 commissioning): field inspections and certifications that
components are assembled and properly integrated into systems as required by plans and specifications

• Site acceptance testing (Level 4 commissioning): activities that demonstrate that related components,
equipment, and ancillaries that make up a defined system operate and function to rated, specified,
and/or advertised performance criteria

• Integrated systems tests (Level 5 commissioning): the testing of redundant and backup components,
systems, and groups of interrelated systems to demonstrate that they respond as predicted to expected
and unexpected anomalies.

Commissioning Plan
A document that defines the verification and testing process to ensure the project delivers what is
expected, including training, documentation, and project close-out.

Commissioning
The process of ensuring that systems are designed, installed, functionally tested, and capable of
being operated and maintained to perform in conformity with the design intent; it begins with planning
and includes design, construction, start-up, acceptance, and training and can be applied throughout
the life of the building.

Communication Equipment
Equipment used for information transfer. The information can be in the form of digital data, for data
communications, or analog signals, for traditional wireline voice communication.

• Core Network or Equipment: A core network is a central network into which other networks feed.
Traditionally, the core network has been the circuit-oriented telephone system. More recently,
alternative optical networks bypass the traditional core and implement packet-oriented
technologies. Significant to core networks is “the edge,” where networks and users exist. The edge
may perform intelligent functions that are not performed inside the core network.

• Edge Equipment or Devices: In general, edge devices provide access to faster, more efficient
backbone and core networks. The trend is to make the edge smart and the core “dumb and fast.”
Edge devices may translate between one type of network protocol and another.

Compute Server
Servers dedicated for computation or processing that are typically required to have greater processing
power (and, hence, dissipate more heat) than servers dedicated solely for storage.

Compute-Intensive
The term that applies to any computer application demanding very high computational power, such
as meteorology programs and other scientific applications. A similar but distinct term, computer-
intensive, refers to applications that require a lot of computers, such as grid computing. The two types
of applications are not necessarily mutually exclusive; some applications are both compute- and
computer-intensive.

Computer System Availability


Probability that a computer system will be operable at a future time (takes into account the effects of
failure and repair/maintenance of the system).

Computer System Reliability


Probability that a computer system will be operable throughout its mission duration (only takes into
account the effects of failure of the system).

Condenser
Heat exchanger in which vapor is liquefied (state change) by the rejection of heat as a part of the
refrigeration cycle


Cooling Tower
Heat-transfer device, often tower-like, in which atmospheric air cools warm water, generally by direct
contact (heat transfer and evaporation)

Cooling, Air
Conditioned air is supplied to the inlets of the rack/cabinet/server for convective cooling of the heat
rejected by the components of the electronic equipment within the rack. It is understood that within
the rack, the transport of heat from the actual source component (e.g., CPU) within the rack itself can be
either liquid or air based, but the heat rejection media from the rack to the building cooling device
outside the rack is air. The use of heat pipes or pumped loops inside a server or rack where the liquid
remains is still considered air cooling

Cooling, Liquid
Conditioned liquid is supplied to the inlets of the rack/cabinet/server for thermal cooling of the heat
rejected by the components of the electronic equipment within the rack. It is understood that within
the rack, the transport of heat from the actual source component (e.g., CPU) within the rack itself can be
either liquid or air based (or any other heat transfer mechanism), but the heat rejection media to the
building cooling device outside of the rack is liquid

Core
The term “core” is used to identify the core set of architectural, computational processing elements that
provide the functionality of a CPU

CPU
Central Processing Unit, also called a processor. In a computer the CPU is the processor on an IC chip that
serves as the heart of the computer, containing a control unit, the arithmetic and logic unit (ALU), and
some form of memory. It interprets and carries out instructions, performs numeric computations, and
controls the external memory and peripherals connected to it.

CRAC (Computer Room Air Conditioning)


A modular packaged environmental control unit designed specifically to maintain the ambient air
temperature and/or humidity of spaces that typically contain IT Peripheral equipment. These products
can typically perform all or some of the following functions: cool, reheat, humidify, dehumidify. They
may have multiple steps for some of these functions. CRAC units should be specifically designed for
data and communications equipment room applications and meet the requirements of ANSI/ASHRAE
Standard.

Data Center
A building or portion of a building whose primary function is to house a computer room and its support
areas; data centers typically contain high-end servers and storage products with mission-critical
functions.

Data Terminal Equipment (DTE)
Any source or destination of data connected to the local area network.

IT Peripheral
A term that is used as an abbreviation for the data and communications industry

Dataset
The set of inputs for a particular benchmark. There may be more than one dataset available for each
benchmark, each serving a different purpose (e.g., measurement versus testing) or configured for
different problem sizes (small, medium, large, ...).

dBm
Decibels referenced to 1.0 mW.

Dehumidification
The process of removing moisture from air

Direct Expansion (DX) System


A system in which the cooling effect is obtained directly from the refrigerant; it typically incorporates a
compressor, and in most cases the refrigerant undergoes a change of state in the system.

Disk Unit
Hard disk drive installed in a piece of IT Peripheral equipment, such as a personal computer, laptop,
server, or storage product.

Diversity
A factor used to determine the load on a power or cooling system based on the actual operating
output of the individual equipment rather than the full-load capacity of the equipment.

Diversity
Two definitions for diversity exist, diverse routing and diversity from maximum.

• Systems that employ an alternate path for distribution are said to have diverse routing. In terms of an
HVAC system, it might be used in reference to an alternate chilled water piping system. To be truly
diverse (and of maximum benefit) both the normal and alternate paths must each be able to support
the entire normal load

• Diversity can also be defined as a ratio of maximum to actual for metrics such as power loads. For
example, the nominal power loading for a rack may be based on the maximum configuration of
components, all operating at their maximum intensities. Diversity would take into account variations
from the maximum in terms of rack occupancy, equipment configuration, operational intensity, etc., to
provide a number that could be deemed to be more realistic

Domain
A group of computers and devices on a network that are administered as a unit with common rules and
procedures. Within the Internet, domains are defined by the IP address. All devices sharing a common
part of the IP address are said to be in the same domain

Down Time
A period of time during which a system is not operational, due to a malfunction or maintenance

Downflow air system


Refers to a type of air-conditioning system that discharges air downward, directly beneath a raised
floor, commonly found in computer rooms and modern office spaces

Dry-Bulb Temperature (DB)


Temperature of air indicated by an ordinary thermometer

Drywell
A well in a piping system that allows a thermometer or other device to be inserted without direct
contact with the liquid medium being measured.

Economizer, Air
A ducting arrangement and automatic control system that allow a cooling supply fan system to
supply outdoor (outside) air to reduce or eliminate the need for mechanical refrigeration during mild
or cold weather.

Economizer, Water
A system by which the supply air of a cooling system is cooled directly or indirectly or both by
evaporation of water or by other appropriate fluid (in order to reduce or eliminate the need for
mechanical refrigeration).

Efficiency, HVAC System

The ratio of the useful energy output (at the point of use) to the energy input, in consistent units, for a
designated time period, expressed in percent.

Efficiency
The ratio of the output to the input of any system. Typically used in relation to energy; smaller amounts
of wasted energy denote high efficiencies

Electromagnetic Compatibility (EMC)


The ability of electronic equipment or systems to operate in their intended operational environments
without causing or suffering unacceptable degradation because of electromagnetic radiation or
response

Electronically Commutated Motor (ECM)


An EC motor is a DC motor with a shunt characteristic. The rotary motion of the motor is achieved by
supplying the power via a switching device, the so-called commutator. On EC motors, this
commutation is performed using brushless electronic semiconductor modules

Energy Efficiency Ratio (EER)


The ratio of net equipment cooling capacity in Btu/h to total rate of electric input in watts under
designated operating conditions. When consistent units are used, this ratio becomes equal to COP

Equipment Room
Data center or central office room that houses computer and/or telecom equipment. For rooms housing
mostly telecom equipment

Equipment
Refers to, but not limited to, servers, storage products, workstations, personal computers, and
transportable computers. May also be referred to as electronic equipment or IT equipment.

ESD
Electrostatic Discharge (ESD), the sudden flow of electricity between two objects at different electrical
potentials. ESD is a primary cause of integrated circuit damage or failure.

Evaporative Condenser
Condenser in which the removal of heat from the refrigerant is achieved by the evaporation of water
from the exterior of the condensing surface, induced by the forced circulation of air and sensible cooling
by the air.


Fan Sink
A heat sink with a fan directly and permanently attached

Fan
Device for moving air by two or more blades or vanes attached to a rotating shaft.

• Airfoil fan: shaped blade in a fan assembly to optimize flow with less turbulence.

• Axial fan: fan that moves air in the general direction of the axis about which it rotates.

• Centrifugal fan: fan in which the air enters the impeller axially and leaves it substantially in a radial
direction.

• Propeller fan: fan in which the air enters and leaves the impeller in a direction substantially parallel to
its axis.

Fault Tolerance
The ability of a system to respond gracefully to an unexpected hardware or software failure while
continuing to meet the system performance specifications. There are many levels of fault tolerance,
the lowest being the ability to continue operation in the event of a power failure. Many fault-tolerant
computer systems mirror all operations; that is, every operation is performed on two or more duplicate systems, so if one
fails, the other can take over

Fenestration
An architectural term that refers to the arrangement, proportion, and design of window, skylight, and
door systems within a building

Fiber Optic Cable


A cable containing one or more optical fibers

Filter Dryer
Encased desiccant, generally inserted in the liquid line of a refrigeration system and sometimes in the
suction line, to remove entrained moisture, acids, and other contaminants.

Float Voltage
Optimum voltage level at which a battery string gives maximum life and full capacity

Flux
Amount of some quantity flowing across a given area (often a unit area perpendicular to the flow) per
unit time. Note: The quantity may be, for example, mass or volume of a fluid, electromagnetic energy, or
number of particles.

Heat Exchanger
Device to transfer heat between two physically separated fluids.

• Counterflow heat exchanger: heat exchanger in which fluids flow in opposite directions approximately
parallel to each other.

• Cross-flow heat exchanger: heat exchanger in which fluids flow perpendicular to each other.

• Heat pipe heat exchanger: Tubular closed chamber containing a fluid in which heating one end of
the pipe causes the liquid to vaporize and transfer to the other end where it condenses and dissipates
its heat. The liquid that forms flows back toward the hot end by gravity or by means of a capillary wick

• Parallel-flow heat exchanger: heat exchanger in which fluids flow approximately parallel to each
other and in the same direction

• Plate heat exchanger or plate liquid cooler: thin plates formed so that liquid to be cooled flows
through passages between the plates and the cooling fluid flows through alternate passages.

Heat Load per Product Footprint


Calculated by using product measured power divided by the actual area covered by the base of the
cabinet or equipment.

Heat Load, Latent


Cooling load to remove latent heat, where latent heat is a change of enthalpy during a change of state.

Heat Load, Sensible


The heat load that causes a change in temperature

Heat Sink
Component designed to transfer heat from an electronic device to a fluid. Processors, chipsets, and
other high heat flux devices typically require heat sinks.

Heat, Total (Enthalpy)
A thermodynamic quantity equal to the sum of the internal energy of a system plus the product of the
pressure-volume work done on the system: h = E + pv, where h = enthalpy or total heat content, E =
internal energy of the system, p = pressure, and v = volume. For the purposes of this document, h =
sensible heat + latent heat. Sensible heat: heat that causes a change in temperature. Latent heat:
change of enthalpy during a change of state.


High Performance Computing and Communications (HPCC)


High performance computing includes scientific workstations, supercomputer systems, high speed
networks, special purpose and experimental systems, the new generation of large-scale parallel systems,
and application and systems software with all components well integrated and linked over a high
speed network.

High-Efficiency Particulate Air (HEPA) Filters


These filters are designed to remove 99.97% or more of all airborne pollutants 0.3 microns or larger
from the air that passes through the filter. There are different levels of cleanliness, and some HEPA filters
are designed for even higher removal efficiencies and/or removal of smaller particles

Horizontal Displacement (HDP)


An air-distribution system used to introduce air horizontally from one end of a cold aisle

Horizontal Overhead (HOH)


An air-distribution system that is used to introduce the supply air horizontally above the cold aisles
and is generally utilized in raised-floor environments where the raised floor is used for cabling.

Hot Aisle/Cold Aisle


A common means of providing cooling to IT Peripheral rooms in which IT equipment is arranged in
rows and cold supply air is supplied to the cold aisle, pulled through the inlets of the IT equipment, and
exhausted to a hot aisle to minimize recirculation of the hot exhaust air with the cold supply air.

• A common arrangement for the perforated tiles and the IT Peripheral equipment. Supply air is
introduced into a region called the cold aisle.

• On each side of the cold aisle, equipment racks are placed with their intake sides facing the cold aisle.
A hot aisle is the region between the backs of two rows of racks.

• The cooling air delivered is drawn into the intake side of the racks. This air heats up inside the racks and
is exhausted from the back of the racks into the hot aisle.

Humidification
The process of adding moisture to air or gases

Humidity Ratio
The ratio of the mass of water to the total mass of a moist air sample. It is usually expressed as grams of
water per kilogram of dry air (gw/kgda) or as pounds of water per pound of dry air (lbw/lbda).

Humidity
Water vapor within a given space.

Absolute Humidity: The mass of water vapor in a specific volume of a mixture of water vapor and dry
air.

Relative Humidity: Ratio of the partial pressure or density of water vapor to the saturation pressure or
density, respectively, at the same dry-bulb temperature and barometric pressure of the ambient air.
Ratio of the mole fraction of water vapor to the mole fraction of water vapor saturated at the same
temperature and barometric pressure. At 100% relative humidity, the dry-bulb, wet-bulb, and dew-
point temperatures are equal.

Hydrofluorocarbon (HFC)
A halocarbon that contains only fluorine, carbon, and hydrogen

IEC
International Electrotechnical Commission; a global organization that prepares and publishes
international standards for all electrical, electronic, and related technologies.

IEEE
Formerly, the Institute of Electrical and Electronics Engineers, Inc.

Infiltration

Flow of outdoor air into a building through cracks and other unintentional openings and through
the normal use of exterior doors for entrance and egress; also known as air leakage into a building

Leakage Airflow
Any airflow that does not flow along an intended path is considered to be a leakage in the system.
Leakage airflow results in excess fan energy and may also result in higher energy consumption of
refrigeration equipment

Mean Time To Repair (or Recover) (MTTR)


The expected time to recover a system from a failure, usually measured in hours.

Memory
Memory is an internal storage area in a computer. The term memory identifies data storage that comes in
the form of silicon, and the word storage is used for memory that exists on tapes or disks. The term
memory is usually used as shorthand for physical memory, which refers to the actual chips capable of
holding data. Some computers also use virtual memory, which expands physical memory onto a hard
disk.

Metric
The final results of a benchmark. The significant statistics reported from a benchmark run. Each
benchmark defines what are valid metrics for that particular benchmark.

Minimum Efficiency Reporting Value (MERV)


Previously there were several specifications used to determine filter efficiency and characteristics.
ASHRAE has developed the MERV categories so that a single number can be used to select and specify
filters

MTBF
Mean time between failures

Nameplate Rating

Term used for rating according to nameplate: “Equipment shall be provided with a power rating marking,
the purpose of which is to specify a supply of correct voltage and frequency, and of adequate current-
carrying capacity”

Non-Raised Floor
Facilities without a raised floor utilize overhead ducted supply air to cool equipment. Ducted overhead
supply systems are typically limited to a cooling capacity of 100 W/ft2

OEM
Original Equipment Manufacturer. Describes a company that manufactures equipment that is then
marketed and sold to other companies under their own names.

Optical Fiber
A filament-shaped optical waveguide made of dielectric materials.

Pascal (PA)
A unit of pressure equal to one newton per square meter. As a unit of sound pressure, one pascal
corresponds to a sound pressure level of 94 dB.

Perforated Floor Tile


A tile, as part of a raised-floor system, that is engineered to provide airflow from the cavity underneath
the floor to the space; tiles may be with or without volume dampers.

Performance Neutral
Performance neutral means that there is no significant difference in performance. For example, a
performance neutral source code change would be one which would not have any significant impact
on the performance as measured by the benchmark

Plenum
A compartment or chamber to which one or more air ducts are connected and that forms part of the
air distribution system

Point of Presence (PoP)

A PoP is a place where communication services are available to subscribers. Internet service providers
have one or more PoPs within their service area that local users dial into. This may be co-located at a
carrier’s central office

Power
Time rate of doing work, usually expressed in horsepower or watts.

Psychrometric Chart
A graph of the properties (temperature, relative humidity, etc.) of air; it is used to determine how these
properties vary as the amount of moisture (water vapor) in the air changes.

Pump
Machine for imparting energy to a fluid, causing it to do work.

• Centrifugal pump: Pump having a stationary element (casing) and a rotary element (impeller) fitted
with vanes or blades arranged in a circular pattern around an inlet opening at the center. The casing
surrounds the impeller and usually has the form of a scroll or volute.

• Diaphragm pump: Type of pump in which water is drawn in and forced out of one or more chambers
by a flexible diaphragm. Check valves let water into and out of each chamber.

• Positive displacement pump: Has an expanding cavity on the suction side and a decreasing cavity on
the discharge side. Liquid flows into the pump as the cavity on the suction side expands and the liquid
flows out of the discharge as the cavity collapses. Examples of positive displacement pumps include
reciprocating pumps and rotary pumps.

• Reciprocating pump: A back-and-forth motion of pistons inside of cylinders provides the flow of fluid.
Reciprocating pumps, like rotary pumps, operate on the positive displacement principle; that is, each
stroke delivers a definite volume of liquid to the system.

• Rotary pump: Pumps that deliver a constant volume of liquid regardless of the pressure they encounter.
A constant volume is pumped with each rotation of the shaft, and this type of pump is frequently used
as a priming pump.

Rack Power
Used to denote the total amount of electrical power being delivered to electronic equipment within a
given rack. Often expressed in kilowatts (kW), this is often incorrectly equated with the heat dissipation
from the electrical components of the rack.

Rack
• Structure for housing electronic equipment. Differing definitions exist between the computing industry
and the telecom industry:

• Computing Industry: A rack is an enclosed cabinet housing computer equipment. The front and back
panels may be solid, perforated, or open depending on the cooling requirements of the equipment
within.

• Telecom Industry: A rack is a framework consisting of two vertical posts mounted to the floor and a
series of open shelves upon which electronic equipment is placed. Typically, there are no enclosed
panels on any side of the rack.

Rack-Mounted Equipment
Equipment that is mounted in a cabinet. These systems are generally specified in units such as 1U, 2U,
3U, etc., where 1U = 1.75 inches (44 mm); a 42U rack, for example, provides 73.5 inches of mounting height.

Raised Floor
A platform with removable panels where equipment is installed, with the intervening space between
it and the main building floor used to house the interconnecting cables, which at times is used as a
means for supplying conditioned air to the information technology equipment and the room. Also
known as access floor. Raised floors are a building system that utilizes pedestals and floor panels to
create a cavity between the building floor slab and the finished floor where equipment and furnishings
are located. The cavity can be used as an air distribution plenum to provide conditioned air throughout
the raised floor area. The cavity can also be used for routing of power/data cabling infrastructure.

Rated Current
The rated current is the absolute maximum current that is required by the unit from an electrical branch
circuit.

Rated Frequency Range
The supply frequency range as declared by the manufacturer, expressed by its lower and upper rated
frequencies.

Rated Frequency
The supply frequency as declared by the manufacturer.

Rated Voltage Range
The supply voltage range as declared by the manufacturer.

Rated Voltage
The supply voltage as declared by the manufacturer.

Redundancy
“N” represents the number of pieces required to satisfy normal conditions. Redundancy is often expressed
relative to the baseline of “N”; some examples are “N+1”, “N+2”, “2N”, and “2(N+1)”. For example, if four
chillers (N = 4) meet the design load, N+1 requires five and 2N requires eight. A critical decision is
whether “N” should represent just normal conditions or whether “N” includes full capacity during off-
line routine maintenance. Facility redundancy can apply to an entire site (backup site), systems, or
components. IT redundancy can apply to hardware and software.

Refrigerants
In a refrigerating system, the medium of heat transfer that picks up heat by evaporating at a low
temperature and pressure and gives up heat on condensing at a higher temperature and pressure.

Relative Humidity (RH)
(a) Ratio of the partial pressure or density of water vapor to the saturation pressure or density, respectively,
at the same dry-bulb temperature and barometric pressure of the ambient air;

(b) ratio of the mole fraction of water vapor to the mole fraction of water vapor saturated at the same
temperature and barometric pressure. At 100% relative humidity, the dry-bulb, wet-bulb, and dew-point
temperatures are equal.
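
As a worked illustration of definition (a), with values assumed for the example: if the partial pressure
of water vapor is 1.40 kPa and the saturation pressure at the same dry-bulb temperature is 2.34 kPa
(approximately the saturation pressure of water at 20 °C), then

\[ \text{RH} = \frac{p_w}{p_{ws}} \times 100\% = \frac{1.40}{2.34} \times 100\% \approx 60\% \]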

Releasing Panel
A particular fire alarm control panel whose specific purpose is to monitor fire detection devices in a
given area protected by a suppression system and, upon receiving alarm signals from those devices,
actuate the suppression system.

Reliability
A percentage value representing the probability that a piece of equipment or system will be operable
throughout its mission duration. Values of 99.9% (three 9s) and higher are common in data and
communications equipment areas. For individual components, the reliability is often determined
through testing. For assemblies and systems, reliability is often the result of a mathematical evaluation
based on the reliability of individual components and any redundancy or diversity that may be
employed.
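
The mathematical evaluation mentioned above typically rests on the standard series/parallel
reliability formulas, shown here for illustration. For components in series (all must operate),
reliabilities multiply; for redundant components in parallel (any one suffices), unreliabilities multiply:

\[ R_{\text{series}} = \prod_i R_i \qquad\qquad R_{\text{parallel}} = 1 - \prod_i \left(1 - R_i\right) \]

Two components of 99.9% reliability thus give about 99.8% in series but 99.9999% in parallel, which is
why redundancy raises system-level reliability.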


Semiconductor
A material that is neither a good conductor of electricity nor a good insulator. The most common
semiconductor materials are silicon, gallium arsenide, and germanium. These materials are then doped
to create an excess or lack of electrons and used to build computer chips.

Sensible Heat Ratio (SHR)
Ratio of the sensible heat load to the total heat load (sensible plus latent).
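
As a worked example with assumed figures, a space with a 90 kW sensible load and a 10 kW latent
load has

\[ \text{SHR} = \frac{Q_{\text{sensible}}}{Q_{\text{sensible}} + Q_{\text{latent}}} = \frac{90}{90 + 10} = 0.90 \]

Data center loads are dominated by sensible heat, so SHR values there are typically high.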

Server
A computer that provides some service for other computers connected to it via a network; the most
common example is a file server, which has a local disk and services requests from remote clients to
read and write files on that disk.

Service Level Agreement (SLA)
A contract between a network service provider and a customer that specifies, usually in measurable
terms, what services the network service provider will furnish.

Single-Point Failure
Any component that has the capability of causing failure of a system or a portion of a system if it
becomes inoperable.

SPEC
Standard Performance Evaluation Corporation. SPEC is an organization of computer industry
vendors dedicated to developing standardized benchmarks and publishing reviewed results.

SPECrate
A “SPECrate” is a throughput metric based on the SPEC CPU benchmarks (such as SPEC CPU95). This
metric measures a system’s capacity for processing jobs of a specified type in a given amount of time.
Note: this metric is used the same way for multiprocessor systems and for uniprocessors. It is not
necessarily a measure of how fast a processor might be, but rather a measure of how much work the
one or more processors can accomplish. The other kind of metric from the SPEC CPU suites is the
SPECratio, which measures the speed at which a system completes a specified job.

SPECratio
A measure of how fast a given system might be. The “SPECratio” is calculated by taking the elapsed time
that was measured for a system to complete a specified job, and dividing that into the reference time
(the elapsed time that job took on a standardized reference machine). This measures how quickly, or
more specifically how many times faster than a particular reference machine, one system can perform
a specified task. “SPECratios” are one style of metric from the SPEC CPU benchmarks; the other is the
SPECrate.
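
A minimal sketch of the two calculations in Python; the function names and timing figures are
assumptions for illustration, not published SPEC results or tooling:

    # SPECratio: reference time divided by measured time, i.e. how many
    # times faster than the reference machine the tested system is.
    def spec_ratio(reference_time_s: float, measured_time_s: float) -> float:
        return reference_time_s / measured_time_s

    # SPECrate-style throughput: jobs of a specified type completed per hour
    # when 'copies' concurrent copies each take 'job_time_s' seconds.
    def spec_rate_jobs_per_hour(copies: int, job_time_s: float) -> float:
        return copies * 3600.0 / job_time_s

    print(spec_ratio(1000.0, 250.0))          # 4.0 (four times the reference)
    print(spec_rate_jobs_per_hour(4, 250.0))  # 57.6 jobs per hour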

Switchgear
Combination of electrical disconnects and/or circuit breakers meant to isolate equipment in or near an
electrical substation.

Temperature
• Dew Point: The temperature at which water vapor has reached the saturation point (100% relative
humidity).

• Dry Bulb: The temperature of air indicated by a thermometer.

• Wet Bulb: The temperature indicated by a psychrometer when the bulb of one thermometer is covered
with a water-saturated wick over which air is caused to flow at approximately 4.5 m/s (900 ft/min) to
reach an equilibrium temperature of water evaporating into air, where the heat of vaporization is
supplied by the sensible heat of the air.

Thermal Storage Tank
Container used for the storage of thermal energy; thermal storage systems are often used as a component
of chilled-water systems.

Tonnage
The unit of measure used in air conditioning to describe the heating or cooling capacity of a system.
One ton of refrigeration represents the amount of heat needed to melt one ton (2000 lb) of ice in 24 hours;
12,000 Btu/hr or 3024 kcal/hr equals one ton of refrigeration.
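
The 12,000 Btu/hr figure can be reconstructed from the latent heat of fusion of ice, approximately
144 Btu/lb:

\[ \frac{2000\,\text{lb} \times 144\,\text{Btu/lb}}{24\,\text{h}} = 12{,}000\,\text{Btu/h} \approx 3.517\,\text{kW} \]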

Turn-Down Ratio
Ratio representing highest and lowest effective system capacity. Calculated by dividing the maximum
system output by the minimum output at which steady output can be maintained. For example, a 3:1
turn-down ratio indicates that minimum operating capacity is one-third of the maximum.

UPS, Static
Typically uses batteries as an emergency power source to provide power to IT and peripheral facilities
until emergency generators come on line.

Uptime
• Uptime is a computer industry term for the time during which a computer is operational. Downtime is
the time when it isn’t operational.

• Uptime is sometimes measured as a percentage. For example, one standard for uptime that is
sometimes discussed is a goal called “five 9s”; that is, a computer that is operational 99.999 percent of the
time.
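
The “five 9s” goal translates into a small absolute downtime budget; as a quick worked figure,

\[ \left(1 - 0.99999\right) \times 365 \times 24 \times 60 \approx 5.3\ \text{minutes of downtime per year} \]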

Valve
A device to stop or regulate the flow of fluid in a pipe or a duct by throttling.

VAV
Variable air volume.

Ventilation
The process of supplying or removing air by natural or mechanical means to or from any space; such
air may or may not have been conditioned.

VFD
Variable frequency drive: a system for controlling the rotational speed of an alternating current (AC)
electric motor by controlling the frequency of the electrical power supplied to the motor.
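
The energy-saving case for VFDs on fans and pumps rests on the affinity laws, under which flow varies
roughly in proportion to speed and power roughly with the cube of speed (an idealization that neglects
static pressure and motor/drive losses):

\[ \frac{P_2}{P_1} \approx \left(\frac{N_2}{N_1}\right)^3 \]

A fan slowed to 80% of full speed therefore draws roughly 0.8³ ≈ 51% of full-speed power.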

Virtual Private Network (VPN)
The use of encryption in the lower protocol layers to provide a secure connection through an otherwise
insecure network, typically the Internet. VPNs are generally cheaper than real private networks using
private lines but rely on having the same encryption system at both ends. The encryption may be
performed by firewall software or possibly by routers.

Virtual Server
• A configuration of a networked server that appears to clients as an independent server but is actually
running on a computer that is shared by any number of other virtual servers. Each virtual server can be
configured as an independent Web site, with its own hostname, content, and security settings.

• Virtual servers allow Internet service providers to share one computer between multiple Web sites
while allowing the owner of each Web site to use and administer the server as though they had complete
control.

Virtual
Common alternative to logical, often used to refer to the artificial objects (such as addressable virtual
memory larger than physical memory) created by a computer system to help the system control access
to shared resources.

Volatile Organic Compounds (VOCs)
Organic (carbon-containing) compounds that evaporate readily at room temperature; these
compounds are used as solvents, degreasers, paints, thinners and fuels.

VSD
Variable speed drive: a system for controlling the rotational speed of either an alternating current (AC)
or direct current (DC) motor by varying the voltage of the electrical power supplied to the motor.

Workload
The workload is the definition of the units of work that are to be performed during a benchmark run.
