Critical Power and Cooling – DATA CENTER nr 3 – 2011
Data Center Magazine – for IT professionals. Visit: http://datacentermag.com/newsletter/
In this issue:
• Interview with Margaret Lewis – Director of Commercial Solutions and Software Strategy at AMD
• Interview with Lex Coors – VP of Group Data Center Technology & Engineering, Interxion
• Power and Cooling in the Data Center – Michael Weldon
• Are You Managing the Energy & Airflow
• Power Usage Effectiveness (PUE): Pros and Cons – Greg More
Erik Schipper writes about Cooling Failures and Power Outages, and Dhananjay D. Garg takes up the Hot Aisle vs. Cold Aisle topic in our Basics section. For those who are particularly interested in security, we have a special article on Social Networking Security (one of the hottest topics nowadays) written by Stephen Breen.
You can also read expert statements from representatives of companies such as Raritan, Interxion, AMD and other leaders in the data center field.
Magdalena Mojska
Editor in Chief
http://datacentermag.com
Data Center Magazine team
Editor in Chief: Magdalena Mojska, [email protected]
Editorial Contributors: Erik Schipper, Stephen Breen, Dhananjay D. Garg, Mahdi Jelodari, Richard C. Batka, Michael Weldon
DTP: Marcin Ziółkowski, Graphics & Design Studio, tel.: 509 443 977, [email protected], www.gdstudio.pl
Special thanks to: Sean Curry, Stephen Breen, Erik Schipper, Jana Cokeley, Alberto Jose Aragon Alvarez
Senior Consultant/Publisher: Paweł Marciniak
CEO: Ewa Dudzic, [email protected]
Production Director: Andrzej Kuca, [email protected]
Marketing Director: Magdalena Mojska, [email protected]
Publisher: Software Press Sp. z o.o. SK, 02-682 Warszawa, ul. Bokserska 1, Phone: 1 917 338 3631, www.datacentermag.com
NEWS
Over the past few years there has been much talk of Britain's CIOs rethinking their data centre location strategies and heading for the hills well away from the increasingly space- and power-strapped London and South East. The tale of a Promised Land offering much lower costs, access to plentiful and resilient power, high speed fibre connections and less exposure to terror threat is well-known. The big problem is that it has remained exactly that – a land full of promises – leaving many London-bound CIOs to ponder "Where can I move to today that's any better?"

For all the talk and hype about new out-of-town builds, most have yet to make it any further than the planning stages. Little wonder therefore that most of the clamor to reject London as the epicentre of the data centre market in favour of more remote locations would appear to have fallen on deaf ears.

In defence of the 'Londonites', until the last year or so there has simply been insufficient ready-to-go stock for CIOs to seriously consider and go out and see in terms of well out-of-town facilities capable of supporting the multifarious data centre requirements of the modern enterprise organisation, hosting provider or carrier.

Unless one can actually see and touch the proposed out-of-town facility and receive cast-iron guarantees on available space and the cost savings to be had, as well as SLAs on more plentiful and resilient power supply, low cost remote diagnostics to keep the server huggers happy, a wide choice of on-site high-speed low latency fibre interconnects, plus demonstrable top-notch physical security, why would anyone take the risk of moving from their London area sites? After all, it's always been the place to be.

However, once such remote-sited, lower cost, all singing all dancing alternative facilities are seen to be available – and they are now coming through – the data centre location dilemma miraculously goes away in most cases, except for a very small minority of traders who just have to be within thirty odd miles of London for reasons of nano-second latency. The majority of CIOs now recognise that for the long-term health of their businesses and wealth of corporate shareholders they must get out of town rather than sticking within the M25 when expanding or consolidating their data centre estates.

A recent Jones Lang LaSalle Data Centre Barometer report is yet another indicator showing the penny has already dropped for most concerned. More than 70 per cent of corporates in Europe now agree that location of data centres over 200 km away from existing facilities is acceptable provided there is at least a 20 per cent reduction in operational costs on the table. This is a big shift in opinion over the previous quarter's survey, where it was still around half and half.

In many parts of the world, and especially in North America, large-scale data centre migration to out-of-town locations has already occurred and is widely accepted as being best practice. In the UK, city centre sites are exhausted, power is capped, choices are constrained and costs are high. As larger and higher calibre modern facilities become available well away from London, it is only a matter of time before CFOs and shareholders of UK-based companies start to insist upon less risky and more cost-effective corporate data centre location strategies.

Twenty years of conventional wisdom, dictating London is data centre central for reasons of latency, sheer convenience and no other alternative, is finally past its sell-by date. London cannot continue as the hub of the data centre marketplace. It's time to get right out of town.

Data Centre Barometer – Jones Lang LaSalle: www.joneslanglasalle.co.uk/datacentres

Simon Taylor is chairman and co-founder of Next Generation Data Limited, the owner and operator of NGD Europe, one of the world's largest data centres. www.nextgenerationdata.co.uk

Editor contact:
Nigel Parker
Strategic PR
Tel: +44 1494 434434
[email protected]
NEW YORK, March 2011 – Next Generation Data Limited, operator of the largest data centre in Europe, today announced it will be opening an office in New York City to market its services to American companies planning to locate computer operations in the UK.

"When American executives see what we have to offer, they understand the benefits immediately," said Simon Taylor, chairman of NGD. "This is an ideal spot for a US company doing business in the UK."

NGD Europe is on a 50-acre campus near Newport, South Wales, about two hours' drive from London, near the Celtic Manor Club where the 2010 Ryder Cup was played. It includes a three-storey 750,000 square foot main building with the space and flexibility to house multiple custom-built data halls as well as containerised computing operations. A separate 150,000 square foot building is also available.

The entire site is protected by three layers of barbed wire and other security features, such as double and triple skinned walls; bomb proof glass; prison grade perimeter fencing; anti-ram bollards; infra-red detection; CCTV; and ex-army security guards.

All of NGD's power comes from alternative renewable energy sources, making it the first data centre in Europe to run entirely on green energy. The site also has its own power station with a direct connection to the national grid to ensure redundant energy sources.

The centre has multiple, redundant fibre optic connections into and out of the site in southeast Wales. Being a carrier neutral facility, customers can select from a variety of high speed, low latency carrier network offerings for the network connections appropriate to their needs.

A recent agreement with Surf Telecoms links NGD to Surf's fault tolerant fibre ring network in the southwest of the UK to deliver highly competitive, reliable and secure regional and nationwide Ethernet, carrier class bandwidth, SDH leased line, optical wavelength and dark fibre services.

The Surf network gives NGD access to direct fibre connections to submarine cables in the South West of England for customers who require transatlantic communications, bypassing backhauling traffic to London for improved resilience, speed and reduced latency.

The New York office will open in the third quarter of this year and will be the base for a sales team serving the east coast. NGD already has a representative marketing the company to potential west coast customers.

BT and Logica are already operating out of the Newport data centre under contracts worth $20 million. NGD expects to have 20 percent of its space rented by the end of this year.

Taylor said NGD's board is discussing the possibility of an initial public stock offering toward the end of this year, or in 2012, either in the UK or the US. The IPO would fund growth that the company expects in the next five years.

About NGD Europe
NGD Europe is able to support High Density Environments for such power-hungry and intensive applications as Cloud Computing, Super Computing, Grid Computing and Containerized Computing, and is ideally suited to large organizations wishing to consolidate their existing space- or power-restricted data centre operations.

NGD Europe is a purpose-built Tier 3 facility offering 750,000 square feet of highly secure and cost-effective space housing up to 19,000 racks that can be arranged into self-contained data halls of various sizes, all with independent services, resilient power and cooling systems. NGD Europe's environmentally friendly, high level technology infrastructure has been designed to meet and exceed the ever increasing demand for more computing power, and among its many features are a high capacity 180 MVA power supply direct from the super grid and a variety of on-site high-speed, low latency carrier interconnects.

For further information, visit www.nextgenerationdata.co.uk

Editor's contact:
Nigel Parker
Strategic Public Relations
+44 (0)1494 434434
[email protected]
www.twitter.com/strategicpr
Improved cooling efficiency for raised floors:
IT experts at BASF seal cable openings with special foam in order to assure server climate control

As data centers all over the world grow in size, their energy requirements increase as well – and so do the costs for their operators: around twelve per cent of total expenses are now attributed to energy, as shown by new figures released by IT market researcher Gartner Inc. According to the analysts, 35 to 50 per cent of this figure is spent on server cooling. Although air conditioning systems are becoming more and more efficient, the increasingly powerful racks give off more heat. A major problem here is the mixing of cooling air with hot waste air from the systems. Yet sealing the cable openings in the raised floor airtight could already reduce cooling costs by a tenth. To prevent this kind of energy loss while at the same time ensuring server availability, BASF has recently started using a new kind of foam floor tile at its data center in Ludwigshafen. These so-called Clima-Tect® panels, manufactured by Hanno, prevent up to 99.9 percent of air losses – and can be fitted by hand in minutes.
Unwanted exchange between warm and cold air is currently one of the biggest challenges in the climate control of data centers. Vortexes of warm and cold air, in particular due to cable openings in the raised floor, change the flows and the pressure in the space between the floors. The air exchange also warms up the cooling air. This disrupts the server cooling, causing so-called hotspots – accumulations of heat – which can lead to malfunctions or even server failures. As a result, more energy is required to maintain the necessary flow of cold air. A study carried out by the Innovation Center for Energy at the Technical University of Berlin concerning new concepts in data centers therefore lists optimizing the distribution of cold air – in particular the prevention of bypasses – as one of the first quickly realisable energy saving measures.

Special foam seals cable openings within minutes
In BASF's Ludwigshafen data center, the cable openings in the raised floors were previously sealed with aluminium panels and elastomer foams to prevent turbulence from arising in the air cooling of the around 4,000 servers. Server reliability is the be-all and end-all for the IT specialists, who not only manage the IT applications of this chemical company but are also service providers for other companies. Consequently, the demands on server climate control are high, as is the interest in innovative solutions. In a large-scale trial in the summer of 2010, BASF switched to a new material especially developed for sealing.

The sealant Clima-Tect®, based on the melamine resin foam Basotect® from BASF, is specially prepared by Hanno Werk GmbH & Co. KG, based in Laatzen: the base material, which is already heat-insulating and also remains flexible at low temperatures, is stamped on both sides to compress it and make it almost airtight. To facilitate installation in the data center, the foam is cut into 620 x 620 mm (24.4" x 24.4") panels – based on the standard 600 x 600 mm (23.6" x 23.6") dimensions of double floor tiles. These can also be cut to size with a regular cutter or adapted to special shapes or sizes. Since the material is elastic, the 5 centimetre (2") thick floor tiles can be pressed into the provided openings without much force, where they are secured when they expand. This way, the panels can be installed in a few minutes, even while the servers are running.

Flame-resistant, dust-free and skin-friendly
The uncomplicated installation was one of the main reasons for Jürgen Müller, responsible for the infrastructure of BASF data centers, for choosing Clima-Tect®: "Compared to the other materials that we previously used here it is very easy to use, as it can be cut to an exact size and is dimensionally stable". The panels also have a perforation with 2 x 2 cm (0.8" x 0.8") edges, which allows cutting out undersized holes with precision for the cable route. The foam itself fits tightly to the form of the cable harness. However, if larger openings remain, these can be sealed with single Clima-Tect® corners, whose rough surface ensures a stable hold. Air loss is reduced by up to 99.9 percent in total.

Another important criterion was the safety of the material, as Müller explains: "One of our requirements was that the foam must be allowed to be used in the data center from a fire-safety perspective. In other words, the foam must be flame resistant". Here, a special feature of the Basotect® base material took effect: the melamine resin foam achieved the highest possible classification for organic materials in fire behaviour tests and was assigned to fire protection category V0 according to U.S.
guidelines UL94, Class B1 according to the German norm DIN 4102, and Class C s1 d0 based on the European DIN EN 13501-1. The material is temperature-resistant to 240°C and does not melt or drip when making contact with fire, but instead chars slowly without an afterglow.

In addition, the Clima-Tect® panels have a high resistance to abrasive wear due to their thickness and stamping, which is a very important factor: "There must not be any abrasion, as the abraded matter would be drawn into the fans in the server hardware", says Müller. For the same reason, Hanno ensured that, with an age resistance of 15 years, the non-fibrous special foam does not raise dust or crumble. "In regard to our employees and the environment, we paid special attention when choosing and processing the material so that Clima-Tect® is sound absorbing, skin-friendly, environmentally suitable and halogen-free", adds Hans J. Hoffmann, the CEO of Hanno.

Using a low-tech solution to tackle increasing energy consumption
Due to data centers' increasing thirst for energy, saving solutions are urgently required: between 1998 and 2008, the energy consumption of data centers in Germany increased five-fold, as revealed by a study by the Berlin Technical University. A study published in 2006 by the American Environmental Protection Agency estimated that in 2011 as much as three percent of all energy consumption in the USA will come from data centers. According to the market research company Gartner, a low-tech solution such as sealing cable holes would already reduce the necessary cooling output by ten percent. With these energy savings, combined with its simple, cost-effective installation, Clima-Tect® will pay for itself within a month of usage in an average data center that did not have a noteworthy sealing method before. Data center planner and fitter Prior1 recommends the material in its new guide for data centers.

However, for BASF it was not about making savings, as they had already achieved good sealing results before. In the opinion of infrastructure expert Müller, the new material convinced rather with its properties: "It just perfectly fits this application." By now the company is also planning to use Clima-Tect® in other data centers.

Image caption: Since the summer of 2010 BASF has been using Clima-Tect® panels to seal the cable openings in its data centers in Ludwigshafen. Source: BASF IT Services Holding GmbH
Image caption: An important consideration for BASF when selecting the material was that it is non-abrasive and does not raise dust, to prevent particles from being sucked into the hardware. Source: BASF IT Services Holding GmbH
Image caption: The melamine resin foam used is elastic, allowing the panels to fit tightly to every cable bundle; it also ensures a good seal from the edges to the raised floor. Source: Hanno Werk GmbH & Co. KG
Image caption: The fire load is a crucial feature of all materials installed in data centers. Because it is flame resistant and does not melt, Clima-Tect® achieved the highest fire protection classes for organic materials: V0 according to US guidelines UL94 and B1 according to DIN 4102. Source: Hanno Werk GmbH & Co. KG
Image caption: The partitioning in 2 x 2 cm (0.8" x 0.8") grids makes it possible to cut precise holes in the panels for cables. The foam can also be cut to any size with the cutter. Source: Hanno Werk GmbH & Co. KG
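To put the Gartner figures quoted in this article into rough perspective, the shares can simply be multiplied out. The sketch below (Python) is purely illustrative: the percentages are the ones cited above, the mid-point of the 35-50 per cent range is an assumption, and the result is an order-of-magnitude estimate, not a BASF or Gartner figure.

# Illustrative only: what a 10% cut in cooling output could mean overall.
energy_share_of_expenses = 0.12        # ~12% of total expenses go to energy
cooling_share_of_energy = 0.42         # assumed mid-point of the quoted 35-50%
cooling_cut_from_sealing = 0.10        # ~10% less cooling output needed

saving = energy_share_of_expenses * cooling_share_of_energy * cooling_cut_from_sealing
print(f"Sealing alone trims roughly {saving:.1%} of total operating expenses")
# -> about 0.5% of total expenses, i.e. roughly 4% of the energy bill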
Under the brand "Information Services – connecting to create value", BASF combines its many years of experience in the design, construction, integration, optimisation and management of IT systems. In Europe, these IT services are provided by BASF IT Services Holding GmbH. The company has its headquarters in Ludwigshafen, is a wholly-owned subsidiary of BASF, has around 2,300 employees and had an annual turnover of around 360 million Euros in 2009.

Hanno Werk GmbH & Co. KG was formed in 1895 as Hannoversche Filzfabrik M. Mehlhardt and has its headquarters in Laatzen near Hanover. The company specialises in sealing and sound insulation products. It manufactures sealing tapes and sealing agents for structural engineering and automotive production, as well as materials for technical noise protection in machine construction or for improving room acoustics. The company's range of products includes foams for various applications, e.g. conductive foams for transporting pre-assembled PCBs. Since the end of 2010, Hanno Werk has been offering special sealing panels for data centers with its Clima-Tect®. The company has around 120 employees.

More information for readers: www.information-services.basf.com, http://www.clima-tect.de/englisch/index.htm

BASF IT Services Holding GmbH
Jaegerstr. 1, 67059 Ludwigshafen
Phone: 0621 60-99550, Fax: 0621 60-99555
Email: [email protected]
Internet: www.information-services.basf.com

Hanno Werk GmbH & Co. KG
Hanno-Ring 5, 30880 Laatzen
Phone: 05102 7000-0, Fax: 05102 7000-102
Email: [email protected]
Internet: www.clima-tect.de

Hanno VITO Corp.
75 Broad Street, 21st Floor
New York, NY 10004 – USA
Phone: +1-646-405-1038, Fax: +1-646-405-1027
E-mail: [email protected]
Internet: www.hanno-vito.com

More information for editors:
Gebhardt-Seele Press Office
Leonrodstraße 68, 80636 Munich
Phone: 089 500315-0, Fax: 089 500315-15
Email: [email protected]
Internet: www.gebhardt-seele.de

Reproduction free of charge as long as source is named. Ask for courtesy copies.
Force10's Carbon-Balanced Top-of-Rack Switch Initiative

About the Initiative
As part of its ongoing environmental commitment, Force10 is purchasing and retiring carbon credits on behalf of customers purchasing selected models of its S-Series™ 10/40 Gigabit Ethernet (GbE) top-of-rack (ToR) access switches and its E-Series™ ExaScale core switch.

S-Series S4810
This program will enable Force10 customers who buy the S4810 top-of-rack switch to benefit from carbon credits, which offset greenhouse gas emissions associated with the energy consumption of the S4810 for the estimated useful life of the switch (i.e., five years).
Carbon credits are supplied by TerraPass, a third-party carbon offset provider. TerraPass funds clean energy and carbon reduction projects throughout the U.S., including wind power, farm power, and landfill gas capture. 100% of TerraPass projects are verified annually against broadly accepted standards by independent third party verifiers, ensuring the highest quality offsets available.

How Was This Calculated?
The total power consumption for a typical S4810™ unit is approximately 0.193 kilowatts (kW). Assuming that this is running 24 hours a day, 365 days a year, we then figure out the "carbon footprint". The national average of the electrical grid's carbon intensity is 1,343 pounds of carbon dioxide per megawatt hour (MWh). When all these factors are multiplied together, we get an average footprint of 2,276 lbs of CO2 per year, or 11,379 pounds of CO2 over 5 years (the estimated useful life of the product). This is the equivalent of the emissions from 1.1 cars for an entire year (a typical car averages 12,000 miles annually and produces about 10,000 lbs of CO2).

About the S4810
The S4810 10/40 GbE ToR switch is purpose-built for applications in data center and computing environments that require the highest bandwidth and lowest latency. Leveraging a non-blocking, cut-through switching architecture, the S4810 delivers line-rate Layer 2 and Layer 3 forwarding capacity with ultra-low latency to maximize network performance. The compact S4810 design provides 48 dual-speed 1/10 GbE ports as well as four 40 GbE uplinks to conserve valuable rack space.

E-Series ExaScale
For every unit of its ExaScale product sold during its current fiscal year (beginning October 1, 2010), Force10 will retire verified carbon credits to balance the greenhouse gas emissions resulting from the ExaScale's electricity consumption for an entire year, and will offer its customers the opportunity to retire carbon credits to balance emissions in subsequent years. Force10 will offset the associated carbon footprint for the customer through TerraPass, a third-party carbon offset provider. TerraPass utilizes those offsets to help fund carbon reduction projects throughout the U.S., including wind power, farm power and landfill gas capture.

How Was This Calculated?
The total power consumption for an average base ExaScale configuration is approximately 3.85 kilowatts (kW). Assuming that this is running 24 hours a day, 365 days a year, we then figure out the "carbon footprint". The national average of the electrical grid's carbon intensity is 1,343 lbs per MWh. When all these factors are multiplied together, we get a total footprint of 45,259 lbs of CO2 per year, or the equivalent of 4.5 cars (a typical car averages 12,000 miles annually and produces about 10,000 lbs of CO2).

About the ExaScale
The Force10 Networks ExaScale platforms are virtualized chassis-based switches that enable a new way of designing switching and routing infrastructures. ExaScale's architecture and patented backplane and ASIC technology are designed to increase network availability, agility and efficiency while reducing power and cooling costs. ExaScale supports mission critical applications across converged fabrics in data center, telecommunication provider, service provider, enterprise and HPCC networks.
Force10's ExaScale platform coupled with FTOS makes a cost-effective and flexible deployment option, complete with comprehensive management, automation and resource provisioning capabilities, at half the power of the Cisco Nexus 7000 and 23% less power than Juniper in line-rate Gigabit Ethernet (GbE) and 10 GbE configurations – a switching requirement that is common in most data center networks.
Moreover, Force10 recently joined the Climate Savers Computing Initiative (CSCI). Since its formation in 2007, CSCI has grown to some 645 members, including large commercial enterprises and technology industry stakeholders, who are focused on reducing the energy consumption of computers, servers and storage devices.

Force10's Virtualized Network Fabric Framework
• VirtualView™, a suite of monitoring and provisioning features, optimizes the performance of virtualized and distributed networks
• VirtualScale™ reduces TCO by consolidating the physical network and virtualizing boundaries
• VirtualControl™ manages, controls and secures multiple switching and routing domains on one physical platform

Key Applications
• Virtualized data center and cloud computing networks with multiple switching or routing domains in local or geographically disparate environments
• High capacity 40 GbE*, 10 GbE and 1 GbE switching/routing in high performance Layer 2, IP and MPLS core or aggregation networks
• HPCC networks with non-blocking and deterministic N:N and N:1 management and storage I/O requirements

About Force10 Networks
Force10 Networks is a global technology leader that data center, service provider and enterprise customers rely on when the network is their business. The company's high performance Ethernet switching and routing solutions virtualize and automate Ethernet networks to deliver new and distinct economic advantages by increasing network availability, agility and efficiency while reducing power and cooling costs. Force10 provides 24x7 service and support capabilities to its global customer base in more than 60 countries worldwide. For more information on Force10 Networks, please visit www.force10networks.com.
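As a quick cross-check of the arithmetic in the two "How Was This Calculated?" sections above, the quoted figures can simply be multiplied out; the sketch below (Python) uses only the wattages, grid intensity and per-car figure cited in the text, and the small differences from the published totals (2,276 and 45,259 lbs) are just rounding in the source.

# Re-running the carbon footprint arithmetic from the article's own figures.
HOURS_PER_YEAR = 24 * 365                 # 8,760 hours of continuous operation
GRID_LBS_PER_MWH = 1343                   # quoted US grid carbon intensity
CAR_LBS_PER_YEAR = 10_000                 # quoted emissions of a typical car

def annual_co2_lbs(power_kw):
    """CO2 in pounds per year for a device drawing power_kw continuously."""
    return power_kw * HOURS_PER_YEAR / 1000 * GRID_LBS_PER_MWH

s4810 = annual_co2_lbs(0.193)             # typical S4810 draw (kW)
exascale = annual_co2_lbs(3.85)           # average base ExaScale configuration (kW)
print(f"S4810: {s4810:,.0f} lbs CO2/year, {5 * s4810:,.0f} lbs over its 5-year life "
      f"(~{5 * s4810 / CAR_LBS_PER_YEAR:.1f} car-years)")
print(f"ExaScale: {exascale:,.0f} lbs CO2/year (~{exascale / CAR_LBS_PER_YEAR:.1f} car-years)")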
2011 will be the year of the Enterprise and a year of further industry change. We have reflected the focus on enterprise markets in the outlook and content of the event. Not only that – now in its 7th year, the data centres event for Europe has transformed into a global meeting place and forum. With more than 30 countries represented in 2010, we have teamed up with organizations in Europe and North America whose memberships comprise this valuable audience, to extend the breadth of audience and the horizon of opportunity.
Data Centres 2011 uniquely brings together key segments engaged in the sector, including: enterprises seeking insight into efficiency, cost reduction and innovation, but also how to increase agility and business opportunity; third party data centre and outsourcing providers, managed services and Cloud service providers; solutions and equipment vendors, offering products and services to the enterprise and data centre markets; consulting firms, professional intermediaries, law firms; and financiers and investors in the data centre sector.
Join Europe's premier data centre conference, exhibition, solution sessions, insight tracks and workshops and experience a comprehensive educational insight into technology, service and business futures for data centres. Network with peers, and gain value from the largest exhibition and trading floor for European and international firms.
When? 5-6 May, 2011
Where? Palais de Congrès, Nice, France
Gartner Data Center Summit delivers tactics and strategies to address your hottest Data Center, Infrastructure and Operations issues: next-stage virtualization, the impact of cloud computing, best practices in cost optimization, disaster recovery planning, managing escalating energy costs, the aging infrastructure, and more. We'll focus on "must-take" actions to help you increase effectiveness as a Data Center leader.
When? 5-6 April, 2011
Where? Sheraton WTC Hotel, Sao Paulo, Brazil
Cloud computing has emerged as one of the most important technology trends of 2011. Providing virtualized resources over the internet enables users to leverage key technology infrastructure "in the cloud" without requiring direct management and expertise. IT organizations see cloud solutions as a way to reduce costs and complexity. Yet, for those choosing managed solutions, service levels and security are a challenge. For those building private cloud infrastructures, constructing flexible, highly available, and cost effective architectures is difficult. Using real case studies as examples, we'll walk through some best practices and new technologies that will make cloud initiatives more successful.
When? 18-22 April, 2011
Where? Day 1 – Mountain View, CA; Days 2-5 – Virtual
CONVENTIONAL COOLING
The conventional cooling method uses cold air from computer room precision air conditioning (CRAC) units, delivered via an enclosed space in which the air pressure is higher than outside. This enclosed space is located under a raised floor. See Figure 1. Following are the characteristics of the conventional cooling approach:

• Perimeter cooling: CRAC units are placed outside the rack rows, around the perimeter of the data center.
• Raised floor: Cold air is delivered to the rows of racks from under the raised floor.
• Mixing of cold and hot air: While hot air makes its way back to the CRAC unit, the hot exhaust air from IT equipment mixes with the cold air.
• Oversized equipment: Data center efficiency is reduced in conventional cooling due to the use of oversized power and cooling components.

Figure 1. Conventional Cooling with Raised Floor

Earlier, rack densities were as low as 2 kW per rack, energy costs were negligible and oversized hardware was common practice in data centers. These are the main reasons why conventional cooling methods were widely accepted. But today many data center professionals have recognized that the conventional cooling method is inefficient and costly for the organization, and so have implemented the aisle containment system.
Aisle containment works as a remedy for the shortcomings of the conventional cooling method. But before installing aisle containment we need to address some common issues for improving the overall cooling system energy efficiency.

SOME COMMON MEASURES
• Seal the Data Center Environment: Minimizing outside air intake, a good vapor barrier and no leaks around the doors and windows will ensure that the total cooling available is used to cool only the computer heat load, and will also minimize the moisture in the data center environment. Cable penetrations, perimeter penetrations and floor tile joints should also be sealed properly.
• Air Flow Optimization: Air flow can be optimized by making sure that the rows of racks face each other, with the front of each opposing row of racks drawing cold air from the same aisle. This is possible because most equipment manufactured today is designed to draw in air through the front and exhaust it out the rear. The temperature of the air returning to the CRAC unit increases because the hot air from two rows is exhausted into a hot aisle, which allows the unit to operate more efficiently.
• Using Blanking Panels: Blanking panels are used to prevent the hot air from circulating to the front of the rack
b. Improved Flexibility: HAC doesn't deliver any hot air to the outside room and thus it does not impact the temperature of the surrounding room. In a CAC system, the temperature of the room outside the containment will rise as the hot air mixes with the air outside of the cold aisle. HAC with internal cooling doesn't require the installation of ductwork, so it can be implemented without making any considerable changes to the existing data center cooling architecture.
c. Support for Cooling Failure: In case of a cooling failure, the temperature inside a data centre using CAC will increase rapidly, while with HAC the servers will draw cool air from outside the contained hot aisle, thus extending the available runtime. The runtime could be minutes in a HAC system while it could be seconds in a CAC system.

2. Disadvantages:
a. Unfocused Cooling: In HAC with external as well as internal cooling, the cold air distribution that exhausts to the server rack inlet from the cooling unit is "open" and is dependent on surrounding area conditions and equipment.
b. High Air Temperatures: HAC limits the mixing and leaking of cold air into the hot aisle. This containment method can occasionally increase the air temperature to an unacceptable level for working in the aisle.
c. Difficult to Install: HAC with external cooling cannot be installed in an already existing data center without interrupting the data center operation, because ducting is required.
d. Occupies floor space: In HAC with internal cooling, the cooling units are placed between the racks in the middle of the data center; this occupies much-needed floor space.
e. Hot air ducting: In HAC with external cooling, the returning hot air must be ducted all the way from the HAC to the air inlet of the CRAC to avoid the mixing of hot and cold air. Thus, ducting needs to be implemented between the HAC and the ceiling enclosed space, and between the ceiling plenum and the CRACs. Also, to implement ducting there must be overhead space available.
f. Cannot work with raised floor cooling: HAC with internal cooling can be used in a raised floor installation, but not with raised floor cooling, because unless there is hot air ducting all the way to the air inlet of the CRACs, the raised floor cannot be expected to provide cooling for the racks.

Cold Aisle Containment (CAC)
1. Advantages
a. Increase in Cooling Capacity and Efficiency: CAC with external cooling follows the focused cooling approach and, due to this, a higher air temperature leaves the cooling unit. In CAC with external cooling, the cold aisle is contained with the ceiling panels above the aisle between adjoining racks and with doors at the end of the aisle. Thus, the cold air is delivered to the electronic equipment air inlets through the perforated floor tiles in front of the cabinets. CAC therefore increases the cooling unit capacity and, as stated in [1], is capable of cooling a 10 to 15 kW heat load per rack.
CAC with internal cooling also follows the focused cooling approach. Here, the cooling units are located inside the containment, above or between the racks, and the cooling unit takes the hot air directly from the hot aisle, cools it and delivers it to the cold aisle. In CAC with internal cooling, the cold aisle is contained with the top ceiling panels and doors at the end of the aisle. As described in [1], this approach can cool about a 30 kW heat load per rack.
b. Easily Installed: CAC requires less space and can be implemented without disturbing any data center operation. Installing CAC only requires the addition of cover panels above the aisles and doors at the end of the aisles.
c. Can be used with raised floor: Most data centers are using raised floors, and CAC (with internal as well as external cooling) can be implemented with the raised floor approach.
d. Floor space is available: CAC with internal cooling is available with non-water based cooling units (Liebert XDV or Liebert XDO), so no floor space in the middle of the data center is used for cooling equipment.

2. Disadvantages
a. Obstructions below the raised floor: CAC with external cooling uses the raised floor, adding cables, pipes and other obstructions below it. Limited cool air is delivered to the IT equipment because of these obstructions.
b. Temperature increase during cooling failure: CAC with internal cooling minimizes the cold air available to the servers by containing the cold aisle. If there is a cooling failure in the data centre, the reduced volume of cold air will result in a more rapid increase in air temperature.
c. All Cold Aisles Containment: If cold air is allowed to mix with the hot air, it will result in little benefit. Thus, in a CAC system all cold aisles must be contained so that the hot air temperature reaches its maximum level, allowing the cooling equipment to operate at much higher efficiency levels.
d. Sign of imminent Breakdown: Usually people experience a sudden sense of danger when exposed to
unusually high temperatures. In CAC with internal cooling, cold aisles are contained and the temperature in the rest of the room is hotter. Workers coming to the data center would naturally judge the conditions as a sign of imminent system breakdown. Thus, workers who are not used to entering a data center operating at high temperature need to be familiarized. See Figure 4.

Figure 4. Air Temperature with CAC

FIRE PROTECTION
Every data center needs to contact the local fire authority to discuss any aisle containment plans and needs to put them in place. The usual method of controlling fire is heat-activated sprinklers or gaseous agents which are initiated by smoke detectors.

CONCLUSION
Cooling today is a hot topic in data center circles. The primary strategy for increasing data center efficiency and cooling is to prevent the mixing of hot and cold air. Compared to conventional cooling methods, both HAC and CAC can offer increased power density and efficiency to today's high heat data centers.
HAC allows the transfer of the hottest air into the coolers and it doesn't affect the outside room temperature. HAC with internal cooling can address high IT density requirements because it is efficient and flexible. HAC consumes significant energy and can also add to the cooling load of the room.
CAC supplies cold air to the servers by separating hot and cold air. Existing data centers with raised floor architecture can easily make use of CAC to create highly efficient cooling solutions. CAC offers focused cooling and can be implemented with or without conventional raised floor cooling.
If an organization with an existing data center is willing to shut down for some time, or is willing to spend a considerable amount of money, then HAC can be chosen. Otherwise, CAC can be used to reduce energy costs, minimize hot spots and improve the carbon footprint of the data center.

References
1. Focused Cooling Using Cold Aisle Containment, A White Paper from the Experts in Business-Critical Continuity, Emerson Network Power, 2009
2. Compaq server installation guide…
3. ASHRAE Environmental Guidelines for Datacom Equipment – Expanding the Recommended Environmental Envelope, 2008

DHANANJAY D. GARG
The author is an information security enthusiast who is based in India. He likes working on projects related to information security. As a freelance writer, he often writes on various topics related to computer sciences and can be contacted at [email protected].
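As a rough illustration of why the per-rack figures above matter, the airflow a rack needs scales directly with its heat load. The sketch below (Python) uses the standard sensible-heat relation for air, which is not a formula from this article; the 12 K supply-to-return temperature rise is an assumed value, and the per-rack loads are the ones quoted above and in [1].

# Approximate cooling airflow needed to carry away a rack's heat load,
# assuming standard air (density ~1.2 kg/m^3, cp ~1.005 kJ/(kg*K)).
def required_airflow_m3h(heat_load_kw, delta_t_k):
    rho, cp = 1.2, 1.005                           # kg/m^3, kJ/(kg*K)
    return heat_load_kw / (rho * cp * delta_t_k) * 3600

M3H_TO_CFM = 0.589                                 # 1 m^3/h is roughly 0.589 CFM
for kw in (2, 15, 30):                             # legacy rack, CAC external, CAC internal
    m3h = required_airflow_m3h(kw, delta_t_k=12)   # assumed 12 K air temperature rise
    print(f"{kw:>2} kW rack: ~{m3h:5.0f} m^3/h (~{m3h * M3H_TO_CFM:4.0f} CFM)")

Under these assumptions a 30 kW rack needs roughly fifteen times the airflow of a legacy 2 kW rack, which is why bypass and recirculation paths that were tolerable a decade ago become hotspot generators at today's densities.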
Planning to Virtualize your server environment?
Don't lose your cool

Virtualization technologies, now widely deployed in mainstream computing environments, are proving to be the most transformative force in business since the introduction of the personal computer.
As the PC did upon its debut, virtualization puts an exponential increase in computing power at the disposal of just about every organization, at radically lower cost, with incredible deployment flexibility and speed.
Yes, you can have virtual machines, by the dozens if you wish. But keep in mind, there must still be a physical server (or many servers) underneath them, packed with fast CPUs and boasting high I/O bandwidths, hooked into huge networks with the most advanced switches and storage devices.
More to the point, you can virtualize your server and storage systems, consolidating down to a fraction of your former physical hardware footprint, but in doing so you place all of your assets in one place – all your eggs in one basket – and make them very vulnerable.
The more you consolidate, the closer you come to having a single point of failure that can quickly turn even a small IT problem into a true disaster. And ironically, the problem might not be directly related to your servers. If, for example, your data center cooling system fails and your server reaches a temperature that triggers its thermal protection circuitry, you'll lose access to all of your computing resources until the breaker resets and the system reboots. In worst-case scenarios, you'll lose data, or damage the processor, motherboard or power supply.
If you are planning to virtualize or have done so already, you should weigh your high availability and disaster recovery requirements. No matter how advanced your data center is, no matter how fully virtualized your environment is, it is all based on real, tangible servers, storage, and infrastructure.
You must plan for, build, and test not just disaster recovery, but true high availability, because you've eliminated most of your redundant computing capacity by design. While VM checkpoint images and virtual-to-virtual (V2V) failover can provide some of the protection you need, there is still no substitute for keeping a fully redundant, constantly updated copy of your data at a physically remote secondary site. Building in availability at both levels, virtual and physical, must be an integral part of your migration project.

Bill Hammond directs Vision Solutions' product strategy for several of the company's information availability software solutions. Hammond joined Vision Solutions in 2003 with over 15 years of experience in product marketing, product management and product development roles in the technology industry.
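The single-point-of-failure argument above can be made concrete with a little series-availability arithmetic. The sketch below (Python) is purely illustrative: the per-component availability figures are invented for the example, not measurements or vendor claims.

# When every VM depends on the same power feed, cooling plant and host,
# the chain is only as available as the product of its parts (series model).
def chain_availability(*components):
    a = 1.0
    for availability in components:
        a *= availability
    return a

# Hypothetical availabilities for utility power, cooling and the host itself.
a = chain_availability(0.999, 0.995, 0.998)
print(f"Chain availability: {a:.4f} (~{(1 - a) * 8760:.0f} hours of downtime/year)")
# -> about 0.992, i.e. roughly 70 hours a year, shared by every VM on that host.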
SYSTOR 2011: The 4th Annual International Systems and Storage Conference
SYSTOR 2011, the 4th Annual International Systems and Storage Conference, promotes computer systems and storage research, and will take
place in Haifa, Israel. SYSTOR fosters close ties between the Israeli and worldwide systems research communities, and brings together academia
and industry.
To learn more, visit: http://www.research.ibm.com/haifa/conferences/systor2011/
Cooling Failures and Power Outages

Many companies rely on data centers to deliver business-critical services, and system downtime can involve very high costs. A recent US study [1] found that when IT systems fail, businesses' ability to generate revenue is reduced by 32%, and remains impaired after systems are recovered, while data is being restored.
The annual revenue loss is €222,000 for small companies, €450,000 for medium sized companies and €928,000 for large companies. For small businesses such an immense loss of revenue is catastrophic, as they tend to have small margins and difficult targets. If your systems are critical, the availability of your systems should be one of your major concerns. This holds for small companies as well as large companies.

Major causes of downtime
A recent study of data center outages [2] shows the following major causes of unplanned downtime:

• UPS battery failure 65%
• UPS capacity exceeded 53%
• Accidental EPO/human error 51%
• UPS equipment failure 49%
• Water incursion 35%
• Heat related/CRAC failure 33%
• PDU/circuit breaker failure 33%

Totals exceed 100 percent because data center professionals cited multiple root causes over a period of two years.
In this article we will focus on two major downtime causes: power failures and cooling failures. Let's start by examining some recent failures.

Some recent power and cooling failures

Major power outage in Provo, Utah
On the 25th of May 2010, customers of Bluehost reported being offline for several hours. The downtime was attributed to a major power outage in Provo, Utah, apparently due to human error during maintenance at a city-owned substation. Generators at the company's data center functioned properly and the facility never lost power. The problem: much of Provo lost phone service, apparently including the Internet connectivity for Bluehost. This clearly shows that for availability of your services you need more than your systems running at your datacenter.

Cooling failure at Wikipedia's data center
European Wikipedia servers went offline due to an overheating problem. To prevent overheating, servers were shut down. Failover to servers in Florida turned out to be broken, causing DNS resolution for Wikipedia to fail across the world for several hours.

Chiller failure in the City of London
On the 31st of May 2010, in the Braham Street data center located in the City of London and owned by Level 3 Communications, there was a chiller failure when one of the five units designed to cool the data center failed. Several servers within the datacenter itself overheated and crashed. The internal temperatures within the datacenter soared to an estimated 50°C. These soaring temperatures claimed at least one victim, bringing down servers belonging to the music service Last.fm for approximately five hours. In this case there was no preventive shutdown of the servers. The failure of a single chiller led to extreme temperatures and crashed several running servers.

Chiller failure at Nokia's data center
On the 12th of February 2009, a cooler broke down in the hosting center that runs Nokia's Ovi chat service. An overheated server crashed. This led to two catastrophic consequences: firstly, downtime of the service for a day; secondly, the database broke down. Despite the fact that they had regular backups, they were not able to set it right; the chiller failure led to a long period of downtime and data loss.

Power outage at a Primus datacenter in Melbourne
On the 2nd of February 2009, a major power outage at a Primus datacenter in Melbourne caused internet connectivity problems in Australia. Primus' primary generator at its data center failed to start properly after a utility outage. The outage caused loss of service for ISP customers including Westnet, iiNet, Netspace, Internode and TPG. The Primus facility was operating on a single generator. As the UPS battery time was exceeded, several servers were not shut down properly.
Would Tier compliance have helped in our failure cases?

Case | Tier 1 | Tier 2 | Tier 3 | Tier 4
Major power outage in Provo, Utah | No | No | No | No
Cooling failure at Wikipedia's data center | No | Maybe | Yes | Yes
Chiller failure in City of London | No | Maybe | Yes | Yes
Chiller failure at Nokia's data center | No | Maybe | Yes | Yes
Power outage at a Primus datacenter in Melbourne | No | No | No | No

Tier 1 compliance would not have prevented any of the cases. Tier 2 compliance could have prevented cases 2, 3 and 4, as these tiers offer some redundancy. Tier 3 would have prevented these three cases. Tier 4 compliance would have prevented all cases except case 1.
If uptime is critical for your organization, you should take care that your data center has a Tier compliance that suits your needs.

Choosing a Tier Level that suits your business
The "Guidelines for Specifying Data Center Criticality / Tier Level" [5] contains information that can help you choose the needed data center criticality. Typical levels of criticality for different business characteristics:

• Tier 1
– Typically small businesses
– Mostly cash-based, no online revenue generation
– Limited online presence
– Low dependence on IT
– Perceive downtime as a tolerable inconvenience
• Tier 2
– Some amount of online revenue generation
– Multiple servers
– IT systems vital to business
– Some tolerance to scheduled downtime
• Tier 3
– World-wide presence
– Majority of revenue from online business
– High dependence on IT systems
– High costs of downtime
– Highly recognized brand
• Tier 4
– Multi-million dollar business
– Majority of revenues from electronic transactions
– Business model entirely dependent on IT
– Extremely high cost of downtime

Choosing the proper Tier compliance for your data center would probably have prevented most failures. But working with a certified Tier 3+ data center costs more money. Until a severe failure like this occurs, many companies do not want to put forward the money that it would take to avoid the downtime.
There are additional measures you can take that would have prevented some of these failures. Furthermore, there are other ways to limit the damage. But first let's take a look at operating conditions.

Environmental Guidelines for Data Centers
The American Society of Heating, Refrigerating and Air-Conditioning Engineers, Inc. has written the "2008 ASHRAE Environmental Guidelines for Datacom Equipment" [6]. The 2008 recommended temperature range for the inlet air entering the equipment is 18°C - 27°C (64.4°F - 80.6°F). Exceeding these operating conditions can lead to a direct system crash, as seen in our cases, but it can also cause several other problems. In [6] we read:
"While the equipment may operate at air intake temperatures of up to 32°C (90°F) or at a relative humidity within 35% to 80% (..), it may not run reliably or at specified performance standards.
(...)
High Soft Error Rates, Erratic or Unrepeatable Information, and Outright Hardware Failures Can Result From Exceeding Recommended Environmental Recommendation"
There are many examples of hardware failure rates exceeding normal field experience by more than four times at a 35°C air intake temperature.
The high heat densities found in many modern data centers make them very sensitive to cooling failures. When cooling fails in such data centers, the intake temperature can soar to damaging levels within 60 seconds.
Virtually all high performance computer, communication, and storage products now incorporate internal thermal sensors that will automatically slow down or shut down processing when temperatures exceed predetermined thresholds. In practice, triggering of these sensors can lead to severe damage and extra downtime. As with a forced power off, software running on the servers will have no time to properly shut down and save internal buffers, and this can lead to damaged file systems, broken database systems, data loss, system crashes, etc.
Whether your data center can maintain the recommended operating conditions should be a major concern if uptime is critical for your organization.

How to avoid failures and limit the damage?
In this section we look into how we could avoid downtime. As downtime can never be avoided completely, we also look at additional measures we could take to limit the damage once cooling or power really fails.

• Choose the proper Tier level for your data center. First choice should be a certified data center. If certification is not an option due to money, availability in your region or other reasons, you should further investigate the details of the Tier level that best suits your needs. Second choice should be a data center that claims to support a criticality level that meets your needed Tier level.
• If your data center has no official Tier certification, you should check the redundancy levels they really support. Make sure that these redundancy levels are part of your agreement.
• Contact your data center, ask questions and ensure the answers suit your needed Tier level, for instance:
– If there is only one generator, do they regularly check if it starts?
– How long can the data center run on the UPS?
– How long can the data center run on generator power?
– Will the cooling still be available (it must be) when the datacenter runs on a generator?
– How much redundant cooling capacity is available?
• Knowing the importance of environmental conditions, ensure that your datacenter respects these conditions.
• Perform a site visit to the data center. While it can generally be assumed that a vendor has provided true and adequate information in the proposal or during the evaluation process, it is still essential to visit the facility and meet the vendor to add an extra layer of confidence.
• Add your own UPS to your equipment, and install proper software with the correct configuration to ensure that the system performs a neat shutdown once your UPS battery drains.
• Put software on your systems that monitors the operating temperature.
• When the normal operating conditions are violated, a system operator should be signaled. This would allow the operator to start a failover procedure.
• At some point there should be an automatic shutdown when the temperature goes beyond a certain threshold. Note that this would not have prevented the downtime in our chiller failure cases, but it would have prevented the system crashes and loss of data. This can help a lot in limiting the damage.
• Make sure you regularly check all backup, failure and failover procedures. Note that this could have prevented the downtime in the Wikipedia data center case.

You can further improve your uptime with the installation of temperature monitoring software, the addition of extra UPS equipment, and the usage of automatic shutdown procedures. A neat shutdown when power fails or heat sensors are triggered will prevent data loss and system crashes and limit the downtime damage. Regularly testing backup and failover procedures will further help to limit your downtime.

References
• [1] CA Technologies – The Avoidable Costs of Downtime – Research Report – 2011
• [2] Ponemon Institute – National Survey on Data Center Outages – 2010
• [3] Uptime Institute – Data Center Site Infrastructure Tier Standard: Topology – 2011
• [4] Telecommunications Industry Association – TIA 942 – Telecommunications Infrastructure Standard for Data Centers – 2005
• [5] American Power Conversion – Guidelines for Specifying Data Center Criticality / Tier Levels – Victor Avelar – 2007
• [6] American Society of Heating, Refrigerating and Air-Conditioning Engineers – ASHRAE Environmental Guidelines for Datacom Equipment – Expanding the Recommended Environmental Envelope – 2008
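To make the monitoring and shutdown advice above concrete, here is a minimal watchdog sketch (Python). It is illustrative only: read_inlet_temp_c() and notify_operator() are placeholders for whatever sensor interface and alerting channel a site actually has, and the thresholds simply echo the ASHRAE recommended maximum plus an assumed hard limit.

# Illustrative temperature watchdog: warn above the recommended range,
# perform a neat shutdown at a hard limit to avoid crashes and data loss.
import subprocess
import time

RECOMMENDED_MAX_C = 27.0   # upper end of the 2008 ASHRAE recommended inlet range
SHUTDOWN_LIMIT_C = 32.0    # assumed hard limit; choose your own threshold

def read_inlet_temp_c():
    # Placeholder: query your temperature sensor, BMC or monitoring agent here.
    raise NotImplementedError("hook up a real sensor")

def notify_operator(message):
    # Placeholder: replace with mail, SMS or monitoring-system integration.
    print(message)

def watchdog(poll_seconds=30):
    while True:
        temp = read_inlet_temp_c()
        if temp >= SHUTDOWN_LIMIT_C:
            notify_operator(f"Inlet at {temp:.1f} C - performing a neat shutdown")
            subprocess.run(["shutdown", "-h", "now"])   # clean halt (needs privileges)
            return
        if temp > RECOMMENDED_MAX_C:
            notify_operator(f"Inlet at {temp:.1f} C - above the recommended range")
        time.sleep(poll_seconds)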
The computer environment will only continue to increase in density. How is this ever-changing technology going to help provide a flexible foundation for us? Just as SANs allowed for tape library sharing and later expanded for spinning disk growth, NAS provided file sharing and ease of growth... and the paradigm changed before our eyes. Virtualization allowed for mission critical abilities across multiple servers, or more clients on one server... and the paradigm changed before our eyes. Tiered storage platforms allowed high performance computing data to be ingested on the fly and migrated to lower cost disk platforms... and the paradigm changed before our eyes. The foundation that hosts these technologies and supports these ever important changes – Power and Cooling – is changing before our eyes. What is changing and how can I leverage it?

These conversations take place daily. Utility costs will only continue to grow. Power grids have only been built in certain areas to support so much. Protecting your data center with smart decisions, to allow your employees to work at an off-site recovery center in case of a disaster (i.e. tornado, flood, earthquake, power outage, ice storm, security breach, etc.), is of paramount importance. Data centers are what will continue to allow us to grow and thrive. I say this because whether you are a company of one or a company of thousands, regardless of your market, you will not even be able to open your door without a functional data center.

This means that we need the most reliable, most efficient and most flexible Power and Cooling solutions. Just like any of the technologies mentioned above, flexibility is key. Not locking yourself into one solution is a very nice benefit. There are four key technologies in the area of Power and Cooling that allow us to build, support and grow with our ever changing data center demands.

First and foremost, why now? Well, I cannot tell you how many conversations I have on a daily basis concerning this very topic. These are several quotes that I have heard over just a few months.
"We just experienced a power failure and our facility is just so out of date."
"I'm experiencing a high number of power spikes and my equipment cannot afford the one time it is not protected."
"Our cooling is either working or not working and we cannot afford to overspend on large CRAC units when we aren't even sure what will be in our racks and what we need."
"After a study we did to lower our utility costs, we found that 80% of our costs were coming from our data center and 80% of that was from our CRAC units."
"We make so many changes in our data center that you wouldn't believe what we pay in whip changes."
"We do our best to charge back for power usage and we have no idea how close we are."
"We cannot grow because we are out of power."
"We are doing our best to figure out how to direct cooled air where we need it."

Understanding your environment and the direction of your department or company is a key exercise in optimization and flexibility. One of the most important things to understand is your kW and CFM. Having this information is just as important as understanding your demand and availability needs. I have not even brought up mission critical data center Tiers. This is a factor you will consider when remodeling or constructing from scratch. A couple of topics we will not tackle at this point are LEED Certifications and Tiers. Our clients and experiences come from all areas.

From ten square feet to 1,000 square feet to hundreds of thousands of square feet. From raised floors to concrete floors. From drop ceilings to none. We work with all of these clients to help tackle this challenge and need: Power and Cooling.

These next paragraphs will discuss some of the options that will significantly improve efficiencies in the data center. Let us start with where you are. What are your needs today, and what are your needs going to be in the future? Don't know?

Let us assume that you are expanding a row or doing a little remodeling. Begin with the UPS. What vendor is going to give you the most efficiency? There are options available here to use batteries that are small, batteries that are large, or even to use no batteries at all. Every environment is going to have different needs and will have the ability to take different risks, if you want to call it that. I have seen some companies use small batteries that will allow them to plug and
From failure risk to overspending: I am only as good as my batteries. The more small batteries you have, the more you will have to replace and the more risk you will carry when a failure does occur. These solutions can cost two and even three times more over the life of the UPS. The other option is the flywheel: get rid of the battery altogether. This works in some environments, depending on everything from budget to generator to load to foundation. One of the more important differences is efficiency. How is this done? Many UPS vendors are now, or soon will be, re-designing for this architecture. This ability is game changing and allows for real, often very significant, savings on the power bill. This decision is critical and should be weighed wisely. In the data center, we have all been shocked by service contract renewal costs. Make sure you have options. Can your UPS be supported by third-party support? If not, we all understand how this can affect our operational costs. Clearly, all of this is for a larger data center. What if your data center is made up of a closet or a remote location without a generator? All you need is a UPS that will support your load for the amount of time that you need it running, based on your outage experiences. These UPSs need their batteries changed more often. They are smaller. The entire UPS needs to be swapped out more often as well.
Have you considered an Electrical BUS system? There have been so many great improvements in this area. Do your homework here and buy products that were made for the data center, not for another market vertical. These Electrical BUS systems allow for ever-changing needs in the data center without having to run a whip every time a server requirement changes. Keep it simple and leverage these great new products. They are not for everyone or every environment; you know whether it is good for you. You pay a little more up front, with huge flexibility and change-cost savings on the back end. The overhead BUS more than pays for itself.
How about being able to monitor your power, temperature and humidity at the outlet? These tools are a great start to be deployed with your ever-changing environment. You can monitor the entire bus, racks, servers and non-rackables; it is up to you. Once you have the information from your environment, you have the ability to learn and grow on demand the way you need to grow. Cost is a huge factor in this decision. Power strip makers are offering these abilities, which are very powerful and bring a tremendous amount of information to your fingertips. Want to offer credits to departments for power savings? Want to be confident in what you are charging clients back for their power usage? As a client being charged, have you asked to see your usage? Wrap this into a broader capability and there are many new packages that allow for tying in security, HVAC and more. Now that these capabilities are here, we will all see data centers leverage power usage in more creative ways. From government credits to charge-back capabilities, making the decision on this framework is important to your ever-changing data center needs.
Now on to another HOT topic: Airflow and Cooling. The front of the rack, the servers' intake, must fill up with chilled air as quickly as the server consumes the air; otherwise hot exhaust air will circulate and cause overheating. Wouldn't it be neat to see what the air from your CRAC unit actually does when it comes out through the perforated raised floor? Well, I've actually seen these studies by some of the largest research institutes in North America. I am sure that many of you can imagine what I am about to say next. The overhead lights went out, the strobe lights came on and the smokeless spray was released. With the cold aisle having servers suck in air from the floor tiles, very little of the cold air was even inhaled into the rack. CRAC units of twenty years ago were fantastic. But in today's high growth and high density environments they are not for everyone. I am not saying they are not needed by any means; they just are not for every data center. I am saying that we can now optimize our power and cooling options to scale and grow on demand. We now have options that give you the flexibility to procure what is needed, where it is needed and how it is needed. With products available today you now have the flexibility to use a completely self-contained, completely sealed rack and cooling solution. Whether with closed loop containment or with perforated doors, you can now even scale a data center on concrete floors. You have the ability to architect these environments to leverage variable speed blowers and direct airflow on demand to the server and storage products in the racks. Think about the power and cooling savings that this brings into your environment. Contain the hot aisle or contain the cold aisle; with either as an option, you have the flexibility to optimize your data center for its ever dynamic and changing demands.
I realize that there are other things that need to be in place to leverage any and all of this. The point is that these technologies are only going to continue to grow. But they are here today. In data centers where CRAC units are the sole provider of cooled air to the racks, someone always seems to be doing some sort of airflow/cooling study. There are many tools, including CFD modeling capabilities, that will help you justify some of these decisions. Identify the companies that will talk to you about these solutions. Be prepared to modernize your racks for vendor diagnostics. This is just one more decision, I realize, albeit an important one. From size to working space inside the rack to accessories, making these decisions up front can be a deciding factor in the optimized strategy of your future.
Imagine when you have your facility architected for completely optimized UPSs with an energy-saving architecture. You have electrical overhead BUS systems where you can move, change and support any power outlet need on demand. You are monitoring all the power usage and cooling demands on the fly. You've saved your company $$$ and given them the flexibility to grow into areas not even imagined at this very moment. Your job hasn't gone away. You've become a hero and now more important…all because of Power and Cooling.
I would like to thank all of our IT partners, general contractors, electrical contractors, designers, suppliers and, most importantly, end users who have given us the experience to not
only help them but to witness these needs. I would like to thank
GE Power Quality, PDI BUS System and Rittal (Rack, Power
Monitoring and Cooling) for these products and solutions that
stand out above all others in providing flexibility and support to
solve customer challenges for today and future needs.
Michael Weldon
Critical Power Systems – Manufacturers Representative (GA/
AL/TN) GE Power Quality, PDI Bus System and Rittal Rack/
Cooling Environmentals [email protected]
our website: www.criticalpowersys.com
(404) 954-2553 cell \ (770) 956-0091 Office \ (770) 953-6449 Fax
follow us on twitter @criticalpwrsys
They went green before it was cool. Now they are helping companies
across the nation reduce their carbon footprint with used generators.
In an age where reducing our carbon footprint and conserving energy is not just trendy but also a necessity, Critical Power Exchange finds themselves in demand.
As the Pacific Northwest's largest used and surplus generator distributor, they've recently expanded, hired new employees, and launched a new website. They are about to unveil a new solar-paneled product – making them an exclusive distributor in the U.S. What's behind all this energy? (Yeah, pun intended.) Turns out, they're creating forward motion by providing eco-friendly, energy-efficient alternatives for their clients. And said clients like it.
"We re-sell or recycle 100% of the generators that run through our facility. With our in-house metal recycling capabilities, CPE has the ability to move any unusable parts or units to their base materials," said Steve Luedtke, Critical Power Director of Marketing. "Low-hour late-model as well as surplus generators are tested, serviced and put back into the market. Clients who purchase units from CPE can see cost savings as high as 40%, not to mention the savings of raw materials and production costs."
As companies review their project budgets, environmental initiatives, and time targets, used generators rise above the competition, as they satisfy all of the above.
Critical Power's time-tested refurbishment process enables them to provide used generators at slashed prices – while utilizing recycling best practices. For instance, purchasing a brand new 2007 Cummins 2250 kW would start at $500,000 and go up from there, whereas Critical Power offers a similar surplus model with 0 hours for $380,000. Going green is the cost-effective choice when it comes to generators. But what about repairs or maintenance down the road? Will used machines operate at the same optimal levels as new? Critical Power recommends offering a warranty to clients, even for used generators.
"One of the ways you can establish yourselves in the industry as a trusted distributor is to provide warranties or guarantees for your products," Steve of Critical Power recommends. "It sets you apart from the competition and allows customers the peace of mind they expect only from new product purchases."
Critical Power offers a standard 30-day warranty and works with each client if they want a longer warranty for their individual unit.
Recycled generators can support companies in their environmental initiatives and in maintaining green building codes. Buildings following LEED guidelines during a remodeling project can receive points for disposing of a unit.
Achieving design and construction sustainability, reaping huge cost savings, and recycling mass amounts of raw material are just a few of the benefits of utilizing used generators. Cost-effective and environmentally responsible, companies will benefit their bottom line, reputation, and environment for generations to come.
Another method by which Critical Power Exchange contributes to the preservation of resources is through the recovery, recycling, and resale of data center air conditioning equipment, uninterruptible power supplies and raised computer room flooring all across North America. These items become available for a variety of reasons, including changes in facility requirements, facility upgrades and facility closure, among others. Re-use of discarded equipment extends its useful life, thereby conserving raw materials and reducing energy use. Recovering additional asset value provides a compelling reason to recycle equipment in this manner rather than through traditional methods.
Companies with outdated, obsolete, but enormous equipment within their facilities can be overwhelmed, thinking that removal or recycling will be a costly and time-consuming project. Which is where companies like Critical Power Exchange come in, providing complete data center asset recovery and recycling services.
Critical Power Exchange provides turn-key solutions for the most complex environmental projects. Their specialized teams of de-installers and riggers conduct facility walk-throughs and work with data center clients to plan de-installation of the equipment. This "site survey" ensures that all precautions are taken into consideration to assure safe and secure conditions for the duration of the project. They've assisted hundreds of customers from start to finish, saving them thousands of dollars.
Whether large or small, Critical Power Exchange provides cost-saving solutions for businesses' environmental projects. CPE routinely performs removal and disposal of a wide range of equipment including UPS systems, power distribution units, raised computer room flooring, industrial air conditioning units, chillers and cooling towers, diesel generators and electric transformers, as well as fire suppression systems. Not only does CPE's power equipment disposal team handle the electrical disconnection, it also properly removes, transports, and disposes of the equipment.
Companies like Critical Power are changing the way we think about power – both as an electric reality and as an idea. Equipped with large equipment and resources, damage to the earth can easily be done as a by-product. Instead, Critical Power is flexing their muscles to help save the earth, not destroy it.
Critical Power has been buying and selling used and surplus generators for the past 16 years, providing equipment from leading vendors such as Caterpillar, Cummins, Kohler, Detroit Diesel, Mitsubishi, and many others. View their inventory at their generator webpage: Critical Generator Supply, http://www.criticalpower.com/index.html.
Social Networking
Security
Social networking has emerged as a web-based tool and is taking the world by storm. Media outlets such as news agencies use social networks to gather feedback from their viewers, and many businesses are exploring social networks as an alternative marketing tool. People of all ages use social networking to communicate with friends and family and to correspond with their colleagues as well.
Social networking has different interpretations. Shinder (2009, March 4) describes social networking as the building of on-line communities based on common interests and activities, while Riedl (2011, January) made the point that social networking always involves groups of people interacting in some way, whether working together, playing together, or simply enjoying each other's company.
The creation of the Internet can be seen as the earliest form of social networking. In the 1960s, during the Cold War when nuclear war was a paramount threat, the Internet was designed in part to provide a communications network that would work even if some of the sites were destroyed by nuclear attack (Howe, March 24). Academia, computer experts, engineers, scientists, librarians and the military used the Internet as an alternative communication system. The main purpose was to share information from research and development in the scientific and military fields (Howe, March 24).
In 1972, social networking manifested itself through the use of email. No longer was the Internet used only by the government; it was used in both the public and commercial sectors as well. E-mail became the dominant form of personal communication for keeping in touch with friends, family and colleagues, as well as for business communication. Six years later, Roy Trubshaw created a computer program that allowed people to join a fantasy-based game from their homes and discuss not only gaming topics but general topics as well (Harrison, n.d.). By 1988, Jarkko Oikarinen had created Internet Relay Chat, originally designed to support bulletin board functions that allowed people to discuss software, news and other issues online.
The next major chat room milestone was the development of Java. By 1995, Java chat rooms were created by companies like AOL and Yahoo, which became popular social networking providers. Harrison (n.d.) notes that today the new form of chat room is voice chat, which allows users of these programs to chat and view a common slide show or application.

Types of Social Networking
Harrison's website posted that social networking is now manifested in a web-based platform, using services such as Facebook, LinkedIn, Twitter, etc. These web-based networking sites can be categorized into the following five most common types; a site may combine elements of more than one of these types, and the focus of a social network may change over time:

• Personal networks;
• Status updates networks;
• Location networks;
• Content-sharing networks; and
• Shared-interest networks (Fact Sheet 35: Social Networking Privacy: How to be Safe, Secure and Social, 2010, December, par. Types of Social Networks).

Table 1. Some examples of common social networking sites and their types
• Personal networks: MySpace, Facebook, Friendster
• Status updates networks: Twitter, Google Buzz
• Location networks: Brightkite, Foursquare, Loopt
• Content-sharing networks: thesixtyone, YouTube, Flickr
• Shared-interest networks: deviantART, LinkedIn, Black Planet

Personal networks
Personal networks emphasize social relationships such as friendship and dating. They ask all users to create detailed on-line profiles and often involve users sharing information with other approved users, such as one's gender, age, interests, educational background and employment, as well as files and links to music, photos and videos (Fact Sheet 35: Social Networking Privacy: How to be Safe, Secure and Social, 2010, December, par. Types of Social Networks).

Status update networks
The main goal of these types of social networks is to allow users to post short status updates in order to communicate with other users quickly; they are designed to broadcast information quickly and publicly (Fact Sheet 35: Social Networking Privacy: How to be Safe, Secure and Social, 2010, December, par. Types of Social Networks).
Location networks
These networks are growing in popularity because of GPS-enabled cellular phones. They are designed to broadcast one's real-time location, either as public information or as an update viewable to authorized contacts, and many of these networks are built to interact with other social networks, so that an update made to a location network can (with proper authorization) post to one's other social networks (Fact Sheet 35: Social Networking Privacy: How to be Safe, Secure and Social, 2010, December, par. Types of Social Networks).

Content-sharing networks
When websites provide the capacity to create personal profiles, establish contacts and interact with other users through comments, they become both social networks and content hubs. The main objective of content-sharing networks is the sharing of content such as music, photographs and videos (Fact Sheet 35: Social Networking Privacy: How to be Safe, Secure and Social, 2010, December, par. Types of Social Networks).

Shared-interest networks
These social networks center on a common interest or are geared to a specific group of people. These common interests may include such things as similar hobbies, educational backgrounds, political affiliations, ethnic backgrounds, religious views, sexual orientations or other defining interests (Fact Sheet 35: Social Networking Privacy: How to be Safe, Secure and Social, 2010, December, par. Types of Social Networks).

networking as their primary marketing tool and for companies to encourage their employees to participate on social networking sites. However, this decision needs to be carefully assessed and the return on investment weighed against the threats and vulnerabilities that may arise from using social networks. The threats and vulnerabilities of social networking may be broken into the following four types:

1. Privacy threats;
2. Traditional network and information threats;
3. Identity threats; and
4. Social threats.

Table 2 below illustrates some of the most common social networking threats categorized by threat type. Both individuals and corporations need to assess the impact of these threats, most specifically the damage that these threats can create. Damage may affect reputation, goodwill or financial status, or cause technological havoc (e.g. a Trojan installed on the client device, or a Distributed Denial of Service attack due to botnets). The extent of the damage may range from the very mild, such as an embarrassing moment, to the severe, such as contributing to the death of an individual.

Table 2. Common Social Networking Threats (threat categories: Privacy Threats; Traditional Network and Information Threats; Identity Threats; Social Threats)
Drew was alleged to have committed, and so the prosecutors in Los Angeles sought to indict Drew and charged her with:

1. Unauthorized access to MySpace's computers, using a federal anti-hacking statute known as the Computer Fraud and Abuse Act; and
2. Violating MySpace's terms-of-service agreement when she and her co-conspirators allegedly provided false information to open the account and pose online as the 16-year-old boy (Zette, 2008, November 26).

The jurors found Drew guilty of only three counts of gaining unauthorized access to MySpace for the purpose of obtaining information on the 13-year-old girl, but the jury unanimously rejected the three felony computer hacking charges that alleged the unauthorized access was part of a scheme to intentionally inflict emotional distress on the girl (Zette, 2008, November 26). This case pointed out the vulnerability of not having the right law to fit the crime and, more importantly, a punishment to fit the crime. Since Drew's behavior did not violate any US state laws, the punishment amounted to mere misdemeanors that potentially carry up to a year in prison, but will most likely result in less time in custody. For the victim's family, the punishment received may be just a slap on the wrist. On the other hand, for numerous legal experts, the case set a "scary" precedent and potentially made a felon out of anyone who violates the terms of service of any website (Zette, 2008, November 26).

• March 5, 2011 – Birthday! Going to the Hockey Hall of Fame, then celebrate at the Motorcycle Club.
• Twitter
Friend A: Dude what time will u be at Motorcycle Club?
Friend B: I meet you at the Hockey Hall of Fame at 2.
Me: I will be at The Original Motorcycle Club at 8 pm.

Using a vertical attack, an attacker can use inference and aggregation to make the following deductions:

• I live in Toronto, because the Hockey Hall of Fame and the Motorcycle Club are in Toronto.
• I am 21, with a birthday on March 5, 1990.
• I am a big hockey fan.

The horizontal attack draws on Twitter for the times I will be at the Hockey Hall of Fame and the Motorcycle Club:

• My house will be empty from 2 pm onward.

Similar to hacking fingerprint techniques, an attacker can now fingerprint me by using a tool like the Canada Public Archive website. Figure 1 shows a snapshot of the home page of the Canada Public Archive.
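To make the inference-and-aggregation idea above concrete, here is a minimal Python sketch. The posts, the venue-to-city table and the deduction rules are hypothetical illustrations only, not data from any real service or from the case described:

from collections import Counter

# Toy illustration: harmless-looking public posts aggregated into a profile.
posts = [
    {"source": "status", "text": "Birthday! Going to Hockey Hall of Fame then Motorcycle Club", "date": "2011-03-05"},
    {"source": "twitter", "venue": "Hockey Hall of Fame", "time": "14:00"},
    {"source": "twitter", "venue": "The Original Motorcycle Club", "time": "20:00"},
]

# Vertical attack: combine small facts from different sources about one person.
venue_city = {"Hockey Hall of Fame": "Toronto", "The Original Motorcycle Club": "Toronto"}
cities = Counter(venue_city[p["venue"]] for p in posts if "venue" in p)
likely_home_city = cities.most_common(1)[0][0]
birthday = next(p["date"] for p in posts if "Birthday" in p.get("text", ""))

# Horizontal attack: line up time-stamped posts to see when the house is empty.
away_times = sorted(p["time"] for p in posts if "time" in p)

print("Likely home city:", likely_home_city)
print("Birthday posted on:", birthday)
print("House probably empty from", away_times[0], "onward")

Nothing in the sketch requires any hacking at all; every field comes from information the user chose to publish, which is exactly why these attacks are so hard to defend against.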
Could you introduce yourself to our readers?
I'm Lex Coors and I'm the VP of Interxion's Data Center Technology and Engineering Group. At Interxion, I've had the opportunity to help shape the data center and colocation industry – most notably, I developed several new approaches to data center design and management, including the metering, analyzing and improving of energy ratios between server loads and transformer loads (in 2003, before the PUE debate started), and the industry's first ever modular design and build programme for data centers as a standard across Europe. I've also supervised the team responsible for designing, building and upgrading more than 55,000 m² of data center space in Interxion's 28 locations in 11 countries.
I'm additionally a founding member of the EMEA Uptime Institute, and a stakeholder of the European Commission DG Joint Research Committee on Sustainability (Data Center Code of Conduct). Further, I am the 2010 and 2011 chair for the NY Uptime GEIT awards, and a fellow of the Uptime Institute. For the Green Grid, I have a position on the Technical Committee and advisory council. These posts enable Interxion to be at the forefront of developing and disseminating best practice around the data center power and cooling discussions in Europe and around the world.

Could you make a brief introduction of your company?
Interxion (NYSE: INXN) is a leading provider of carrier-neutral colocation data center services, serving over 1,100 customers through 28 data centers in 11 European countries. Interxion's uniformly designed, energy-efficient data centers offer customers extensive security and uptime for their mission-critical applications. With connectivity provided by 350 carriers and ISPs and 18 European Internet exchanges across its footprint, Interxion has created content and connectivity hubs that foster growing customer communities of interest and is an especially popular option for U.S. companies looking to expand their footprint abroad.

What is your company's current focus in the Data Center Critical Power & Cooling field?
Our first focus is to serve the business demands of our customers. This results in a focus on the availability of the data centers, in close combination with the PUE of our data centers' infrastructure.

What are Interxion's latest solutions in this area?
A combination of strong design based on lessons learned from our Monte Carlo simulations (statistical availability of our data center designs) and our 28 data centers, our involvement in the research and development of our vendor partners (Schneider, APC and Uniflair), and evaluating and improving the constant PUE (the PUE calculated is the PUE-energy measured at 90 percent occupancy of the installed infrastructure).

Could you briefly describe Interxion's main features? Who is this solution dedicated to?
With regards to power, Interxion supplies N+1 or 2N redundancy across critical components such as generators, CRAC units, PDUs, and battery backup, all of which enable the infrastructure to be highly available and redundant to ensure the reliability of mission critical applications. Each of these components has been strategically selected by Interxion with availability in mind as well as a focus on energy efficiency.
In terms of cooling, we offer a range of highly efficient cooling infrastructure. For instance, we offer free cooling as standard, where relevant; in addition, we provide advice on server installation and high power density deployment configurations to optimize heat dissipation and prevent hot spots from forming within the data center. Such cooling infrastructure encompasses a range of different technologies, including chilled water and air, for which we have hot/cold air containment options.
All customers within Interxion's data center facilities make use of its shared infrastructure approach, so the above-mentioned power and cooling features apply to everyone within the data center. However, Interxion has the ability to adjust power delivery and associated cooling based on customer requirements. For instance, certain customers want to have a high-density deployment in a virtualized environment.

Why should your target audience choose your services? What is the difference between Interxion and the other companies?
Enterprise-class data centers today are on a pretty level playing field, but Interxion is set apart based on three key areas: its connectivity, coverage and community advantages. With access to 350 carriers and ISPs, Interxion's connectivity options provide a range of connectivity choices and improve availability. Interxion has a wider European reach than any other data center vendor, providing low-latency access to large populations of businesses and consumers. Interxion also fosters a range of profitable customer communities of interest, creating revenue and partnership opportunities for participants.

What do you consider your greatest IT success?
Our greatest IT success has been the invention and implementation of the modular design. This approach allows us to build additional capacity within our infrastructure without affecting current customers. The modular concept and design allows us to utilize the latest technology, which invariably improves over time in terms of efficiency and effectiveness with regards to power and cooling. We can expand our data center footprint easily with this approach, using the latest technologies and thereby driving efficiencies even further.
Power Consumption
and Virtualized Environments
As virtualization rapidly achieves mainstream status in today's data centers, it is heightening demand for servers optimized to meet the unique challenges of hosting virtual machines. In particular, companies with virtualized infrastructures need manageable, cost-effective server hardware with heavy-duty processing and memory resources and plenty of network connections. Those are precisely the qualities that the next generation of servers delivers. With efficient server virtualization, data centers can save in terms of server count, footprint, energy consumption and expenditure on cooling. For all the palpable advantages that virtualization provides, it comes with some distinct challenges. Data center managers are being challenged to maintain or improve availability in increasingly dense computing environments while reducing costs and augmenting efficiency. Some companies are looking to cloud computing and virtualization for help. Both strategies present certain advantages and opportunities, but supporting them requires a dedication to power—and the rest of the infrastructure, for that matter—so as not to compromise availability.

Challenges through Virtualization
Virtualization is also valuable for the ability to run multiple virtual machines on a single physical piece of equipment, sharing the resources of that computer across multiple environments. Virtualization improves the efficiency and availability of resources and applications in your organization in a dramatic fashion. Virtualization also pushes up the utilization rate of the server, especially in blade server architectures. The impact on the power delivery systems is notable. You could easily move from a low-density power application, say a single-phase circuit at 15-20 amperes, to a higher-density application. Virtualizing increases your utilization rate as well as the potential need to deliver more power to that application. It might push you to a high-density power application.
Pre-virtualization, servers typically operate at a 10-20 percent utilization rate. Post-virtualization, they run at 60, 70 and even 80 percent. An interesting thing happens when you start pushing servers to those rates. They use all the compute power they're capable of using, which is a good thing because you pay for that compute power, but there are infrastructure impacts. For example, if you have been using only 200-300 watts of a 500-watt server and then ramp it up to capacity, along with the rest of the 500-watt servers in the rack, the rack gets hotter due to the increased power draw. It becomes necessary to review the cooling strategy as well. As IT pros assess cloud services and virtualization activities to push up their utilization rates and to garner more efficiency and availability, they routinely name availability as a major concern. They typically are at large companies and are used to having responsibility for all of the servers the business uses. The idea of putting parts of their business on rented, "black box"-style cloud services, or of virtualizing and reducing their number of assets, makes them uneasy. They understand the risks and how much an outage can cost them.
A large percentage of outages are triggered by electrical issues that can be minimized or eliminated with adequate power solutions. The challenge is balancing the efficiency gains available in power approaches with IT criticality and the need for availability.
So the developers in your IT department may love the fact that they can immediately dial up thousands of virtual servers from a cloud provider or code to an infinitely scalable platform like Google's App Engine (the platform-as-a-service tier), but it's up to IT management to strike the balance between rapidly developing applications at Internet scale and planning for the impact on the business that any cloud-related downtime will have. And the higher up in the cloud stack you go to rent your services, the more vulnerable you are to downtime, because you're more locked into that particular provider's solution.

Cloud and Virtual Environments Deal With Power Consumption
Ultimately, the cloud is here to stay, and virtualization is the first step many are taking toward better energy savings within a data center. By conducting due diligence prior to implementation, power issues can be avoided and the threat of outages lessened. Cloud computing has benefits that can't be ignored, such as delivering infrastructure as a service, support for massive sharing, flexibility and a pay-as-you-go model. One of the downsides of cloud computing is in the nearly weekly headlines of high-profile outages at data centers that host sites.
These incidents illustrate a general problem—power remains
an issue, but in the cloud, it’s an issue you can’t control. Cloud
users are completely tied to the provider’s level of infrastruc-
ture and availability. Virtualization can enable providers to un-
derstand power and data center performance. This also helps
control costs while enhancing the value of existing data center
facilities. The paper takes up issues one by one to provide
a relevant solution.
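To make the rack-level impact of those utilization numbers concrete, here is a rough back-of-the-envelope sketch in Python. Only the 500-watt nameplate and the 10-20 versus 60-80 percent utilization ranges come from the discussion above; the servers-per-rack count and the idle-draw fraction are illustrative assumptions:

# Rough sketch: how rising utilization changes the power a rack must be fed and cooled for.
SERVERS_PER_RACK = 20      # assumed rack population
PEAK_WATTS = 500           # nameplate per server, as in the example above
IDLE_FRACTION = 0.6        # assumed share of peak drawn even when nearly idle

def rack_draw_watts(utilization: float) -> float:
    """Estimate rack draw by interpolating each server between idle and peak draw."""
    per_server = PEAK_WATTS * (IDLE_FRACTION + (1 - IDLE_FRACTION) * utilization)
    return SERVERS_PER_RACK * per_server

before = rack_draw_watts(0.15)   # pre-virtualization, ~10-20% utilization
after = rack_draw_watts(0.70)    # post-virtualization, ~60-80% utilization

print(f"Rack draw before: {before / 1000:.1f} kW")
print(f"Rack draw after:  {after / 1000:.1f} kW")
print(f"Extra heat to remove: {(after - before) / 1000:.1f} kW")

Even this crude model shows a couple of kilowatts of additional heat per rack, which is exactly why the cooling strategy has to be revisited alongside any consolidation project.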
References
• A white paper introducing HP's platform for virtualization technique in Data Centers
• Professor B. Falsafi's talk on Towards Energy-Centric Computing in CADS'10
• Data Center Virtualization – Cisco Systems
Where do IT
“best practices” and
inefficient wasteful
procedure meet?
By: Peter Mead, Diskeeper Technical Correspondent
Drawing a clear line between the cost of doing business and wasteful
expenditures in the IT environment
It sounds simple. Using IT resources to get work done is good business. Using more than you need for the same result is waste. Today's business environment has mandated that IT managers draw a clear line between the cost of doing business and wasteful expenditures. But it's never as easy as it looks. IT budget lists are growing: legacy hardware upgrades, software updates, network reconfigurations, personnel issues, overtime man-hours, adequate data storage, system performance — where do "best practices" and inefficient wasteful procedure meet?
Companies have always known that cutting overhead leads to more profit. But today's post-recession companies have come to realize that achieving greater profitability and competitiveness depends heavily upon embracing efficiency as a way of understanding the processes, purposes and points of control in doing business.
This has never been more apparent than with the IT Manager, who must ensure the ever-expanding IT stack of applications is on stable footing and generating productivity, and that the systemic problems and hands-on management that absorb personnel are kept to a minimum.
Arguably the most basic area of inefficiency in the entire IT site is file fragmentation. A file is written to the hard drive in many pieces that are scattered randomly around the disk. Without defrag, there is a steadily increasing degeneration of the system's ability to write and read data quickly. I/O speed is a major control point for workstation and server performance, and its impact on an entire data centre must not be underestimated. If it takes, say, 1000 I/Os to read a file and 900 I/Os are due to having to read the file in pieces, that's 90% wasted resources right at the bedrock of the network. Besides losing productivity, excessive disk wear and cooling needs shorten disk life and increase energy costs. Slow workstations and servers bring about system hangs, time-outs and errors that absorb system admin hours. It's all waste.
Investigating the effectiveness of defrag software, a survey carried out on users of Diskeeper — the leading performance and reliability technology provider — found they got an average of three additional years of high productivity from their systems, with some reporting additional longevity of over five years. Reported I/O reduction due to defrag ranged from 50% to over 75% for more than half of the customers surveyed. But Diskeeper goes beyond simple defrag. It prevents up to 85% of fragmentation before it can happen, making effective I/Os faster while minimizing disk wear. Diskeeper is completely automatic and able to function in real time with zero resource conflicts. It requires no management.
This kind of efficiency, taken into the heart of a computer site, creates the kind of direct and reliable operation that cuts operating costs without sacrificing production. Clearly waste reduction begins at this basic level, and any attempt to begin farther up the technology stack negates what efficiency is all about.

Media Contact: Dorian Culmer
Email: [email protected]
Phone: +44 (0) 1293 763290
…we were able to reduce our backup time down to 9 hours – Lotus Notes Administrator, SandvikSIT
Backups can be hindered by a number of factors, but prime among them are those dealing with file read and write I/Os. In the backup process, an entire data set needs to be read, and then copied elsewhere. This data set could be spread across one volume or many. If a high number of additional I/Os are required to read files before they are transferred, backup speed is heavily impacted. At best, the result is a backup that is greatly slowed down, and at worst it is a failed backup.
The additional I/Os are needed when files are split into multiple pieces, called fragments. Fragmentation is the condition in which pieces of individual files and free space on a disk are not contiguous, but rather broken up and scattered around the disk. It is not at all uncommon to see a file fragmented into thousands or even tens of thousands of fragments. The impact on backups of files in such a state is considerable.
Due to the advanced complexity of today's backup solutions, simple defrag is no longer adequate. As traditional defrag has become outmoded, today a majority of enterprises address such issues with Diskeeper® performance software.
"I noticed on my backup server it took about 14 hours to back up the data for the week," stated Dusty Bailey, Network Engineer, Z Capital Partners. "After finding Diskeeper, which I now have used for over 3 years, on everything, I have saved so much time, and I don't have to worry about any slowdowns of the servers or workstations. It also freed up my weekends. The 12 terabyte backup server is totally defragmented; the 14 hours for backing up the nine servers went from the 14 hours to 10 hours. It has 15, 1 terabyte drives, so yeah, a noticeable difference in using Diskeeper."
Jimmy Beltran, Lotus Notes Administrator, SandvikSIT Americas, further stated, "We are running Diskeeper Enterprise Server on our Domino Lotus Notes application and also on the mail servers. We are running these across several RAID 5 arrays encompassing over 2.75 terabytes of user data. It used to take us 25 hours to perform our weekly backup of the Domino Servers. By using Diskeeper, we were able to reduce our backup time down to 9 hours."
Diskeeper performance software, going well beyond defrag, includes IntelliWrite® technology, which prevents a majority of fragmentation before it ever happens. To round out a full optimization solution, Diskeeper includes I-FAAST® technology, which accelerates file access times to up to 80 percent faster to meet the heavy workloads of file-intensive applications. Diskeeper also contains advanced network management capabilities and critical system file protection. Diskeeper's proprietary InvisiTasking® technology allows all Diskeeper tasks to be performed automatically, in the background.
An additional bonus is that, because of the mechanical movements eliminated with Diskeeper, disk drive life is also increased by 50 percent or more.
With Diskeeper employed, backups complete in a timely manner and cease to fail because of I/O bottleneck issues. It is part and parcel of any total enterprise backup solution.

www.Diskeeper.com
Contact: Dorian Culmer
Email: [email protected]
Phone: +44 (0) 1293 763290
I manage the IT operations group at Enterasys. It's a great team, made up of a unique combination of people that are both very enthusiastic and knowledgeable. We have moved offices more times than I care to remember and had to move our primary data center three times in two years. One time we had to complete the move from start to finish in less than 45 days. We did it in 35, but ended up putting in close to five 100-hour weeks to do it.
When I think of the lessons I've learned over the years, I like to group them into three categories: contracts/negotiation, building/migrating, and managing.

Contracts and Negotiations
The most valuable bit of advice I have, other than to get the best value for data center equipment, is to make sure that the exit clauses are good. We found ourselves in a position where we had to move our data center in less than 45 days because we were in a month-to-month scenario. Our 24-month contract was up, but we weren't quite ready to move into the space we had built in our corporate headquarters.
Unfortunately, the contract said that they needed to give us only a 45-day notice. To be honest, I never paid that much attention to that clause, but I do now. To really plan and move a data center takes 12-18 months. In a pinch you can do it in six months; anything less than that makes it very difficult to get the network connectivity in place in time. Make sure that you are given that much time to move out. We were lucky that we had space in our building, could rent portable air conditioners, and had a local building supply store that had a lot of extension cords. It was ugly, but we were able to avoid getting stuck signing a new two-year contract at almost double the rate.

Building and Migrating
Most people don't need to build data centers. More commonly, organizations either move into existing locations or move to a colocation center that provides everything. If you do find your organization needing to build your own, it's an experience. I recommend getting a consultant who can help with the process. There is a lot to know – from basic IT processes to power, cooling, structural engineering and building codes. Let the experts help with what you don't know, and manage the project.
If you don't bring in experts, keep a few tips in mind when building the data center. When you are building, make sure you include the inspectors early and often. It's much easier to make a change on paper than it is to wait until the final inspection to find out you need to make a major change.
If you find yourself building a data center, plan on equipment getting damaged. Either plan for it by having extra money, or make sure the subcontractors are required to fix the damage. You'd be surprised how hard it can be to get a 10' piece of pipe through a hallway without damaging something.
Furthermore, remember that just because you have 20 people helping with your move, all 20 can't work on the same cabinet. We plan no more than two people per cabinet, usually one in the front and one in the back, handing cables back and forth. If you can space it out even more, go for it. We actually have detailed plans on who will work on what and for how long, usually down to the minute. Admittedly it never goes quite according to plan, but as everyone has heard, "Plans are useless, planning is priceless".
For planning purposes a good rule of thumb is:

1. Sixty minutes to set up the rack and put it in place
2. Ten minutes to rack the server in the rack
3. Two minutes per power cable
4. Five minutes per Ethernet cable
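Turning that rule of thumb into a quick estimate is simple arithmetic; the Python sketch below does it. Only the per-step minutes come from the list above – the example cabinet contents are made up:

# Quick move-time estimate per cabinet, from the rule of thumb above.
RACK_SETUP_MIN = 60
PER_SERVER_MIN = 10
PER_POWER_CABLE_MIN = 2
PER_ETHERNET_CABLE_MIN = 5

def cabinet_minutes(servers: int, power_cables: int, ethernet_cables: int) -> int:
    return (RACK_SETUP_MIN
            + servers * PER_SERVER_MIN
            + power_cables * PER_POWER_CABLE_MIN
            + ethernet_cables * PER_ETHERNET_CABLE_MIN)

# Hypothetical cabinet: 10 servers, 20 power cords, 30 Ethernet runs.
minutes = cabinet_minutes(servers=10, power_cables=20, ethernet_cables=30)
print(f"Estimated time for this cabinet: {minutes} minutes (~{minutes / 60:.1f} hours)")
# Remember the two-people-per-cabinet limit: wall-clock time doesn't shrink just
# because more helpers are standing around.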
Always have a plan B. For example, if the elevator breaks, what do you do? How feasible is it to carry all of those servers up the stairs? Our elevator broke once and we were able to get it fixed before the move started. It would have taken us much longer to carry them up the stairs one at a time.
Make sure everyone has the right copy of the documentation. If it turns out everyone has a different copy, many things can and probably will go wrong. Also make sure the documentation isn't on one of the servers you are moving. Trust me, it happens.
Before you move, check to make sure you have all the tools and supplies you need. Finding a roll of packing tape on a Wednesday afternoon is easy. It's much harder at 3:00 a.m. on a Saturday. Two other things to do before the move happens: first, schedule a reboot of the servers, if possible. This will often identify problems that, if you had waited, you might think were caused by the move. If you think the move caused the problem, you may and likely will spend hours looking at something completely unrelated to the real problem. Also, before the move, make sure everyone walks through the process. This is also a good time to make sure everyone knows which Ethernet interface is NIC0 and which is NIC1, and that the port numbering on the switch goes top to bottom instead of left to right. If you get these wrong, you'll spend Sunday night fixing it when you should be celebrating a job well done.
We have found that an open conference bridge works out well when you are testing and troubleshooting. Just have everyone mute their phones unless they are actively working an issue. This keeps the managers in the loop but out of the way.
Make sure you have a startup list so you bring servers up in the right order. Often we leave all of them down until they are all cabled, then bring them up in order. Some applications can get "grumpy" if, for example, the SQL server isn't up when the application tries to start. SQL may not start without the domain controllers. Make sure if there is an order, you follow it, and don't get overzealous and just power things on as you get the power running.
Lastly, have a priority list. Things can go wrong, so knowing ahead of time that SAP is more important than "bug tracking" will help you make sure that the most important stuff is at least running while you get the rest working.

Managing a data center
This is actually the easiest part, but also the part that most people fail at. Not because it's tricky, but because it can be tedious. When you first move in, the cables are all labeled and neat, and the servers are all documented and up to date, but over time, without proper care, things will get missed.
We only allow four people to run cables in our data center. This makes sure that they stay neat, documented and organized. I am not one of the four, because frankly I'm not good at it. The four that do run cables make sure that we keep true to our procedure.
We recently started forcing servers to authenticate to the network in the data center. If a server, physical or virtual, tries to get on the network without all the information documented and the right people knowing about it, it can only get to the Web server that hosts the right form to fill out. Once the form is filled out and the workflow completed, the network is automatically provisioned. It saves us time and avoids any unpleasantness.
Through the years, I've had a couple of issues with "rogue servers" – or I guess now we call it virtual machine sprawl. The first started as a call from the helpdesk on Saturday morning at 2:15 because a critical server was down. Unfortunately, the documentation that would have told us who to call wasn't filled out. So I called everyone, starting at the top of the list and working my way down. I'm a pretty charming guy, but at 2:30 a.m. people weren't that excited to hear from me. It took me 14 calls before I found the admin responsible for it.
As painful as that was, the worst issue I've had was a restore request from our head of engineering. Unfortunately the server was brought online without backups being configured on it. As you can imagine, the conversation about that started bad and went downhill quickly. Luckily, since we don't allow servers on the network any more, these problems no longer happen.
Building, moving and running data centers isn't always the most glamorous job, or the most fun job, but hopefully this article at least helps make it less dramatic and more interesting.
Power Usage
Effectiveness (PUE)
Pros and Cons:
Why it does not tell the whole data center monitoring story
Over the past few years, average power consumption per server has
increased more than 20 percent. Consolidations and build-outs are
causing data centers and their racks to be more and more densely
packed with power-hungry IT equipment, such as blade servers. Over
the last decade, the typical power required at a rack has increased
from 2 kilowatts to 10 kilowatts and is going higher.
Electrical power is now 30 percent of data center operating costs and 20 percent of the overall total cost of ownership. To give a sense of the magnitude of data center power consumption, in the San Francisco Bay/Silicon Valley area today, data centers alone draw some 375 megawatts. That is enough power to supply 75,000 households.
One way to moderate the increased power consumption in data centers is to look at the purpose of a data center, i.e., to conduct useful computational work, and to minimize the amount of power used for support activities. For example, many data centers are kept colder than necessary. Turning up the thermostat reduces energy consumed by the support infrastructure without harming the IT equipment doing computational work.

PUE = Total Facility Power / IT Equipment Power

"Total Facility Power" in this equation is all the power required to operate the entire data center, including the IT equipment items – servers, storage, network equipment and other IT equipment – and the support infrastructure items: CRAC units, fans, condensers, UPS and lighting. "IT Equipment Power" represents the power required to operate the servers and IT equipment alone.
PUE can range from 1.0 (where all the power is consumed by IT equipment only) to infinity. Many data centers may have PUEs of 3.0 or greater. Studies conducted by EYP, Lawrence Berkeley National Laboratory and the United States EPA indicate that typical data centers have a PUE of about 2.0. In other words, half of all the data center power is consumed by the IT equipment, and an equal amount of power goes to the support infrastructure.
With proper design, a PUE value of 1.7 or better should be achievable. Some companies are claiming much better PUEs, such as Google's PUE of 1.21.
DCiE (Data Center infrastructure Efficiency) is simply the reciprocal of PUE. The power data requirements are identical.

Tools to Calculate PUE
You must measure power usage when calculating your PUE to establish a baseline and to track your improvements. It is best to gather data over time to ensure that peak periods have been captured.
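As a minimal sketch of the arithmetic, the following Python snippet computes PUE and DCiE from a pair of readings and then replays the "ghost server" paradox discussed later in this article. The kilowatt figures are invented for illustration; in practice they come from the branch-circuit and rack-PDU measurements described next:

# Minimal PUE / DCiE arithmetic with invented readings.
# total_facility_kw: everything the data center draws (IT plus CRAC, UPS losses, lighting, ...)
# it_equipment_kw:   what the rack PDUs report for servers, storage and network gear.

def pue(total_facility_kw: float, it_equipment_kw: float) -> float:
    return total_facility_kw / it_equipment_kw

def dcie(total_facility_kw: float, it_equipment_kw: float) -> float:
    return it_equipment_kw / total_facility_kw  # simply the reciprocal of PUE

# Baseline: roughly the "typical" PUE of 2.0 cited above.
print(round(pue(1000.0, 500.0), 2))   # 2.0
print(round(dcie(1000.0, 500.0), 2))  # 0.5

# The PUE paradox: unplug 50 kW of idle "ghost" servers. Assume total power drops by that
# 50 kW plus 40 kW of cooling/UPS overhead it no longer causes -- yet PUE gets slightly
# worse because the IT denominator shrank.
print(round(pue(1000.0 - 50.0 - 40.0, 500.0 - 50.0), 2))  # ~2.02

The point of the sketch is simply that the ratio only means something alongside the absolute consumption figures it is built from.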
You will need to deploy devices on branch circuits to measure the current draw and power consumption of the data center's non-IT power-consuming items, including CRAC units, lighting, UPS and other non-IT loads. For data centers where facilities are shared, such as in office buildings with common HVAC systems, it may be difficult to gather the necessary data accurately. But working with facility managers you should be able to get reasonably good estimates.
Deploy metered or intelligent rack PDUs in the IT equipment racks. The data from the rack PDUs will provide the denominator of the PUE calculation, i.e., the total IT Equipment Power. There are power management software tools which, when combined with intelligent PDUs, can capture power consumption every few seconds over whatever time period makes sense. Ideally, you'll be able to gather power data at each individual outlet so that you will know what each device is drawing. This detailed IT load data provides the granularity of information required to measure the PUE and make recommendations on how to improve it. The detailed data also help you document data center efficiency improvements which, in some cases, can actually hurt your PUE score.

What's wrong with PUE?
You can implement measures that reduce data center energy consumption yet actually make your PUE worse. For example, if you were to replace older, less efficient servers with more efficient ones, your PUE would go up even though you are consuming less power. This is because IT Equipment Power is the denominator in a PUE calculation. You could also engage in a project to remove unused or underused "ghost" servers, i.e., servers that are still consuming power but are not doing useful work. By removing ghost servers you will reduce the amount of power consumed by idle servers but, again, your PUE will get worse.
Alternatively, you could replace several "pizza box" 1U servers with a few new blade servers. But blade servers consume a lot of power. If you overprovision your data center and do not fully utilize the capabilities of the blade servers, it is possible that you will actually consume more power than you did with the 1U servers. However, your PUE will improve because the IT Equipment Power denominator is larger.

Summary
To properly manage power in your data center you need to reduce power consumption. This means gathering sufficiently detailed information so that you can take energy conservation actions. You should establish a baseline from which to make meaningful changes to your data center, not just play games with a PUE number. For example, individual outlet-level information will help you find ghost servers and determine which devices in your data center are efficient and which ones are not. Or you can test the efficiency claims of IT device manufacturers in your real-world data center.
To really improve efficiency in the data center you need to change people's behavior. With the right power management solution you can track power consumption and cost by department, location and type of equipment. You can prepare graphs and reports that show problem areas and the impact of actions taken.
No single number can hope to explain a subject as complex as power consumption in data centers. PUE does help data center IT and facilities managers think about where electricity is being consumed and what they can do to become more efficient. But don't let achieving a low PUE become your only energy efficiency goal.

Greg More
is a Senior Manager in Raritan's Power Management Business. Raritan (www.Raritan.com) is a provider of IT infrastructure management solutions to advance more efficient data centers. Raritan's portfolio of hardware and software solutions address today's data center challenges – including operations uptime, energy costs, capacity availability and asset management – so that IT resources can be used more efficiently and be managed effectively in order to deliver business value. Raritan is one of the largest providers of KVM (keyboard, video and mouse) switches, serial console servers and intelligent power management solutions.