How to Build a Company Data Center? Part One

This article is the first in a series about building and managing a modern data center. Have you decided to build a brand new data center to meet your company's needs? Then you have surely thought about what type of data center you're going to build, how much technology will be inside it and at what temperature the machines will operate. Before all of that, however, one absolutely key factor is choosing the right place. The companies that chose to keep all their data in Karlín, a district of Prague devastated by the 2002 floods, would surely agree.

JIŘÍ HANÁK

  • 04. 03. 2016
  • 9 min read

A key question: Where?

There are two possibilities, and choosing between them hinges on your budget. The first is to build a data center from the ground up; the second is a so-called retrofit, which means placing the data center into an already existing building. Both approaches have advantages and disadvantages. A completely new building can be designed to fit the data center's needs precisely, which in turn means you will be able to reach better energy parameters and higher levels of certification. The location should allow for future growth and should be in a seismically stable region outside flood zones. The TIA-942 standard, which contains proposals and recommendations for data center design, talks about the ability to withstand even a 500-year flood.

It is generally best to choose a location with high-quality access to both electrical energy and internet connectivity. Ideally, two independent optical fibre routes should already be laid down. When only one fibre connection is available, laying down a second one will strain your budget and construction schedule immensely: building a fibre route usually entails a long process of acquiring the land, or the rights to use it, and so on.

Going the retrofit way usually brings the added bonus of lower costs. On the other hand, it brings limitations imposed by the existing building and its condition. One thing you should be sure to check is the structural strength of the floors and their load-bearing capacity, which in older buildings usually amounts to around 200 kg per square meter. A fully loaded server rack, however, can weigh three or four times as much! And you shouldn't forget about the load-bearing capacity of the roof, as it will carry some of the air conditioning equipment and other components.
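A back-of-the-envelope check of floor loading can be sketched as follows. The 200 kg/m² capacity comes from the text; the rack footprint is an illustrative assumption (a common 600 × 1200 mm cabinet), and the check naively assumes the rack's weight rests only on its own footprint:

```python
# Quick floor-load sanity check for a retrofit site.
# Assumption (illustrative): a common 600 x 1200 mm rack footprint.
FLOOR_CAPACITY_KG_M2 = 200.0    # typical older building (figure from the article)
RACK_FOOTPRINT_M2 = 0.6 * 1.2   # 0.72 m2

def floor_ok(rack_weight_kg: float,
             capacity_kg_m2: float = FLOOR_CAPACITY_KG_M2) -> bool:
    """Return True if the floor can carry the rack on its own footprint."""
    load_kg_m2 = rack_weight_kg / RACK_FOOTPRINT_M2
    return load_kg_m2 <= capacity_kg_m2

print(floor_ok(140.0))   # a lightly loaded rack
print(floor_ok(700.0))   # a fully loaded rack far exceeds 200 kg/m2
```

In practice a structural engineer would also account for load spreading and point loads from the rack's feet, but even this crude check shows why the floor survey matters.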


What is the Tier System?

It classifies data centers by availability: the higher the tier, the higher the data center's theoretical availability.

  • Tier I (basic): UPS units and generators are installed, but without redundancy, so even regular maintenance can cause an outage; guaranteed availability of 99.671% (i.e. up to 28 hours of downtime per year)
  • Tier II (redundant components): N+1 redundancy of components (air conditioning units, routers, UPS), but only one distribution path; guaranteed availability of 99.741% (i.e. up to 22 hours of downtime per year)
  • Tier III (concurrently maintainable): any planned maintenance takes place without an outage and there is sufficient capacity of backup lines routed along different paths, but an infrastructure failure can still take down part of the data center; guaranteed availability of 99.982% (i.e. up to 95 minutes of downtime per year)
  • Tier IV (fault tolerant): the infrastructure withstands at least one failure of any component, and typically all infrastructure elements are redundant (2× (N+1)); guaranteed availability of 99.995% (i.e. up to 26 minutes of downtime per year)
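The downtime figures above follow directly from the availability percentages; a minimal sketch of the conversion:

```python
# Convert a guaranteed availability percentage into maximum yearly downtime.
HOURS_PER_YEAR = 365 * 24  # 8760

def max_downtime_hours(availability_pct: float) -> float:
    """Maximum downtime per year implied by an availability guarantee."""
    return HOURS_PER_YEAR * (1 - availability_pct / 100)

for tier, pct in [("Tier I", 99.671), ("Tier II", 99.741),
                  ("Tier III", 99.982), ("Tier IV", 99.995)]:
    print(f"{tier}: {max_downtime_hours(pct):.1f} h/year")
```

For Tier III and IV the result comes out in fractions of an hour (about 95 and 26 minutes respectively), matching the list above.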

Power is a key factor

Uninterruptible access to energy is the alpha and omega of any data center; it is the reason data centers are built in the first place. Ideally, a data center should be powered by two independent energy routes coming through its own transformers. The energy passes from a substation through an automatic transfer switch (ATS), with a diesel generator connected, and into the UPS units with their own battery modules. Smaller companies can make do with only one substation, but every machine should still be dual-powered through two independent routes. Larger data centers that expect to draw more than 2 megawatts can meanwhile eschew battery-backed UPS units and opt for battery-less power solutions, the so-called dynamic or rotary UPS. These combine a transformer, a UPS and a motor-generator in one package: roughly speaking, a flywheel spinning continuously in a vacuum casing stores kinetic energy that a generator can convert back into electricity when needed.
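The failover order described above (mains feeds first, then the generator, with the UPS batteries bridging the seconds it takes the generator to start) can be sketched as a simple priority selection. The function and its names are illustrative, not a real control system:

```python
# Illustrative sketch of ATS (automatic transfer switch) source selection.
# Priority: primary mains feed -> secondary mains feed -> diesel generator,
# with the UPS batteries carrying the load during any transfer.

def select_source(primary_ok: bool, secondary_ok: bool, generator_ok: bool) -> str:
    if primary_ok:
        return "primary mains"
    if secondary_ok:
        return "secondary mains"
    if generator_ok:
        return "diesel generator"
    return "UPS batteries only"   # last resort: runtime limited to minutes

print(select_source(True, True, True))    # normal operation
print(select_source(False, False, True))  # both feeds down, generator carries the load
```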

UPS units should always be placed in a separate room, because they need a lower operating temperature. And do not underestimate the importance of buying a high-quality UPS unit: some of the cheaper ones have batteries with a limited life-span that have cost companies a fortune in maintenance fees. You also won't go wrong with a modular UPS that lets you simply add more modules when you want to expand the data center.

Did you know that...?

  • Choose a suitable site outside flood zones
  • Inspect the condition of the building and check the load-bearing capacity of both floors and roof
  • Verify the availability of high-voltage power feeds
  • Think about connectivity; ideally, two optical fibre routes should be available
  • When enclosing the aisles, seal even the smallest openings
  • Read all datasheets carefully and focus on the losses and efficiency of individual devices
  • Don't forget regular inspections and safety training
  • Monitor and analyze all inputs and outputs, including power circuits

Cooling is a matter of cold and hot aisles

A necessary part of every data center is its cooling. The de facto standard is a raised floor and a system of cold and hot aisles. The cold aisles should be enclosed from above to prevent hot and cold air from mixing. The server racks need to be placed in a specific way: they should "inhale" cold air through their front sides and "exhale" it hot out of their backs (the so-called front-to-back system). The raised floor lets the cold air be guided to the right place and also conceals metallic cabling; optical fibres, on the other hand, are usually routed above the racks in special plastic troughs.

Today, the preferred method of cooling is so-called free cooling. It allows data centers to use cold outside air, which saves the electric energy that would otherwise be spent chilling it. The technique works best when it is cold outside, but it saves money on cooling even when the difference between inside and outside temperatures is only a few degrees. It can be supported by a glycol-based circuit with an outside radiator that carries the heat out of the data center.

A smaller company on a tight budget, however, can make do with ordinary direct expansion (DX) air conditioning. DX units are cheaper to buy but more expensive in the long term, and they are best suited to removing smaller amounts of heat.
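A hypothetical controller choosing between the two cooling modes might look like this. The supply-air target and the minimum useful temperature difference are illustrative assumptions, not figures from the article:

```python
# Illustrative free-cooling decision: use outside air when it is colder
# than the supply-air target by a useful margin, else fall back to DX cooling.
SUPPLY_TARGET_C = 24.0   # assumed cold-aisle supply temperature
MIN_DELTA_C = 2.0        # free cooling pays off even at a few degrees' difference

def cooling_mode(outside_temp_c: float) -> str:
    if outside_temp_c <= SUPPLY_TARGET_C - MIN_DELTA_C:
        return "free cooling"
    return "DX cooling"

print(cooling_mode(10.0))   # cold day: outside air does the work
print(cooling_mode(26.0))   # hot day: compressors take over
```

Real economizer controls also weigh humidity and the power drawn by fans and pumps, but the basic trade-off is this comparison of temperatures.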

Most modern data centers today are built as so-called high-temperature data centers. The newest computer technology can withstand temperatures of 27 degrees Celsius, or in some extreme cases even 35. Even a marginal increase in the operating temperature can save a lot of money spent on energy: Google saves 4% of its energy expenditure by raising its data centers' temperature by one degree Celsius, and Intel manages to save as much as 7% of its energy costs by doing the same. And if the conditions are right, as they are in the Czech Republic, it's possible to use free cooling throughout the year and save even more.
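The per-degree savings quoted above compound when the temperature is raised by several degrees; a quick calculation, treating the 4% figure as a flat saving per degree (an assumption for illustration):

```python
# Compound cooling-energy savings from raising the operating temperature,
# assuming a flat fractional saving per degree (4% in Google's case above).
def remaining_cost_fraction(savings_per_degree: float, degrees_raised: int) -> float:
    """Fraction of the original energy cost left after raising the temperature."""
    return (1 - savings_per_degree) ** degrees_raised

# Raising by 3 degrees at 4% savings per degree:
print(f"{(1 - remaining_cost_fraction(0.04, 3)) * 100:.1f}% saved")  # about 11.5%
```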


The data needs to be safe from fire as well

Your servers need to be protected not only against power outages, but also against fire and unauthorized access. Even the smallest server rooms should have at least a fire alarm and extinguishers rated for electrical equipment. Medium-sized data centers usually employ automatic fire and heat detection systems, and fires are suppressed with gaseous agents such as FM-200, which is safe for both the servers and the environment. There are, of course, cheaper alternatives using, for example, demineralized water or sodium.

The physical side of security needs to be handled by surveillance cameras, coded door locks and security staff. Don't forget to monitor all the components: you should measure the energy consumption of every single rack so you can evaluate it and prevent unnecessary losses. Monitoring temperatures and cooling is just as important, because when systems are forced to work outside their ideal operating range, their energy consumption usually grows rapidly. Since this is exactly what you want to avoid, it's best to use tools that monitor both factors; many are commercially available, or you can code your own solution.
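Per-rack power monitoring can start as something very simple. A hypothetical sketch that flags racks drawing more than their provisioned budget; the rack names, readings and threshold are invented for illustration:

```python
# Flag racks whose measured power draw exceeds the per-rack budget.
def over_budget(readings_w: dict, budget_w: float) -> list:
    """Return the names of racks drawing more watts than the budget allows."""
    return [rack for rack, watts in readings_w.items() if watts > budget_w]

# Example readings from hypothetical metered power strips (watts):
readings = {"rack-01": 3200.0, "rack-02": 5100.0, "rack-03": 4950.0}
print(over_budget(readings, budget_w=5000.0))   # only rack-02 is over
```

A real deployment would poll metered PDUs on a schedule and feed the readings into a time-series dashboard, but the evaluation step is essentially this comparison.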

But that’s still not all there is

What should you do when your data center is finally built and equipped? It's still not time to rest; there is a long road ahead. Stay tuned for the next part of this series, which will cover hardware virtualization and building a cloud.

This article was previously published in the professional journal IT Systems, issue 10/2014.
