How to Build a Company Data Center? Part One

An image of your goal – a company data center full of racks.

This article is the first in a series about building and managing a modern data center. Have you decided to build a brand new data center to meet your company’s needs? Then you have surely thought about what type of data center you’re going to build, how much technology will go inside it and at what temperature those machines will operate. Before all of that, however, the key factor is simply choosing the right place. The companies that chose to put all their data in Karlín – a district of Prague devastated by the 2002 floods – would surely agree.

A key question: Where?

There are two possibilities, and the choice hinges on your budget. The first is to build a data center from the ground up; the second is a so-called retrofit, meaning the data center is placed into an existing building. Both approaches have advantages and disadvantages. A completely new building can be designed to fit the data center’s needs precisely, which in turn means better energy parameters and a higher level of certification. The location should allow for future growth and should lie in a seismically stable region outside flood zones. The TIA-942 standard, which contains proposals and recommendations for data center design, even talks about the ability to withstand a 500-year flood.

It is generally best to choose a location with high-quality access to both electrical power and internet connectivity. Ideally, two independent optical fibre routes should already be laid. When only one fibre connection is available, laying down a second one will strain both your budget and your construction schedule immensely. Building a new fibre route usually entails a long process of acquiring the land, or the rights to use it, and so on.

Going the retrofit way usually brings the added bonus of lower costs. On the other hand, it brings limitations imposed by the existing building and its condition. One thing you should be sure to check is the structural strength of the floors and their load-bearing capacity, which in older buildings is usually only around 200 kg per square meter. A fully loaded server rack, however, can weigh three or four times as much! And you shouldn’t forget the load-bearing capacity of the roof either, as it will carry some of the air-conditioning equipment and other components.
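To see why this matters, here is a quick back-of-the-envelope check (the rack weight and footprint below are illustrative assumptions, not figures for any particular hardware):

```python
# Back-of-the-envelope floor-load check for a single server rack.
rack_weight_kg = 800            # assumed fully loaded 42U rack
footprint_m2 = 0.6 * 1.2        # assumed 600 x 1200 mm footprint

load_per_m2 = rack_weight_kg / footprint_m2
print(f"Floor load under the rack: {load_per_m2:.0f} kg/m^2")        # ~1111 kg/m^2

old_building_limit = 200        # kg/m^2, typical older building (see text)
print(f"Overload factor: {load_per_m2 / old_building_limit:.1f}x")   # ~5.6x
```

Even when the load is spread across an aisle, point loads like this are why a structural assessment should come before any retrofit.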

What is the Tier System?

The Tier System separates data centers into categories based on their availability. The higher the tier, the better the theoretical availability of the data center.

  • Tier I – basic. The data center has uninterruptible power supplies (UPS) and generators, but without redundant backups. This can mean outages whenever technicians perform maintenance. Guaranteed availability is 99.671% (i.e. outages can amount to up to 28 hours per year).
  • Tier II – redundant components. The data center has N+1 component redundancy (UPS, routers, air conditioning), but only one distribution path. This equals a guaranteed availability of 99.741% (i.e. up to 22 hours of outages per year).
  • Tier III – concurrently maintainable. Any planned maintenance can be done without an outage. Such a data center has sufficient backup capacity through independent distribution routes, though a fault in the infrastructure can still bring down part of the data center. Guaranteed availability of 99.982% means that total outage time is at most 95 minutes per year.
  • Tier IV – fault tolerant. The infrastructure can withstand at least one of its parts failing; usually all components are redundant (2× N+1). Guaranteed availability goes up to 99.995%, so total outages amount to roughly 26 minutes per year.
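These percentages translate directly into allowed downtime. A minimal sketch of the arithmetic, assuming an 8,760-hour year (the tier figures are the ones listed above):

```python
# Convert guaranteed availability (%) into maximum downtime per year.
HOURS_PER_YEAR = 365 * 24   # 8760

tiers = {
    "Tier I": 99.671,
    "Tier II": 99.741,
    "Tier III": 99.982,
    "Tier IV": 99.995,
}

for name, availability in tiers.items():
    downtime_h = HOURS_PER_YEAR * (1 - availability / 100)
    print(f"{name}: {availability}% -> {downtime_h:.1f} h/year "
          f"({downtime_h * 60:.0f} min)")
```

The results (roughly 28.8, 22.7, 1.6 and 0.4 hours respectively) line up with the figures quoted above.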

Power is a key factor

Uninterrupted access to power is the alpha and omega of any data center. That is why a data center should ideally be powered by two independent energy routes, each coming through its own transformer. The power flows from a substation through an automatic transfer switch (ATS) – with a diesel generator connected to it – and into UPS units with their own battery modules. Smaller companies can make do with a single substation, but every machine should still be dual-powered through two independent routes. Larger data centers that expect to draw more than 2 megawatts can meanwhile eschew battery-based UPS altogether and go for battery-less power solutions, the so-called dynamic or rotary UPS. These combine a transformer, a UPS and a motor-generator in one package. It works a bit like a flywheel in a vacuum enclosure: it spins continuously, and when needed its stored kinetic energy is converted back into electricity through a generator.
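To get a feel for how much energy such a flywheel holds, here is a minimal sketch using the standard kinetic-energy formula E = ½·I·ω² (the moment of inertia, speeds and load are illustrative assumptions, not specifications of any real product):

```python
import math

# Usable kinetic energy of a rotary-UPS flywheel: E = 1/2 * I * omega^2
inertia = 500.0                    # kg*m^2, assumed flywheel moment of inertia
rpm_full, rpm_min = 3000, 1500     # assumed full and minimum usable speed

def omega(rpm: float) -> float:
    """Convert revolutions per minute to angular velocity in rad/s."""
    return rpm * 2 * math.pi / 60

usable_j = 0.5 * inertia * (omega(rpm_full) ** 2 - omega(rpm_min) ** 2)
load_w = 1_000_000                 # assumed 1 MW IT load

print(f"Usable energy: {usable_j / 1e6:.1f} MJ")            # ~18.5 MJ
print(f"Ride-through at 1 MW: {usable_j / load_w:.1f} s")   # ~18.5 s
```

A ride-through of some tens of seconds is plenty for the diesel generator to start and take over the load, which is exactly the role the rotary UPS plays.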

UPS units should always be placed in a separate room, because they need a lower operating temperature. And you should not underestimate the importance of buying a high-quality UPS. Some of the cheaper ones have batteries with a limited life-span that have cost some companies a fortune in maintenance fees. You also won’t go wrong with a modular UPS, which lets you simply add more modules when you want to expand the data center.
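A minimal sketch of how such modular sizing works: to keep N+1 redundancy you install one module more than the load strictly requires (the load and the per-module rating below are assumed example values):

```python
import math

def ups_modules(load_kw: float, module_kw: float, spares: int = 1) -> int:
    """Number of UPS modules needed for a load with N+spares redundancy."""
    return math.ceil(load_kw / module_kw) + spares

# Example: a 160 kW IT load served by 50 kW modules with N+1 redundancy.
print(ups_modules(160, 50))   # 4 modules for the load + 1 spare = 5

# Expanding to 260 kW later just means slotting in more modules:
print(ups_modules(260, 50))   # 6 + 1 spare = 7
```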

Cooling is a matter of cold and hot aisles

A necessary part of every data center is its cooling. The de facto standard is a raised floor combined with a system of cold and hot aisles. The cold aisles should be roofed over to prevent the hot and cold air from mixing. The server racks need to be placed in a specific way: they should “inhale” cold air through their front sides and “exhale” it, hot, out of their backs (the so-called front-to-back system). The raised floor allows the cold air to be delivered to the right place and also hides the metallic cabling. Optical fibres, on the other hand, are best routed above the racks in special plastic troughs.

Today, the preferred method of cooling is so-called free cooling. It allows data centers to use cold outside air, which saves the electricity that would otherwise be spent on mechanical cooling. The technique works best when it is cold outside, but it saves money even when the difference between the inside and outside temperatures is only a few degrees. It can be supported by a glycol-based circuit with an outdoor radiator that carries the heat out of the data center.
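The control logic behind free cooling is conceptually simple: switch to outside air whenever it is colder than the return air by at least the losses of the heat-exchange circuit. A minimal sketch (the temperatures and the 2 °C approach value are illustrative assumptions):

```python
def use_free_cooling(outside_c: float, return_air_c: float,
                     approach_c: float = 2.0) -> bool:
    """Free cooling pays off once outside air is colder than the return
    air by at least the heat exchanger's approach temperature."""
    return outside_c < return_air_c - approach_c

# With return air from the hot aisle at 32 degrees C:
print(use_free_cooling(outside_c=18.0, return_air_c=32.0))  # True
print(use_free_cooling(outside_c=31.0, return_air_c=32.0))  # False, too warm
```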

A smaller company on a tight budget, however, will make do with ordinary direct-expansion (DX) air conditioning. DX units are cheaper to buy but more expensive to run in the long term, and they are best suited for smaller heat loads.

Most modern data centers today are built as so-called high-temperature data centers. The newest computer technology can run at 27 degrees Celsius, and in some extreme cases even at 35 degrees. Even a marginal increase in the operating temperature can save a lot of money on energy. For example, Google saves 4% of its energy expenditures by raising its data centers’ temperature by one degree Celsius, and Intel manages to save as much as 7% of its energy costs by doing the same. And if the conditions are right – as they are in the Czech Republic – it’s possible to use free cooling throughout the year and save even more.
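With those per-degree figures, the effect of a higher setpoint is easy to estimate. A rough sketch that linearly extrapolates the ~4% per degree number cited for Google (the annual energy bill is an assumed example value; real savings depend heavily on climate and equipment):

```python
# Rough estimate of savings from raising the operating temperature,
# linearly extrapolating the ~4% per degree Celsius figure cited above.
annual_energy_cost = 500_000     # assumed yearly energy bill
savings_per_degree = 0.04        # ~4% per degree C (the figure cited for Google)

for degrees in (1, 2, 3):
    saved = annual_energy_cost * savings_per_degree * degrees
    print(f"+{degrees} degree(s) C: ~{saved:,.0f} saved per year")
```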

Hints and tips for building your own data center

  • Pick a spot outside a flooding area
  • Look the building over and check the load-bearing capacities of both floors and roofs
  • Check the availability of high-voltage power
  • Keep the connectivity in mind, ideally there should already be 2 optical fibre routes ready
  • When roofing over the aisles, be meticulous and seal even the smallest “leaks”
  • Read through all the datasheets and focus on energy loss values and the energy conversion efficiency of all the machines
  • Don’t forget regular inspections of all equipment and safety training for employees
  • You should monitor and analyse all inputs and outputs, including the power circuits

The data needs to be safe from fire as well

Your servers need to be protected not only against power outages, but also against fire and unauthorized access. Even the smallest server rooms should have at least a fire alarm and extinguishers rated for electrical equipment. Medium-sized data centers usually employ automatic fire and heat detection systems, and fires are put out with gaseous agents, e.g. FM-200, which is safe for the servers and far friendlier to the environment than the older halons. There are, of course, cheaper alternatives using, for example, demineralized water or sodium.

The physical side of security is taken care of by surveillance cameras, coded door locks and security staff. And don’t forget to monitor all the components. You should measure the energy consumption of every single rack, so you can evaluate it and prevent unnecessary losses. Monitoring temperatures and cooling is just as important: when systems are forced to work outside their ideal operating range, their energy consumption usually grows rapidly. Since that is exactly what you want to avoid, it’s best to use tools that monitor both factors. Many are commercially available, or you can code your own solution.
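If you do roll your own, the core of such a tool is a simple loop that reads each rack’s sensors and raises an alert when a value leaves its allowed range. A minimal sketch (the thresholds are hypothetical, and the sensor-reading function only simulates data; a real deployment would poll metered PDUs or temperature probes, e.g. over SNMP):

```python
import random

# Hypothetical per-rack limits; tune them to your hardware and cooling design.
MAX_POWER_KW = 8.0
MAX_INLET_TEMP_C = 27.0

def read_rack_sensors(rack_id: str) -> dict:
    """Placeholder for real sensor polling (e.g. a metered PDU over SNMP).
    Simulated readings are returned so the sketch runs as-is."""
    return {
        "power_kw": random.uniform(4.0, 9.0),
        "inlet_temp_c": random.uniform(22.0, 29.0),
    }

def check_racks(rack_ids):
    for rack_id in rack_ids:
        data = read_rack_sensors(rack_id)
        if data["power_kw"] > MAX_POWER_KW:
            print(f"ALERT {rack_id}: power draw {data['power_kw']:.1f} kW")
        if data["inlet_temp_c"] > MAX_INLET_TEMP_C:
            print(f"ALERT {rack_id}: inlet temp {data['inlet_temp_c']:.1f} C")

if __name__ == "__main__":
    check_racks(["rack-01", "rack-02", "rack-03"])
    # In production, run this in a loop (e.g. once a minute) and store the
    # readings for long-term trend analysis.
```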

But that’s still not all there is

What should you do when your data center is finally built and equipped? It’s still not time to rest; there is a long road ahead. Stay tuned for the next part of this series, which will tell you all about hardware virtualization and building a cloud.

The article was previously published in the professional journal IT Systems, issue 10/2014
