Are SSDs suitable for servers? A Comparison of the Pros and Cons of SSDs and HDDs, Part One

Disks with magnetic platters are done for. They are slow, they don’t take abuse well, and their repair is half witchcraft, half alchemy – what other computer component can be fixed by baking it in the oven just as well as by stashing it in the freezer? We should really get rid of them.


  • 02. 12. 2015
  • 9 min read

In all possible use scenarios, they are to be superseded by a new technology called the solid state drive, or SSD, which is much faster, newer, better.

But isn’t that just a marketing ploy to sell a new tech?

Manufacturers of these new disks have been trying to sell SSDs as the storage tech of the future for a while now.

Some of them go a step further and claim that classic hard drives are dead. Violin Memory, for instance, even launched a campaign called ‘Disk is Dead’, and their marketing manager Amy Love stated on the company blog that ‘the all-flash data center is already a reality for many businesses in technology’ and that ‘flash storage platforms will be the heart and soul of tomorrow’s datacenter’.

Many people are justly thrilled about SSDs and think they actually are the tech of the future. However, marketing campaigns are to be taken with a grain of salt. So where do SSDs stand now? Is it time to dump all the hard drives? Should SSDs supersede them in every kind of storage? And will the new technology conquer data centers and servers, as its supporters claim?

Read on and find out.

SSD: No spinning platters, just solid-state transistors

First a short review. What exactly are these solid state drives, and how do they work? To understand this, it’s necessary to have an idea about ordinary hard drives first.

The hard disk drive, or HDD, is an engineering marvel. It consists of many very precisely moving parts. The most important are the platters, which rotate up to 7,200 times per minute – and the fastest drives can do up to fifteen thousand revolutions per minute. The platters hold data written in a thin magnetic layer. The data is accessed by a moving arm holding several heads that read and write bits. The arm glides extremely quickly mere nanometres above the platters, and its heads read or write through electromagnetic impulses where necessary. This contraption can very precisely single out a specific spot in the magnetic layer of the disk. It works like an extremely fast record player that doesn’t touch – or indeed scratch – the record at all.

The inner workings of an HDD are strikingly similar to those of a record player – except there are no vinyl discs, and the read head doesn’t touch the platters at all. And you usually don’t want to hear any sounds coming from your HDD.

It’s a complex system and as such needs to be treated with care. It can be disrupted by even a small shock – like when a laptop falls from a desk to the floor. In the worst case, the read/write heads hit the platters and damage the disk irreversibly. Hard drives are also sensitive to heat and other factors, which is an important fact for data centers and servers, as both produce heat in spades.

On the other hand, SSDs are resistant to impacts, shocks, vibrations and other factors. They also tolerate heat better than hard drives and produce less of it themselves. It’s all thanks to the fact that SSDs have no moving parts; instead of rotating platters, they use flash memory based on transistors. The development of SSDs has been heavily influenced by the findings of CPU research, as the two are somewhat similar in nature.

The transistors inside the chips of an SSD are assembled into grids made of a particular number of rows and columns. At every intersection, two transistors are combined into a cell, where data is stored as an electrical charge. The input and output side of every cell can decide where current should flow and, consequently, what charge sits where. Thanks to this, it’s possible to store a value of one or zero in every cell – in essence, to store information.

The technological processes behind an SSD are ingenious, yet somewhat complicated. When choosing the right disk, for instance, you need to be aware of several terms, concepts and acronyms – like SLC, MLC and TLC. These indicate how many levels one cell of such a disk has and how many bits of information it can store. SLC – single-level cell – drives can store just one bit per cell, whereas MLC and TLC – multi-level cell and triple-level cell – can store two or three bits of information in each cell. Some companies are now working on a new multi-level technology that would allow them to store four bits of information in each cell.

Disks based on SLC can store less data than MLC or TLC disks of the same dimensions. However, single-level cells and their data can be accessed much faster and will also endure a much higher number of rewrites than the other technologies. Those, on the other hand, offer much higher capacity and, as a consequence, a lower price per gigabyte.
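As a back-of-the-envelope illustration, the trade-off above can be sketched in a few lines of code. Note that the cell count below is a made-up example, not a figure from any real drive: an n-bit cell must distinguish 2^n charge levels, and the more bits per cell, the more data fits on the same silicon.

```python
# Sketch: how bits per cell translate into charge levels and raw capacity.
# The cell count is a hypothetical illustration, not a vendor figure.

CELL_TYPES = {"SLC": 1, "MLC": 2, "TLC": 3}

def charge_levels(bits_per_cell: int) -> int:
    """An n-bit cell must reliably distinguish 2**n charge levels."""
    return 2 ** bits_per_cell

def raw_capacity_bits(cells: int, bits_per_cell: int) -> int:
    """Raw capacity of a chip with the given number of cells."""
    return cells * bits_per_cell

cells = 8_000_000_000  # hypothetical: 8 billion cells on one chip

for name, bits in CELL_TYPES.items():
    gib = raw_capacity_bits(cells, bits) / 8 / 2**30
    print(f"{name}: {charge_levels(bits)} charge levels per cell, "
          f"~{gib:.2f} GiB raw per chip")
```

The finer charge levels of MLC and TLC are exactly why they are slower to read and wear out sooner: the controller has to sense and rewrite smaller voltage differences per cell.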

Do you want a VPS running on SSDs?

At Master Internet, you can choose a virtual server that runs completely on modern SSDs – just pick the desired option in the configurator. We use Proxmox (LXC, KVM / Docker) and Hyper-V technologies for virtualization.

VPS Europe

We want to get the data faster and faster

When comparing SSDs and HDDs, it’s appropriate to start with the largest difference: speed. Flash drives beat the platter disks by a mile. Whereas classic hard drives attain read speeds of about 230 megabytes per second, SSDs can reach around 700 megabytes per second. The difference between write speeds is just as striking – SSDs gallop along at 500 megabytes per second, while HDDs run at 190 megabytes per second at best.
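To make that gap concrete, a quick bit of arithmetic using the ballpark speeds above (the 10 GB file size is an arbitrary example, and real-world throughput varies by drive and workload):

```python
# Rough transfer-time arithmetic using the ballpark speeds quoted above.
# Actual throughput depends on the drive, interface, and access pattern.

def transfer_seconds(size_mb: float, speed_mb_s: float) -> float:
    """Seconds needed to move size_mb megabytes at speed_mb_s MB/s."""
    return size_mb / speed_mb_s

file_mb = 10_000  # a hypothetical 10 GB file

hdd_read = transfer_seconds(file_mb, 230)  # HDD sequential read, ~230 MB/s
ssd_read = transfer_seconds(file_mb, 700)  # SSD sequential read, ~700 MB/s

print(f"HDD: {hdd_read:.0f} s, SSD: {ssd_read:.0f} s, "
      f"speedup: {hdd_read / ssd_read:.1f}x")
```

Sequential reads are the friendliest case for the HDD; as discussed below, random access widens the gap much further.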

Downside of slow protocols: throughput

And flash drives could be even faster. They are bogged down by their interfaces and data-transmission protocols. The third revision of the SATA interface has a maximum throughput of 600 megabytes per second; the third revision of SAS can theoretically attain double that speed. Experts have known about these limitations for quite some time, which is why many new SSDs use the NVMe (non-volatile memory express) interface, which allows them to reach their true potential. It theoretically allows up to 4 gigabytes per second of bandwidth, which is about twenty times that of a classic hard drive.

It should be pointed out that the speed of flash drives can drop over time due to the so-called write cliff phenomenon. If it’s constant linear speed you are after, the better (albeit slower) choice might just be mechanical disks. But SSD manufacturers know about this and are already building drives in a way that should prevent the write cliff from happening at all.

These numbers are to be taken as rough estimates, though, as the performance of disks made by different manufacturers can vary a lot. Still, the speed difference between HDDs and SSDs is striking, and it is just as visible when the drives are used in a server array.




               HDD              SSD
Read speeds    up to 230 MB/s   up to 700 MB/s
Write speeds   up to 190 MB/s   up to 500 MB/s

The highest speeds attained by regular users in normal circumstances, as recorded by and 

The speed difference is most visible when a server’s disk has to handle heavy loads of input and output (IO) operations, and when the data is distributed on the drive not sequentially but randomly. Since there are no platters to rotate and no read heads to move to find a specific bit, the access speed of SSDs is not mechanically limited. Hence they are much better at random IO, or non-sequential read and write operations.
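A minimal sketch of how such a sequential-vs-random comparison can be measured – with caveats: the file here is tiny and will likely sit in the operating system’s page cache, so both patterns finish fast; a meaningful benchmark needs a file larger than RAM and direct IO. The file and block sizes are arbitrary choices for illustration.

```python
# Minimal sequential vs. random read micro-benchmark sketch.
# Caveat: a small freshly written file is served from the page cache,
# so this illustrates the access patterns, not real disk latency.
import os
import random
import tempfile
import time

BLOCK = 4096
BLOCKS = 2048  # 8 MiB test file

with tempfile.NamedTemporaryFile(delete=False) as f:
    f.write(os.urandom(BLOCK * BLOCKS))
    path = f.name

def read_pattern(offsets):
    """Read one block at each offset and return elapsed seconds."""
    start = time.perf_counter()
    with open(path, "rb") as fh:
        for off in offsets:
            fh.seek(off)
            fh.read(BLOCK)
    return time.perf_counter() - start

sequential = [i * BLOCK for i in range(BLOCKS)]  # ascending offsets
shuffled = sequential.copy()
random.shuffle(shuffled)  # same blocks, random order

print(f"sequential: {read_pattern(sequential):.4f} s")
print(f"random:     {read_pattern(shuffled):.4f} s")
os.remove(path)
```

On an HDD run against uncached data, the random pattern pays a seek and rotational delay for nearly every block, while an SSD serves both orders at close to the same speed.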

If a server needs high speed for such tasks, SSDs are a great choice.

There’s no beating platters at capacity

Good old hard drives still have their place in many servers. And rightfully so, as they have several advantages.

The most distinct one, at least until recently, has been capacity. Even now, the differences between SSD and HDD ‘sizes’ are significant.

Whereas a classic enterprise-grade drive can be had with 2 to 6 terabytes of capacity, SSDs tend to be smaller, reaching 1 or 2 terabytes at the very best. In May, Fixstars announced a new SSD, the first in the world that can store 6 terabytes of data. Such high capacity is still the exception rather than the rule, though, as most SSDs come in sizes several terabytes smaller.

As SSD capacities grow, the drives use different ways to store their bits (remember the SLC, MLC and TLC acronyms?). These affect read and write speeds as well as the drive’s life expectancy. The differences are quite small, but they can still be noticeable, especially in heavily stressed servers.

Where capacity is concerned, regular hard drives have the edge – HDDs with huge capacities are a normal occurrence. On server tiers where a lot of space is needed, platter-based drives rule.

The fight of two technologies: tied at half-time

An old mammoth and a young tiger – that’s the best way to describe the two contending technologies. One of them is slower but has a huge capacity; the other runs at a frantic pace but won’t store as much information. Both technologies have their advantages and disadvantages.

After this first part of the article series, it is still up for debate whether to choose the newer SSD or the tried-and-true HDD for your server. The next installment of this comparison will shine some more light on the situation: we’ll look at prices, energy requirements and the future outlook of both technologies.

SSD drives in Master Internet

As SSD technology evolves, so does their use. At Master DC, we also offer modern SSDs in a variety of server configurations – for example, you can try them in our virtual servers (VPS).
