
Cool and collected

Ben Sampson

They’re all over the place and are becoming a critical part of our way of life, but very few people can tell you where their nearest data centre is or what it takes to run one

Chilled cabinets: The heat generated by racks of servers is challenging to deal with 

Regardless of your Facebook status or whether you do your grocery shopping online, a growing majority of people see access to the internet as essential. According to IT networking company Cisco, an estimated 2.5 billion people are connected to the internet. As the planet’s population increases and ICT infrastructure expands, the number of people online is expected to rise. Of the world’s projected 7.6 billion population in 2017, almost half, 3.6 billion, are expected to be online.

The foundations of our expanding reliance on data and internet services are data centres, the places that house the computer servers that store and process data. Alex Rabbetts, chief executive of consultancy Migration Solutions, says that many data centres should now be viewed as critical infrastructure. “Almost everything you do in your daily life requires a data centre. If you switched off all the data centres, the world as we know it would stop. Everything from transport control to the pesticides and herbicides produced for crops would stop,” he says. “But it’s a fact that data centres use a huge amount of power and produce a huge amount of heat.”

Most people are unaware that these centres even exist. They are commonly seen as large anonymous buildings that house row upon row of dark monolithic machines. And fewer still appreciate the engineering challenge they represent. Computer servers produce heat as they operate and, as data centres grow in size and number, they use more power and produce more heat. That heat needs to be removed. But installing additional cooling and ventilation plant means using more power – a vicious cycle that can only be broken with clever engineering and innovation.

There is an environmental and commercial imperative to increase the energy efficiency of data centres. They account for 2% of global greenhouse gas emissions, the same as aviation, and the impact is growing steadily as we build more. Energy is also by far the largest proportion of cost for a data centre, and bills are on the rise. 

Data centre companies from the likes of Google and Amazon to lesser-known names such as Digital Realty and Anixter are therefore increasingly turning to the expertise of mechanical and electrical engineers to make them more energy efficient. Further evidence, if needed, that data centres have grown from being the sole domain of IT into utility-scale operations. 

But mechanical and electrical engineers aren’t always welcome in the sector, says Rabbetts, who also sits on the board of the European Data Centres Association. “Big firms that specialise in mechanical and electrical engineering can lack the IT expertise and miss solutions,” he says. “One of the biggest issues this industry faces is dinosaurs from an M&E background. We need innovation in cooling, but data centres are not mechanical and electrical infrastructure, they are IT infrastructure. You need IT knowledge to produce the best cooling solutions. The focus should always be the IT.”

An average data centre runs at between 18 and 27°C to keep within the servers’ recommended operating parameters. But modern servers can frequently run up to 35°C. It’s commonly said that for every kilowatt put into a data centre, at least a kilowatt of power is needed to remove the heat generated by its IT equipment.

A typical “cascade” cooling process has up to five steps. A supply fan moves fresh air in, a pump is used in a chilled-water loop, a compressor is used in the refrigerant circuit, another pump is used in the condenser water loop and then there is another fan in the cooling tower. 

Adrian Jones, director of technical development at Cnet Training, says that, as the volume of plant and equipment increases, the amount of power required for cooling can often be closer to 2kW per kilowatt input.
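As a rough illustration of how those stages add up against the rule of thumb, the sketch below tallies notional power draws for each step of the cascade against a 100 kW IT load. The per-stage fractions are illustrative assumptions, not figures from the article.

```python
# Rough sketch: how a five-step "cascade" cooling chain can approach
# 1-2 kW of cooling power per kW of IT load. All fractions are
# illustrative assumptions, not measured data.

IT_LOAD_KW = 100.0  # heat produced by the IT equipment

# Assumed power draw of each cooling stage, as a fraction of IT load
cascade_stages = {
    "supply fan (fresh air in)":      0.15,
    "chilled-water loop pump":        0.10,
    "refrigerant circuit compressor": 0.55,
    "condenser water loop pump":      0.10,
    "cooling tower fan":              0.15,
}

cooling_kw = sum(frac * IT_LOAD_KW for frac in cascade_stages.values())
print(f"IT load:        {IT_LOAD_KW:.0f} kW")
for stage, frac in cascade_stages.items():
    print(f"  {stage:32s} {frac * IT_LOAD_KW:5.1f} kW")
print(f"Total cooling:  {cooling_kw:.0f} kW "
      f"({cooling_kw / IT_LOAD_KW:.2f} kW per kW of IT)")
```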

The easiest way to improve cooling efficiency is to remove as many steps as possible while moving the heat out of the data centre as quickly as possible, says Jones. “The best thing to do then would be to recycle the heat into offices or other parts of the building, but in reality you need ducting and pumps to move that air about, which is hard,” he says.

Most cooling systems for large data centres work by pumping cold air under a pressurised raised floor. This system can be extended into the space between the server “racks” to create a “cold aisle”, a fairly well-established practice. Another standard technique is the use of “hot aisles”. This focuses on controlling the heat at its source, in the servers, and moving it out of the racks by channelling it up into ducts in the ceiling.

Another established technology is “in-row” cooling – the use of variable-speed fans to control pressure and direction of air flow in the server racks. This is sometimes effective enough to eliminate the use of cold and hot aisle air techniques. The methods can be used independently or combined. But the aim is always to get the cold air in and the hot air out – striking the right balance between the two is key.
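Getting that balance right is ultimately an airflow-sizing exercise. The sketch below uses the standard sensible-heat relation to estimate how much cold air a rack needs for a given temperature rise across the servers; the rack powers and the 12 K rise are assumed example values.

```python
# Sketch: estimating the cold-air flow a server rack needs so that the
# hot-aisle/cold-aisle balance works. Uses the sensible heat relation
#   Q_air = P / (rho * cp * dT)
# Rack powers and temperature rise are assumed example values.

RHO_AIR = 1.2    # kg/m^3, air density at roughly room conditions
CP_AIR = 1005.0  # J/(kg*K), specific heat capacity of air

def required_airflow_m3s(rack_power_w: float, delta_t_k: float) -> float:
    """Volumetric airflow needed to carry away rack_power_w with a
    temperature rise of delta_t_k across the servers."""
    return rack_power_w / (RHO_AIR * CP_AIR * delta_t_k)

for rack_kw in (5, 10, 20):  # assumed rack loads
    flow = required_airflow_m3s(rack_kw * 1000, delta_t_k=12.0)
    print(f"{rack_kw:>2} kW rack: {flow:.2f} m^3/s "
          f"({flow * 2118.88:.0f} CFM) at a 12 K air temperature rise")
```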

Some of the most common problems in cooling management include an inability to use external air and poor design, says Jones, and engineers are effectively “fighting the laws of physics” to keep a cooling system effective. Two failure modes they work to avoid are recirculation, where hot air mixes with the cold air stream, and bypass, where cold air returns to the cooling unit without reaching the computer equipment.
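One simple way to quantify those two failure modes is to compare rack inlet and outlet temperatures with the cooling unit’s supply and return temperatures. The sketch below derives rudimentary recirculation and bypass indicators from simple stream mixing; the formulation and the sample temperatures are illustrative, not an industry-standard metric.

```python
# Sketch: rudimentary indicators for recirculation (hot air mixing into
# the cold stream) and bypass (cold air returning without cooling any
# IT equipment), derived from simple mixing of two air streams.
# Temperatures below are assumed example readings in deg C.

t_supply = 18.0      # air leaving the cooling unit
t_return = 30.0      # air arriving back at the cooling unit
t_rack_inlet = 22.0  # air measured at the server intakes
t_rack_outlet = 34.0 # air measured at the server exhausts

# Fraction of rack-exhaust air mixed into the air the servers breathe in
recirculation = (t_rack_inlet - t_supply) / (t_rack_outlet - t_supply)

# Fraction of supply air that short-circuits straight back to the unit
bypass = (t_rack_outlet - t_return) / (t_rack_outlet - t_supply)

print(f"Recirculation fraction: {recirculation:.0%}")
print(f"Bypass fraction:        {bypass:.0%}")
```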

Another enemy is pressure loss. Reductions in pressure can result from obstructions such as cabling, poor design or leaks, and will mean that the cooling unit has to work harder, increasing its power requirement and costs.

The key to overcoming such operational challenges is to constantly measure and monitor conditions within the data centre, which means installing sensor systems. “The payback is often huge and rapid,” Jones says. 

The idea of a data centre creating large amounts of data about itself may seem circular, but it enables better planning for maintenance and upgrades, and tweaking of equipment according to load and environmental conditions.
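At its simplest, that monitoring loop amounts to little more than the sketch below, which averages recent readings from hypothetical rack-inlet sensors and flags anything drifting outside the 18-27°C band quoted earlier.

```python
# Minimal sketch of a monitoring pass over rack-inlet temperature
# sensors, flagging anything drifting outside the 18-27 deg C band
# mentioned above. Sensor names and readings are hypothetical.
from statistics import mean

BAND_LOW, BAND_HIGH = 18.0, 27.0

readings = {                        # last few samples per rack inlet, deg C
    "rack-A03": [21.5, 21.8, 22.0],
    "rack-B07": [26.2, 27.4, 28.1], # creeping out of band
    "rack-C11": [19.0, 18.7, 18.9],
}

for sensor, samples in readings.items():
    avg = mean(samples)
    status = "OK" if BAND_LOW <= avg <= BAND_HIGH else "CHECK COOLING"
    print(f"{sensor}: mean inlet {avg:.1f} C -> {status}")
```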

There are other technologies in an engineer’s armoury. Computational fluid dynamics simulations can be used to model airflow, both at the initial design stage and if retrofitting a cooling system. Many data centres are using variable-speed drives in order to run motors for fans and pumps more efficiently. Airside economisers, to exhaust hot air, are also used. 
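The appeal of variable-speed drives comes from the fan affinity laws: flow scales with fan speed, but power scales roughly with the cube of speed, so a modest reduction in flow yields a disproportionate power saving. A sketch of that arithmetic, with the 30 kW fan rating as an assumed example:

```python
# Sketch of the fan affinity laws behind variable-speed drives:
# flow ~ speed, pressure ~ speed^2, power ~ speed^3 (idealised).
# The 30 kW full-speed fan rating is an assumed example figure.

FULL_SPEED_POWER_KW = 30.0

def fan_power_kw(speed_fraction: float) -> float:
    """Idealised fan power at a given fraction of full speed."""
    return FULL_SPEED_POWER_KW * speed_fraction ** 3

for speed in (1.0, 0.9, 0.8, 0.7):
    saving = FULL_SPEED_POWER_KW - fan_power_kw(speed)
    print(f"{speed:.0%} speed: {fan_power_kw(speed):5.1f} kW "
          f"(saving {saving:4.1f} kW)")
```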

Some data centres cool their servers with water, mainly in the chiller of the air conditioning system. Some use liquid to cool computer processors directly, although this technique is finding more use in high-performance computing applications, where the density of servers within a rack is higher.

Which and how many cooling systems are applied should be a decision dictated by commercial necessity rather than technical ambition. “You can have as many cooling systems as you want,” says Jones. “It depends entirely on your IT equipment and your environment.” 

Conversely, there is a growing reaction against complicated solutions within the sector. Some companies are replacing servers more frequently and not bothering with cooling at all. Running a server outside of its recommended operational parameters will reduce its lifetime from six to two years, but makes economic sense when energy costs are high. 
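Whether that trade-off pays off is straightforward arithmetic. The sketch below compares the annualised cost per server of conventional cooling against running hot with more frequent replacement, across a range of electricity prices; every figure is an assumed example, not data from the article.

```python
# Sketch: comparing conventional cooling (6-year server life) against
# running servers hot with no mechanical cooling (2-year life), across
# a range of electricity prices. Every figure is an assumed example.

SERVER_COST = 4000.0        # purchase cost per server, GBP
IT_KW_PER_SERVER = 0.4      # average electrical draw per server
HOURS_PER_YEAR = 8760
COOLING_KW_PER_IT_KW = 2.0  # cooling overhead avoided by running hot

def annual_cost(price_per_kwh: float, life_years: float, cooled: bool) -> float:
    hardware = SERVER_COST / life_years
    it_energy = IT_KW_PER_SERVER * HOURS_PER_YEAR * price_per_kwh
    cooling = it_energy * COOLING_KW_PER_IT_KW if cooled else 0.0
    return hardware + it_energy + cooling

for price in (0.10, 0.20, 0.30):
    cooled = annual_cost(price, life_years=6, cooled=True)
    hot = annual_cost(price, life_years=2, cooled=False)
    better = "run hot" if hot < cooled else "keep cooling"
    print(f"{price:.2f} GBP/kWh: cooled {cooled:6.0f}, hot {hot:6.0f} -> {better}")
```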

Rabbetts says: “Big mechanical and electrical companies like lots of plant and equipment. But the reality is that modern servers can be run at higher temperatures. Fresh air is normally a perfectly adequate solution. Servers don’t feel the cold, so why not just run them hot? They are almost consumable items – all the cost is in operating them.”

Whatever the cooling solutions employed, there is little doubt that the data centre sector is changing fast as a response to its status as the newest utility. Jan Wiersma, from Dutch data marketing and translation company SDL, is tasked with purchasing the IT infrastructure that his company uses to provide its data analysis, monitoring and translation software in 170 countries. 

Data centres have been around for decades and the industry’s equipment is highly standardised. When Amazon began to sell computer storage capacity in 2006, says Wiersma, it effectively started a process of commoditising data centre services. Today there is little difference between suppliers, and only two factors matter when buying from a data centre provider: the price per kWh of electricity and its availability. He says: “As a company it is better to consume IT as a utility – there is no competitive advantage to owning and running your own data centre. It also levels the playing field for new entrants to markets.”

Furthermore, Wiersma predicts a coming period of consolidation in the sector. “In five years there will be only five to ten big providers in the world,” he says. “The big companies will be the only ones able to make the investment for the worldwide delivery of computer power with the unlimited scalability that businesses demand.”

However, the move to a few large companies and massive data centres will not negate the need for smaller regional centres, which act as temporary homes for data when it is first uploaded. 

Say you post a photograph to Facebook or Flickr. That photograph is kept at first on a regional server, to increase the speed with which it can be called up during the period when interest in it is highest. After a couple of weeks, it is transferred to the larger remote data centre. Software developers use many such techniques to increase the efficiency of data centres, and some of the responsibility for reducing costs and environmental impact lies with them too.
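A highly simplified sketch of that kind of age-based placement policy, assuming a hypothetical two-week threshold before content moves from a regional server to a central data centre:

```python
# Sketch of an age-based placement policy: recently uploaded content
# stays on a regional server where access is fast, then moves to a
# central data centre once interest has dropped off. The two-week
# threshold and the photo records are hypothetical examples.
from datetime import datetime, timedelta

REGIONAL_RETENTION = timedelta(weeks=2)

def placement(uploaded_at: datetime, now: datetime) -> str:
    """Decide where a photo should live based on how old it is."""
    if now - uploaded_at < REGIONAL_RETENTION:
        return "regional server"
    return "central data centre"

now = datetime(2014, 7, 1)
photos = {
    "holiday.jpg":  datetime(2014, 6, 28),  # three days old
    "birthday.jpg": datetime(2014, 5, 30),  # over a month old
}
for name, uploaded in photos.items():
    print(f"{name}: store on {placement(uploaded, now)}")
```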

Whatever the energy efficiency solution, the increasing size and importance of data centres within society is placing them under more public scrutiny. 

Undeniably, the centres have moved from the IT domain into being vital infrastructure. If increased scrutiny results in more transparency for data centres, from maintaining the privacy of the information they contain to helping to reduce the energy they use, it can only be a good thing.

How to identify hot spots

Although traditionally used to tweak the aerodynamics of vehicles in the aerospace and automotive sectors, computational fluid dynamics (CFD) is also employed to optimise airflow during the initial design of a data centre and while it is operating. 

The modelling and computer simulation of airflow can help to identify and rectify hot spots within server racks and to arrange the most efficient configuration. It can also predict how changes to a ventilation system or IT equipment will affect temperature and efficiency. However, fully fledged professional CFD software is out of reach for many in the data centre sector: a full CFD analysis requires teams of specialist engineers to build models of data centres, a lengthy and expensive task.
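Full professional CFD is far beyond a short example, but its basic output, a temperature field scanned for cells above a threshold, can be sketched in miniature. The grid values below are invented stand-ins for simulated or measured rack temperatures.

```python
# Toy sketch of hot-spot identification: scan a coarse temperature
# field (rows = aisles, columns = rack positions) for cells above a
# threshold. Not CFD; the values are invented stand-ins for simulated
# or measured temperatures in deg C.

THRESHOLD_C = 27.0

temperature_field = [
    [21.0, 22.5, 23.0, 22.0],
    [22.0, 28.5, 29.0, 23.5],  # a hot spot in the middle of this aisle
    [21.5, 23.0, 24.0, 22.5],
]

hot_spots = [
    (aisle, position, temp)
    for aisle, row in enumerate(temperature_field)
    for position, temp in enumerate(row)
    if temp > THRESHOLD_C
]

for aisle, position, temp in hot_spots:
    print(f"Hot spot at aisle {aisle}, rack {position}: {temp:.1f} C")
```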

So suppliers are offering CFD analysis and prediction tools integrated into suites of software that aim to offer control and monitoring of the whole data centre, known as data centre infrastructure management packages. Examples include Siemens’ Clarity LC, which the German engineering giant launched last year and which is based on its product lifecycle management software Teamcenter, and Schneider Electric’s Ecostream module, sold as part of the StruxureWare package. Both reuse data already present in the software suite, from CAD plans, to help optimise cooling and ventilation.



Google search could save millions

Internet search giant Google is using machine learning to improve the power usage effectiveness (PUE) of its data centres. PUE is an indicator of how much of the energy entering a data centre goes to powering IT equipment rather than to cooling and ventilation. The company, which runs 12 massive data centres in the US, Asia and Europe, gathers vast amounts of operational data every day: the energy load on its servers, the outside air temperature and the levels at which mechanical and cooling equipment are running.

Last month the company announced that an engineer at its Dallas data centre had produced a software model that predicts PUE with 99.6% accuracy from that operational data. The software runs constantly as the data is generated and makes ongoing recommendations to the centre’s operators. Although a reduction in PUE of just 0.2 may seem minor, Google’s data centres operate continuously, so any gain saves large amounts of money and reduces environmental impact.

Jim Gao, a mechanical engineer and data analyst, says: “The application of machine learning algorithms to existing monitoring data provides an opportunity to significantly improve data centre operating efficiency. A typical large-scale data centre generates millions of data points across thousands of sensors every day, yet this data is rarely used for applications other than monitoring purposes. Advances in processing power and monitoring capabilities create a large opportunity for machine learning to guide best practice.”
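For reference, PUE is simply total facility energy divided by IT energy, and the prediction task Gao describes can be sketched in miniature as a regression over operational features. The one-variable least-squares fit below is a deliberately tiny stand-in using hypothetical data points; it is not Google’s model, which uses many more inputs.

```python
# Sketch: PUE and a deliberately tiny stand-in for the kind of model
# Gao describes: a one-variable least-squares fit predicting PUE from
# outside air temperature. All data points are hypothetical.

def pue(total_facility_kwh: float, it_kwh: float) -> float:
    """Power usage effectiveness: total facility energy over IT energy."""
    return total_facility_kwh / it_kwh

# Hypothetical daily observations: (outside air temp in C, measured PUE)
history = [(5, 1.08), (10, 1.10), (15, 1.12), (20, 1.15), (25, 1.19)]

n = len(history)
mean_x = sum(x for x, _ in history) / n
mean_y = sum(y for _, y in history) / n
slope = (sum((x - mean_x) * (y - mean_y) for x, y in history)
         / sum((x - mean_x) ** 2 for x, _ in history))
intercept = mean_y - slope * mean_x

print(f"Measured PUE today: {pue(1120, 1000):.2f}")   # hypothetical kWh figures
print(f"Predicted PUE at 18 C outside: {intercept + slope * 18:.2f}")
```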
