Thursday, 15 February 2018

Humidity Control + Energy Saving: is there a solution?

ASHRAE has been working for many years on guidelines that allow a wider tolerance for temperature and humidity. Consequently, the need to humidify has decreased, making the value of humidification equipment less significant in the data centre's overall HVAC system.

However, what if maintaining the humidity also reduced the cooling demand?

One of the most effective solutions involves the use of adiabatic humidifiers: these add moisture to an air stream, and as the water evaporates it absorbs heat from the air, increasing humidity and lowering the temperature for very little energy consumption (roughly 1 kW of electrical power for 70 kW of cooling).
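The roughly 70-to-1 ratio of cooling to electrical power follows from the latent heat of vaporisation of water. A back-of-the-envelope sketch, with the flow and power figures assumed purely for illustration (not manufacturer data):

```python
# Back-of-the-envelope check of adiabatic cooling capacity.
# Flow and power figures are illustrative assumptions.

LATENT_HEAT_KJ_PER_KG = 2450   # latent heat of vaporisation of water at ~20 C

water_flow_kg_per_h = 100      # assumed atomised water flow rate
electrical_power_kw = 1.0      # assumed pump/atomiser consumption

# Evaporating water absorbs its latent heat from the air stream:
cooling_kw = water_flow_kg_per_h * LATENT_HEAT_KJ_PER_KG / 3600  # kJ/h -> kW

print(f"Cooling effect: {cooling_kw:.0f} kW")
print(f"Cooling per kW electrical: {cooling_kw / electrical_power_kw:.0f}:1")
```

Evaporating about 100 kg of water per hour absorbs close to 70 kW of heat, which is where the headline ratio comes from.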

This evaporative cooling is increasingly used in new generation data centres in which the design conditions are close to the limits suggested by ASHRAE: this is made possible by careful design of air flows and good separation between the air entering the racks and the exhaust air (layout with “hot aisles and cold aisles”).

The higher operating temperature and humidity allow the use of outside air for ‘free cooling’ (e.g. when below 25°C), and when the outside air is hotter and drier, evaporative cooling can be adopted, increasing humidity up to 60% and higher while bringing the temperature down to acceptable values, simply through the evaporation of water.

There are several different adiabatic humidification technologies available, from “wetted media” to washers and spray systems: the principle underlying all of these devices is to maximise the contact surface area between air and water, so as to ensure effective evaporation and complete absorption into the humidified air stream. The choice of system depends on numerous factors, ranging from available space to required efficiency and the need for modulation.

In general, the solution should be evaluated in terms of TCO (Total Cost of Ownership) over the system’s working life, taking into consideration its resilience under continuous operation as well as its water consumption, which in many areas can be a critical factor. Indeed, alongside the classic PUE (Power Usage Effectiveness) metric for energy consumption, many data centres also monitor WUE (Water Usage Effectiveness) for water consumption.
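As a sketch of how the two metrics are computed (all annual figures below are hypothetical):

```python
# PUE and WUE from hypothetical annual figures for a small facility.
total_facility_energy_kwh = 2_600_000   # everything: IT, cooling, lighting, losses
it_equipment_energy_kwh   = 2_000_000   # servers, storage and network only
water_used_litres         = 3_000_000   # site water, e.g. for evaporative cooling

pue = total_facility_energy_kwh / it_equipment_energy_kwh   # dimensionless, >= 1.0
wue = water_used_litres / it_equipment_energy_kwh           # litres per kWh of IT

print(f"PUE: {pue:.2f}")         # 1.30
print(f"WUE: {wue:.2f} L/kWh")   # 1.50
```

Lower is better for both; evaporative cooling typically trades a better PUE for a worse WUE, which is exactly the TCO balance the paragraph above describes.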

Recently, atomisation systems have become quite popular; these use a system of nozzles and high-pressure pumps to create minute droplets of water, ensuring optimum absorption. They can be controlled by inverters to modulate atomised water production and respond to different load conditions. Other benefits include very low air-side pressure drop, no water recirculation (and consequently a high level of hygiene, something that is unfortunately often neglected) and the possibility of using one pumping unit with two separate distribution systems: one for summer (evaporative cooling) and one for winter humidification, giving significant flexibility, even with vertical air flows.

The effectiveness of such systems depends significantly on local temperature-humidity conditions and in much of Europe both free cooling and evaporative cooling can be exploited for most of the year, to the extent where some data centres are designed to use mechanical cooling as an emergency backup system only.

Guest blog written by Enrico Boscaro, Group Marketing Manager and William Littlewood, Business Development manager, Carel


Wednesday, 3 January 2018


CNet Training recently welcomed Alexander Taylor, an anthropology PhD student from the University of Cambridge, onto its Certified Data Centre Management Professional (CDCMP®) education program. Alex is researching the practices and discourses of data centres. In this article, he outlines his research in more detail and explains how the education program contributed to his ongoing anthropological exploration of the data centre industry.

Data Centres as Anthropological Field-sites

Traditionally, anthropologists would travel to a faraway land and live among a group of people so as to learn as much about their culture and ways of life as possible. Today, however, we conduct fieldwork with people in our own culture just as much as those from others. As such, I am currently working alongside people from diverse areas of the data centre industry in order to explore how data centre practices and discourses imaginatively intersect with ideas of security, resilience, disaster and the digital future.

Data centres pervade our lives in ways that many of us probably don’t even realise and we rely on them for even the most mundane activities, from supermarket shopping to satellite navigation. These data infrastructures now underpin such an incredible range of activities and utilities across government, business and society that it is important we begin to pay attention to them.

I have therefore spent this past year navigating the linguistic and mechanical wilderness of the data centre industry: its canyons of server cabinet formations, its empty wastelands of white space, its multi-coloured rivers of cables, its valleys of conferences, expos and trade shows, its forests filled with the sound of acronyms and its skies full of twinkling server lights.

While data centres may at first appear without cultural value, just nondescript buildings full of pipes, server cabinets and cooling systems, these buildings are in fact the tips of a vast sociocultural iceberg: ways in which we are imagining and configuring both the present and the future. Beneath their surface, data centres say something important about how we perceive ourselves as a culture at this moment in time and what we think it means to be a ‘digital’ society. Working with data centres, cloud computing companies and industry education specialists such as CNet Training, I am thus approaching data centres as socially expressive artefacts through which cultural consciousness (and unconsciousness) is articulated and communicated.

The Cloud Unclothed

CNet Training recently provided me with something of a backstage pass to the cloud when they allowed me to audit their CDCMP® data centre program. ‘The cloud’, as it is commonly known, is a very misleading metaphor. Its connotations of ethereality and immateriality obscure the physical reality of this infrastructure and seemingly suggest that your data is some sort of evaporation in a weird internet water cycle. The little existing academic research on data centres typically argues that the industry strives for invisibility and uses the cloud metaphor to further obscure the political reality of data storage. My ethnographic experience so far, however, seems to suggest quite the opposite; that the industry is somewhat stuck behind the marketable but misleading cloud metaphor that really only serves to confuse customers.

Consequently, it seems that a big part of many data centres’ marketing strategies is to raise awareness that the cloud is material by rendering data centres more visible. We are thus finding ourselves increasingly inundated with high-res images of data centres displaying how stable and secure they are. Data centres have in fact become something like technophilic spectacles, with websites and e-magazines constantly showcasing flashy images of these technologically-endowed spaces. The growing popularity of data centre photography – a seemingly emerging genre of photography concerned with photographing the furniture of data centres in ways that make it look exhilarating – fuels the fervour and demand for images of techno-spatial excess. Photos of science fictional data centre-scapes now saturate the industry and the internet, from Kubrickian stills of sterile, spaceship-like interiors full of reflective aisles of alienware server cabinets to titillating glamour shots of pre-action mist systems and, of course, the occasional suggestive close-up of a CRAC unit. One image in particular recurs in data centre advertising campaigns and has quickly become what people imagine when they think of a data centre: the image of an empty aisle flanked by futuristic-looking server cabinets bathed in the blue light of coruscating LEDs.

With increased visibility comes public awareness of the physical machinery that powers the cloud mirage. This new-found physicality brings with it the associations of decay, entropy and, most importantly, vulnerability that are endemic to all things physical. As counterintuitive as it may seem, vulnerability is what data centres need so that they may then sell themselves as the safest, most secure and resilient choice for clients.

Some (Loosely Connected) Social Effects of Cloud Culture

The combination of the confusing cloud metaphor with the almost impenetrable, acronym-heavy jargon and the generally inward-looking orientation of the data centre sector effectively black boxes data centres and cloud computing from industry outsiders. This means that the industry has ended up a very middle-aged-male-dominated industry with a severe lack of young people, despite the fact that it’s one of the fastest growing, most high-tech industries in the UK and expected to continue to sustain extraordinary growth rates as internet usage booms with the proliferation of Internet-of-Things technologies. This also makes data centres ripe territory for conspiracy theories and media interest, which is another reason why they increasingly render themselves hyper-visible through highly publicised marketing campaigns. You often get the feeling, however, that these visual odes to transparency are in actual fact deployed to obscure something else, like the environmental implications of cloud computing or the fact that your data is stored on some company’s hard drives in a building somewhere you’ll never be able to access.

Furthermore, while cloud computing makes it incredibly easy for businesses to get online and access IT resources that once only larger companies could afford, the less-talked-about inverse effect of this is that the cloud also makes it incredibly difficult for businesses to not use the cloud. Consider, for a moment, the importance of this. In a world of near-compulsory online presence, the widespread availability and accessibility of IT resources makes it more work for businesses to get by without using the cloud. The cloud not only has an incredibly normative presence but comes with a strange kind of (non-weather-related) pressure, a kind of enforced conformity to be online. It wouldn’t be surprising if we begin to see resistance to this, with businesses emerging whose USP is simply that they are not cloud-based or don’t have an online presence.

And the current mass exodus into the cloud has seemingly induced a kind of ‘moral panic’ about our increasing societal dependence upon digital technology and, by extension, the resilience, sustainability and security of digital society and the underlying computer ‘grid’ that supports it. Fear of a potential digital disaster in the cloud-based future is not only reflected by cultural artifacts such as TV shows about global blackouts and books about electromagnetic pulse (EMP), but is also present in a number of practices within the data centre industry, from routine Disaster Recovery plans to the construction of EMP-proof data centres underground for the long-term bunkering of data.

Closing Acknowledgments

With the help of organisations like CNet Training I am thus studying the social and cultural dynamics of data-based digital ‘civilisation’ by analysing the growing importance of data infrastructures. Qualitative anthropological research is participatory in nature and, as such, relies upon the openness of the people, organisations and industries with whom the research is conducted. Every industry has its own vocabularies, culture, practices, structures and spheres of activity and CNet Training’s CDCMP® program acted as a vital window into the complexity of data centre lore. It provided me with a valuable insider’s way to learn the hardcore terms of data centre speak and also with the opportunity to meet people from all levels of the industry, ultimately equipping me with a detailed, in-depth overview of my field-site. Interdisciplinary and inter-industry sharing of information like this, where technical and academically-orientated perspectives and skills meet, helps not only to bridge fragmented education sectors, but to enable rewarding and enriching learning experiences. I would like to sincerely thank the CNet Training team for assisting my research.

Guest blog by Alexander Taylor, PhD Candidate with the Department of Social Anthropology at the University of Cambridge

For further information on CNet’s training programs, please call +44 (0) 1284 767100.

CNet Training

Wednesday, 20 December 2017


The uptake of digital technology, the government’s upcoming Industrial Strategy and strong export demand all add up to an expanding manufacturing sector here in the UK. However, this increase in demand will no doubt lead to added pressure on UK power supply, so it becomes more important than ever to have robust power infrastructure in place.
Downtime can come at a significant cost for manufacturers, with some statistics showing that just one unplanned event can cost in the region of £1.6m.
What’s more, the UK is reported as the worst-performing economy in Europe when it comes to productivity, so it is even more critical to keep downtime to a minimum.
At a large-scale manufacturing plant, for example, a power shutdown or breakdown in the supply of monitoring/control information can have a disastrous effect on productivity which ultimately could impact on a business’ bottom line.
Therefore, industrial processes should be fully protected, both to ensure productivity remains at its best and to reduce the risks and cost implications of machinery failure.
There are a number of measures that manufacturers can take to ensure continuous power – an uninterruptible power supply (commonly referred to as UPS) being one of them. A UPS device will not only protect against power outages, but also provide instant emergency power should the mains power fail.
The UPS will run for a few vital minutes to allow safe shutdown, ensuring that all data is backed up and that the generator has fired up properly and is providing power. But when you consider that some 45% of blackouts are caused by voltage disturbances, the UPS is also a vital piece of equipment for correcting power problems.
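Those "few vital minutes" of bridging time can be estimated from usable battery energy and the protected load. A minimal sketch, with all figures assumed for illustration (real autonomy also depends on battery chemistry, age and the nonlinear discharge curve):

```python
# Rough UPS autonomy estimate; all figures are illustrative assumptions.
battery_capacity_wh = 5000     # assumed usable battery energy
inverter_efficiency = 0.92     # assumed inverter efficiency
load_w = 20000                 # assumed protected load

# Energy delivered to the load, converted to minutes of runtime:
runtime_min = battery_capacity_wh * inverter_efficiency / load_w * 60
print(f"Estimated autonomy: {runtime_min:.1f} minutes")  # ~13.8 min
```

Under these assumptions the UPS bridges roughly a quarter of an hour, comfortably covering generator start-up and a controlled shutdown.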
Manufacturing machinery is vulnerable to numerous electrical anomalies – from voltage sags and spikes to harmonic distortion and other interruptions. In this situation, a UPS can really come into its own – not only to protect against power outages, but also to operate as an effective power conditioning unit.
By smoothing out sags, surges and brownouts to provide a clean and stable power supply, the UPS prevents damage to sensitive and expensive equipment.
In the pharmaceutical industry, for example, when producing a batch of very expensive drugs, or in glass or semiconductor manufacturing, a small dip in the voltage can cause an imperfection in the finished product, making it unusable, and could even result in the whole batch being discarded.
Even in steel or brick production, if there is a micro break in the power that causes the furnace controllers to shut down, the process has to be stopped. The material being processed will be scrapped and the whole process started again, which can take days and be very costly.
The UPS can also be deployed solely as a power conditioner without batteries, which comes in handy in environments above 40°C, the highest temperature at which a battery can be kept.
An example of this is ‘cleaning’ power to prevent light flicker in offices next to heavy industry – cranes moving cargo at docks, for instance. In this situation, a UPS can act as a power conditioner on the power supply to the offices, preventing any flickering.
As we enter this exciting period of growth and see greater uptake of digital technologies, it is wise for those working in the industrial sector to take a step back and make sure their processes and equipment are as protected as they can be.
Manufacturers can do this by having a solid power protection solution in place in the form of a UPS device. This will not only give you peace of mind if machinery does fail, but will give the added reassurance that instances of downtime will be reduced, paving the way for a stronger manufacturing future.
Guest blog by Leo Craig, general manager of Riello UPS.  For more information, please call 0800 269394.

Thursday, 14 December 2017

Vertiv Anticipates Advent of Gen 4 Data Centre in Look Ahead to 2018 Trends

The next-generation data centre will exist beyond walls, seamlessly integrating core facilities with a more intelligent, mission-critical edge of network. These Gen 4 data centres are emerging and will become the model for IT networks of the 2020s. The advent of this edge-dependent data centre is one of five 2018 data centre trends identified by a global panel of experts from Vertiv, formerly Emerson Network Power.

“Rising data volumes, fuelled largely by connected devices, has caused businesses to reevaluate their IT infrastructures to meet increasing consumer demands,” said Giordano Albertazzi, president of Vertiv in Europe, Middle East and Africa. “Although there are a number of directions companies can take to support this rise, many IT leaders are opting to move their facilities closer to the end-user – or to the edge. Whatever approach businesses take, speed and consistency of service delivered throughout this phase will become the most attractive offering for consumers.”

Previous Vertiv forecasts identified trends tied to the cloud, integrated systems, infrastructure security and more. Below are five trends expected to impact the data centre ecosystem in 2018:

  1. Emergence of the Gen 4 Data Centre: Whether traditional IT closets or 1,500 square-foot micro-data centres, organisations increasingly are relying on the edge. The Gen 4 data centre holistically and harmoniously integrates edge and core, elevating these new architectures beyond simple distributed networks.

This is happening with innovative architectures delivering near real-time capacity in scalable, economical modules that leverage optimised thermal solutions, high-density power supplies, lithium-ion batteries, and advanced power distribution units. Advanced monitoring and management technologies pull it all together, allowing hundreds or even thousands of distributed IT nodes to operate in concert to reduce latency and up-front costs, increase utilisation rates, remove complexity, and allow organisations to add network-connected IT capacity when and where they need it.

  2. Cloud Providers Go Colo: Cloud adoption is happening so fast that in many cases cloud providers can’t keep up with capacity demands. In reality, some would rather not try. They would prefer to focus on service delivery and other priorities over new data centre builds, and will turn to colocation providers to meet their capacity demands.

With their focus on efficiency and scalability, colos can meet demand quickly while driving costs downward. The proliferation of colocation facilities also allows cloud providers to choose colo partners in locations that match end-user demand, where they can operate as edge facilities. Colos are responding by provisioning portions of their data centres for cloud services or providing entire build-to-suit facilities.

  3. Reconfiguring the Data Centre’s Middle Class: It’s no secret that the greatest areas of growth in the data centre market are in hyperscale facilities – typically cloud or colocation providers – and at the edge of the network. With the growth in colo and cloud resources, traditional data centre operators now have the opportunity to reimagine and reconfigure their facilities and resources that remain critical to local operations.

Organisations with multiple data centres will continue to consolidate their internal IT resources, likely transitioning what they can to the cloud or colos while downsizing and leveraging rapid deployment configurations that can scale quickly. These new facilities will be smaller, but more efficient and secure, with high availability – consistent with the mission-critical nature of the data these organisations seek to protect.

In parts of the world where cloud and colo adoption is slower, hybrid cloud architectures are the expected next step, marrying more secure owned IT resources with a private or public cloud in the interest of lowering costs and managing risk.

  4. High-Density (Finally) Arrives: The data centre community has been predicting a spike in rack power densities for a decade, but those increases have been incremental at best. That’s changing. While densities under 10 kW per rack remain the norm, deployments at 15 kW are not uncommon in hyperscale facilities – and some are inching toward 25 kW.

Why now? The introduction and widespread adoption of hyper-converged computing systems is the chief driver. Colos, of course, put a premium on space in their facilities, and high rack densities can mean higher revenues. And the energy-saving advances in server and chip technologies can only delay the inevitability of high density for so long. There are reasons to believe, however, that a mainstream move toward higher densities may look more like a slow march than a sprint. Significantly higher densities can fundamentally change a data centre’s form factor – from the power infrastructure to the way organisations cool higher density environments. High-density is coming, but likely later in 2018 and beyond.
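One way to see why higher densities change a facility's form factor: the airflow needed to carry away a rack's heat scales directly with its power draw. A sketch using standard air properties, with the per-rack figures and temperature rise assumed for illustration:

```python
# Airflow required to remove rack heat: Q = m_dot * cp * dT
# Rack powers and the 10 K air temperature rise are illustrative assumptions.
AIR_DENSITY = 1.2    # kg/m^3, air at roughly 20 C
CP_AIR = 1.005       # kJ/(kg*K), specific heat of air

def airflow_m3_per_h(rack_kw: float, delta_t_k: float = 10.0) -> float:
    """Volume of air per hour needed to absorb rack_kw with a delta_t_k rise."""
    mass_flow_kg_s = rack_kw / (CP_AIR * delta_t_k)   # kW / (kJ/kg/K * K) = kg/s
    return mass_flow_kg_s / AIR_DENSITY * 3600        # kg/s -> m^3/h

for kw in (5, 15, 25):
    print(f"{kw:>2} kW rack: {airflow_m3_per_h(kw):,.0f} m3/h")
```

Moving from 5 kW to 25 kW per rack multiplies the required airflow fivefold at the same temperature rise, which is why very high densities push designs toward containment or liquid cooling rather than simply bigger fans.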

  5. The World Reacts to the Edge: As more and more businesses shift computing to the edge of their networks, critical evaluation of the facilities housing these edge resources and the security and ownership of the data contained there is needed. This includes the physical and mechanical design, construction and security of edge facilities as well as complicated questions related to data ownership. Governments and regulatory bodies around the world increasingly will be challenged to consider and act on these issues.

Moving data around the world to the cloud or a core facility and back for analysis is too slow and cumbersome, so more and more data clusters and analytical capabilities sit on the edge – an edge that resides in different cities, states or countries than the home business. Who owns that data, and what are they allowed to do with it? Debate is ongoing, but 2018 will see those discussions advance toward action and answers.

About Vertiv:

Vertiv designs, builds and services critical infrastructure that enables vital applications for data centres, communication networks and commercial and industrial facilities. Formerly Emerson Network Power, Vertiv supports today’s growing mobile and cloud computing markets with a portfolio of power, thermal and infrastructure management solutions including the Chloride®, Liebert®, NetSure™ and Trellis™ brands. Sales in fiscal 2016 were $4.4 billion.

Guest blog by Vertiv.  For more information, please contact Hannah Sharland on +44 (0) 2380 649832.

Tuesday, 5 December 2017

RBS: Online Banking Partly to Blame for 62 Closures

Royal Bank of Scotland's (RBS) decision to close 62 mostly rural branches in Scotland has been met with plenty of protest from both customers and activist groups. RBS says that online banking is partly to blame for the closures, but at least one citizens' group doesn't believe them. They are accusing RBS of closing the branches strictly out of greed.

It is always a touchy situation when a large company with an extensive list of brick-and-mortar locations decides to close some of their local outlets. In the RBS case though, the sting of closing 62 branches is much more painful due to the bank's promise – a promise they reiterated many times in years past – that they would not close a branch even if they were the last bank in town.

That promise is at the forefront of action being taken by the Unite union to try to force RBS to maintain the status quo. Unite is hoping Scotland's government will get behind their efforts as well. The Scottish government is a part owner in RBS.

Business Minister Paul Wheelhouse initially responded to the Unite request by reminding those concerned that authority over banking remains the domain of the UK government. There's not much the Scottish government can do other than work with customers and citizen groups to try to convince RBS to change its course of action.

Dwindling Customer Use

For their part, RBS has said that closing the local branches is the result of changes to how people are using bank services. Prior to the internet age, the local bank branch was the lifeblood of both retail and commercial banking transactions. That is no longer the case.

RBS maintains that the number of customers making use of branches in Scotland has dropped by nearly half over the last five years. In announcing the closures, the bank noted that branch use had fallen by 44% over the last five years, while mobile banking has increased 39% in just the last two years.

Should RBS go ahead with its plans, customers will not be left without banking solutions. The bank says that customers would still have access to a community banker or mobile branch. RBS customers will be able to continue accessing bank services online as well.

So the question is this: are the closures really all about money as Unite contends, or is RBS justified in trying to cut its operating expenses by eliminating branches that are now seeing half as much traffic as they were seeing back in 2012? Unfortunately, there is no simple answer.

The internet age is a wonderful age in which to live. However, the expansion of online access is not without its drawbacks. It is not reasonable for us to expect an organisation to make themselves as efficient as possible through online means while, at the same time, continuing to do things in older, less efficient ways to satisfy those unwilling to embrace the new. We cannot move forward without leaving something behind.

Friday, 24 November 2017

Data Breach and Cover-Up Further Eroding Uber Image

Ride-hailing pioneer Uber has recently suffered a serious blow to its reputation after officials in London failed to renew the company's operating licence following the discovery that illegal software was being used to circumvent official policy that bars government workers from using the service. In short, Uber has been accused by London of cheating the system. Their reputation will not fare any better on recent news that tens of millions of customers and drivers have been hacked – and the company has known about it for more than a year.

The BBC and other news outlets report that some 57 million Uber customers and drivers are victims of a data breach that occurred back in 2016. Not only did the company know about the breach at the time, but they also failed to report the fact to regulators as is required by law. Making matters worse is the fact that Uber paid the hackers $100,000 (£75,000) to delete the data they stole.

Both law and common sense would dictate that Uber report the breach when first discovered. They probably also should not have paid the ransom without making at least some attempt to fight the hackers. Why they paid and chose not to tell regulators is anyone's guess.

A Series of Missteps

This latest episode with Uber is just another in a long list of mishaps over the last three or four years. Former chief executive Travis Kalanick deserves much of the blame, as his management style and Lone Ranger mentality have upset customers, employees and investors alike.

Kalanick was at the helm when the data breach occurred. The BBC speculates that he may have prevented chief security officer Joe Sullivan or anyone else from reporting it because the company, at that time, was trying to secure a new round of funding. For his part, Sullivan resigned when news of the data breach broke.

Bigger Issues in Play

The BBC's Dave Lee says the biggest part of the problem is not the data breach itself, but the cover-up allegedly orchestrated by Kalanick. He says that most customers and drivers would eventually have forgiven Uber if they had been up front and forthright about what happened. Now that we know they refused to do so, forgiveness and future trust may be harder to come by.

All the Uber-specific implications aside, there are some bigger issues in play here. Most important is how the hackers managed to steal the information. They did it by hacking into GitHub, an online portal where software developers publish and share their work. Once inside, the hackers were able to find Uber's login credentials to Amazon Web Services. This is the cloud computing service Uber uses to host its software – and data.

GitHub and Amazon Web Services are equally culpable here. If either one knew about the hack when it occurred, neither reported it. Moreover, Amazon Web Services accounts for a significant portion of cloud software solutions used across the globe. They have some answering to do as well.

Wednesday, 15 November 2017

Johannesburg Cable Heist: Money or Something Else?

Officials in Johannesburg, South Africa have been left scratching their heads, following a brazen cable heist that resulted in the loss of 2 million rand (£110,000) worth of power cables during a burglary some are calling an inside job. The theft occurred at a brand-new data centre in Braamfontein.

News sources say the data centre is a combination data and recovery centre designed to increase the server space and infrastructure necessary for the city to end its reliance on outside service providers. The city essentially wants to host its own data on city-owned servers powered by city infrastructure.

Those plans took a step back after burglars broke into the data centre by entering through an emergency exit on the ground floor. However, there were no signs of forced entry. Once inside the building, the thieves broke into a room where contractors had been storing their tools. They used some of those tools to cut the cables that they eventually stole.

Apparently, the cables were attached to new generators that contractors were testing. There was no loss of power, indicating that the generators were turned off prior to the theft. There were no reports detailing whether the generators were damaged or not. Investigators are now left to speculate as to the motive behind the theft.

Several Possibilities

The first assumption is that the thieves stole the cables for money. After all, they are worth more than £100,000. But how would the thieves off-load the stolen cables without being discovered? This is a question that investigators are still trying to answer. However, there is another possible motive...

In an official statement released after the burglary was discovered, Mayor Herman Mashaba indicated that the heist was an inside job given how little damage was done. He maintained that whoever stole the cables knew exactly what they were looking for and where to find them. He believes the theft may have had nothing to do with money.

Mayor Mashaba has suggested that perhaps the heist occurred in order to dissuade the city of Johannesburg from continuing to build, or at least to slow its progress. If the mayor is right, this would point to an action taken by one of the companies currently providing data centre services to the city: they would not want the city to succeed, because that would mean a loss of contracts for them.

An Impressive Theft

Right now, there is no clear indication as to the motive behind the theft. Whether it was for money or competitive purposes, one thing is certain: the theft was a rather impressive event in terms of what it took to get in, find the tools, cut the cables and run.

The Mayor has made it clear that the theft will not deter his city's efforts to finish the data and operational centre. It is probably a safe bet that the city will beef up security until the centre is up and running, perhaps even beyond that.