Thursday, 30 July 2015

IBM Working on Absorption Cooling for Data Centres

IBM recently announced a new project aimed at using absorption cooling to keep data centres cool.  The project, known as 'THRIVE', aims to use silica gel together with some of the heat produced by data centre servers to cool those same servers, a process that could significantly reduce the amount of power used for cooling.  Although using heat to cool data centres seems counter-intuitive, the principle has proven effective on a small scale.

Absorption cooling uses heat to remove heat.  How does that work?  To understand the principle, it helps to first understand how vapour compression refrigeration works.

A traditional air-conditioning unit uses a compressor, heat exchangers and a circulating refrigerant to remove heat from a given space.  Low-pressure refrigerant evaporates inside the space that needs to be kept cool, absorbing heat from the ambient air.  The compressor then raises the pressure and temperature of the refrigerant vapour so it can release the absorbed heat to the outside air in a second heat exchanger, the condenser.  Finally, the refrigerant passes through an expansion valve, dropping sharply in pressure and temperature, and cycles back to the evaporator to start the process over again.

Vapour compression refrigeration requires electricity to power compressors and pumps.  You can imagine the power and cooling needs of the average data centre make this form of cooling an expensive proposition.  This is the very reason data centre designers are constantly looking for new ways to keep spaces cool.  Enter IBM and its absorption cooling process.
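To put rough numbers on that claim, here is a minimal sketch of the arithmetic.  It uses the standard Carnot limit for refrigeration together with assumed temperatures, an assumed real-world efficiency and a hypothetical 1 MW heat load, so the figures are illustrative rather than IBM's:

```python
# Minimal sketch (assumed figures): rough electrical power needed to run a
# vapour-compression chiller for a given data hall heat load.

def compressor_power_kw(cooling_load_kw, t_cold_c=18.0, t_hot_c=35.0,
                        carnot_fraction=0.5):
    """Estimate compressor input power in kW.

    cooling_load_kw  -- heat to be removed from the data hall (kW)
    t_cold_c/t_hot_c -- assumed evaporator / condenser temperatures (deg C)
    carnot_fraction  -- assumed fraction of the ideal (Carnot) efficiency
                        that a real chiller achieves
    """
    t_cold = t_cold_c + 273.15                  # convert to kelvin
    t_hot = t_hot_c + 273.15
    cop_ideal = t_cold / (t_hot - t_cold)       # Carnot coefficient of performance
    cop_real = cop_ideal * carnot_fraction
    return cooling_load_kw / cop_real

# Hypothetical 1 MW data hall: roughly 115-120 kW of electricity goes into
# the compressors alone, before fans and pumps are counted.
print(round(compressor_power_kw(1000.0), 1), "kW of compressor power")
```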

Absorption cooling involves no moving mechanical parts and therefore needs little or no electricity.  The process begins with a solid material, such as silica gel, that naturally absorbs the chosen refrigerant.  Heating the solid releases the refrigerant, which is then free to absorb heat from the ambient air.  Removing the heat source allows the solid to reabsorb the refrigerant, completing the cycle.

Some of the heat generated by the data centre's servers can be used to drive the system, so no external electricity is necessary.  The best part is that, once the process is started, the physics keep it going with very little additional energy.
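As a rough illustration of that energy balance, the sketch below assumes a thermal coefficient of performance typical of sorption chillers; the value is our assumption, not a figure from the THRIVE project:

```python
# Minimal sketch (assumed figures, not IBM's): cooling delivered when server
# waste heat, rather than a compressor, drives the refrigeration cycle.

def absorption_cooling_kw(waste_heat_kw, thermal_cop=0.6):
    """Cooling output when waste heat regenerates the silica gel.

    waste_heat_kw -- server heat captured to drive the cycle (kW)
    thermal_cop   -- assumed thermal COP; sorption chillers typically
                     deliver roughly 0.5-0.7 kW of cooling per kW of heat
    """
    return waste_heat_kw * thermal_cop

# Hypothetical example: 800 kW of recoverable server heat.
print(absorption_cooling_kw(800.0), "kW of cooling with no compressor power")
```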

Long-Range Potential

IBM is most likely a long way from perfecting an absorption cooling method that would be viable for large-scale commercial purposes.  The researchers still have to work within some fundamental physical limits to create a system that is self-sustaining over time.  However, they believe they are on the right track with the goals set for the THRIVE project.

IBM researchers claim that if their theories prove correct, the technology could reduce data centre power consumption by as much as 65% by 2040.  At the same time, they believe they can cut fossil fuel consumption for data centre cooling by as much as 18%.  These are ambitious goals, to say the least.  Accomplishing them would put IBM at the forefront of new power and cooling systems that could make the data centres of tomorrow more eco-friendly.

Thursday, 23 July 2015

Data Centres and Better Water Management Policies

What is data?  To many of us, it is this innocuous digital substance that floats around the internet without any tangible properties.  Indeed, you could weigh a blank CD, insert it into your computer and fill it with data, then remove it and weigh it again.  You would discover it is no heavier despite now being full.  Yet that does not mean data communications do not have a physical impact.  In fact, it is quite the opposite.

One part of the world offering living proof is southern California.  America's poster child for devastating drought is now in a scenario some experts say could mean the end of local water supplies within a year.  California is depending on an extremely wet El Niño winter to replenish its water supply – but what if that does not happen?  Furthermore, how are modern data communications adding to the problem?

In our drive to make data centres as energy efficient as possible, we have gambled on cooling systems relying on massive amounts of fresh water, according to a recent report in The Guardian.  In exchange for reducing power consumption, we have increased the quantity of water we use to keep data centres cool.  And in an era of intense cloud computing and internet on demand, there is no end in sight.  In California, data communications have a very direct impact on the physical environment by way of energy consumption and water usage.


Workable Solutions

California's drought provides the rest of the world with a perfect illustration of why one should find solutions to problems before those problems occur.  So now it is time for the Golden State to start exploring workable solutions.  At the top of the list should be an end to unreasonable environmental policies that have created the current mess, at least in part.

Understand that southern California is an arid region with a climate similar to Arizona's.  Sustained periods of drought are common there, yet environmental policies have prevented the building of any new dams in northern California for decades.  And the result?  All of the snow that falls in northern California every year melts and flows right into the ocean come spring.  That needs to change.  If Arizona can sustain itself with a properly designed and maintained dam system, so can southern California.

Where data centres are concerned, designers and builders need to start looking at other cooling systems.  One approach now being used in Santa Clara relies on outside air.  The system is regulated by computers and software that wait for just the right conditions to cool indoor spaces.  When those conditions are met, massive ventilation systems open to allow external air in.  It has proven so successful that the campus's central cooling station either drastically reduces output or shuts down completely during air-cooling periods.
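The control logic behind such a system is conceptually simple.  The sketch below is a simplified illustration with hypothetical setpoints, not the Santa Clara facility's actual control software:

```python
# Minimal sketch (hypothetical setpoints): the basic decision an air-side
# economiser makes before opening its dampers and throttling mechanical cooling.

MAX_OUTSIDE_TEMP_C = 24.0   # assumed upper temperature limit for free cooling
MAX_OUTSIDE_RH_PCT = 80.0   # assumed humidity limit to protect IT equipment

def free_cooling_available(outside_temp_c, outside_rh_pct):
    """Return True when outside air is cool and dry enough to use directly."""
    return (outside_temp_c <= MAX_OUTSIDE_TEMP_C
            and outside_rh_pct <= MAX_OUTSIDE_RH_PCT)

def control_step(outside_temp_c, outside_rh_pct):
    if free_cooling_available(outside_temp_c, outside_rh_pct):
        return "open dampers, reduce or shut down the central cooling station"
    return "close dampers, run mechanical cooling"

print(control_step(18.5, 55.0))   # mild evening  -> free cooling
print(control_step(33.0, 40.0))   # hot afternoon -> chillers
```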

Today's data centres use more water than they need to.  It is going to take a combination of better regulation and new technologies to change that.

Wednesday, 15 July 2015

Government Announces New Investments in the Digital Economy

The Government has announced plans to invest some £45 million in improving the lives of UK citizens through digital research and technology.  The investment will be made by way of numerous research centres located throughout the UK.  The Engineering and Physical Sciences Research Council will provide £23 million of the funding; the remaining £22 million will come from other government agencies.

Universities and Science Minister Jo Johnson made the announcement on the heels of Chancellor George Osborne's commitment in the summer Budget announcement to invest more in the digital economy.  Part of Osborne's plan includes the newly announced Digital Transformation Plan, which will provide an overall framework for the government's investment in the digital sector.

Funding for the new research centres will go to University College London and Bath, Newcastle, Nottingham, Swansea and York universities. It is hoped that the funding will help the research centres attract attention from partners willing to contribute investments of their own.  The research will cover everything from managed services to education to entertainment.

Some projects specifically mentioned by Johnson include:

data refinement for use in personal products
open source initiatives to meet the needs of government services
motion capture technology for medical and athletic purposes
game development to improve educational outcomes.

It appears as though the investment will go well beyond simple hardware and IT services for business.  The government aims to invest in a broad range of initiatives that it hopes will improve the economy, the culture, and the entire social structure of the UK.


Long-Term Outlook

Despite a rather sizeable investment in the digital economy, the government's plans will likely have little effect on things such as the nation's data centres or data centre jobs.  This new plan is more about research than anything else, leaving the private sector to continue leading the way in real-world economic development.  That is not a bad thing.  Let the government fund the research while the private sector puts the knowledge gained to practical use.  Such partnerships provide examples of how the public and private sectors can work together.

In terms of the long-term effect of the Government's plans, scepticism remains.  There are those who question the ability of local communities to translate funding into actionable results on a consistent basis.  If local involvement falters, the long-term benefits of digital research will be minimal at best.

One way to prevent wasting government investment is to heavily promote local partnerships between research centres and private sector businesses.  Perhaps incentivising business investment would encourage private enterprise to put some money into the game, thereby guaranteeing they play an active role in making sure research yields profitable results.

It is clear that the current government believes the future of the UK economy will rely heavily on digital technologies.  It is committing a significant amount of funding that it believes will further enhance the digital economy and keep the UK at the forefront of technology in Europe. Time will tell if they are right.

Friday, 10 July 2015

Payback time: The constraints of contracted data centre staff

It’s common for data centre providers to outsource the running of their facilities to outside companies; the provider supplies the building, power and cooling, and gets other people to run the data centre itself.

These contracted companies may sign a three, four or five-year contract (or an even shorter one if they come in partway through), so there is little incentive for their engineers and technicians to improve the small things that ultimately pay dividends further down the line.

Those that implement initiatives that pay off in 10, 20 or 30 years are rarely given the credit they deserve; instead, when the effect kicks in, someone else takes the praise. The “payback” period is therefore relatively short and not conducive to a world-class, incentivised operation.

There is a focus at an industry and government level on improving the efficiency of data centres. When providers outsource the running of their facilities, they risk missing the incremental improvements that ultimately add up and become best practice.

Permanently employed data centre operatives have a high level of personal investment, which should be reassuring to those handing over a certain degree of their IT infrastructure. These permanent staff maintain accountability for their facility and know that they can make a serious difference - it feels like ‘theirs.’ If they think of a better way of doing something, they’re empowered to implement process optimisations and drive them into global operations. You can be sure that the rightful credit is given and that person gets the recognition he or she deserves.

The ability to make a real change is a powerful motivator and helps attract the best operatives. Equally, the operations employing permanent staff are incentivised to invest in their people and provide industry-leading training.

There is an important distinction between specialised real estate and professional data centre operation, and that distinction is, more often than not, the people.

Connectivity, cooling and power are (of course) fundamentals but it’s the people inside that are the real differentiators.  They have the power to evolve a facility; they have the power to make a potentially great data centre average or a good data centre great. But it’s not just about the quality of staff; it’s about enabling those talented individuals to constantly improve to the benefit of everyone involved. And when data centre engineering staff can stay with their company for 25 years, even the smallest things are worth doing as they feel the benefit down the line.

We’ve all heard the story of how British cycling coach Dave Brailsford enabled Team GB to dominate the cycling medal table at London 2012 through marginal gains. The same principle applies to running a data centre. It’s the little things that add up to make a difference.

If a data centre is filled with employees who are motivated and enthusiastic about finding new and more efficient ways of doing things and solving issues instead of just leaving things as they are, there will be a constant stream of innovative forward thinking and strategies.

This attitude will inherently be the catalyst that spurs on constant improvement and advancement in the set-up, and provide customers with peace of mind that their infrastructure is being handled by the best in the business.

Guest blog by Mike Bennett, VP Global Data Centre Acquisition and Expansion at CenturyLink EMEA

Wednesday, 8 July 2015

New Research Hopes to Extend Fibre-Optic Data Transmissions

Optical fibre is currently the material of choice for manufacturing high-speed data cables used for broadband internet access and commercial networking.  As good as fibre-optic cabling is, it does have its drawbacks.  Among those drawbacks is something known as 'crosstalk'.  The crosstalk phenomenon is holding back our ability to scale up fibre-optic distances by distorting signals to the extent that receivers cannot decipher them.  However, all this may be on the verge of changing, thanks to research being carried out in the US.

Researchers from the University of California, San Diego (UCSD) claim they have figured out a way to increase the distance of fibre optic data communications, without sacrificing speed, by boosting the maximum power sent through the cable network at its origin.  This runs counter to everything we currently know about fibre-optic transmissions.

A major concern with current technology is that data signals are distorted once the amount of power being sent through a cable reaches a certain threshold.  This distortion - or crosstalk - can only be handled by limiting the power at the origin of the signal and using amplifiers and repeaters along the way.

As an example, data service providers currently extend fibre-optic distances using repeaters at strategic points in a network to boost signals incrementally without introducing distortion.  Unfortunately, this system hampers overall speed.  The UCSD researchers have apparently found a way to increase power past the established threshold without encountering the same crosstalk issues.  This offers the potential for increasing signal distance while maintaining speed and signal integrity.

The researchers explain what they did in a recently released paper, saying in part:

“Our approach removes this power limit, which in turn extends how far signals can travel in optical fibre without needing a repeater.”

How They Did It:

At the heart of the research team's experiments was the prediction of crosstalk parameters using devices known as ‘frequency combs’.  By predicting the crosstalk accurately, the team was able to use receivers capable of correctly deciphering signals without any loss of data.  They successfully sent a signal across 12,000 km of fibre-optic cable with no additional repeaters.

The team says they were able to predict crosstalk parameters because the phenomenon follows a known set of ‘fixed physical laws’ that researchers were able to observe and document.  Success was then a matter of manipulating data streams within those laws to produce the desired outcome.
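To see why predictability matters, consider the toy model below.  It is not the UCSD team's actual method; it simply illustrates that a distortion governed by known, fixed laws can be computed in advance and undone at the receiver.  The distortion here is a power-dependent phase rotation, loosely inspired by the Kerr effect, with an arbitrary strength parameter of our choosing:

```python
# Minimal sketch (toy model): a deterministic, predictable distortion can be
# inverted at the receiver, so no information is lost.

import numpy as np

rng = np.random.default_rng(0)
GAMMA = 0.8                       # assumed distortion strength (arbitrary units)

constellation = np.array([1 + 1j, 1 - 1j, -1 + 1j, -1 - 1j])   # QPSK-like symbols
symbols = constellation[rng.integers(0, 4, size=8)]

def distort(signal, gamma):
    """Apply a deterministic, power-dependent phase rotation."""
    return signal * np.exp(1j * gamma * np.abs(signal) ** 2)

def undo_distortion(signal, gamma):
    """Receiver-side inversion, possible only because gamma is predictable."""
    return signal * np.exp(-1j * gamma * np.abs(signal) ** 2)

received = distort(symbols, GAMMA)
recovered = undo_distortion(received, GAMMA)

print(np.allclose(recovered, symbols))   # True: the original symbols are recovered
```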


What This Means:

Expect infrastructure and network design to change rapidly if the US research proves fruitful on a large scale.  The first organisation that can harness greater fibre-optic distance and speed will be in an excellent position moving forward, as the ambitions of companies such as BT and Sky demonstrate.  BT is already working on a network it hopes will bring speeds of 500 Mbps to a good portion of the UK in the next few years; Sky is hoping to reach 1 Gbps for its residential and commercial customers in the near future.

Wednesday, 1 July 2015

Mega Data Centre and Load Bank Testing

Hillstone CEO Paul Smethurst has written a brilliant piece about the evolution of load bank testing as it relates to the mega data centre.  In his piece, Smethurst breaks down the evolution of the mega data centre and how modern solutions are being developed to handle the needs of what appears to be the future of cloud computing on a grand scale.  Rather than attempting to summarise Smethurst, we offer you portions of his text along with a few comments of our own.  

Smethurst began his piece by identifying the problem at hand:

“The insatiable demand for data and the growth of cloud-based services has changed the European data centre landscape with the arrival of the MEGA DATA CENTRE.”

“The Mega Data centre allows global software giants like Microsoft, Google, and Apple to provide our day to day IT services. The Mega Data centre is also the foundation for colocation providers such as Digital Realty Trust, Equinix, Telecity, and Interxion to facilitate connectivity to the cloud for multi-national conglomerates in banking, oil & gas and telecoms.”

“This rapid expansion of cloud services has created the challenge of how to commission Mega Data centres of 20MW, 40MW, 80MW, and 100MW.”

Indeed, data centres with such immense demands are not only extremely challenging to design and build; they cost a tremendous amount of money. It is imperative that stakeholders get it right if they are to achieve maximum return on investment.

Smethurst continued:

“Fortunately the evolution of the Mega Data centre has taken a practical modular build approach, with roll-out phases of dual halls at 2500 kW or as a single 5000 kW empty white space. Such a reduction in rating does not, however, reduce the challenges of sourcing the quantity of load banks needed to complete the Integrated System Testing (IST).”

“The primary objective of data hall IST commissioning is to verify the mechanical and electrical systems under full-load operating conditions, maintenance scenarios and failure scenarios, confirming that the data hall is ready for deployment of active equipment.”

“Today’s IST requires a package of equipment that will closely replicate the data hall in live operation. Server simulator load banks, flexible cable distribution, automatic transfer switches, data logging for electrical power and environmental conditions (temperature and humidity), and the ability to incorporate the load banks within temporary hot-aisle separation partitions give the foundations for a successful IST. These tools allow the commissioning report to present a CFD model of the actual data hall operation during the IST.”

Avoiding Common Mistakes:

Understanding the complex problems of deploying a mega data centre is one thing; coming up with the proper solutions is another.  Smethurst took time to explain some of the common mistakes the industry regularly grapples with:

“The restricted choices in the market dilute the availability of load bank options and equipment. Selecting the wrong type of load bank solution on cost grounds can compromise the validity of the IST, but the hidden problems will not manifest until the data hall goes live with active IT equipment.

“The temptation to choose 20 kW three-phase industrial space heaters rather than server simulator load banks affects the commissioning of the mechanical cooling systems. The design of such heaters, having thermostatic temperature controls, creates a lumpy load during the on-and-off cycling needed to protect the unit from overheating. These heaters prevent the ambient temperature of the data hall from reaching the design criteria needed to commission the CRAC or AHU units. Some suppliers have reported that they have removed the thermostatic controls, only to find the space heaters overheat and, in some circumstances, catch fire because they are operated outside the intended product design and operation.”

“The choice of large 110 kW load banks can be justified when testing site equipment such as PDU panels, bus bars or switchboards to Level 3 ASHRAE requirements. These load banks provide a cost-effective solution for proving the electrical infrastructure of Mega Data centres; however, they will create localised hot spots, or areas of concentrated heat, should they be used for the commissioning of the cooling systems.”

“In extreme circumstances, it has been observed during Tier certification that the electrical load has been provided by 2 kW infra-red heaters or 1 kW hair dryers. Infra-red heaters create an ambient temperature of >40°C and wall skin temperatures of 70°C. Hair dryers become a fire risk, as they are not designed for the continuous operation required in an IST. This type of low-cost solution should not be considered to replicate the operation of IT equipment; it risks costly delays and compromises the integrity of the testing programme.”
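To illustrate why the size of the load bank matters, here is a back-of-the-envelope comparison using the figures quoted above.  The arithmetic is ours, not Smethurst's, and is purely illustrative:

```python
# Minimal sketch (illustrative arithmetic only): how finely the heat load is
# distributed when a 2500 kW data hall is commissioned with 20 kW
# server-simulator load banks versus large 110 kW units.

import math

HALL_LOAD_KW = 2500              # single data hall phase, from the figures above

def units_needed(hall_load_kw, unit_rating_kw):
    """Number of load banks required to replicate the full hall load."""
    return math.ceil(hall_load_kw / unit_rating_kw)

for rating in (20, 110):         # server simulators vs large electrical load banks
    count = units_needed(HALL_LOAD_KW, rating)
    print(f"{rating:>3} kW units: {count:>3} load banks "
          f"(~{HALL_LOAD_KW / count:.0f} kW of heat per location)")

# 125 small units spread heat much as live IT racks would; 23 large units
# concentrate it, which is why the piece warns of localised hot spots.
```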

Smethurst closed out his piece by talking about some of the solutions his company offers. Since 1989, Hillstone Load Banks have been designing and manufacturing industry-leading load banks for companies and organisations around the world.  We encourage you to read Smethurst's full article to fully understand what Hillstone can offer for mega data centre deployment.

Paul Smethurst, CEO, Hillstone Loadbanks.

For more information, please visit www.hillstone.co.uk, email sales@hillstone.co.uk or call Paul Smethurst on +44 (0) 161 763 3100.