Thursday, 27 November 2014

Google Signs Long-Term Deal for Wind Power

When Google flips the switch on its brand-new data centre in Delfzijl, Netherlands, it will be powered entirely by renewable energy generated by local power providers.  Some of that power will come from a new wind farm now being constructed by Dutch energy company Eneco.  Google and Eneco have just signed a 10-year agreement that will see the search engine giant purchase all of the energy produced by the wind farm until 2026.

Google is building the new data centre near the port of Eemshaven, on the Netherlands' north coast.  It chose the location because it is the landing site of a major fibre-optic cable providing data services to the region.  The area is also home to a number of significant energy providers capable of generating the power Google will need for the facility… and it will certainly need a lot.

The data centre will be large enough to cover an area greater than 40 football pitches.  Within its walls will be tens of thousands of servers providing hosting and colocation for companies all over Europe.  A data centre of this size will require a tremendous amount of power for running the servers and keeping the entire building cool.  Google will use all of the 62 MW generated by the Eneco wind farm… and then some.  For its part, Eneco says construction of the wind farm will provide approximately 80 local jobs for a year and a half.

Signing the deal with Eneco is just the latest in Google's ongoing efforts to use more renewable energy.  In addition to the Eneco agreement, the company has also signed power purchase agreements (PPAs) with two other companies in Europe.  What's more, it has now signed a total of eight power agreements around the globe enabling it to purchase renewable energy.  Google is clearly committed to adopting a green strategy for its new data centre operations.

A Boost to the Industry

By signing renewable energy agreements with companies such as Eneco, Google is doing more than just helping local environments and keeping its own energy bills in check.  The company is also providing a much-needed boost to the renewable energy industry.  Keep in mind that renewables are an expensive proposition, requiring the promise of payback before companies are willing to invest large sums of money.

Every time a company such as Google signs a PPA, it enables renewable energy companies to design and build the infrastructure necessary to make renewable energy profitable and efficient.  In this case specifically, Eneco has the incentive to put time and money into the new wind farm because it is guaranteed that all of the power will be purchased.  If the wind farm proves successful in meeting Google's needs, and we have no doubt that it will, Eneco will be encouraged to build more wind farms for other customers.

We are getting closer to the day when the data centre industry will be relying almost exclusively on renewable energy sources.  That day cannot come soon enough...


Friday, 21 November 2014

Report: Many CIOs are Failing to Understand their Ever-Evolving Role

It wasn't too long ago that the main responsibility of the chief information officer (CIO) was to ensure that the company IT department kept the interoffice e-mail system and the website up and running.  That was then, however; today, things have changed.  The modern CIO is an integral part of company management, with a more prominent role than ever before, yet many CIOs do not fully grasp that new role, according to a brand-new report from the Society for Information Management (SIM).

The report lays out the role of IT and the CIO in the modern business environment.  It then goes on to discuss how these new roles are supposed to fit into the larger business picture, and the fact that they are not being implemented as they should be in many companies.

For example, information technology now involves more than just keeping company computers running.  It is all about transmitting data, analysing data and linking together every organisational department through effective communications.  This new paradigm suggests that the CIO needs to worry as much about company management as about infrastructure support.

The report presents data from a study involving just over 1,000 responses to a survey presented to senior IT leaders.  Among the participants, 451 identified themselves as a CIO – either by title or workplace role. According to the data, CEOs are unhappy with the fact that their CIOs do not seem to understand their new roles within the company structure.  CIOs are still keeping computer systems functioning but they are not providing the data, analysis and other IT tools needed to help companies be at their best.

Possible Solutions

With IT services and technologies consuming ever-larger portions of the company budget, more attention needs to be paid to the issue of CIOs not meeting executive management expectations.  There are three solutions that need to be considered, either separately or in a combined effort.

Firstly, companies need to adjust their cultures so that the IT department is no longer considered a separate, stand-alone entity that exists by itself in a back corner of the building.  IT needs to be considered just as integral to the overall success of the company as the sales force, labour and office staff.

Secondly, the CIO position must be elevated to executive level management at companies in which this has not yet happened.  As with the chief financial officer or chief operations officer, the CIO should be reporting directly to the CEO as a member of the executive management staff.

Thirdly, the CIO needs to be included in the discussions of any business decisions involving the other members of the executive management.  As one of the company's senior officers, the CIO cannot be expected to contribute to his or her full potential without being included in executive-level decision-making.

There is little debate that the roles of both IT and the CIO have evolved over the years.  That evolution is now occurring at a faster rate, requiring more urgency to get it right.



Friday, 14 November 2014

Telecom Fraud Reveals Oft Overlooked Security Flaw

Cyber security these days focuses almost entirely on electronic data breaches by way of network hacks, malware and the like.  And rightly so.  However, the recent fraud conviction of a telecom director suggests that we might be ignoring one of the most fundamental aspects of fraud – to our own detriment.  What is it we are ignoring?  The old-fashioned con artist.

Matthew Devlin, a 25-year-old telecom director from Halifax, recently appeared before a magistrates’ court after he was caught impersonating a security official in order to gain sensitive customer information.  Devlin apparently contacted Everything Everywhere (EE), among other telecoms companies, in an effort to obtain user names and passwords for customer accounts.  He succeeded in obtaining the information he was after, relating to more than 1,000 customers.

Devlin intended to use the information to determine when mobile customers were in line for an upgrade so that he could contact them and pitch his own company's products and services.  The magistrates’ court fined him £500 and ordered him to pay a £50 victim surcharge and more than £430 in court costs.

More Severe Penalties

Upon reading the penalties imposed on Mr Devlin, it is hard to imagine he will be deterred from trying the scheme again.  After all, what is a £1,000 bill if he can successfully sell tens of thousands of pounds in new products and services?  Not much, according to Information Commissioner Christopher Graham. Graham was quoted as saying:

“Fines like this are no deterrent.  Our personal details are worth serious money to rogue operators.  If we don't want people to steal our personal details or buy and sell them as they like, then we need to show them how serious we are taking this.  And that means the prospect of prison for the most serious cases.”

The thing we seem to be forgetting is that fighting con artists is completely different from fighting cybercrime at the local data centre or commercial IT department.  By their nature, con games involve the human element which, unfortunately, makes them harder to thwart.  The only way to combat them effectively is with a combination of efficient training and harsh penalties that make such activities a losing proposition.

In most parts of Western Europe, we tend to take an approach toward crime that only deals with the issues around the edges.  Simply put, we are more prone to deal with the symptoms of crime than the actual cause of it.  Therefore, while we can continue to develop sophisticated digital technologies to protect networking and sensitive communications, we allow people such as Mr Devlin to brazenly impersonate security personnel to steal personal data.  Moreover, when caught, we impose penalties that amount to nothing more than a slap on the wrist.

Christopher Graham is right. If we are to prevent this sort of fraud in the future, the penalties for such crimes need to be tougher.  They need to be harsh enough that criminals will be forced to think two or three times before perpetrating such crimes.


Thursday, 13 November 2014

2bm design and build a data centre for ARM in US

Data centre design and build specialists 2bm have recently received some great news.  One of their most recent and challenging contracts has been nominated for three separate awards in the Datacenter Dynamics North American Awards 2014:

1.    Special Assignment Team of the Year
2.    Innovation in the Medium Data Centre
3.    The Green Data Centre

The project was carried out for ARM, who were looking for an experienced and qualified team to provide them with a customised data centre solution that aligned with their ethos of innovation, efficiency and sustainability (as per EUHPC) in Austin, Texas.

Although 2bm had never worked with ARM in the US before, the two companies had worked together on a facility in Cambridge, UK.  That project was, therefore, intended to serve as a high-level model for Austin.

Project Summary:
·        ARM required a 2.0 MW High Performance Cluster (HPC) data centre
·        950 kW operational Jan 2014 + 950 kW on-demand March 2014
·        N+1, Tier III, PUE 1.30 or below
·        Conforms to TIA-942, AIA, EU Code of Conduct for Data Centres, NFPA, ASHRAE, & Local Codes

Senior Project Manager Gordon Smith represented 2bm, working closely alongside Digital Realty. This time the task was to deliver multiple, innovative energy saving features within a single facility.

The system was designed to efficiently handle varying IT loads up to 22kW per cabinet, with the flexibility to dynamically control cooling and adjust to low loads per cabinet.

Innovation features of the project included high-density, water-cooled racks using a high supply water temperature (75°F).  The temperature of each cabinet is tightly controlled at 81°F, and water-to-RDHx (rear door heat exchanger) loops, supplied via a Venturi negative-pressure system, provide full leak prevention in the data halls.

In addition, energy consumption was fully optimised, with zero water consumption.

The project represented a series of ‘first times’ for 2bm and a number of the partnering companies.  For example, this was the first time 2bm had worked alongside Digital Realty, and the first time 2bm and ARM had worked together on a project in the US.

“We had an immediate need for a high efficiency, high density data centre. But, more importantly, we have a long-term need for a partner who can potentially help us with our go forward data centre strategy.” -- John Goodenough, Vice President of Design Technology and Automation at ARM Holdings

“The facility is an exceptional example of an energy efficient data centre with every aspect meeting and in some cases surpassing the requirements of the EU code of conduct for data centres. We believe that the design of the ARM NAHPC data centre fully embraces and surpasses the sustainability credentials required for commercial buildings with the city of Austin.” -- CEEDA Recognition

The full list of finalists in all judging categories can be found here and the winners will be announced early next month.
2bm is proud of what has been accomplished and excited about the potential to do this again globally.

To read more about the project - including design highlights - click here.

Guest blog by Ashleigh Soppet, Digital Marketing Manager, 2bm




Tuesday, 11 November 2014

Free Cooling. Is It Really Free?

The term ‘free cooling’ has been used more and more in recent years in relation to data centre cooling. However, on the basis that there is no such thing as a free lunch, we suspect there’s no such thing as FREE cooling either.  In this blog, Alan Beresford, Technical and Managing Director at EcoCooling explains the true costs of so-called free cooling.

If we took a rack of servers and put it in a field in a cool environment like the UK, we could just about claim we had totally free cooling.  However, in truth, there is the cost and power usage of the two or more blower fans in every server.  So even on a cool day, in a cool field, free cooling isn’t actually free!

Let’s move into the real data centre scenario.  Provided the external fresh air is below 18-20C, and provided we can force enough of it through the data hall, we have nearly-free cooling going on.

However, we now need big fans to blow around 10 cubic metres of fresh air per second through the data hall for every 100kW of IT load (about 30 to 50 racks).  So, in addition to the server blower-fans, we’re going to need power for these big air-movers.
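That airflow figure can be roughly sanity-checked from first principles.  The sketch below is illustrative only: it assumes standard air properties and a front-to-back temperature rise of around 8-10C, neither of which is stated in this post.

```python
# Rough check of the "10 cubic metres per second per 100kW" rule of thumb.
# Assumed values: air density ~1.2 kg/m^3, specific heat ~1005 J/(kg*K).
AIR_DENSITY = 1.2           # kg/m^3 at roughly 20C
AIR_SPECIFIC_HEAT = 1005.0  # J/(kg*K)

def airflow_m3_per_s(it_load_w: float, delta_t_c: float) -> float:
    """Volumetric airflow needed to carry away it_load_w of heat
    at a given front-to-back temperature rise."""
    mass_flow_kg_s = it_load_w / (AIR_SPECIFIC_HEAT * delta_t_c)
    return mass_flow_kg_s / AIR_DENSITY

print(round(airflow_m3_per_s(100_000, 8), 1))   # ~10.4 m^3/s at an 8C rise
print(round(airflow_m3_per_s(100_000, 10), 1))  # ~8.3 m^3/s at a 10C rise
```

An 8C rise gives almost exactly the 10 cubic metres per second quoted above, so the rule of thumb holds under these assumptions.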

In practice we also need filtration. This increases air resistance which adds to the fan power requirement.  We also need to add evaporative cooling (where the fresh-air is cooled by the effects of water evaporation) to deal with the few days per year where the outside temperature is over 20C. 

So the power budget is now up to 3-4kW per 100kW of IT load – a PUE of 1.03 to 1.04. Whilst this is still not ‘free’, it’s massively cheaper than the conventional refrigeration-based cooling systems that have been deployed for the last twenty years or more.
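For readers unfamiliar with the metric, PUE is simply total facility power divided by IT power.  A minimal sketch, counting only the cooling overhead quoted in this post (a real PUE calculation also includes UPS losses, lighting and so on):

```python
def pue(it_load_kw: float, overhead_kw: float) -> float:
    """Simplified PUE: (IT power + overhead power) / IT power."""
    return (it_load_kw + overhead_kw) / it_load_kw

print(pue(100, 3))    # 1.03 - fresh-air/evaporative cooling, as quoted above
print(pue(100, 100))  # 2.0  - a system using 100kW of cooling per 100kW of IT
```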

As yet, probably less than two per cent of data centres are cooled with fresh-air and evaporative cooling.  And whilst a lot more could be, it’s not appropriate for every data centre. But we’ll cover that later.

Refrigeration dominates:
Refrigeration-based cooling systems come in a number of formats; the main examples are:

DX CRACs – where there is a DX (direct expansion) compressor and heat exchanger inside a CRAC (computer room air conditioning) unit within the data centre hall.  Pipework containing a refrigerant connects the CRAC to a fan-assisted condenser unit outside.  The refrigeration unit inside the CRAC extracts the heat from the data centre's hot air and then transports the hot refrigerant to the condenser, where the heat is expelled into the atmosphere.

Chilled-water systems – where a refrigeration unit generally sits outside the data centre.  This uses the standard compressor, evaporator and condenser-plus-fans model of refrigeration.  However, it requires an additional heat exchanger to chill a water circuit that transports low-temperature water to either a data-hall CRAH (computer room air handling) unit or to in-rack solutions such as rear door coolers (where another heat exchanger with fans extracts the heat from the data hall air).

A legacy chilled water refrigeration system can use up to 100 per cent of the IT load – that’s 100kW of cooling power per 100kW of IT load!

Modern refrigeration systems have benefited significantly from variable-speed fans and consume somewhat less.

Refrigeration with free cooling:
A lot of manufacturers have realised, rightly, that there are a large number of days each year, in temperate countries, where the outside air temperature is theoretically low enough to cool the data centre without the refrigeration system being used - and hence save significant power and energy cost.

The trouble with this idea is two-fold.  Firstly, you still need to power internal and external fans and pumps.

Secondly, in ‘free cooling’ mode, a system designed for refrigeration is fairly inefficient.  As a result, ‘free cooling days’ are not those with temperatures up to 18-20C; in theory, the inlet temperature to the air cooler unit needs to be below 14C.

In practice, however, external chiller units are generally installed in ‘chiller-farms’ either on the roof or on the ground. There can be significant leakage of hot exhaust-air back into the chiller inlet. This means that the inlet is almost never below 14C. So, in some installations, despite the theory, you’ll practically get zero ‘free-cooling’ days.

Higher temperature, hidden cost:
Theoretically, you can get more ‘free cooling’ days from a system if you increase the server inlet temperature.  Under recently relaxed guidelines from ASHRAE (which sets data centre cooling standards), the input temperature to the servers can be elevated from 18C to 27C.

With say a 10C heat differential from front to back of the servers, this means the exhaust air will be at around 37C.

However, according to ASHRAE’s own published figures, server component reliability is quite badly reduced – typically 30% more heat-related failures at a 27C inlet air temperature than at 20C.  Not great in a mission-critical facility.

It’s a little-known fact that the servers themselves use more energy at higher supply air temperatures.  Using high inlet temperatures to maximise cooling plant efficiency can increase server energy use by around 3%.  The PUE may look good, but the actual operating cost may go up.
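A toy calculation shows how this can happen.  The 3% extra server draw is the figure mentioned above; the PUE values are hypothetical, chosen only to illustrate the trade-off:

```python
def total_power_kw(server_kw: float, pue: float) -> float:
    """Total facility draw for a given server load and PUE."""
    return server_kw * pue

# Cooler inlet: worse PUE, nominal server power.
cool_inlet = total_power_kw(100.0, 1.05)
# Warmer inlet: better PUE, but the servers draw ~3% more.
warm_inlet = total_power_kw(100.0 * 1.03, 1.03)

print(round(cool_inlet, 2), round(warm_inlet, 2))  # 105.0 vs ~106.09
```

Despite the better PUE, total consumption is higher in the warm-inlet case - exactly the hidden cost described above.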

And then there are the implications for the engineers and all the power and data cabling, patching and network switches that are housed at the back of the rack - which were never designed to work with ambient temperatures near 40C.

Indirect air cooling:
Some data centre operators are beginning to understand that in many situations, though not all, it’s OK to blow filtered fresh-air through the data hall and servers.

Indirect air systems are a compromise with an air-to-air heat exchanger to keep the data hall air separate from the external fresh-air.   

If, say, your data centre is close to a chemical works, or an inner city full of exhaust fumes, it can make sense.  But the downside is that, with two air circuits, you need two sets of fans; the convoluted airflow path increases the air resistance in both circuits, and the practical power use is more like 10-15kW per 100kW of IT load once you add up all of the fan power.

Most indirect air systems, even those that use evaporative cooling, also need refrigeration for some days of the year, adding the cost of a refrigeration system to the expensive heat exchangers.

Nearest to free:
I’d love to be able to tell you that direct fresh-air cooling is a panacea for nearly-free cooling.  But, sadly, I have to say it’s only for some people - because you can’t deploy direct fresh-air cooling at every site, nor in every climate.

EcoCooling now has evidence from 200 installations and from research studies by Cambridge University that show internal data hall air can meet clean-room standards and ASHRAE humidity requirements without any need for dehumidification. 

And all of those 200 data centres have been able to operate for 365 days/year without any need for refrigeration back-up.

But even at PUE of 1.05 to 1.10, it’s still not quite free!

Guest blog by Alan Beresford, Technical and Managing Director at EcoCooling

For more information please contact sales@ecocooling.org or +44 (0) 1284 810586


Wednesday, 5 November 2014

Cloud Sprawl: Are There Too Many Clouds in the Data Sky?

There is a new term emerging in the world of cloud computing: 'cloud sprawl’.  It is a term being used to describe current conditions in which organisations have multiple cloud environments all in place simultaneously, with each one including multiple instances of virtualisation.  The principle of cloud sprawl is based on the municipal planning concept of urban sprawl; it denotes growth that is quickly getting out of control.

The early days of cloud computing were marred by excessive capacity and not enough personnel and systems to properly manage it all.  Despite fairly rapid adoption in North America, Europe was not as quick to catch on because of the implied weaknesses of the system.  Things are now much improved thanks to better management however some experts fear the principle of cloud sprawl could tip things back in the other direction.

For example, a company working with its own enterprise server may no longer have just one cloud.  In fact, most do not.  Most have multiple cloud environments used to serve different groups of people; for example, they might have a private cloud for company employees and vendors and a completely separate cloud for the general public.  Driving these multi-cloud environments is a new love of distributed computing systems.

Another potential problem is one of new cloud administrators being given a piece of new technology and running wild with it, only to find that things get out of hand very quickly.  Those concerned with cloud sprawl say now is the time to get control of cloud environments before these become completely unmanageable.  It is something that needs to be dealt with at data centres and corporate IT facilities alike.

The Cloud Is Here To Stay

It would appear as though the cloud is here to stay.  There was some speculation a few years ago, but the broad adoption of cloud computing has pretty much cemented its place in the world of Internet technology. Furthermore, Internet use is only going to expand as we move into the future.  It is not likely the global community can reach its goal of worldwide Internet access without continuing to utilise the cloud for everything from web-based applications to IT services.  It is what it is.

Having said that, as administrators begin thinking of ways to attack cloud sprawl, an equal amount of attention needs to be paid to on-demand Internet use.  It is the insatiable thirst for streaming data and real-time applications that is driving the need for ever-increasing speeds.  Any methodologies put in place to control cloud sprawl have to be measured against the ability to provide for the world's on-demand needs.

It is an exciting time to be part of the world of data centres and cloud computing.  As with the entrepreneurs of the early industrial age, those of us involved in developing the future of the Internet face a daunting world of exciting challenges.  Only time will tell where we end up…