Friday, 28 March 2014

Data Centre Efficiency Opens Door to More Green Options

Technology advances have led to data centres being far more efficient in their power needs than ever before, despite a frightening 2007 report from the US Environmental Protection Agency (EPA) that predicted data centre power usage would roughly double by 2010.  According to Amazon's James Hamilton, the report was flawed because it did not take technological innovation into account.

Hamilton wrote in a recent blog post that US data centres were consuming power at a rate equal to 1.5% of America's total power consumption in 2005.  The EPA predicted this would jump to 3% or more by 2010.  In truth, actual consumption four years ago was between 1% and 1.5% of the total, and it has remained at roughly that level ever since.

The primary factor in keeping power consumption flat while the number and size of the world's data centres continue to grow has been technology: efficiency gains of nearly 50% over the last decade.

One of the most important of those advancements has been cloud computing.  It has allowed the industry to grow considerably without increasing the amount of power consumed and, what's more, it has opened the door to more green options.  According to Hamilton, we have now reached the point where new data centres can generate more power from renewable sources than they consume.  Doing so means a data centre can actually contribute to the power grid rather than drawing from it.

Hamilton cited a new US data centre being built by Facebook in Altoona, Iowa.  Although Facebook is not generating its own power on-site, it is partially funding a nearby wind farm project that will provide 100% of the power needed to run the facility.  The data centre will be connected to the grid for those times when wind is just not getting the job done, but it is expected that the wind farm will generate more than enough on most days.

Mr Hamilton makes the case that this open door to more green options does not necessarily mean data centres should be generating their own power on-site.  His reasoning is that data centre operators are experts in hosting digital information, not in producing electricity or maintaining power generation capabilities.  He sees far too many things that could go wrong if data centres got into the power generation business.

He believes the approach taken by Facebook is a far better way to go.  Let data centre operators focus their attention on more efficient computer systems that require less power, while allowing power companies to focus on generation and transmission.  That is the best way to serve everyone's needs with as few problems as possible.

The EPA was incorrect in its predictions largely because it did not take into account innovations and efficiency.  Since then, we have learned our lesson about making predictions for the future.  Instead, we are focusing our attention on making things better in any way we can.  That's the way it should be.


Thursday, 27 March 2014

Guest blog by Kevin Brown, VP Data Center Solutions Offer at Schneider Electric: "Upgrade, Build New or Outsource?"

A new Schneider Electric white paper discusses the merits of three ways to increase data centre capacity

As businesses evolve and grow, the need for increased levels of IT leaves managers with the challenge of how to resource the additional physical capacity.  White Paper 171 from Schneider Electric compares the benefits of three different approaches.

When a business identifies the need for new IT equipment, thought must be put into where that equipment is going to reside.  That is precisely the question the white paper, "Considerations for Owning versus Outsourcing Data Center Physical Infrastructure", sets out to answer.

For existing data centres with available power, cooling, and space capacity, the decision is often an obvious one, but when a data centre is at or near full capacity, a decision must be made as to where the additional IT equipment is going to be accommodated.  According to authors Kevin Brown and Wendy Torell, there are three fundamental approaches to meeting the new capacity requirement: upgrade, new build-out, or outsource to a co-location provider.

Upgrade Physical Infrastructure:
Depending on the capabilities of an existing data centre, upgrading the facility may be sufficient to meet new IT requirements.  The level of disruption, cost, and capacity gain depends largely on the scope of the upgrade project, which may include anything from implementing simple air flow management practices, such as adding blanking panels, to adding a high-density pod to increase power, cooling, and/or rack capacity in a low-density data centre.  An “at a glance” table in the paper summarises the types of upgrades that are assessed and contrasts them with outsourcing.

Build a New Data Centre:
Increasing levels of standardisation, modularity, and data centre infrastructure management (DCIM) software are all playing an important role in simplifying the way facilities are deployed and operated.  These technologies and approaches result in more integrated power and cooling infrastructure, significantly improving implementation time, cost, efficiency, and predictability.

For example, building a data centre with scalable, pre-assembled, and integrated facility power and cooling modules may provide TCO savings up to 30% compared to traditional, built-out data centre power and cooling infrastructure. When evaluating the costs of building vs. outsourcing data centre physical infrastructure, prefabricated, modular data centre approaches should be considered.

Move to Co-location:
Co-location has attracted growing interest, due primarily to its fast deployment capability and the providers’ core expertise in operating data centres, which leads to secure, highly available space with economies of scale that help keep costs competitive.  Outsourcing to the public cloud is a further extension of this, where not only the physical infrastructure is outsourced but also the software (SaaS), the IT infrastructure (IaaS), and/or the platform (PaaS).

White Paper 171:
Considerations for Owning versus Outsourcing Data Center Physical Infrastructure:
The decision between the three approaches comes down to financial savings, sensitivity to cash flow, and other key strategic factors.  The first two categories, cost and cash flow, are quantitative, while the third, strategic factors, consists of common business preferences and constraints that affect the decision qualitatively.  Some strategic factors may eliminate certain alternatives altogether, while others can heavily influence the decision, depending on the business objectives and priorities of the decision maker(s).

The 10-year TCO may favour upgrading or building over outsourcing; however, White Paper 171 demonstrates that the economics can be overwhelmed by a business’s sensitivity to cash flow, the cash cross-over point, deployment timeframe, data centre life expectancy, regulatory requirements, and other strategic factors.  The new paper, which is available for free download, discusses how to assess these key factors to help make a sound decision.
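To make that trade-off concrete, here is a minimal sketch of the kind of comparison the paper formalises.  All of the capital and operating figures below are hypothetical placeholders chosen only to illustrate the cash-flow effect; they are not numbers from White Paper 171.

# Illustrative only: cumulative 10-year cash flows for the three approaches.
def cumulative_cash_flow(capex, annual_opex, years=10):
    # Cumulative spend at the end of each year (no discounting, for simplicity)
    return [capex + annual_opex * year for year in range(1, years + 1)]

options = {
    "upgrade":     cumulative_cash_flow(capex=400_000,   annual_opex=180_000),
    "new build":   cumulative_cash_flow(capex=1_500_000, annual_opex=120_000),
    "co-location": cumulative_cash_flow(capex=50_000,    annual_opex=300_000),
}

for name, flows in options.items():
    print(f"{name:12s} 10-year TCO: ${flows[-1]:,}")

# A cash-flow-sensitive business may care more about the early years, where
# co-location's low up-front cost wins despite a higher 10-year TCO.
for year in (1, 3, 5):
    cheapest = min(options, key=lambda k: options[k][year - 1])
    print(f"Lowest cumulative spend after year {year}: {cheapest}")

With these made-up figures the 10-year TCO favours upgrading, yet co-location has the lowest spend in year one, which is exactly the tension between cost and cash flow the paper describes.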

White Paper 171 is available immediately for free download here: http://www.apc.com/whitepaper/?wp=171


Guest blog by Kevin Brown, VP Data Center Solutions Offer at Schneider Electric.  Kevin is also co-author of the new white paper.

Monday, 24 March 2014

Swiss Researchers Propose New Idea for Liquid Cooled Chips

The EU-sponsored GreenDataNet project has committed €2.9 million to research aimed at reducing the energy demands of Europe's data centres by as much as 80%.  The project may be on the verge of its first substantial milestone thanks to a Swiss venture under way at the École Polytechnique Fédérale de Lausanne (EPFL).  Project researchers believe they can increase server capabilities, and deal with the resulting heat, by using a revolutionary liquid cooling system.

The system, which was recently revealed during a GreenDataNet exhibition, involves creating a stack in which multiple memory chips are layered on top of a CPU.  Stacking the memory on top of the CPU shortens signal paths, allowing faster processing without the need for substantially more power.  However, such a stack generates a tremendous amount of heat.

To solve that problem, researchers have developed a revolutionary two-phase cooling system that uses tiny channels running through the heart of the silicon chips, through which a cooling liquid flows.  "Two-phase" means that a portion of the liquid evaporates, carrying heat away with it as it changes state.  That, however, creates new problems of backflow and dry spots that still need to be solved.  The researchers are confident they will design and build a system that works as intended; it is just a matter of time.
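As a rough illustration of why evaporation is so attractive, the sketch below estimates the coolant flow a two-phase loop would need for a given chip heat load.  The heat load, coolant latent heat and vapour fraction are assumed values for illustration only, not figures from the EPFL work.

def required_flow_g_per_s(heat_load_w, latent_heat_j_per_g, vapour_fraction):
    # Mass flow needed so that the evaporating fraction absorbs the full heat load
    return heat_load_w / (latent_heat_j_per_g * vapour_fraction)

# Assumed figures: a 200 W CPU/memory stack, a dielectric coolant with a latent
# heat of roughly 200 J/g, and 30% of the liquid evaporating in the channels.
flow = required_flow_g_per_s(heat_load_w=200, latent_heat_j_per_g=200, vapour_fraction=0.3)
print(f"Required coolant flow: {flow:.1f} g/s")   # roughly 3.3 g/s

Because the latent heat of vaporisation is so large compared with simply warming a liquid, only a few grams per second need to flow through the micro-channels to carry away an entire stack's heat.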

Should the project be successful, researchers believe there will be an added benefit for European cities.  That benefit comes by way of harnessing the heat produced by the high-powered data centres of the future.

Because European cities tend to have very high population densities, municipal heating systems are the norm.  What's more, European data centres tend to be concentrated in high population areas, unlike their American counterparts.  Combining the two paradigms would mean excess heat from data centres being put to use to provide for municipal heating needs.

An Important Breakthrough

Those who are intimately familiar with how computers operate know that power and cooling problems are the bane of European data centres.  Cooling even more so, to the extent that inadequate cooling systems are largely responsible for holding back the development of more powerful computers.  Simply put, we have not yet found cooling methods effective enough to push high-powered computing into the future.  If the Swiss project succeeds, it could be an important breakthrough.

Research has been ongoing for several years now, so it is safe to assume that the project is not on the doorstep of success quite yet.  It is still going to be some time before it is able to produce a product with broad commercial appeal.  We will be waiting with great anticipation.


In the meantime, it would be interesting to see whether this project could be combined with another plan we wrote about last week: a plan to harness used electric car batteries to provide backup power for data centres.  It is certainly something worth thinking about.

Friday, 21 March 2014

Line-interactive vs On-line Network UPS Systems and Which Should You Choose?

The two main types of UPS operation (known as the UPS system’s topology) used in Network/Server UPS Systems are line-interactive and on-line. At the most basic level, line-interactive UPS systems are less expensive than on-line UPS systems (approximately 20 to 40% less, depending on the model and manufacturer), but they also provide less protection than on-line UPS systems. It’s helpful to explore the differences between line-interactive and on-line models to understand the trade-offs involved. Note: If you need a UPS system larger than 5,000 VA (4,000 watts), an on-line UPS is your most likely choice. Let’s consider four key UPS features and how line-interactive and on-line technologies deliver each feature to connected equipment.

Voltage Regulation
Line-interactive UPS systems use automatic voltage regulation (AVR) to correct abnormal voltages without switching to battery. (Regulating voltage by switching to battery drains your backup power and can cause batteries to wear out prematurely.) The UPS detects when voltage crosses a preset low or high threshold value and uses transformers to boost or lower the voltage by a set amount to return it to the acceptable range. On-line UPS systems use a more precise method of voltage regulation: they continuously convert incoming AC power to DC power and then convert the DC power to ideal AC output power. This continuous double-conversion operation isolates connected equipment from problems on the AC line, including blackouts, brownouts, overvoltages, surges, line noise, harmonic distortion, electrical impulses and frequency variations. In “line” mode (i.e. when not operating from battery), line-interactive UPS systems typically regulate output within ±8-15% of the nominal voltage (e.g. 120, 208, 230 or 240 volts). On-line UPS systems typically regulate voltage within ±2-3%.
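The difference is easier to see in a small simulation. The sketch below is purely illustrative: the thresholds and transformer tap ratios are made-up round numbers, not any manufacturer’s specification.

NOMINAL = 230.0  # nominal output voltage in volts

def line_interactive_output(v_in, low=0.90, high=1.10, boost=1.12, buck=0.88):
    # AVR behaviour: pass the input through unless it crosses a preset threshold,
    # then boost or buck it by a fixed transformer tap ratio.
    if v_in < NOMINAL * low:
        return v_in * boost      # boost tap engaged
    if v_in > NOMINAL * high:
        return v_in * buck       # buck tap engaged
    return v_in                  # inside the window: passed straight through

def on_line_output(v_in):
    # Double conversion: the output is regenerated at nominal voltage regardless
    # of the input (real units hold it within roughly +/- 2-3%).
    return NOMINAL

for v in (180, 205, 230, 255, 270):
    print(f"input {v:>3} V -> line-interactive {line_interactive_output(v):6.1f} V, on-line {on_line_output(v):6.1f} V")

Running it shows the line-interactive unit nudging out-of-range voltages back towards nominal in coarse steps, while the on-line unit simply regenerates a clean output every time.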

Surge/Noise Protection
All Network/Server UPS Systems include surge suppression and line noise filtering components to shield your equipment from damage caused by lightning, surges and electromagnetic (EMI/RFI) line noise. On-line UPS systems offer superior protection because the double-conversion operation isolates equipment from problems on the AC line.

Pure Sine Wave Output
When operating from battery power, a line-interactive UPS system generates the waveform of its AC output. An on-line UPS does this continuously. All on-line and many line-interactive UPS systems have pure sine wave output. Pure sine wave output provides maximum stability and superior compatibility with sensitive equipment. Pure sine wave power is required by some equipment power supplies and prevents others from overheating, malfunctioning or failing prematurely.

Transfer Time to Battery
During an outage, line-interactive UPS systems typically transfer from line power to battery-derived power within two to four milliseconds, which is more than fast enough to keep all but a small percentage of the most power-sensitive equipment operating without interruption. On-line UPS systems do not have a transfer time because the inverter is already supplying the connected equipment load when an outage occurs.
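To put those milliseconds in context, here is a tiny check against the “hold-up time” of a device’s power supply, i.e. how long it can ride through a gap on its internal capacitors. The hold-up figures below are typical ballpark assumptions, not measured values for any particular product.

transfer_time_ms = 4   # worst-case line-interactive transfer quoted above

# Ballpark hold-up times (assumed for illustration)
holdup_times_ms = {
    "typical server power supply": 16,
    "unusually power-sensitive instrument": 2,
}

for device, holdup in holdup_times_ms.items():
    verdict = "rides through the transfer" if holdup > transfer_time_ms else "may drop out - consider an on-line UPS"
    print(f"{device}: hold-up {holdup} ms -> {verdict}")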
I hope this post helps clarify the difference between Line-interactive and On-line Network UPS Systems and which UPS topology best suits your specific network application.


Guest Blog by Tripp Lite http://www.tripplite.com/

Thursday, 20 March 2014

Novel Plan Looks at Used Car Batteries for Data Centre Power

One of the latest and most intriguing bits of data centre news comes by way of a novel plan that proposes providing backup power for data centres through used electric car batteries.  The plan is part of a €2.9 million project intended to encourage data centres to trim their power consumption, embrace more renewables and depend less on the grid.

The GreenDataNet consortium, recently launched in Switzerland, proposed the idea thanks to data provided by consortium member and electric car manufacturer Nissan.  According to the carmaker, the batteries used in its Leaf model still have plenty of usable life left in them, even when they are no longer useful for powering vehicles.

A new electric car battery can store 24 kWh of energy over a useful life of 10 to 14 years.  At the end of that life, it can still store up to 18 kWh – even though that represents too much degradation for automotive use.  Stacked together, those batteries could provide supplemental power to data centres, reducing their dependence on the grid and helping to balance loads at peak times.
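Some rough arithmetic shows what those figures could mean in practice.  The data centre load, outage duration and usable depth of discharge below are assumptions chosen purely for illustration.

USED_BATTERY_KWH = 18    # usable energy left at the end of automotive life
USABLE_FRACTION = 0.8    # assumed: avoid deep discharge to preserve battery life

def batteries_needed(load_kw, outage_hours):
    energy_required_kwh = load_kw * outage_hours
    per_battery_kwh = USED_BATTERY_KWH * USABLE_FRACTION
    return int(-(-energy_required_kwh // per_battery_kwh))   # ceiling division

# Example: a 500 kW urban data centre riding through a one-hour outage
print(batteries_needed(load_kw=500, outage_hours=1))   # about 35 second-life batteries

Scale that up to multi-megawatt sites, longer outages or daily peak shaving and the requirement quickly runs into the hundreds of batteries.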

Nissan has already proven the concept through its innovative ‘Leaf 2 Home’ programme for Leaf owners in Japan.  Participating owners are set up with used batteries in their homes as a way of levelling out the power demands of both house and car. The consortium believes the same principle would work just fine for European data centres, which tend to be smaller than their American counterparts.

Another advantage of using car batteries for data centre power is that no significant infrastructure changes need to be made.  That is a big plus when you consider that the vast majority of European data centres are located in urban areas with little room to accommodate big changes.

A Question of Demand

In order to make all of this work, Europe would have to see a significant increase in the demand for electric cars.  Why?  Because it would require a stack of hundreds of batteries to achieve any noticeable benefit for the average data centre.  Moreover, no battery will last forever.  Right now, there are just not enough electric cars on the road to make the plan worthwhile.

The consortium has been given the task of achieving a significant reduction in grid-based power consumption among data centres over the next several decades.  If the car battery idea is to go anywhere, one of the first strategies should be to use some of their funding to promote electric car purchases among Europeans.  Without enough cars on the road, there will not be enough batteries to make it feasible.


At first glance, this is a great idea that deserves more exploration and promotion.  Nevertheless, it is also one that should be approached with cautious optimism.  It will only work under the right conditions; conditions no one can promise will exist in the coming years. 

Monday, 17 March 2014

Irish Cybercrime Numbers Raise Concerns about Security

Attendees at a March 11 cybercrime conference hosted by the Cork Institute of Technology heard some startling information that raises new concerns about cyber security in Ireland and across Europe. According to experts, Internet users in Ireland are being targeted for cybercrimes at a rate of one-in-five. The conference went on to say that cybercrime negatively affected Ireland's economy to the tune of €350 million in 2013 alone.

According to The Independent, the 20% cybercrime rate in Ireland probably understates the problem, because many of these crimes go unreported; at least some victims do not even know they have been targeted.  If that is true, it shows both how oblivious many Internet users are and how sophisticated cyber criminals have become in levelling their attacks.

The CIT conference suggested that roughly 80% of the cybercrimes in Ireland are related to credit card and bank fraud.  Among all such fraud cases, roughly 84% are related to crimes being committed by those who have stolen credit card information without actually possessing the card itself.  These crimes, known as 'card not present' crimes, are increasingly easy to pull off thanks to the proliferation of online purchasing.

Making matters worse is the fact that so many people now engage in questionable data communications practices, by way of mobile phones, without ever thinking about it.  Despite how far we have come in beefing up secure networking, the system becomes more prone to breakdown with every new device added to it.  The proliferation of the smartphone is only making the problem worse.

Dr Stephen Minton, a psychologist at Trinity College Dublin, made that very point to The Independent.  He noted that primary school children as young as six are now coming to school with smartphones.  Not only does Dr Minton believe this is unnecessary, he also warns it is contributing to the problem.

More Than Just Secure Connections

The one thing that was not mentioned in The Independent report was how all of this cybercrime relates to the modern world of cloud computing and virtualisation.  This aspect should not be ignored in the discussion of how to beef up cyber security.  The fact is that security is about more than just creating secure online connections with encrypted data transfers.  We also have to work on beefing up security at the physical locations where data is stored.

A number of recent high-profile cases have shown that hackers can breach a secure database and steal the data at will.  This is not a problem with Internet connections; it is a problem with secure cloud computing and virtualisation environments.  Somehow, cyber criminals are getting access to servers in order to steal information on credit cards and bank accounts.  That needs to change before we can ever hope to get close to solving the cybercrime issue. In the meantime, we must continue to be as vigilant as possible.


Friday, 14 March 2014

Bionic Leaf Explores New Angle of Solar Energy

For more than a decade, we've been awaiting the introduction of a fully electric car capable of matching the performance of its fossil fuel counterpart without costing the equivalent of the GDP of a small South American country.  And while hydrogen fuel cells hold a lot of promise, the one thing standing in their way is the lack of an efficient way to produce the hydrogen that powers the cells.  That may be changing thanks to a new 'bionic' leaf programme at California's Lawrence Berkeley National Laboratory.

Researchers at the laboratory have been developing a new artificial leaf system capable of using solar energy to create hydrogen, which could then be used to produce electricity.  The lab has been working to design and build a device that would be both effective and affordable enough for eventual commercial applications.  Apparently, they are closer than ever before.

A recent exhibition of the artificial leaf shows a device that mimics the process of photosynthesis to convert solar energy into hydrogen.  How does it work?  It's as simple as dropping the bionic leaf into standing water and letting it go to work.

The artificial leaf is actually a photocathode, powered by the sun, that splits hydrogen out of water and stores it in a small cell.  Observers say the device simulates photosynthesis rather accurately, at least in terms of capturing energy.  Instead of using that energy to power a living organism, though, the hydrogen is stored for later use in generating electricity.

Now that the Berkeley researchers have proven the concept, the next step is to figure out how to do it in a way that is cost-effective.  That is the key to making it a workable product for the auto industry.  Assuming they figure it out sooner rather than later, it is possible to imagine all-electric vehicles of this kind being mass-produced worldwide within the next 10 years.

Storage Is the Key


Anyone who has followed alternative-fuel vehicle research and production knows that such vehicles, at least where solar power is concerned, stake their viability on storage.  In other words, there needs to be a way to effectively store the energy created from solar sources so that the vehicles remain operable at night or during inclement weather.  Without effective storage capacity, a solar-powered electric vehicle is simply not practical.

The researchers at Berkeley have given the auto industry real hope with their bionic leaf project by offering the solar production and storage they've been looking for.  If it works, it might even be possible for auto manufacturers to all but abandon batteries.  The one exception might be an emergency battery able to get a vehicle to a service garage in the event of a solar breakdown.


As we search for new and better ways to use renewable resources, it is inevitable that we will better learn how to mimic what nature already does.  To the extent we can do so, we'll be able to make maximum use of the natural energy sources around us. 

Wednesday, 12 March 2014

Guest blog by Terry Vergon: Building Mission Critical Facilities Organizations

From a data-center-space perspective, regardless of the temperature, the heat energy generated by the servers and equipment must be transported or expelled from the area. It is in this process that you can either make the system efficient or not.
Consider the following diagram:
[Diagram: heat flow in a 100 kW data center with conventional mechanical cooling]
In this conventional design, it doesn’t matter what temperature the data center is held at: there is still 100 kW of heat to remove. Moving that heat energy costs energy. If we have a centrifugal chiller, it takes about 0.6 kW per ton of refrigeration to remove this heat. So, using this example:
341,200 BTU/hr ≈ 28.4 tons of refrigeration ≈ 17 kW of electrical power
That is the power to remove the heat with the chiller alone; but we also have cooling towers, pumps, and other equipment to consider. For most systems, the supporting equipment accounts for roughly another quarter of the total cooling power. In this case, that adds about 5.7 kW, for a total of 22.7 kW. So the bottom line is that to run a data center of 100 kW, we need to pay for an additional 23 kW of cooling. What this means is that, for a 5- to 6-MW data center, we would typically see about 1 to 1.2 MW of that total load just for cooling. This can sometimes translate to $50,000 to $60,000 per month just to move heat out of the building.
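The same arithmetic as a short sketch, following the figures in the example above (about 0.6 kW per ton for the chiller, with supporting equipment taken as roughly a quarter of the total cooling power):

BTU_PER_HR_PER_KW = 3412     # 1 kW of IT load rejects about 3,412 BTU/hr of heat
BTU_PER_HR_PER_TON = 12000   # 1 ton of refrigeration = 12,000 BTU/hr

def cooling_power_kw(it_load_kw, kw_per_ton=0.6, support_fraction=0.25):
    tons = it_load_kw * BTU_PER_HR_PER_KW / BTU_PER_HR_PER_TON
    chiller_kw = tons * kw_per_ton
    return chiller_kw / (1 - support_fraction)   # towers, pumps etc. make up the rest

print(f"{cooling_power_kw(100):.1f} kW of cooling for a 100 kW data center")   # ~22.7 kW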
So how does this all play with raising data center temperature?
It depends on how you remove the heat load and how much of that “moving heat cost” you can reduce. In an ideal situation – and many of the newer, huge data centers are moving in this direction – you would have the following:
[Diagram: heat flow in a 100 kW data center cooled entirely by ambient outside air, with no mechanical cooling]
In the ideal situation, there is no cost incurred for mechanical cooling systems – no pumps, no towers, no chillers – however, you are totally at the mercy of the environment and ambient temperature of the incoming air to remove your heat energy. Several experiments have exposed servers to these conditions, but the findings held that relying on the environment to provide cooling generally resulted in reduced server equipment lifetimes. [Note: As manufacturers are making their equipment more robust and capable of handling greater temperature extremes and changes, a natural-convection model may well become the norm in the future.]
For now, the best solution is normally a combination of both natural (economization, etc.) and mechanical processes; but for this to work and take advantage of the environmental cooling available, the acceptable temperature of the data center should be set to maximize effectiveness (maximize cost savings).
[Diagram: heat flow in a 100 kW data center combining economization with mechanical cooling]
In this scenario (data center with economization), the temperature we desire to maintain at the data center directly affects how much we can use outside air for cooling. In a location like Phoenix, if our data center supply air temperature is set at 62F, we can use outside air to cool the data center for approximately 3 months of the year. If we set the temperature to 72F, this increases to approximately 6.5 months of the year. This change represents a potential cost savings of $120,000 to $160,000 a year. Average outside air temperature plays an important role in data center costs, and so it pays to find a location that allows you to utilize the outside air for cooling. (This explains why the northern latitudes are sought-after data-center locations.)  Many areas in the northern United States can support almost 8,000 hours a year in free cooling with a data center supply air temperature of 72F.
So simply raising the temperature of the data center will not, by itself, give you a large reduction in costs. Couple free cooling (economization) with raised data center temperatures, however, and you should see significant savings.
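As a rough cross-check of those savings, the sketch below combines the roughly 1 MW cooling load from the earlier 5- to 6-MW example with an assumed electricity rate of $0.06 per kWh; the rate is an assumption, while the free-cooling months come from the Phoenix example above.

HOURS_PER_MONTH = 730

def annual_saving(cooling_load_kw, extra_free_cooling_months, rate_per_kwh=0.06):
    # Saving from switching mechanical cooling off for additional months of the year
    extra_hours = extra_free_cooling_months * HOURS_PER_MONTH
    return cooling_load_kw * extra_hours * rate_per_kwh

# Raising supply air from 62F to 72F in Phoenix buys roughly 3.5 extra months of
# free cooling; with ~1 MW of cooling load that is on the order of $150,000 a year.
print(f"${annual_saving(1000, 3.5):,.0f}")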

Guest blog by industry veteran Terry Vergon - see Terry's blog here http://blog.sapientservicesllc.com/ 

Monday, 10 March 2014

Reuters: Facebook on the Verge of Drone Acquisition

Reuters reported late last week that Facebook is currently in negotiations to acquire a prominent aerospace company for US$60 million, with the goal of using the company's solar powered drones to provide Internet access in remote parts of the world.  According to various media reports, the company in question is Titan Aerospace, a New Mexico startup with fewer than 50 employees.

Titan's claim to fame is their solar powered unmanned aerial vehicles (UAV) capable of flying nonstop for up to five years. Industry rumours suggest that Facebook wants the vehicles in order to put them to work for their Internet.org project.  The project is aimed at providing Internet access to billions of potential consumers in remote areas of Africa and Asia.

As things currently stand, these remote areas simply do not have the infrastructure to support traditional online access via fibre-optic and other means.  Nor do they have the money to develop their own wireless communications via satellite networks.  Facebook could be the 'knight in shining armour' by furnishing the Titan drones with the equipment necessary to provide ongoing wireless access.

The fact that the drones are solar powered solves the fuel issue for Facebook. Moreover, because they can stay aloft for so long, it would make vehicle rotation and maintenance a fairly easy task once the fleet was up and running.  The biggest challenge would be to make sure there are enough vehicles in play to cover breakdowns and weather-related problems.

Neither Facebook nor Titan Aerospace would comment one way or the other regarding the possible acquisition.  Should it go through, however, it will mark the second significant acquisition by Facebook in less than a month.  In late February, the company paid $19 billion for the WhatsApp mobile messaging app in a move that was widely criticised throughout the technology sector.

Going It Alone

According to the Reuters report, Facebook is the only company working on wireless Internet service in those remote areas.  Seeing as it is going it alone, TechCrunch says the company is hoping to build 11,000 drones to equip Internet.org.  However, this leads to an obvious question: do all of these potential new customers have - or want - the computers or mobile devices necessary to connect to a Facebook service?

Quite frankly, it's hard to see how this project will have any commercial benefit in the short term.  Perhaps there is plenty of long-term potential waiting on the horizon, but it will likely take a long time to get there.  The remote areas we are talking about are considered remote for a reason: many of them lack even basic utilities, such as the electricity required to operate the computers in question.

It may be that Facebook believes in the potential of these highly advanced drones for future ISP and web hosting purposes.  Nevertheless, apparently no one else shares the same vision at this time.  All we can do is sit back and watch what happens. It will certainly be interesting…


Thursday, 6 March 2014

High-Speed Traders Set to Embrace Laser Technology

In the never-ending race to see who can pull off the fastest trades on the world’s busiest stock exchanges, a Chicago-based communications company has been commissioned to link the New Jersey data centres of the New York Stock Exchange (NYSE) and NASDAQ via roof-mounted lasers.  The network of lasers will be perched on top of office buildings and apartments along a 35-mile stretch between Carteret and Mahwah, New Jersey.

The new system is just the latest in the race between exchanges to offer the fastest trading possible.  In an industry where nanoseconds can mean the difference between profit and loss, trades cannot be performed fast enough.  According to the Wall Street Journal, this latest project takes the financial services sector one step closer to its dream of finally winning the 'race to zero'.

The Journal explains that the 'race to zero' is the idea of getting to the point where data communications - at least for the purposes of executing trades - happen with effectively zero delay, which in practice means moving data as close to the speed of light as physics allows.  To date, technology companies working with the financial sector have relied on fibre optics and microwave transmissions to achieve the highest speeds possible.  The successful implementation of laser transmissions would put those other technologies at a decided disadvantage.
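For a sense of the numbers involved, here is a back-of-envelope comparison of one-way latency over the roughly 35-mile route; the propagation speeds are textbook approximations, not measurements of the planned network.

C = 299_792_458           # speed of light in a vacuum, metres per second
ROUTE_M = 35 * 1609.34    # the ~35-mile Carteret-Mahwah route in metres

links = {
    "free-space laser or microwave (close to c in air)": 0.9997 * C,
    "optical fibre (light travels at about two-thirds of c)": 0.66 * C,
}

for name, velocity in links.items():
    latency_us = ROUTE_M / velocity * 1e6
    print(f"{name}: about {latency_us:.0f} microseconds one way")

In practice the gap is even wider, because a fibre route rarely follows a straight line between the two sites, whereas a chain of rooftop lasers can come much closer to doing so.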

As for the traders themselves, they will enjoy the added benefit of being able to place their own servers at the exchange data centre of their own choosing.  If they want servers at multiple data centres, they will be able to do that as well.  This only increases the speed at which exchanges will be able to execute orders and update market data.

Moving beyond Financial Services


It will be interesting to see what kind of success Anova – the company contracted to set up the laser system – experiences with their new laser network.  If it works as promised, it could have a definite impact on the way data communications are developed for the future.  We can easily see particular laser applications being developed for specific industries in a way that best takes advantage of what each one has to offer.

Will laser data transmission ever reach the point of international applications?  Perhaps, but that will require overcoming some very formidable barriers, not least of which is distance.  For example, the New York Stock Exchange is linked to the London Stock Exchange via fibre-optic cables running across the ocean floor.  The distance between the two is far too great for laser technology to be effective right now.  There would need to be some sort of progressive relay system to make it work.

In all likelihood, we are not even close to the day when laser-based data communications would be commercially viable for IT services and overseas communications.  Nevertheless, that is not what's important in this case.  The importance of the Anova project is one of confirming that directed and concentrated laser light is an effective and reliable medium for high-speed data communications.   We will know for sure in due course…

Sources:

WSJ – http://online.wsj.com/news/articles/SB10001424052702303947904579340711424615716

Monday, 3 March 2014

BBC: Power Companies Being Turned Away for Cyber-Attack Insurance

The BBC reported late last week on a troubling trend among power companies: they are increasingly applying for insurance cover against cyber-attacks, only to be turned away in large numbers.  According to the BBC, the main problem power companies encounter is that insurance company audits show their cyber defences are too weak, exposing underwriters to unreasonable risk.

Lloyd's of London told the BBC that it has experienced a recent surge in the demand for cyber-attack cover among energy sector companies. No reason was given for the surge, but some believe increased threats from the cyber terrorism community are partly to blame.  Energy companies afraid of computer and infrastructure damage relating to a cyber-attack might be hoping to lean on insurance should a devastating attack occur.

Insufficient Security


When an energy company applies for cyber-attack cover, its current systems are audited in order to determine the level of risk that the insurance underwriter will be exposed to.  If current security measures are deemed insufficient, insurance cover will not be granted.  Unfortunately, the state of the power industry is one where insufficient security is the norm.

According to the BBC, the biggest problem is with outdated software created to manage power utilities long before the Internet reached worldwide dominance.  One of the main pieces of management software now being used, known as Supervisory Control and Data Acquisition (SCADA), provides far too many loopholes for hackers thanks to insufficient networking defences.  Closing those loopholes is a nearly insurmountable task due to the age of the software.

Making matters worse is the drive to link multiple power stations to a single, remote control centre via Internet connections.  Treated individually, security management would be fairly straightforward and highly successful.  Nevertheless, once Internet connections are involved, every power station linked to the system becomes vulnerable.  Until the energy sector can address these serious security concerns, getting insurance is going to be challenging.

A Larger Issue


In our minds, the insurance troubles being experienced by the energy sector point to a larger issue.  What is that larger issue?  Similar security concerns exist across nearly every sector in which companies and stakeholders are still using outdated software and hardware incapable of defending against large-scale cyber-attacks.  In other words, this issue is not limited to energy companies.

While it's true the average data centre is more than equipped to handle even the most serious cyber-attacks, what about small companies with multiple locations connected to a central networking hub?  From the car repair chain to an attorney with multiple urban locations, any business or organisation that has not given serious consideration to upgrading computer systems could find itself at risk.

The threat of cyber-attacks is no longer something of films and night-time television.  It is very real.  Any entity utilising Internet connections of any type needs to take it seriously if it wants to protect itself, insurance cover notwithstanding.