Tuesday, 18 October 2016

Security Breaches to Cost More Beginning in 2018

The last thing any company wants is to find itself victimised by hackers. Between the poor publicity and the fines that may be imposed, suffering a security breach is bad for business. And it's about to get worse. Once the new EU fines take effect in 2018, a security breach could cost a company millions.

The EU's General Data Protection Regulation is set to go into effect in 2018. The regulation not only increases fines for security failures but also scales them according to a company's size and revenue. Some of the largest companies in Europe could face fines of up to £18 million or 4% of global turnover, whichever is greater. Computer Weekly reports that the total fines collected could represent a 90-fold increase if security breaches in 2018 and beyond continue at the level reported in 2015.

Looking specifically at large UK corporations, Computer Weekly says annual fines could increase some 130-fold, while fines collected from small and medium-sized businesses could rise as much as 57-fold. All of this adds up to an awful lot of money.

Putting Companies at Risk

The regulation establishes a two-tiered system that allows regulators to levy less severe fines for breaches considered less serious. Even so, a fine equal to 2% of global revenue could be devastating to a large company. This leads to the obvious question of whether the new regulation puts companies at risk. It may do just that.
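To put those percentages in perspective, here is a minimal sketch of how the cap on a fine might be worked out under the two-tier structure described above. The thresholds are assumptions based on the widely reported figures (roughly £18 million or 4% of global turnover for serious breaches, whichever is greater, and half of those amounts for the lower tier); the numbers and the function itself are illustrative only, not taken from the regulation's text.

```python
# Illustrative sketch of the two-tier fine cap described above. The thresholds
# are assumptions based on widely reported figures, not the regulation's text.
UPPER_TIER = (18_000_000, 0.04)  # assumed fixed cap in GBP, share of global turnover
LOWER_TIER = (9_000_000, 0.02)   # assumed lower-tier figures, for illustration only

def maximum_fine(global_turnover_gbp, serious=True):
    """Return the maximum possible fine: the greater of the fixed cap or the
    percentage of global turnover for the relevant tier."""
    fixed_cap, share = UPPER_TIER if serious else LOWER_TIER
    return max(fixed_cap, share * global_turnover_gbp)

# A hypothetical company with £2 billion in global turnover:
print(maximum_fine(2_000_000_000, serious=True))   # 80000000.0 -> the 4% cap dominates
print(maximum_fine(2_000_000_000, serious=False))  # 40000000.0 -> the 2% cap dominates
```

For any large company it is the percentage-based cap, rather than the fixed amount, that determines the real exposure, which is exactly why the new regime has businesses worried.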

Jeremy King of the Payment Card Industry Security Standards Council told Computer Weekly that the new legislation is serious business. King wonders whether some businesses will actually be able to pay the fines assessed against them.

"The new EU legislation will be an absolute game-changer for both large organisations and SMEs as the regulator will be able to impose a stratospheric rise in penalties for security breaches," King said, "and it remains to be seen whether businesses facing these fines will be able to shoulder the costs."

The regulator's position is easy to understand in light of the fact that as many as 90% of large corporations and 74% of small- and medium-sized businesses were affected by a security breach in 2015. Regulators are attempting to force companies to take more drastic action to prevent security breaches by making it financially uncomfortable not to do so… but is the regulator going too far?

Only time will tell whether the increased fines will accomplish what the EU wants them to. It is quite possible that some companies caught off guard in the early stages will suffer financially for it, but we can hope that companies will take this seriously enough to beef up security efforts before the new fines are imposed. That would be ideal: Europe's computer networks would be safer, and businesses would not have to suffer the losses related to breaches.

Thursday, 13 October 2016

2015 French TV Attack Highlights Network Vulnerability

Do you remember the April 2015 cyber-attack against France's TV5Monde? If so, you may remember the immediate speculation that the attack was linked to the Islamic State and was an attempt to further rattle the nation just months after the Charlie Hebdo attack. Well, investigators have learned a lot since then.

First, the attack was not the work of the so-called Cyber Caliphate as first reported. Investigators now have strong reason to believe the attackers were Russian hackers who used malicious software to destroy the broadcast systems at the TV5Monde network.

More importantly, we have learned just how vulnerable networks are to well-designed malicious software. The attack on the French network was not particularly sophisticated, but it moved quickly and effectively once it got started. According to the BBC, TV5Monde was within hours of complete collapse when one of the network's engineers located the computer where the attack originated and removed it from the system.

A Combination of Organisation and Speed

TV5Monde had begun broadcasting hours earlier when, for no apparent reason, all 12 channels went black. It wasn't long before network officials figured out they were experiencing a serious cyber-attack. TV5 director-general Yves Bigot credits his engineering staff for identifying the problem and intervening before it was too late.

The attack was successful because it was targeted and because it combined organisation and speed. Investigators discovered that the hackers carried out sophisticated reconnaissance of the TV network to map out the station's systems before launching the attack. They then created software that attacked the network's hardware sequentially, corrupting the systems responsible for transmitting television signals.

Interestingly enough, the hackers did not use a single point of entry. In fact, the BBC says there were seven points of entry. Even more interesting is the fact that not all of them were in France or even a direct part of the TV5Monde network. One was a Dutch company that had supplied some of TV5's studio cameras.

The Threat of Collapse Was Real

The attack on TV5 should be a reminder of the vulnerability of computer networks. Engineers could have completely shut down the system, wiped it clean and started over from scratch had it been necessary, but by that time the damage would have been done. As Mr Bigot explained to the BBC, any prolonged outage would likely have resulted in the cancelling of broadcast contracts en masse, leading to the collapse of the network under the financial strain.

In terms of cyber-attacks, this is where the real problem lies. A computer system can be repaired just like a building attacked in conventional warfare can be rebuilt. But any harm caused by a cyber-attack is capable of producing significant financial stress that could lead to a total collapse. 

Disaster was averted in France last year. Next time, things might not go so well. That is why we need to be ever more vigilant about protecting our networks.

Tuesday, 4 October 2016

Scientists Want More Research into Internet Use and Water

When scientists at Imperial College London claimed that downloading a single gigabyte of data could waste up to 200 litres of water, the claim generated one of two reactions: those who follow such things were visibly shocked, while those who do not went on with their lives completely unaffected. Little has changed a year later; not that anything should have.

According to the BBC, the Imperial College London researchers calculated that up to 200 litres of water per gigabyte of data is probably consumed in cooling data centres and in generating the power needed to operate them, but 'probably' is the operative word here. The researchers could not say conclusively how the water was being used, nor did they provide any concrete evidence that the estimate of 200 litres per gigabyte was accurate.

Bora Ristic, one of the researchers involved in the project, told the BBC that there was quite a bit of uncertainty in the figures. He said water usage could be ‘as low as 1 litre per gigabyte’ rather than 200. What is important, Ristic said, is that their report highlighted the fact that water consumption in relation to internet usage has not been well researched.
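To see just how wide that uncertainty is, here is a quick back-of-the-envelope sketch using only the two bounds quoted above; the 100 GB monthly figure is a made-up example, not a measured value.

```python
# Back-of-the-envelope range for the water attributed to downloading data,
# using only the bounds quoted by the researchers (1 to 200 litres per GB).
LOW_LITRES_PER_GB = 1.0
HIGH_LITRES_PER_GB = 200.0

def water_range_litres(gigabytes):
    """Return the (low, high) estimate of litres attributable to a download."""
    return gigabytes * LOW_LITRES_PER_GB, gigabytes * HIGH_LITRES_PER_GB

# A hypothetical household downloading 100 GB in a month:
low, high = water_range_litres(100)
print(f"{low:.0f} to {high:.0f} litres")  # 100 to 20000 litres
```

A 200-fold spread between the lower and upper bounds is precisely Ristic's point: without proper research, the headline figure is little more than a guess.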

A Crisis Where None Exists?

If there is a country in the ever-shrinking world that is cognisant of its responsibility toward the environment, it is the UK. We have been leaders in environmental issues since the Thatcher days, having spearheaded research into global warming and renewable energy. We know a thing or two about protecting the environment, both now and in the future. But are the concerns over water consumption and internet use legitimate? Are researchers creating a crisis where none exists?

Water used to cool data centres is not wasted in the way the researchers contend. Some of that water is recycled and sent back through the system for further cooling; what is not recycled is treated before being released. As for the water used to generate power, it is not wasted either: it evaporates as steam and becomes part of the natural water cycle.

The earth's water cycle is key to understanding this whole issue. The reality is that water consumption does not equal waste. Water that is consumed by living organisms is eventually transferred back to the atmosphere through respiration and perspiration, once again taking its place in the water cycle. Water that is not consumed (e.g. for data centre cooling) is also returned to the water cycle when released following treatment.

It is true that land masses can experience drought from insufficient rainfall, but the total volume of water on the planet is never diminished. Unless a particular area is suffering a drought, the issue of using water to cool data centres and generate power to run those data centres is really a non-issue after all. Let's research it if scientists want the data, but let us not put out alarming statistics that are likely invalid and irrelevant.

Tuesday, 27 September 2016

FCA IT Outage a Bit of Irony

A bit of irony struck this past weekend when the Financial Conduct Authority (FCA) was forced to announce late last Friday that an incident at one of its outsourced data centres had caused a widespread outage affecting a number of the watchdog's IT services. The FCA described the outage as 'major' even as it was working with its vendor to restore inaccessible services.

The irony of the outage is related to comments made earlier in the week by FCA specialist supervision team director Nausicaa Delfas, who berated private sector companies for not having appropriate systems in place to prevent cyber-attacks and network failures. At a cyber security conference last Wednesday, Delfas made it clear that the FCA wants the companies it regulates to do better.

"Most attacks you have read about were caused by basic failings – you can trace the majority back to: poor perimeter defences, un-patched, or end-of-life systems, or just a plain lack of security awareness within an organisation," Delfas said. "So we strongly encourage firms to evolve and instil within them a holistic 'security culture' – covering not just technology, but people and processes too."

Confirmed Hardware Failure

In the FCA's defence, the incident was not the result of any sort of cyber-attack or internal systems shortcoming. It was a direct consequence of a hardware failure, as confirmed by Fujitsu, the vendor responsible for the data centre in question. Nonetheless, the fact that some systems remained unrestored several days into the incident shows the FCA just how difficult it can be to keep networks running when things like this happen.

The FCA has long argued that the companies it regulates should be prepared for any sort of incident that could knock out network access for any length of time. To show just how serious they are, regulators fined the Royal Bank of Scotland a record £56 million in 2014 after an IT failure left millions of customers without access to their accounts. That has some critics of the agency ready to speak out against the regulator.

ACI Worldwide's Paul Thomalla is among the executives calling out the City watchdog. He told the Financial Times that the watchdog has to be held to the same standards it applies to the financial sector: if the FCA expects the institutions it regulates to maintain high standards of security and network reliability, it needs to meet those standards itself.

Only time will tell how devastating the weekend incident turns out to be and whether there is any long-term fallout at all. The lesson to be learned is that there is no such thing as a 100% safe and reliable network. Things can go wrong even with the best of intentions and rock-solid contingency plans in place. Our job is to do the best we can to mitigate the adverse effects of such incidents and, when they do happen, to get things fixed as quickly as possible.


Thursday, 22 September 2016

The National GCHQ Firewall: Will It Work?

If you haven't heard the news yet, the Government Communications Headquarters (GCHQ) is taking aggressive action against cyber criminals with the establishment of a new division known as the National Cyber Security Centre (NCSC). The centre, which is slated to open sometime in October 2016, will be the first government agency dedicated solely to defending the UK against cyber security threats. One of its first missions will be to build a 'national firewall' that would protect internet users from the most common cyber threats.

Thus far, GCHQ has not detailed how the national firewall will work, but it has said that the NCSC will not itself be responsible for filtering out suspect sites and emails. Instead, the primary mission of the firewall is to provide a national domain name system (DNS) service that internet providers and others can use to block access to known malicious sites and servers.
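GCHQ has published no technical detail, so the following is only a conceptual sketch of how a blocklist-driven DNS service of the kind described might behave; every name and list entry here is made up for illustration.

```python
# Conceptual sketch only: GCHQ has not published how the national firewall will
# work. This illustrates the general idea of a filtering DNS resolver that
# refuses to resolve domains found on a shared blocklist. All names are made up.
BLOCKLIST = {"malware-example.test", "phishing-example.test"}

def resolve(domain, upstream_lookup):
    """Return an IP address for the domain, or None if the domain is blocked."""
    if domain.lower().rstrip(".") in BLOCKLIST:
        # A real ISP resolver might instead return NXDOMAIN or a warning page.
        return None
    return upstream_lookup(domain)

# Stand-in for a real upstream resolver:
fake_upstream = {"example.org": "93.184.216.34"}.get
print(resolve("example.org", fake_upstream))           # 93.184.216.34
print(resolve("malware-example.test", fake_upstream))  # None (blocked)
```

The important design point is that the blocklist would be maintained centrally while the blocking itself is done by ISPs, which is consistent with GCHQ's statement that the NCSC will not do the filtering.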

The question on everybody's mind should be, will it work?

As explained by the Telegraph on its website, there are quite a few ISPs with IP blocking policies already in place. They have enjoyed some limited success in preventing malware attacks, phishing attacks and the like. They have also prevented British internet users from accessing sites with content that violates copyright protections.

Some Success Already

The Telegraph says the government has also enjoyed some measure of success with a tool that is capable of identifying and intercepting malicious emails that appear to come from government agencies. The tool works by identifying emails that purport to come from government sources and checking their originating IP addresses against an existing database of known government addresses. Any email whose IP address does not match is automatically blocked.
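As described, that amounts to a simple allow-list check on the sending address. Below is a rough sketch of the idea; the IP ranges, email address and function name are invented for illustration, since the real tool's implementation has not been published.

```python
# Rough illustration of the check described above: a message claiming to come
# from a government domain is only accepted if its originating IP address falls
# within a known set of government address ranges. The ranges and addresses
# below are placeholders invented for illustration.
from ipaddress import ip_address, ip_network

KNOWN_GOV_RANGES = [ip_network("192.0.2.0/24"), ip_network("198.51.100.0/24")]

def should_block(claimed_sender, origin_ip):
    """Block mail that claims a gov.uk sender but originates elsewhere."""
    domain = claimed_sender.rsplit("@", 1)[-1].lower()
    if not (domain == "gov.uk" or domain.endswith(".gov.uk")):
        return False  # not claiming to be government mail, so out of scope here
    origin = ip_address(origin_ip)
    return not any(origin in network for network in KNOWN_GOV_RANGES)

print(should_block("refunds@gov.uk", "192.0.2.45"))    # False -> delivered
print(should_block("refunds@gov.uk", "203.0.113.99"))  # True  -> blocked
```

In practice such checks resemble existing sender-authentication schemes, and they share the same weakness: the list of legitimate addresses has to be complete and kept up to date.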

GCHQ has developed the tool to the point where it has been testing its effectiveness against a spoofed government tax refund address that was sending out as many as 58,000 emails per day. According to NCSC chief executive Ciaran Martin, that source is no longer sending its emails.

The fact that the government has seen modest success in large-scale email blocking seems to suggest that their plans for a national firewall could work. But there are still plenty of hurdles to overcome. Ultimately, the success or failure of the system is going to rely on how well government and private entities work together.

Every Tool Can Help

Knowing what we know about cyber security and network threats, we can say with a fair degree of confidence that a national firewall will not be a perfect solution all by itself. No single cyber security tool can protect us against every single threat. But every tool that does what it is designed to do adds to a much larger arsenal that is better able to defend against cyber-attacks with every passing day.

We look forward to seeing what GCHQ comes up with for a national firewall. Hopefully, its efforts will allow private organisations to take some much-needed strides in addressing cyber threats.

Tuesday, 13 September 2016

ING Data Centre Crash Caused by Loud Noise

ING Bank found itself apologising to customers this week after a data centre failure in Bucharest, Romania, left them without most online services over the weekend. The good news in an otherwise disturbing situation is that, because the failure happened at the weekend, it caused mostly inconvenience. Had it happened during the week, the results could have been much worse.

Numerous news reports say that ING Romania was running a standard fire suppression test at the Bucharest facility on 10th September. The facility's fire suppression system uses an inert gas that is designed to be harmless to equipment. In this case, the gas itself did not cause the problem. The catastrophic shut-down of the facility was a result of a loud noise emitted when the high-pressure gas was released.

One news source says that the gas was under a pressure that was too high for the system. When it was released, it emitted a loud booming noise that sent a shock wave throughout the facility. That shock wave created vibrations strong enough to damage hard drives and servers within the data centre.

Service Down for 10 Hours

Damage to the equipment was severe enough that the centre was down for about 10 hours. During that time, customers were unable to conduct online transactions, communicate with the bank online or conduct transactions at ATMs around Bucharest. Some transactions already in progress when the outage occurred were simply lost. The bank's website was also down for a time.

Bank officials say they brought in an extra 70 staff members to help recover the system and restore data. Although it has described the incident as ‘exceptional’ and ‘unprecedented’, ING Bank maintains that the service interruptions were merely a matter of inconvenience. The bank has not said whether all systems are up and running yet; however, at the time of writing, it does not appear that any critical data was lost or compromised.

Unfortunate but Important

ING Bank's misfortunes aside, the fire suppression test and subsequent shut-down are important events for the data centre community. Why? Because it has long been assumed that loud noises producing substantial shock waves could damage data centre equipment, but documented real-world incidents have been scarce. Now that one has occurred, we have a working example we can use to address what we know is a genuine possibility.

In the months ahead, we can expect testing and research designed to figure out what happened in Bucharest over the weekend. The more we learn about the incident, the better able we will be to protect data centres from similar events in the future. This is good for the data centre community despite the fact that the outage inconvenienced ING Romania customers.

Making the best use of the information collected on the outage will, of course, depend on ING Bank being forthcoming with its findings. Hopefully it will be, for the good of the entire data centre industry.


Thursday, 11 August 2016

Delta Air Lines Data Centre Fails – The Reason Why Is Still a Mystery

The second-largest airline in the US is still struggling to regain normal operations after a data centre failure that grounded hundreds of flights and stranded thousands of passengers worldwide. At around 2.30am EDT on Monday, 8 August, Delta staff in Atlanta found themselves unable to access computer networks for reasons unknown. Operations around the country and, eventually, the world soon suffered the same fate.

The US-based company, which is part of the SkyTeam alliance that also includes Air France-KLM, has not offered any concrete answers about what caused the problem. But in the days following the outage, it has struggled to get its computer systems back online and all the data synced across its worldwide network. The airline says it is doing everything it can to return service to normal.

A Power Switch Problem

Initial reports suggest that Delta technicians were running a routine test of backup power procedures when a piece of switching equipment was inadvertently tripped. That failure reportedly cut the airline's computers off from both Georgia Power and Delta's own reserve backup generators. With no power, the system shut down.

However, another rumour has emerged suggesting a fire might have taken out the airline's main data centre in Atlanta. Some sources say that as technicians were attempting to switch computer networks to a backup generator, a fire broke out, destroying two generators in the process. In either case, Delta's computer networks went down due to a data centre failure related to a lack of power.

As of Wednesday, 10 August 2016, things were still not back to normal. A few thousand of Delta's flights were back on schedule, but airport information boards were not necessarily accurate, and the arrival and departure information on the company's website could not be entirely trusted either. Delta Air Lines continues to investigate what went wrong.

Computer Networks Vulnerable Everywhere

Delta Air Lines is sure to take a PR beating as a result of its data centre failure. And although new strategies will be put in place to prevent future outages, the company's networks were, as far as we know, already operating up to standard. Its data centre had backup power in place for redundancy, just as would be expected, yet a perfect storm of failures still came together to cause a major problem.

The lesson to be learned here is that no network is invulnerable. No matter how much technology we put in place, no matter how much redundancy we include, computer networks will always be at risk of failure. It is something we have to learn to live with. That does not help the thousands of Delta passengers stranded around the world, but it is the reality in which we live. Computer networks are not perfect.

Hopefully, Delta will be more forthcoming in future about what caused the failure. Its willingness to share information will help others avoid similar problems.