Tuesday, 22 November 2016

Russia Blocks LinkedIn: A Sign of Things to Come?

It's official. After months of threatening LinkedIn with a block of its services in Russia, the Russian communications regulator has followed through. It all stems from a dispute over whether LinkedIn would comply with Russian laws requiring information pertaining to Russian users be hosted on Russian servers. One wonders if Russia's actions are a sign of things to come.

The push to bring all Russian user data home began in 2014 when the Duma passed the first of several bills aimed at doing just that. Under that first piece of 2014 legislation, Russia established that companies operating on an international scale would have to procure physical servers in Russia – whether contracting with existing data centres or building their own – in order to store data pertaining to Russian users.

The law applies equally to big names such as Facebook and Google and to smaller companies with significantly less name recognition. Furthermore, it applies to Russian companies that have a practice of sending user data outside the country; they must cease doing so unless they can prove a certain level of domestic data security.

A Populist Mentality or Something Else?

One way to look at the Russian legislation is to compare it to the current wave of populism that seems to be sweeping the globe. Citizens growing ever more tired of globalism are demanding their nations return to a more populist way of doing things, one that preserves national identity and sovereignty. Populism was a big part of both Brexit and the recent US presidential election, and it may grow in the near future with both the French and German elections.

Could Russia's move be as much about populism as security? While it's true that protecting sensitive data is easier when it is hosted domestically, it is not terribly difficult to implement effective security strategies in a cross-border situation. So there has to be more to it than security alone. Populism could well be a factor.

Still, there is another possibility. Some critics of Russia's move speculate that the regulator wants data stored at home so that certain government agencies can access it more easily. Think NSA and Edward Snowden here.

Where Do We Go from Here?

Now that Russia has taken steps to block LinkedIn, we would expect the regulator to take similar action against other companies as well. The floodgates are open, and water should begin pouring through rather quickly. Whether the approach spreads to other countries remains to be seen.

As for LinkedIn, the company remains committed to a global mindset. In its official statement, it said the following:

"Roskomnadzor's action to block LinkedIn denies access to the millions of members we have in Russia and the companies that use LinkedIn to grow their businesses. We remain interested in a meeting with Roskomnadzor to discuss their data localisation request."

Will Russia flinch? Probably not. So now it's up to LinkedIn to make it work.

Friday, 11 November 2016

Thoughts on a More Predictable & Reliable Data Centre Life Cycle

For a variety of reasons, too much attention has been paid to the way data centres and their infrastructure have been built, and comparatively little to the cost of operating them throughout the life cycle.

As an industry we are 30-35 years old. We’ve grown very rapidly and so far we’ve been very technology driven and that is an extremely good thing. That focus has created the infrastructure that resides in hundreds of thousands of data centres around the world and it’s that infrastructure that gives us all the things that we take for granted, like the internet and applications such as messaging, streaming, two-way video communications and so on.

The next challenge is quite a different one and it’s making the transition from being engineering-focused to being operationally focused. What that really means is that we need to start to think much more carefully about how all the infrastructure we will have is going to be managed. How is it going to be run? How do we know how well we are running our infrastructure and doing our jobs to the best of our abilities?

Part of this, of course, is people-related. But there is also a technical solution, one that requires giving thought to what the infrastructure looks and feels like throughout its life cycle, not just putting a data centre together from a design-and-build perspective and then moving on to the next project.

We need to think of ourselves as an industry that is maturing, and as all industries mature they go through several stages of pain. The initial stages are about the change itself: understanding where you are in the process and making the decision to change.
What that means is thinking very carefully about the life cycle. How will the infrastructure built today perform throughout the phases of its life cycle? At some point we will refresh equipment. We will make capital reinvestments. We will make operational investments. We need to think all of those through across the life cycle.
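
One way to make that lifecycle thinking concrete is to put the build, refresh and operating costs into a single back-of-envelope model. The short Python sketch below is purely illustrative (every figure and the 15-year life are hypothetical assumptions, not data from any real facility), but it shows the kind of whole-life view the argument points towards.

# Back-of-envelope lifecycle cost model - all figures are hypothetical.
BUILD_COST = 10_000_000          # initial design-and-build capital
ANNUAL_OPEX = 1_500_000          # energy, staffing and maintenance per year
REFRESH_COST = 3_000_000         # periodic equipment refresh
REFRESH_INTERVAL_YEARS = 5
LIFECYCLE_YEARS = 15

total = BUILD_COST
for year in range(1, LIFECYCLE_YEARS + 1):
    total += ANNUAL_OPEX
    if year % REFRESH_INTERVAL_YEARS == 0 and year < LIFECYCLE_YEARS:
        total += REFRESH_COST

print(f"Total lifecycle cost: {total:,}")   # 38,500,000
# Even with modest assumptions, operations and refresh outweigh the
# initial build, which is why the life cycle deserves the attention.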

What technology platform do we put in place so that we can manage our infrastructure better? The industry is still in a state of hyper growth so we’re still going to grow the number of facilities, although they may change size and shape. In fact, if the market does change in the way that we expect it to and makes a move towards Edge computing, the whole facility landscape will change dramatically.

To be able to manage the operation of those sites better, we need to think about what the life cycle looks like. How do we run infrastructure in the best possible way, ideally with the least amount of human intervention? That's where software and technology come in.

One of the things that we can do as an industry is to short-circuit that learning process by not going through the same pains that the other industries have already been through. Let’s look at oil & gas, pharmaceuticals, water and utilities and nuclear power stations. They’ve been through this exercise in varying time periods over the last 10 to 15 years. Let’s figure out what they did to change their operational best practice and use that knowledge.

That way we don't have to learn all those lessons for ourselves. We will still make our own mistakes and learn our own lessons, but let's at least stand on the shoulders of the industries that matured before us.

We need to do a better job in two ways: describing our vision of the future, and making clear that this vision is not just about products and technology but really about the life cycle. Customers no longer buy a product; they buy a system. They buy a solution. They buy an entire data centre. We would expect customers to say: “I’d like you to build me infrastructure that is predictable in terms of reliability and efficiency but is also incredibly easy to manage.”

Schneider Electric white paper 195, “Fundamentals of Managing the Data Centre Life Cycle for Owners” describes the five phases of the life cycle, identifies key tasks and pitfalls, and offers practical advice to the owners and management of legacy facilities.

Guest blog by Arun Shenoy, VP, IT & Data Centre Business, Schneider Electric

Monday, 31 October 2016

The Challenges of Going Global in Today’s Digital Economy

There are many popular brands here in the UK that are also well-known across the world, including fast-food restaurant chains, clothing shops, technology companies and many more. However, the vast majority of these companies didn’t start out with a global presence – for most, it happened gradually.

Businesses eyeing international expansion look to obtain the benefits of this growth through conventional uplift, including increased revenue, enhanced exposure and brand recognition, global partnerships and a more diversified product and/or service offering. But there are significant challenges to achieving this goal.

There are a lot of ambitious companies that want to grow; however, many of these companies’ data centres are far smaller than those of a multi-tenant colocation provider. Thus, it’s difficult for them to get the funding to pay for a facility with a high level of resiliency and a strong team to manage mission-critical environments.

Some have smaller, less resilient server rooms and data centres scattered about, which makes it more difficult to place applications that require more resiliency in the exact locations where they are needed. Others may have smaller, less redundant facilities along with perhaps one highly resilient, centralized facility. As a result, most want to move to a colocation provider whose facilities are resilient across all critical systems in multiple geographies and that also enables their cloud applications to be as close to their customers as possible.

Most growing international companies like to deploy at a single point of presence to test a specific market, especially if there are legal or regulatory hurdles and political concerns. They’ll go to an interconnection hotbed where colocation may be a bit more expensive than a facility outside a major city or in a secondary city, because it allows them to maximize their radius of coverage. For example, in the U.S., there are a lot of European companies leveraging either Northern Virginia or New York City to get access to a plethora of carriers from one location. They can gain access to as many different companies as possible rather than going full force into a new market and deploying in multiple geographies.

Looking abroad in Europe, businesses are deploying in London, Amsterdam, Frankfurt and Paris, the hotbeds of interconnection for the region. In Asia, it would be either Singapore for Southern Asia or Hong Kong for Northern Asia. For China, a lot of customers deploy a colocation environment in Hong Kong that is directly tethered to an environment on the mainland where they can deploy virtualized resources. If anything goes wrong, such as a geopolitical event, they can pull the virtualized environment back to Hong Kong.

Companies that want to move some of their activity outside their current boundaries might not take into account the cost of employees on the ground in a new market or of achieving connectivity between their domestic and international deployments. If they work with a reliable colocation provider with a global footprint, however, those data centre professionals can do all the racking and stacking along with managed services such as load balancing, patch management, server reboots and more. Most companies have a multitude of local colocation providers from which they can choose, but they should find a reliable one that can get them the most secure and effective point-to-point connections between the data centre and their corporate locations.

Another challenge for many businesses is their lack of familiarity with local government regulations in each country. Companies serving customers in certain international markets deal with data sovereignty issues and regional or regulatory compliance. For instance, if they are involved in any financial activity in Singapore, they have to make sure the colocation facility in which they are located is TVRA-certified.

It’s very difficult to expand into new global markets for businesses that choose to build their own data centres, because it’s nearly impossible to move into any major cities that are regulated and unionized without having deep connections. Most enterprises that are looking at an international point of presence will not consider building, but instead, will look at tethering their proprietary data centres to their colocated international environment.

Companies have to be conservative and smart when they plan and execute on their global expansion. Small, incremental steps are key to success – maybe it’s just a cabinet or two so they can put some infrastructure in-region to better support business in that territory, whether it’s for internal customer and sales support systems, Web presence, etc. They’re often very risk-averse because expanding internationally for the first time is no small task. In this scenario, colocation allows them to use a couple of cabinets at first – likely to be in a virtualized fashion to be able to easily migrate out if needed – before they start to deploy physical servers.

Whatever route a company takes, they need to apply strong controls, rigid progress reviews and several checkpoints so they can overcome challenges and stay on course.

Guest blog by Steve Weiner, Senior Lead Product Manager, CenturyLink

Wednesday, 26 October 2016

DDoS Attack on the US: We Still Haven't Figured It Out

Every day, the world's future cyber security specialists attend classes where they learn the latest strategies for preventing network breaches. They learn from their instructors, practise defensive techniques on laboratory computers and take tests to earn their coveted certifications. Meanwhile, those professionals already on the front lines wage a valiant battle against hackers and cybercriminals looking to wreak havoc on global networks. Yet, for all this cyber warfare and the significant advancements it has led to, we still cannot figure out how to proactively stop a distributed denial of service (DDoS) attack.

This past weekend, the US East Coast discovered first-hand how debilitating a DDoS attack can be. Just after 7am (EDT), several big-name websites began experiencing outages for users in this region of the States. It wasn't long before security experts discovered a devastating DDoS attack was under way. The attack was levelled against internet traffic management specialist Dyn Inc, a New Hampshire-based company that provides domain name services to companies like Twitter and PayPal.

Dyn acknowledged fairly early in the day that service was being interrupted for a long list of sites that included CNN, Spotify, the New York Times, Reddit and the aforementioned Twitter. Service was eventually restored by mid-morning, but it went down again around noon. Dyn was forced to acknowledge that a second DDoS attack was under way, this one hitting the East Coast and spreading west at the same time. It wasn't until later in the afternoon that Dyn was able to stop the attacks altogether.
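
The reason an attack on a single DNS provider rippled across so many household names is that name resolution sits in front of every connection: if a hostname cannot be resolved, the site behind it is effectively unreachable, however healthy its own servers may be. A minimal Python sketch of that dependency (the hostname is just an example):

import socket

def reachable(hostname):
    """Return True if the name resolves; resolution is the first step of every visit."""
    try:
        socket.getaddrinfo(hostname, 443)   # ask DNS for the site's address
        return True
    except socket.gaierror:
        # Resolution failed: the browser never even reaches the web servers
        # behind the name, which is why a DNS outage looks like a site outage.
        return False

print(reachable("twitter.com"))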

Success Is in the Simplicity

A long-standing rule of technology is that the more sophisticated something is, the easier it is to break. Common sense dictates the opposite is also true: the simpler something is, the harder it is to defeat. Therein lies the key to the success of the typical DDoS attack.

A denial of service (DoS) attack is very simple. You bombard a server with ongoing and repeated requests for service in order to overwhelm it so that it cannot process legitimate requests. It's a lot like a flash mob: a large group of people assembles in front of a shop front simultaneously, blocking access to legitimate patrons.

A DDoS attack is essentially a DoS attack taken to the next level. The requests arrive from hundreds, if not thousands, of unique IP addresses, typically compromised machines acting as a botnet and sometimes augmented by IP address spoofing. With thousands of addresses to deal with, security experts have a hard time shutting down a DDoS attack quickly.
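
To see why the distributed element matters, consider a minimal sketch, in Python, of the kind of per-source rate limiting a defender might apply (the window and threshold are illustrative assumptions). A single flooding address trips the limit almost immediately; the same volume of traffic spread across thousands of addresses stays under it.

import time
from collections import defaultdict

WINDOW_SECONDS = 10       # illustrative time window
MAX_REQUESTS = 100        # illustrative per-source limit

recent_requests = defaultdict(list)   # source IP -> timestamps of recent requests

def allow_request(source_ip):
    """Drop traffic from any single source that exceeds the limit within the window."""
    now = time.time()
    window = [t for t in recent_requests[source_ip] if now - t < WINDOW_SECONDS]
    recent_requests[source_ip] = window
    if len(window) >= MAX_REQUESTS:
        return False          # one noisy source is easy to spot and block
    recent_requests[source_ip].append(now)
    return True

# 10,000 requests from one address are cut off after the first 100,
# but 10,000 addresses sending a handful of requests each all pass,
# which is exactly what makes the distributed version so hard to filter.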

This simple strategy is not designed to steal information. It is intended to disrupt service so that people cannot access targeted websites. It is a very simple strategy for disruption that proves very effective when carried out strategically. It is so simple that we still don't have an effective way of dealing with it. And so, while we work to contain the world's cybersecurity threats, the DDoS beast remains elusive.

Tuesday, 18 October 2016

Security Breaches to Cost More Beginning in 2018

The last thing any company wants is to find itself victimised by hackers. Between the poor publicity and the fines that may be imposed, suffering a security breach is bad for business. And it's about to get worse. Once new EU fines are implemented in 2018, being victimised by a security breach could cost a company millions.

The EU's General Data Protection Regulation is set to take effect in 2018. The regulation not only increases fines for security failures but also groups companies according to their size and revenues. The largest companies in Europe could face fines of up to £18 million or 4% of global turnover, whichever is greater. Computer Weekly reports that revenue from the fines could represent a 90-fold increase if security breaches in 2018 and beyond continue at the level reported in 2015.

When looked at specifically through the lens of large UK corporations, Computer Weekly says the annual fines could increase some 130-fold. The fines collected from small and medium-sized businesses could rise as much as 57-fold. All of this adds up to an awful lot of money.
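
The arithmetic behind those multiples is straightforward. The Python sketch below uses the figures quoted above (the £18 million cap and the 4% and 2% rates); the company turnover is an entirely hypothetical example.

FIXED_CAP_GBP = 18_000_000      # upper-tier cap cited above
UPPER_TIER_RATE = 0.04          # 4% of global turnover for the most serious breaches
LOWER_TIER_RATE = 0.02          # 2% of global turnover for less serious breaches

def upper_tier_max(global_turnover_gbp):
    """Maximum exposure for a serious breach: the greater of the cap or 4% of turnover."""
    return max(FIXED_CAP_GBP, UPPER_TIER_RATE * global_turnover_gbp)

def lower_tier_max(global_turnover_gbp):
    """Maximum exposure for a less serious breach: 2% of turnover."""
    return LOWER_TIER_RATE * global_turnover_gbp

# A hypothetical corporation turning over £5bn a year:
print(upper_tier_max(5_000_000_000))   # 200,000,000.0
print(lower_tier_max(5_000_000_000))   # 100,000,000.0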

Putting Companies at Risk

The EU regulator has established a two-tiered system that allows it to levy less severe fines on companies suffering security breaches considered less serious. Even so, a fine equal to 2% of global revenue could be devastating to a large company. This leads to the obvious question of whether the new regulation puts companies at risk. It may do just that.

Jeremy King of the Payment Card Industry Security Standards Council told Computer Weekly that the new legislation is serious business. King wonders whether some businesses will actually be able to pay the fines assessed against them.

"The new EU legislation will be an absolute game-changer for both large organisations and SMEs as the regulator will be able to impose a stratospheric rise in penalties for security breaches," King said, "and it remains to be seen whether businesses facing these fines will be able to shoulder the costs."

The regulator's position is easy to understand in light of the fact that as many as 90% of large corporations and 74% of small- and medium-sized businesses were affected by a security breach in 2015. Regulators are attempting to force companies to take more drastic action to prevent security breaches by making it financially uncomfortable not to do so… but is the regulator going too far?

Only time will tell whether the increased fines will accomplish what the EU wants them to. It is quite possible that some companies caught off-guard in the early stages will suffer financially for it, but we can hope that companies will take this seriously enough to beef up their security efforts before the new fines are imposed. That would be ideal: Europe's computer networks would be safer, and businesses would not have to suffer the losses related to breaches.

Thursday, 13 October 2016

2015 French TV Attack Highlights Network Vulnerability

Do you remember the April 2015 cyber-attack against France's TV5Monde? If so, you may remember the immediate speculation that the attack was linked to the Islamic State and an attempt to further rattle the nation, just months after the Charlie Hebdo attack. Well, investigators have learned a lot since then.

First, the attack was not the work of the so-called Cyber Caliphate as first reported. Investigators now have strong reason to believe the attackers were Russian hackers who used malicious software to destroy the broadcast systems at the TV5Monde network.

More importantly, we have learned just how vulnerable networks are to well-designed malicious software. The attack on the French network was not particularly sophisticated, but it moved very quickly and effectively once it got started. According to the BBC, TV5Monde was within hours of a complete collapse when one of the network's engineers located the computer where the attack originated and removed it from the system.

A Combination of Organisation and Speed

TV5Monde had begun broadcasting hours earlier when, for no apparent reason, all 12 channels went black. It wasn't long before network officials figured out they were experiencing a serious cyber-attack. TV5 director-general Yves Bigot credits his engineering staff for identifying the problem and intervening before it was too late.

The attack was successful because it was targeted and because it combined organisation and speed. Investigators discovered that the hackers carried out sophisticated reconnaissance against the TV network to understand how the station’s systems were put together before launching the attack. They then created software that attacked the network's hardware in a sequential manner, corrupting the systems responsible for transmitting television signals.

Interestingly enough, the hackers did not use a single point of entry. In fact, the BBC says there were seven points of entry. Even more interesting is the fact that not all of those points were in France or even a direct part of the TV5Monde network. One was a Dutch company that had sold TV5 some of its studio cameras.

A Potential Collapse Was Real

The attack on TV5 should be a reminder of the vulnerability of computer networks. Engineers could have completely shut down the system, wiped it clean and started over from scratch had it been necessary, but by that time the damage would have been done. As Mr Bigot explained to the BBC, any prolonged outage would likely have resulted in the cancelling of broadcast contracts en masse, leading to the collapse of the network under the financial strain.

In terms of cyber-attacks, this is where the real problem lies. A computer system can be repaired just like a building attacked in conventional warfare can be rebuilt. But any harm caused by a cyber-attack is capable of producing significant financial stress that could lead to a total collapse. 

Disaster was averted in France last year. Next time, things might not go so well. Thus we need to be ever more diligent about protecting our networks at all costs.

Tuesday, 4 October 2016

Scientists Want More Research into Internet Use and Water

When scientists at Imperial College London claimed that downloading a single gigabyte of data could waste up to 200 litres of water, their claims generated one of two reactions. Those who follow such things were visibly shocked while those who do not went on with their lives completely unaffected. Little has changed a year later; not that anything should have.

According to the BBC, the Imperial College London researchers calculated that the 200 litres of water per gigabyte of data is probably consumed in keeping data centres cool and in generating the power needed to operate them, but 'probably' is the operative word here. The researchers could not conclusively say how water was being wasted, nor did they provide any concrete evidence that their estimate of 200 litres per gigabyte was accurate.

Bora Ristic, one of the researchers involved in the project, told the BBC that there was quite a bit of uncertainty in the figures. He said water usage could be ‘as low as 1 litre per gigabyte’ rather than 200. What is important, Ristic said, is that their report highlighted the fact that water consumption in relation to internet usage has not been well researched.
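
The width of that uncertainty is easiest to appreciate with some simple arithmetic. The Python sketch below applies both ends of the researchers' range to a hypothetical household downloading 100 GB a month; the usage figure is purely an assumption for illustration.

LOW_LITRES_PER_GB = 1       # researchers' low-end estimate
HIGH_LITRES_PER_GB = 200    # headline high-end estimate

monthly_data_gb = 100       # hypothetical household usage

low_total = monthly_data_gb * LOW_LITRES_PER_GB      # 100 litres per month
high_total = monthly_data_gb * HIGH_LITRES_PER_GB    # 20,000 litres per month

print(f"Estimated water footprint: {low_total} to {high_total} litres per month")
# A 200-fold spread between the bounds is precisely why the researchers
# say the area needs proper study before firm conclusions are drawn.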

A Crisis Where None Exists?

If there is a country in the ever-shrinking world that is cognisant of its responsibility toward the environment, it is the UK. We have been leaders in environmental issues since the Thatcher days, having spearheaded research into global warming and renewable energy. We know a thing or two about protecting the environment, both now and in the future. But are the concerns over water consumption and internet use legitimate? Are researchers creating a crisis where none exists?

Water used to cool data centres is not wasted in the way the researchers contend. Some of that water can be recycled and sent back through the system for continued cooling; what is not recycled is sent out to be treated before being released. As for the water used to generate power, it is not wasted either: it evaporates as steam and becomes part of the natural water cycle.

The earth's water cycle is key to understanding this whole issue. The reality is that water consumption does not equal waste. Water that is consumed by living organisms is eventually transferred back to the atmosphere through respiration and perspiration, once again taking its place in the water cycle. Water that is not consumed (e.g. for data centre cooling) is also returned to the water cycle when released following treatment.

It is true that land masses can experience drought from insufficient rainfall, but the total volume of water on the planet is never diminished. Unless a particular area is suffering a drought, the issue of using water to cool data centres and generate power to run those data centres is really a non-issue after all. Let's research it if scientists want the data, but let us not put out alarming statistics that are likely invalid and irrelevant.