Friday, 27 June 2014

Data Centre Failures: Literally Anything Can Happen

So, you thought things were bad the last time the data centre where you work went down?  Perhaps they were, compared to other problems your facility has weathered in the past.  Nevertheless, there have been some truly epic data centre failures that make most others look routine.  Data Center Knowledge recently published its top 10 list of epic failures; we would like to share some of them with you:

1. The Yahoo! Squirrel

From our perspective, one of the qualifications for an epic data centre failure is that it be rooted in a seemingly harmless event that causes far more damage than one would normally expect.  Such is the case with the 2010 failure of a Yahoo! facility in Santa Clara, California.

Squirrel problems at data centres are not all that rare. After all, the furry little rodents love to chew. However, in this particular case a single squirrel took out half the Santa Clara data centre by chewing through some very important wires.  Knowing that it is so easy to disrupt Yahoo! operations, one wonders what some of the other search engine companies are up to.

2. No Smoking, Please!

We already know that cigarette smoking can cause serious health risks including lung cancer, emphysema and heart disease; apparently, it can also spark one of those memorable data centre events that qualifies as epic.  A case in point is an Australian data centre that was brought down by a smouldering cigarette and a bed of mulch.

Apparently, the Perth iX data centre was down for about an hour after the facility's smoke detection equipment caught a whiff of burning mulch outside the building.  The system mistakenly concluded there was a fire within the building and responded just the way it was designed to.

3. A Raging Storm

Superstorm Sandy, the 2012 hurricane that ended up being the second most expensive storm in American history, produced data centre failures up and down the US eastern seaboard.  Although few Americans are surprised by hurricane damage in the southern states, no one was prepared for how serious the storm was in the North East.  Need we say more?

4. It Only Takes a Second

Perhaps the most epic failure to interrupt data communications happened in 2012 as a result of the 'Leap Second Bug'.  When a leap second was added to the atomic clock the world relies on for accurate timekeeping, numerous data centres around the world did not know what to do with it.  Social media went down, torrent sites were affected and even a number of flights out of Australia were interrupted.  It's amazing what one second can do.
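For developers, the practical lesson from the leap second episode is to avoid measuring elapsed time with the wall clock, which can stall or jump when the system clock is adjusted.  A minimal Python sketch of the difference (illustrative only, not taken from any of the affected systems):

```python
import time

def measure_elapsed_wall(work):
    """Fragile: wall-clock time can jump or stall when the system
    clock is adjusted, e.g. around a leap second or an NTP step."""
    start = time.time()
    work()
    return time.time() - start  # can even come out negative after a clock step

def measure_elapsed_monotonic(work):
    """Robust: a monotonic clock is unaffected by wall-clock adjustments."""
    start = time.monotonic()
    work()
    return time.monotonic() - start  # always >= 0

elapsed = measure_elapsed_monotonic(lambda: sum(range(100_000)))
assert elapsed >= 0
```

The same principle applies in any language: timers, timeouts and schedulers should be driven by a monotonic clock, with the wall clock reserved for timestamps shown to humans.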

Lessons Learned

What can we learn from these four epic data centre failures?  We can learn that a failure can happen literally anywhere – and at any time.  You do not have to be operating a big commercial facility to be affected by the littlest of things.  In the end, it pays to be super diligent at all times.

Tuesday, 24 June 2014

London Surpasses Silicon Valley as Tech Capital of the World

Twenty years ago, California's Silicon Valley was the place to be if you were in the technology and information sector.  The Valley was home to the most important technology companies in the world and, as such, attracted the best talent and the most investment money for new start-ups.  How times change.  Recent data shows that London and South East England have surpassed not only Silicon Valley, but the entire state of California.

The research comes from Bloomberg Philanthropies and South Mountain Economics, two American organisations specialising in tracking technology and business trends.  Their research shows that the technology sector in London and the South East currently employs roughly 744,000 individuals; the same sector in California boasts approximately 692,000 workers.  The research further shows London as leading the world in terms of the total number of financial tech firms located within its metropolitan area.

Former New York Mayor Michael Bloomberg told Real Business that much of the shift away from California could be traced to New York and London's “diversity, creative talent, and high quality of life”; however, his comments are likely nothing more than political speak from a man trying to defend the city he once presided over.  In truth, London and the South East are excelling because the UK has made a concerted effort to push the technology sector.

Whether you are talking about companies offering IT services and managed services, or firms developing new hardware and software, they have found an environment in London that allows them to thrive.  They have the infrastructure to support their businesses along with plenty of training opportunities to provide skilled workers.  They also have a friendly regulatory environment that does not hamper innovation and progress.  In fact, the environment is one that encourages both.

In many parts of the US, the exact opposite is happening.  Government regulation and taxation are stifling the technology sector and preventing it from moving forward.  Nowhere is this more easily seen than in California, where business experts consider taxation and regulation to be out of control.  Naturally, technology companies are going to move to friendlier places.

Poised for Growth

If the United States wants to continue on a path that stifles technology growth, the UK will be more than happy to pick up the slack.  Separate research from London & Partners shows that the size of the digital technology sector in Britain's capital is set to grow by more than 5.1% annually over the next several years.  They expect the sector to generate approximately 46,000 new jobs and upwards of £12 billion of additional economic activity.
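To put that 5.1% figure in perspective, compound growth adds up quickly.  A short, purely illustrative calculation (the ten-year horizon is our assumption, not part of the London & Partners research):

```python
def compound(value, rate, years):
    """Project a value growing at a fixed annual rate."""
    return value * (1 + rate) ** years

# Illustrative only: a sector growing at 5.1% a year roughly
# multiplies in size by about 1.64 over ten years.
growth_factor = compound(1.0, 0.051, 10)
print(f"Ten-year growth factor at 5.1% p.a.: {growth_factor:.2f}")
```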

It is time to say goodbye to Silicon Valley as the home of modern technology.  That distinction now belongs to London and the South East of England.  As long as nothing changes, and we do not expect anything will, what happens in the UK will have the greatest impact on the technology industry for decades to come.  That is a very good place for London and the South East to be.

Thursday, 19 June 2014

Why the Hybrid Rack Transfer System is Superior to Both ATS and STS Systems

Guest Blog by Michael Bord, Marketing Communications at Raritan Inc.

Lots of data center vendors offer transfer switches, but they normally fall into one of only two categories: relay-based Automatic Transfer Switches (ATS) or Static Transfer Switches (STS) that rely on silicon-controlled rectifiers.  Although effective in switching between power feeds, they both have inherent design flaws that hinder both performance and reliability – two areas where a hybrid rack transfer switch really excels.  But, to understand why the hybrid transfer switch is superior, you first need to understand the limitations of ATS and STS devices.

ATS transfer times vary from 10 to 16ms, with switching requiring an additional 4 to 5ms – a total that often exceeds the roughly 20ms window IT equipment can ride through, making the ATS a potential liability.  Another disadvantage is that the contact and pole often fuse together due to electrical arcing; it’s one of the leading causes of relay-based ATS failure.  And since an ATS does not indicate when the relay has fused and is no longer able to switch power feeds, data center operators only discover their transfer switch has failed when it’s too late!

STS systems offer very fast transfer times – normally 4 to 6ms.  But, there’s a huge trade-off here in terms of price; they are often over five and a half times more expensive than relay-based switches!  Another major downside is that they draw significantly more energy than relay-based switches and thereby produce more heat, forcing you to provide greater power and cooling resources than you may have the capacity for.

So, it’s easy to see why ATS and STS systems fall short of the mark.  But why is the hybrid rack transfer switch a better solution for your data center power management system? Consider the following points:

  • A hybrid automatic transfer switch combines electromechanical relays and silicon-controlled rectifiers, providing a speedy 4 to 8ms transfer – virtually identical to the STS – at a much lower cost.
  • The hybrid system offers oversized relays and air gaps that are nearly five and a half times wider than on the majority of relay-based switches on the market, eliminating the electrical arcing that leads to transfer failure.
  • Hybrids also offer surge protection, field-replaceable fuses, and disaster recovery in case of a shorted load.
  • They’re also energy efficient and thereby produce less heat, resulting in fewer considerations during installation and day-to-day use.
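The timing trade-offs above are easy to sanity-check.  The sketch below uses the worst-case figures quoted in this post; the ~20ms ride-through window and the function are illustrative assumptions, not vendor specifications – actual power-supply hold-up times vary by equipment:

```python
# The article's estimate of how long typical IT power supplies can
# ride through a loss of input power (illustrative assumption).
RIDE_THROUGH_MS = 20

# Worst-case total transfer times (ms) from the figures quoted above.
switches = {
    "ATS":    16 + 5,  # worst-case relay transfer plus switching overhead
    "STS":    6,       # worst-case static transfer
    "Hybrid": 8,       # worst-case hybrid transfer
}

def keeps_load_up(transfer_ms, ride_through_ms=RIDE_THROUGH_MS):
    """A transfer only succeeds if it completes within the ride-through window."""
    return transfer_ms <= ride_through_ms

for name, worst_case in switches.items():
    status = "OK" if keeps_load_up(worst_case) else "RISK: may drop load"
    print(f"{name}: {worst_case} ms worst case -> {status}")
```

On these numbers, the ATS worst case (21ms) overshoots the window while the STS and hybrid stay comfortably inside it – which is exactly the liability described above.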

Thus, it’s clear how the hybrid transfer switch cleverly combines electromechanical relays and silicon-controlled rectifiers to maximize the strengths and minimize the weaknesses of each type of switch – delivering the kind of performance and reliability you should demand in your data center power chain.

Wednesday, 18 June 2014

Are micro data centers the engine to power the IoT (Internet of Things)?

Guest blog by Steven Carlini, Senior Director of Data Center Marketing, Schneider Electric

I have not heard this much buzz in the IT industry since the early 1990s – well, since the original internet was introduced.  Now the Internet of Things (IoT) is used to denote wireless connectivity of devices, systems and services that goes beyond traditional machine-to-machine (M2M) communication.

Today, the Internet is almost wholly dependent on human beings for information.  Nearly all of the roughly 50 petabytes of available data on the Internet was first captured and created by human beings by typing, pressing a video record or picture button, or scanning a bar code.

Data is being generated by everything from Coke machines to the 20-30 billion new devices hitting the market (many of them wearable).  This large quantity of data will need to be processed and analyzed in real time.  The IoT works by connecting these remote devices to a large centralized data center that provides information on status, location, functionality, and so on.  IoT will generate massive amounts of input data from globally distributed sources, so transferring that data to a single large data center thousands of miles away for processing will not be technically or economically viable.

This will have a transformational effect on the data center market and the technologies needed to support it.  Processing large quantities of IoT data in real time will demand that data centers be located closer to the data source to deliver security, capacity and speed (reduced latency).

The best way to do this?  How about dropping large numbers of micro data centers closer to where the people are?  It makes sense to have a greater number of smaller data centers to support the real-time need.  Since there will need to be hundreds of these micro data centers per large city, you don’t want every one to be different.  This calls for standardization and prefabrication of these smaller data centers to ensure reliability, lower cost and enhanced serviceability – all achieved through economies of scale.
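A back-of-envelope calculation shows why proximity matters so much here.  Light in optical fibre travels at roughly two-thirds of its vacuum speed, which puts a hard floor under round-trip latency regardless of how fast the data center itself is (the figures below are illustrative and ignore routing and queuing delays):

```python
# Back-of-envelope propagation delay: light in optical fibre travels
# at roughly two-thirds the speed of light in a vacuum.
SPEED_OF_LIGHT_KM_S = 300_000
FIBRE_FACTOR = 2 / 3

def round_trip_ms(distance_km):
    """Minimum round-trip time over fibre, ignoring routing and queuing."""
    one_way_s = distance_km / (SPEED_OF_LIGHT_KM_S * FIBRE_FACTOR)
    return 2 * one_way_s * 1000

# A data center 3,000 km away adds ~30 ms of round-trip latency before
# any processing happens; a micro data center 30 km away adds ~0.3 ms.
for km in (30, 300, 3000):
    print(f"{km:>5} km -> {round_trip_ms(km):6.1f} ms minimum RTT")
```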

I know there are not that many people wearing Google Glass yet.  But before you know it there could be.  Plus you could also be wearing 5 other IoT devices you never even thought of that will improve or enhance your life in many ways.  But only if the internet is up and your connection is fast and secure… brought to you by the micro data center near you.

Tuesday, 17 June 2014

Belfast Gives Approval to Controversial Development Project

A controversial development project along the River Lagan in Belfast has won approval from officials after a thorough review of how the project would affect the historical value of the local neighbourhood.  Environment Minister Mark H Durkan announced approval of the plans late last week.  Construction on the City Quays 1 project will begin later this year.

Officials were originally unsure of whether or not to allow the £250 million project to move forward.  At the core of their reluctance were concerns over how a 21st century development would affect the historical nature of many of the area's buildings.  The area in question, from the Clarendon wet dock to the site of the former Seacat ferry terminal, is still home to a number of historical structures.

The completed project will include a mix of office buildings, hotels, restaurants, retail shopping and parking.  The most important aspect, according to proponents of the development, is the much-needed high-end office space the project will provide.  Durkan and others insist that the clients the office space will attract will be instrumental in revitalising an underused district that has been largely ignored in recent years.

It is hoped that developers will be very careful with the architecture during the design and build stage, so that it does not contrast too sharply with the historical nature of the district.  Obviously, new designs must be embraced in this new era of technology.  However, designs can be tempered so as not to ruin the overall atmosphere that the area is known for.

As for the infrastructure, certain portions will have to be upgraded in order to accommodate development.  This is likely to include new high-speed telecommunications equipment to accommodate the networking needs of modern business.  Exactly what that will look like is not yet known, but details will be forthcoming as the project moves along.

Balancing the Old and New

Mr Durkan's decision was undoubtedly a difficult one to make.  It is something officials all across Europe deal with on a regular basis.  On the one hand is a desire to preserve the old for its historical and cultural value.  On the other hand, one wants to embrace the new for the simple fact that it will, whether we like it or not, carry us into the future.  It is a balancing act that is not always easy to get right.

Everyone involved obviously hopes the City Quays 1 is a commercial success.  It would go a long way toward improving the business environment in Belfast, especially along the waterfront regions.  Future developments could spur even more companies to set up shop in the Northern Irish capital.

At the same time, no one wants to forget the maritime history of the local area.  It is an important part of who and what Belfast is.  Perhaps the developers can keep that in mind by including maritime influences in their architectural plans.  It would be a wonderful recognition of the past, incorporated into the future.

Thursday, 12 June 2014

Newly Uncovered SSL Vulnerability Could Be 16 Years Old

Not too long ago, the Heartbleed SSL vulnerability was all over the news.  Software developers and data communications experts warned that the vulnerability could be exploited by creative hackers without leaving any trace behind and, although numerous patches were offered to close the vulnerability, we are not out of the woods yet.  A brand new problem has been uncovered by a Japanese researcher – an SSL vulnerability that could be 16 years old.

According to news reports, the newly discovered vulnerability allows attackers to intercept encrypted data by forcing SSL clients to use weak keys that are exposed to malicious nodes.  A competent hacker can easily decrypt any intercepted data.  Worst of all, research suggests the problem affects every release up to and including OpenSSL 0.9.8y – meaning it has been present since the project's earliest days.

The WHIR reports that Ubuntu, Debian, CentOS, Red Hat and FreeBSD have already released security updates to close the loophole.  Other vendors are likely to follow in the coming days.  Experts suggest that companies contact their individual vendors if they are unsure whether a security update has been released or not.  They are also warning data centres, hosting providers and software developers to be on the lookout for problems with OpenSSL.  Like Heartbleed, this new vulnerability can be exploited without leaving a trace.
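As a rough aid, the fixed releases the OpenSSL project announced for this flaw were 0.9.8za, 1.0.0m and 1.0.1h.  The sketch below compares version strings against those thresholds; it is a simplified illustration (the function names are ours), and distribution builds often backport fixes without changing the version string, so treat your vendor's advisory as authoritative:

```python
import re

# Fixed releases announced for this flaw, keyed by branch.
PATCHED = {"0.9.8": "za", "1.0.0": "m", "1.0.1": "h"}

def parse(version):
    """Split an OpenSSL version like '1.0.1g' into (numbers, letter suffix)."""
    m = re.match(r"(\d+)\.(\d+)\.(\d+)([a-z]*)", version)
    if not m:
        raise ValueError(f"unrecognised OpenSSL version: {version}")
    return (int(m.group(1)), int(m.group(2)), int(m.group(3))), m.group(4)

def is_patched(version):
    """Rough check: is this version at or above the fixed release for its
    branch?  Returns None for branches not covered by the advisory."""
    nums, letters = parse(version)
    branch = ".".join(map(str, nums))
    if branch not in PATCHED:
        return None  # unknown branch; cannot say
    # Letter suffixes sort like spreadsheet columns: 'y' < 'z' < 'za'.
    fixed = PATCHED[branch]
    return (len(letters), letters) >= (len(fixed), fixed)
```

For example, `is_patched("1.0.1g")` reports an unpatched build, while `is_patched("0.9.8za")` reports a fixed one.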

The issue was apparently discovered just days after funding was provided by the Core Infrastructure Initiative to hire two core developers for the OpenSSL project.  Organisers say the two positions are necessary in order to maintain the type of code management policies that would prevent these kinds of issues from going undetected for so long.

Historically speaking, the problem with OpenSSL has been a lack of code reviews by experts in the field of TLS/SSL security.  Even when reviews were conducted, they were not given the scrutiny necessary to detect either Heartbleed or the newly discovered vulnerability.

Never Completely Secure

The most compelling part of this story from our perspective is the fact that the SSL vulnerability could be 16 years old.  If the researchers are right, it goes back to the earliest implementation of OpenSSL back in the late 1990s.  It is hard for us to even imagine the amount of damage that may have been done over the years.  It is damage that we may never know the true scope of.

With that in mind, the recent discovery of OpenSSL vulnerabilities is yet another reminder that modern Internet communications will never be completely secure. Those who make their living by hacking legitimate businesses and government entities will continue doing what they do for the foreseeable future.  All we can do to fight back is practice due diligence in security and, where required, go back and fix the mistakes of the past.

We hope that the OpenSSL project and other open source initiatives have learned a valuable lesson here.  Even the open source model requires full-time coders capable of reviewing new code as it is developed. Leaving it to chance is no longer an option any of us can afford.

Thursday, 5 June 2014

Google to Invest More Than $1 Billion in Wireless Internet

By now, you are probably aware that Google has no interest in just remaining the world's top search engine.  It wants to be the world's largest and most comprehensive technology company with operations spanning multiple sectors.  Therefore, it is no surprise that the company plans to invest more than US $1 billion in covering the entire globe with wireless Internet access capability.

One of Google's first forays into wireless Internet came by way of the 2010 funding of Greg Wyler's O3b Networks Ltd. Now Google and O3b are planning to fund a new company known as WorldVu Satellite Ltd, with the intention of purchasing and deploying 180 satellites capable of providing global wireless Internet access.  The project has already deployed four satellites; four more are scheduled for launch next year. Google and O3b hope to have WorldVu Internet service ready to go by 2019.

The satellites take advantage of a lower orbit and the Ku-band spectrum already used by telecom companies.  Their biggest challenge is to ensure their broadcast signals do not interfere with others using the same spectrum from higher orbits.

If that's not enough, Google is also working on two other projects.  It recently purchased Titan Aerospace in order to get its hands on the company's advanced drone technology.  A Titan Internet drone can stay aloft for up to five years using only solar power.  The company hopes the Titan will be the first commercially manufactured drone to be used for wireless Internet.

Lastly, Google is also looking at the concept of using high-altitude balloons combined with technology from Titan Aerospace.  The ambitious project has been dubbed 'Project Loon' by Google.  It believes that the high-altitude balloons could be the future of inexpensive high-speed networking because the technology can be deployed very quickly and inexpensively as compared to satellites.

The Wireless Future

When you stop and think about what Google is trying to accomplish, it makes complete sense given the rate technology is advancing.  Satellites, drones, and high-altitude balloons can deliver Internet access nearly anywhere in the world without the need for expensive infrastructure.  For example, deploying balloons would allow remote areas of the African continent to have Internet access without the need for building an entire fibre-optic network.

As the world gets smaller, companies like Google are finding new ways to reach people who might otherwise be unreachable.  The advancement of wireless technology is making it all possible.  Perhaps we are only a few years away from being completely encircled by a vast net of satellites, solar powered drones and high-altitude balloons – all providing high-speed Internet access to anyone with a capable device.

Obviously, there is also the issue of making sure the people you are trying to reach have the devices to make use of the Internet; nevertheless, we suspect Google will be working on that shortly as well.  Between Chromebooks and Android handsets, Google likely already knows how it is going to connect everyone.

Tuesday, 3 June 2014

Google Rolls Out New 'Forget Me' Service

In response to the recent landmark European Union Court ruling establishing the 'right to be forgotten' amongst Internet users, Google has rolled out a new service intended to fulfil its legal obligations to the court.  The new service ostensibly protects privacy rights by giving Google users the option to have certain kinds of outdated personal data removed from Google's Internet servers.

The court said in its decision that links to outdated and irrelevant data should be immediately deleted by Google upon request of the user.  The Internet giant's new service promises to do that, but there are a number of sceptics wondering just how effective it will be.

In order for someone to exercise his or her right to be forgotten, he or she must:

  • provide links to the data in question
  • name his or her home country
  • explain why the links should be removed
  • provide a photo ID to prevent fraud
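As a sketch only, those four requirements might be modelled as a simple validation step.  The class and field names below are our own invention for illustration, not Google's actual form or API:

```python
from dataclasses import dataclass

# Hypothetical model of the four requirements listed above; the names
# and rules are illustrative assumptions, not Google's implementation.
@dataclass
class RemovalRequest:
    links: list          # URLs the requester wants de-listed
    home_country: str    # determines which legal regime applies
    justification: str   # why the links are outdated or irrelevant
    photo_id: bytes      # scanned ID, to deter fraudulent requests

    def missing_fields(self):
        """Return the names of any requirements not satisfied."""
        missing = []
        if not self.links:
            missing.append("links")
        if not self.home_country:
            missing.append("home_country")
        if not self.justification:
            missing.append("justification")
        if not self.photo_id:
            missing.append("photo_id")
        return missing

req = RemovalRequest(links=["https://example.com/old-article"],
                     home_country="UK",
                     justification="Outdated report",
                     photo_id=b"scanned-id")
assert req.missing_fields() == []  # a complete request passes validation
```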

Google says the requirements have been put in place for two reasons.  First, it needs to make sure that the 'forget me' service is used legally in each country where links to data exist.  For example, the EU court ruling likely would not hold up in the United States against that country's First Amendment protections of free speech.  It is unlikely that Google will delete any information stored on servers physically situated in a US data centre.

Secondly, Google needs to protect itself against fraudulent requests that could lead to further legal complications.  The system is set up in such a way as to force human operators to review requests and delete data.  Those same human operators will have to determine whether the submitted links apply to outdated and irrelevant information or not.  For now, the system would not be a good candidate for automation.  Google employees will review all requests and subsequent communications in order to determine their legitimacy.

Potential Unintended Consequences

Those who are sceptical of the court ruling and Google's new service say the whole idea could have some unintended consequences in the future.  For example, the BBC has already revealed that more than half of the 'forget me' requests Google has thus far received from UK residents have come from convicted criminals.  Some of these individuals are requesting records of their criminal convictions to be deleted.  The possibilities attached to those kinds of requests could be disastrous.

Another potential problem is that Google may very well delete requested links only from local or regional searches, while keeping them intact internally.  The net result is that individuals will not truly be forgotten; they will merely be concealed from public view in their home country.  That, however, does not stop hackers from working around the system and getting the information they want.

This is truly new territory for search engines, data centres, hosting companies and anyone else who deals with customers' personal data online.  However this plays out, it will likely set the stage for future rulings on Internet privacy and security.