Monday, 12 December 2016

Google Approaching 100% Renewable Energy Target

For years, Google has been working on erasing its carbon footprint by powering more and more of its operations with renewable energy. Now it appears that the company is on the verge of reaching its 100% renewable energy target sometime in 2017. Google is already the world's largest corporate buyer of renewable energy; it may soon become the largest corporation able to claim 100% renewable energy across all its operations.

To be clear, reaching the 100% renewable energy goal does not mean all of Google's operations will literally be powered exclusively by green energy. Due to the complexity of power grids and energy production, that is just not possible at this time. What it does mean is that the amount of electricity Google purchases from green sources will be equal to the amount of power it consumes.
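
As a rough illustration of that accounting, here is a minimal sketch using hypothetical figures (not Google's actual methodology): the target is met when the percentage of annual consumption matched by green purchases reaches 100.

```python
# Minimal sketch of the "100% renewable" accounting described above:
# annual green purchases are compared with annual consumption, regardless
# of when or where each unit was generated. Figures are hypothetical.

def renewable_match_percent(purchased_mwh: float, consumed_mwh: float) -> float:
    """Percentage of annual consumption matched by renewable purchases."""
    return 100.0 * purchased_mwh / consumed_mwh

# Hypothetical year: 5.2 million MWh purchased against 5.7 million MWh consumed.
print(f"{renewable_match_percent(5_200_000, 5_700_000):.1f}% matched")  # ~91.2%
```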

This is an important distinction to make given that the technology sector is now responsible for approximately 2% of greenhouse gas emissions, according to the Guardian. Furthermore, the world's data centres are among the single largest consumers of electricity. Aggressively pushing for more renewable energy use in technology is not only necessary but also the right thing to do.

A Good Move for Business

Google established its 100% renewable energy target back in 2012. Google EU energy lead Marc Oman says it took the company five years to reach its goal because negotiating power purchase agreements is so complex. They have the size and resources to wade through the process while smaller companies may struggle to do so. This is why companies like Google and Amazon are leading the way in corporate renewable energy purchases.

Google purchased some 5.7 terawatt-hours (TWh) of green electricity in 2015. By contrast, all the renewable energy produced by solar panels in the UK that same year equalled about 7.6 TWh. That tells you how much electricity Google is consuming across all its data centres and its US-based operations centre with 60,000 employees.

Despite the challenges of reaching their target, Oman says the decision to purchase 100% renewable power is a good business move for Google. He insists they are not merely greenwashing – giving the appearance of being environmentally responsible without actually taking steps to do so – but they are improving their own operations and profitability by concentrating on renewable energy.

Ironically, Google has also said that it would not rule out investing in nuclear power in the future. Such an investment would lead to the inevitable question of whether the company's claim of not greenwashing stacks up with an investment in a power source that does not meet the same green and renewable standards as wind and solar.

Only time will tell what happens to nuclear power. In the meantime, Google is closing in on its 100% renewable target. When the company actually achieves it, you can expect plenty of fanfare and self-promotion. And why not? When that day comes, Google will have achieved something it has been working on for quite some time.



Thursday, 8 December 2016

ICO Offers Holiday Security Tips to Government Agencies

It's not often that a blog post issued by a government agency is a combination of useful information and holiday festiveness. But, thanks to the Information Commissioner's Office (ICO) and enforcement team manager Laura Middleton, government IT workers have a reason to smile during the hectic holiday season. Middleton's recent blog post offering IT security tips to government agencies is enough to put a smile on your face and remind you of the need for extra security at this time of year.

As Middleton so eloquently explains, time constraints and holiday activities often lead to workers cutting corners where security is concerned. Matters are made worse by the fact that IT departments tend to work on skeleton staffs during the festive season. The same is true in the private sector. Data centres and IT departments experience larger-than-normal volumes of people taking time off.

As a public service to all readers, we would like to take the opportunity to present some of Middleton's tips to government agencies, modified to be appropriate to the private sector too. Enjoy!

Keep Staff in the Loop

Middleton's suggestion of keeping staff in the loop is directly related to Freedom of Information Act (FOIA) requests. In the private sector, the same principle applies. Staff who will be required to pick up the slack during the holiday season need to be fully trained and prepared to do so. They all need to be kept up-to-date on any information that will affect how they do their jobs.

Alternate E-Mail Addresses and Responders

In cases where key staff members still need to be reachable while taking time off, consider providing alternate e-mail addresses. Those e-mail addresses can be set up on a temporary basis, then shut down once the affected staff members return to work. Along the same lines, automated responders should be established for all e-mail addresses that will not be monitored during the holiday period. People who try to contact an organisation need to know that their e-mail was received.

Adjust Security Procedures

Shutting down for the festive season means IT staff are lean and offices are empty. Workers required to work even as most of their colleagues are off may choose to work from home rather than coming into an empty building, so they may also need to be reminded of security procedures. If necessary, security may have to be adjusted to account for different ways of working at this time of year.

Have a Contingency Plan in Place

Data centres and IT departments should absolutely have a contingency plan in place just in case an IT emergency arises. There is no room for complacency, even during the festive season. Not having contingency plans in place is asking for real trouble.

Christmas is almost upon us, but that does not change the need for information and data access. You can make your festivities more enjoyable by making sure your networks and data systems keep running securely and uninterrupted.



Tuesday, 22 November 2016

Russia Blocks LinkedIn: A Sign of Things to Come?

It's official. After months of threatening LinkedIn with a block of its services in Russia, the Russian communications regulator has followed through. It all stems from a dispute over whether LinkedIn would comply with Russian laws requiring information pertaining to Russian users be hosted on Russian servers. One wonders if Russia's actions are a sign of things to come.

The push to bring all Russian user data home began in 2014 when the Duma passed the first of several bills aimed at doing just that. Under that first piece of 2014 legislation, Russia established that companies operating on an international scale would have to procure physical servers in Russia – whether contracting with existing data centres or building their own – in order to store data pertaining to Russian users.

The law applies equally to big names such as Facebook and Google and to smaller companies with significantly less name recognition. Furthermore, it applies to Russian companies that routinely send user data outside the country: they must cease doing so unless they can prove a certain level of domestic data security.

A Populist Mentality or Something Else?

One way to look at the Russian legislation is to compare it to the current wave of populism that seems to be sweeping the globe. Citizens growing ever more tired of globalism are demanding that their nations return to a more populist way of doing things that preserves national identity and sovereignty. Populism was a big part of both Brexit and the recent US presidential election. It may grow further with the upcoming French and German elections.

Could Russia's move be as much about populism as security?  While it's true that protecting sensitive data is a lot easier when hosted domestically, it is also not terribly difficult to implement security strategies that are effective in a cross-border situation. So there has to be more to it than just security alone. Populism seems as if it could be a factor.

Still, there is another possibility. Some critics of Russia's move speculate that the regulator wants data stored at home so that certain government agencies can access it more easily. Think NSA and Edward Snowden here.

Where Do We Go from Here?

Now that Russia has taken steps to block LinkedIn, we would expect the regulator to take similar action against other companies as well. The floodgates are open, and water should begin pouring through rather quickly. Whether the approach spreads to other countries remains to be seen.

For its part, LinkedIn says it remains committed to a global mindset. In its official statement, the company said the following:

"Roskomnadzor's action to block LinkedIn denies access to the millions of members we have in Russia and the companies that use LinkedIn to grow their businesses. We remain interested in a meeting with Roskomnadzor to discuss their data localisation request."

Will Russia flinch? Probably not. So now it's up to LinkedIn to make it work.

Friday, 11 November 2016

Thoughts on a More Predictable & Reliable Data Centre Life Cycle

For a variety of reasons, too much attention has been paid to the way data centres and infrastructure have been built, with comparatively little attention given to the cost of operations throughout the life cycle.

As an industry we are 30-35 years old. We’ve grown very rapidly and so far we’ve been very technology driven and that is an extremely good thing. That focus has created the infrastructure that resides in hundreds of thousands of data centres around the world and it’s that infrastructure that gives us all the things that we take for granted, like the internet and applications such as messaging, streaming, two-way video communications and so on.

The next challenge is quite a different one and it’s making the transition from being engineering-focused to being operationally focused. What that really means is that we need to start to think much more carefully about how all the infrastructure we will have is going to be managed. How is it going to be run? How do we know how well we are running our infrastructure and doing our jobs to the best of our abilities?

Part of this, of course, is people related. But there is also a technical dimension, which requires giving thought to what the infrastructure looks and feels like throughout its life cycle, rather than just putting a data centre together from a design-and-build perspective and then moving on to the next project.

We need to think about ourselves as an industry that is maturing and as all industries mature they go through several stages of pain. The initial stages of pain are related to that change, in other words understanding where you are in the process and making the decision to change.
What that means is thinking very carefully about life cycle. How will the infrastructure that is built today perform throughout the phases of its life cycle? At some point in time we will refresh equipment. We will make capital reinvestments. We will make operational investments. We need to think those through throughout the life cycle.

What technology platform do we put in place so that we can manage our infrastructure better? The industry is still in a state of hyper growth so we’re still going to grow the number of facilities, although they may change size and shape. In fact, if the market does change in the way that we expect it to and makes a move towards Edge computing, the whole facility landscape will change dramatically.

To be able to manage the operation of those sites better we need to think about what the life cycle looks like. How do we want to run infrastructure in the best possible way, ideally with the least amount of human intervention? That is where software and technology come in.

One of the things that we can do as an industry is to short-circuit that learning process by not going through the same pains that the other industries have already been through. Let’s look at oil & gas, pharmaceuticals, water and utilities and nuclear power stations. They’ve been through this exercise in varying time periods over the last 10 to 15 years. Let’s figure out what they did to change their operational best practice and use that knowledge.

So we don’t have to learn all those lessons for ourselves; we’ll make our own mistakes and learn our own lessons but let’s at least stand on the shoulders of our ancestors in the context of this particular maturity.

We need to do a better job in two ways. We need to describe our vision of the future and make clear that this vision is not just about products and technology: it is really about the life cycle. Customers no longer buy a product; they buy a system. They buy a solution. They buy an entire data centre. We would expect customers to say: "I'd like you to build me infrastructure that is predictable in terms of reliability and efficiency but is also incredibly easy to manage."

Schneider Electric white paper 195, "Fundamentals of Managing the Data Centre Life Cycle for Owners", describes the five phases of the life cycle, identifies key tasks and pitfalls, and offers practical advice to the owners and management of legacy facilities.

Guest blog by Arun Shenoy, VP, IT & Data Centre Business, Schneider Electric

Monday, 31 October 2016

The Challenges of Going Global in Today’s Digital Economy

There are many popular brands here in the UK that are also well-known across the world, including fast-food restaurant chains, clothing shops, technology companies and many more. However, the vast majority of these companies didn’t start out with a global presence – for most, it happened gradually.

Businesses eyeing international expansion look to obtain the benefits of this growth through conventional uplift, including increased revenue, enhanced exposure and brand recognition, global partnerships and a more diversified product and/or service offering. But there are significant challenges to achieving this goal.

There are a lot of ambitious companies that want to grow; however, many of these companies' data centres are far smaller than those of multi-tenant colocation providers. Thus, it's difficult for them to get the funding to pay for a facility with a high level of resiliency and a strong team to manage the mission-critical environments.

Some have smaller, less resilient server rooms and data centres scattered about, which makes it more difficult to place applications that require more resiliency in the exact location they need. Others may have smaller, less redundant facilities along with perhaps one highly resilient, centralized facility. As a result, most want to move to a colocation provider whose facilities are resilient across all critical systems in multiple geographies and also enable their cloud applications to be as close to their customers as possible.

Most growing international companies like to deploy at a single point of presence to test a specific market, especially if there are legal or regulatory hurdles and political concerns. They'll go to an interconnection hotbed where the colocation may be a bit more expensive than a facility away from a major city or in a secondary city, allowing them to maximize their radius of coverage. For example, in the U.S., there are a lot of European companies leveraging either Northern Virginia or New York City to get access to a plethora of carriers from one location. They can get access to as many different companies as possible rather than going full force into a new market and deploying in multiple geographies.

Looking at Europe, businesses are deploying in London, Amsterdam, Frankfurt and Paris, the hotbeds of interconnection for the region. In Asia, it would be either Singapore for Southern Asia or Hong Kong for Northern Asia. For China, a lot of customers deploy a colocation environment in Hong Kong that is directly tethered to an environment in China where they can deploy virtualized resources. In case anything goes wrong, such as a geo-political event, they can pull the virtualized environment back to Hong Kong.

Companies that want to move some of their activity outside their current boundaries might not take into account the cost for employees on the ground in a new market or for achieving connectivity between their domestic and international deployment. If they work with a reliable colocation provider with a global footprint, however, those data centre professionals can do all the racking and stacking along with managed services such as load balancing, patch management, server reboots and more. Most companies have a multitude of local colocation providers from which they can choose, but they should find a reliable one that can get them the most secure and effective point-to-point connections between the data centre and their corporate locations.

Another challenge for many businesses is their lack of familiarity with local government regulations in each country. Companies serving customers in certain international markets deal with data sovereignty issues and regional or regulatory compliance. For instance, if they are involved in any financial activity in Singapore, they have to make sure the colocation facility in which they are located is TBRA certified.

It’s very difficult to expand into new global markets for businesses that choose to build their own data centres, because it’s nearly impossible to move into any major cities that are regulated and unionized without having deep connections. Most enterprises that are looking at an international point of presence will not consider building, but instead, will look at tethering their proprietary data centres to their colocated international environment.

Companies have to be conservative and smart when they plan and execute on their global expansion. Small, incremental steps are key to success – maybe it’s just a cabinet or two so they can put some infrastructure in-region to better support business in that territory, whether it’s for internal customer and sales support systems, Web presence, etc. They’re often very risk-averse because expanding internationally for the first time is no small task. In this scenario, colocation allows them to use a couple of cabinets at first – likely to be in a virtualized fashion to be able to easily migrate out if needed – before they start to deploy physical servers.

Whatever route a company takes, they need to apply strong controls, rigid progress reviews and several checkpoints so they can overcome challenges and stay on course.

Guest blog by Steve Weiner, Senior Lead Product Manager, CenturyLink

Wednesday, 26 October 2016

DDoS Attack on the US: We Still Haven't Figured It Out

Every day, the world's future cyber security specialists attend classes wherein they learn the latest strategies for preventing network breaches. They learn from their instructors, practice defensive techniques on laboratory computers and take tests to earn their coveted certifications. Meanwhile, those professionals already on the front lines wage a valiant battle against hackers and cybercriminals that may be looking to wreak havoc on global networks. Yet, for all this cyber warfare and the significant advancements that it has led to, we still cannot figure out how to pro-actively stop a distributed denial of service (DDoS) attack.

This past weekend, the US East Coast discovered first-hand how debilitating a DDoS attack can be. Just after 7am (EDT), several big-name websites from companies located in this region of the States began experiencing outages. It wasn't long before security experts discovered a devastating DDoS attack was under way. The attack was levelled against internet traffic management specialist Dyn Inc, a New Hampshire-based company that provides domain name services to companies like Twitter and PayPal.

Dyn acknowledged fairly early in the day that service was being interrupted for a long list of sites that included CNN, Spotify, the New York Times, Reddit and the aforementioned Twitter. Service was eventually restored by mid-morning, but it went down again around noon. Dyn was forced to acknowledge that a second DDoS attack was under way, this one affecting the East Coast and moving west at the same time. It wasn't until later in the afternoon that Dyn was able to stop the attacks altogether.

Success Is in the Simplicity

A long-standing rule of technology is that the more sophisticated something is, the easier it is to break. Common sense dictates the opposite is also true. Therein lies the key to the success of the typical DDoS attack.

A denial of service (DoS) attack is very simple. You set up a number of computers to bombard a server with ongoing and repeated requests for service in order to overwhelm the system so that it cannot process legitimate requests. It's a lot like a flash mob: a large group of people all assemble in front of a shop simultaneously, thereby blocking access to legitimate patrons.

A DDoS attack is essentially a DoS attack taken to the next level. It uses hundreds, if not thousands, of unique IP addresses, often disguised through a strategy known as IP address spoofing. With thousands of IP addresses to deal with, security experts have a hard time shutting down a DDoS attack quickly.
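
To see why sheer distribution is the problem, here is a minimal sketch (not a production defence, and not how Dyn mitigates attacks) of a naive per-IP rate limiter. It copes with one flooding address but is easily sidestepped when traffic arrives from thousands of distinct sources:

```python
# Minimal sketch of a naive per-IP rate limiter (illustrative only).
import time
from collections import defaultdict

WINDOW_SECONDS = 10            # length of the sliding window
MAX_REQUESTS_PER_WINDOW = 100  # per-IP budget inside the window

_request_log = defaultdict(list)  # source IP -> timestamps of recent requests

def allow_request(source_ip, now=None):
    """Return True if this request stays within the per-IP budget."""
    now = time.time() if now is None else now
    recent = [t for t in _request_log[source_ip] if now - t < WINDOW_SECONDS]
    recent.append(now)
    _request_log[source_ip] = recent
    return len(recent) <= MAX_REQUESTS_PER_WINDOW

# One address firing 1,000 requests gets throttled after the first 100, but
# 1,000 different addresses firing 100 requests each all sail through --
# the same total load, which is exactly the DDoS problem described above.
```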

This simple strategy is not designed to steal information. It is intended to disrupt service so that people cannot access targeted websites. It is a very simple strategy for disruption that proves very effective when carried out strategically. It is so simple that we still don't have an effective way of dealing with it. And so, while we work to contain the world's cybersecurity threats, the DDoS beast remains elusive.



Tuesday, 18 October 2016

Security Breaches to Cost More Beginning in 2018

The last thing any company wants is to find itself victimised by hackers. Between the poor publicity and the fines that may be imposed, suffering a security breach is bad for business. And it's about to get worse. Once new EU fines are implemented in 2018, being victimised by a security breach could cost a company millions.

The EU's General Data Protection Regulation is set to go into effect in 2018. The regulation not only increases fines for security failures but also groups companies according to their size and revenues. Some of the largest companies in Europe could face fines of up to £18 million or 4% of global turnover. Computer Weekly reports that revenue from the fines could represent a 90-fold increase if security breaches in 2018 and beyond continue at the level reported in 2015.

When looked at specifically through the lens of large UK corporations, Computer Weekly says the annual fines could increase some 130-fold. The fines collected among small and medium-sized businesses could rise as many as 57 times. All of this adds up to an awful lot of money.

Putting Companies at Risk

The EU regulator has established a two-tiered system that allows it to levy less severe fines on companies suffering security breaches considered less serious. Even so, a fine equal to 2% of global revenue could be devastating to a large company. This leads to the obvious question of whether the new regulation puts companies at risk. It may do just that.
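
As a rough illustration of how such a two-tier ceiling works, here is a minimal sketch; the £18 million upper-tier figure comes from the article, while the lower-tier fixed amount and the "greater of the two caps" rule are assumptions made for the purpose of the example:

```python
# Illustrative sketch of a two-tier fine ceiling (not legal guidance).
# Upper tier: up to £18m or 4% of global turnover, per the article.
# Lower tier: assumed here at half those amounts (2% of turnover).

def maximum_fine(global_turnover_gbp, serious_breach):
    """Return the maximum possible fine, taken as the greater of the two caps."""
    if serious_breach:
        return max(18_000_000, 0.04 * global_turnover_gbp)
    return max(9_000_000, 0.02 * global_turnover_gbp)

# A company with £2bn global turnover: up to £80m for a serious breach,
# up to £40m for a less serious one.
print(f"£{maximum_fine(2_000_000_000, True):,.0f}")   # £80,000,000
print(f"£{maximum_fine(2_000_000_000, False):,.0f}")  # £40,000,000
```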

Jeremy King of the Payment Card Industry Security Standards Council told Computer Weekly that the new legislation is serious business. King wonders whether some businesses will actually be able to pay the fines assessed against them.

"The new EU legislation will be an absolute game-changer for both large organisations and SMEs as the regulator will be able to impose a stratospheric rise in penalties for security breaches," King said, "and it remains to be seen whether businesses facing these fines will be able to shoulder the costs."

The regulator's position is easy to understand in light of the fact that as many as 90% of large corporations and 74% of small- and medium-sized businesses were affected by a security breach in 2015. Regulators are attempting to force companies to take more drastic action to prevent security breaches by making it financially uncomfortable not to do so… but is the regulator going too far?

Only time will tell whether the increased fines will accomplish what the EU wants them to. It is quite possible that some companies caught off-guard in the early stages will suffer financially for it, but we can hope that companies will take this seriously enough to beef up security efforts before the new fines are imposed. That would be ideal: Europe's computer networks would be safer, and businesses would not have to suffer the losses related to breaches.

Thursday, 13 October 2016

2015 French TV Attack Highlights Network Vulnerability

Do you remember the April 2015 cyber-attack against France's TV5Monde? If so, you may remember the immediate speculation that the attack was linked to the Islamic State and an attempt to further rattle the nation, just months after the Charlie Hebdo attack. Well, investigators have learned a lot since then.

First, the attack was not the work of the so-called Cyber Caliphate as first reported. Investigators now have strong reason to believe the attackers were Russian hackers who used malicious software to destroy the broadcast systems at the TV5Monde network.

More importantly, we have learned just how vulnerable networks are to well-designed software. The attack on the French network was not particularly sophisticated, but it moved very quickly and effectively, once it got started. According to the BBC, TV5Monde was within hours of a complete collapse when one of the network's engineers located the computer where the attack originated and removed it from the system.

A Combination of Organisation and Speed

TV5Monde had begun broadcasting hours earlier when, for no apparent reason, all 12 channels went black. It wasn't long before network officials figured out they were experiencing a serious cyber-attack. TV5 director-general Yves Bigot credits his engineering staff for identifying the problem and intervening before it was too late.

The attack was successful because it was targeted and because it combined organisation and speed. Investigators discovered that the hackers carried out sophisticated recon against the TV network to figure out the station’s system before launching the attack. They then created software that attacked the network's hardware in a sequential manner, corrupting the systems responsible for transmitting television signals.

Interestingly enough, the hackers did not use a single point of entry. In fact, the BBC says there were seven points of entry. Even more interesting is the fact that not all of those points were in France or even a direct part of the TV5Monde network. One was a Dutch company that sold TV5 some of their studio cameras.

A Potential Collapse Was Real

The attack on TV5 should be a reminder of the vulnerability of computer networks. Engineers could have completely shut down the system, wiped it clean and started over from scratch had it been necessary, but by that time the damage would have been done. As Mr Bigot explained to the BBC, any prolonged outage would likely have resulted in the cancelling of broadcast contracts en masse, leading to the collapse of the network under the financial strain.

In terms of cyber-attacks, this is where the real problem lies. A computer system can be repaired just like a building attacked in conventional warfare can be rebuilt. But any harm caused by a cyber-attack is capable of producing significant financial stress that could lead to a total collapse. 

Disaster was averted in France last year.  Next time, things might not go so well. Thus we need to be ever more diligent about protecting our networks at all costs.



Tuesday, 4 October 2016

Scientists Want More Research into Internet Use and Water

When scientists at Imperial College London claimed that downloading a single gigabyte of data could waste up to 200 litres of water, their claims generated one of two reactions. Those who follow such things were visibly shocked while those who do not went on with their lives completely unaffected. Little has changed a year later; not that anything should have.

According to the BBC, the Imperial College London researchers calculated that the 200 litres of water per gigabyte of data is probably used in keeping data centres cool and actually generating the power needed to operate them, but 'probably' is the operative word here. The researchers could not conclusively say how water was being wasted, nor did they provide any concrete evidence that their estimate of 200 litres per gigabyte was accurate.

Bora Ristic, one of the researchers involved in the project, told the BBC that there was quite a bit of uncertainty in the figures. He said water usage could be ‘as low as 1 litre per gigabyte’ rather than 200. What is important, Ristic said, is that their report highlighted the fact that water consumption in relation to internet usage has not been well researched.
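
A quick back-of-the-envelope calculation, using only the figures quoted above, shows just how wide that uncertainty is:

```python
# Back-of-the-envelope range using only the figures quoted above:
# somewhere between 1 and 200 litres of water per gigabyte of data.

LOW_LITRES_PER_GB, HIGH_LITRES_PER_GB = 1, 200

def water_use_range(gigabytes):
    """Return the (low, high) water-use estimate in litres for a download."""
    return gigabytes * LOW_LITRES_PER_GB, gigabytes * HIGH_LITRES_PER_GB

# A hypothetical 4.5 GB film download: anywhere from 4.5 to 900 litres, a
# 200-fold spread -- which is precisely the researchers' point about how
# poorly understood the relationship between internet use and water is.
print(water_use_range(4.5))  # (4.5, 900)
```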

A Crisis Where None Exists?

If there is a country in the ever-shrinking world that is cognisant of its responsibility toward the environment, it is the UK. We have been leaders in environmental issues since the Thatcher days, having spearheaded research into global warming and renewable energy. We know a thing or two about protecting the environment, both now and in the future. But are the concerns over water consumption and internet use legitimate? Are researchers creating a crisis where none exists?

Water used to cool data centres is not wasted as researchers contend. Some of that water can be recycled and sent back through the system for continued cooling; what is not recycled gets sent out to be treated before being released. As far as the water used to generate power, it is not wasted either. It evaporates as steam to become part of the natural water cycle.

The earth's water cycle is key to understanding this whole issue. The reality is that water consumption does not equal waste. Water that is consumed by living organisms is eventually transferred back to the atmosphere through respiration and perspiration, once again taking its place in the water cycle. Water that is not consumed (e.g. for data centre cooling) is also returned to the water cycle when released following treatment.

It is true that land masses can experience drought from insufficient rainfall, but the total volume of water on the planet is never diminished. Unless a particular area is suffering a drought, the issue of using water to cool data centres and generate power to run those data centres is really a non-issue after all. Let's research it if scientists want the data, but let us not put out alarming statistics that are likely invalid and irrelevant.

Tuesday, 27 September 2016

FCA IT Outage a Bit of Irony

A bit of irony struck this past weekend when the Financial Conduct Authority (FCA) was forced to announce late last Friday that an incident at one of its outsourced data centres had caused a widespread outage affecting a number of the watchdog's IT services. The FCA described the outage as 'major' even as it worked with its vendor to restore inaccessible services.

The irony of the outage is related to comments made earlier in the week by FCA specialist supervision team director Nausicaa Delfas, who berated private sector companies for not having appropriate systems in place to prevent cyber-attacks and network failures. At a cyber security conference last Wednesday, Delfas made it clear that the FCA wants the companies it regulates to do better.

"Most attacks you have read about were caused by basic failings – you can trace the majority back to: poor perimeter defences, un-patched, or end-of-life systems, or just a plain lack of security awareness within an organisation," Delfas said. "So we strongly encourage firms to evolve and instil within them a holistic 'security culture' – covering not just technology, but people and processes too."

Confirmed Hardware Failure

In the FCA's defence, the incident was not the result of any sort of cyber-attack or internal systems shortcoming. It was a direct consequence of a hardware failure as confirmed by Fujitsu, the vendor responsible for the data centre in question. Nonetheless, having not restored all systems several days into the incident demonstrates to the FCA just how difficult it can be to maintain networks when things like this happen.

The FCA has long argued that the companies it regulates should be prepared for any sort of incident that could knock out network access for any length of time. To show just how serious they are, regulators fined the Royal Bank of Scotland a record £56 million in 2014 after an IT failure left millions of customers without access to their accounts. That has some critics of the agency ready to speak out against the regulator.

ACI Worldwide's Paul Thomalla is among those executives calling out the City watchdog. He told the Financial Times that the watchdog has to be held to the same standards it applies to the financial sector. He said that if the FCA expects the institutions it regulates to maintain high standards of security and network reliability, it needs to implement the same standards for itself.

Only time will tell how devastating the weekend incident really turns out to be and if there is any long-term fallout at all. The lesson to be learned is that there is no such thing as a 100% safe and reliable network. Things can happen even with the best of intentions and rock solid contingency plans in place. Our job is to do the best we can to mitigate the adverse effects of those incidents. When they happen, we just have to do all we can to get things fixed as quickly as possible.




Thursday, 22 September 2016

The National GCHQ Firewall: Will It Work?

If you haven't heard the news yet, the Government Communications Headquarters (GCHQ) is taking aggressive action against cyber criminals with the establishment of a new division known as the National Cyber Security Centre (NCSC). The centre, which is slated to open sometime in October (2016), will be the first such government agency dedicated solely to defending the UK against cyber security threats. One of their first missions will be to build a 'national firewall' that would protect internet users from the most common cyber threats.

Thus far, GCHQ has not detailed how the national firewall will work, but they have said that the NCSC will not actually be responsible for filtering out suspect sites and emails. Instead, the primary mission of the firewall is to provide a national domain name system that internet providers and others can use to block access to computers via IP address.

The question on everybody's mind should be, will it work?

As explained by the Telegraph on its website, there are quite a few ISPs with IP blocking policies already in place. They have enjoyed some limited success in preventing malware attacks, phishing attacks and the like. They have also prevented British internet users from accessing sites with content that violates copyright protections.

Some Success Already

The Telegraph says the government has also enjoyed some measure of success with a tool capable of identifying and intercepting malicious emails that appear to come from government agencies. It works by identifying any emails purporting to come from government sources and checking the origin IP addresses against an existing database of known government addresses. Any email with an IP address that does not match is automatically blocked.
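
The check described above amounts to a simple allowlist lookup on the sending server's address. Here is a minimal sketch under that assumption, with hypothetical domains and address ranges; it is not GCHQ's actual tooling:

```python
# Minimal sketch of an origin-IP allowlist check (hypothetical addresses
# and domain, not GCHQ's implementation). Mail claiming to come from a
# government domain is only accepted if the connecting server's IP falls
# inside a known-good range.
import ipaddress

KNOWN_GOVERNMENT_NETWORKS = [          # hypothetical example ranges
    ipaddress.ip_network("203.0.113.0/24"),
    ipaddress.ip_network("198.51.100.0/24"),
]

def accept_message(claimed_sender_domain, origin_ip):
    """Block mail claiming a .gov.uk sender unless its origin IP is known."""
    if not claimed_sender_domain.endswith(".gov.uk"):
        return True  # not claiming to be government mail; out of scope here
    ip = ipaddress.ip_address(origin_ip)
    return any(ip in network for network in KNOWN_GOVERNMENT_NETWORKS)

print(accept_message("refunds.gov.uk", "203.0.113.25"))  # True  (known range)
print(accept_message("refunds.gov.uk", "192.0.2.77"))    # False (blocked)
```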

GCHQ has developed the tool to the point where they have been testing its effectiveness on a state tax refund site that was sending out as many as 58,000 emails per day. According to NCSC chief executive Ciaran Martin, that site is no longer sending those emails.

The fact that the government has seen modest success in large-scale email blocking seems to suggest that their plans for a national firewall could work. But there are still plenty of hurdles to overcome. Ultimately, the success or failure of the system is going to rely on how well government and private entities work together.

Every Tool Can Help

Knowing what we know about cyber security and network threats, we can say with a fair degree of confidence that a national firewall will not be a perfect solution all by itself. No single cyber security tool can protect us against every single threat. But every tool that does what it is designed to do adds to a much larger arsenal that is better able to defend against cyberattacks with every passing day.

We look forward to seeing what the GCHQ comes up with for a national firewall. Hopefully, their efforts will allow private organisations to take some much-needed strides in addressing cyber threats.

Tuesday, 13 September 2016

ING Data Centre Crash Caused by Loud Noise

ING Bank found itself apologising to customers this week after a data centre failure in Bucharest, Romania left them without most online services over the weekend. The good news in an otherwise disturbing situation is that, because the failure occurred at the weekend, the outage led mostly to inconvenience. Had it happened during the week, the results could have been much worse.

Numerous news reports say that ING Romania was running a standard fire suppression test at the Bucharest facility on 10th September. The facility's fire suppression system uses an inert gas that is designed to be harmless to equipment. In this case, the gas itself did not cause the problem. The catastrophic shut-down of the facility was a result of a loud noise emitted when the high-pressure gas was released.

One news source says that the gas was under a pressure that was too high for the system. When it was released, it emitted a loud booming noise that sent a shock wave throughout the facility. That shock wave created vibrations strong enough to damage hard drives and servers within the data centre.

Service Down for 10 Hours

Damage to the equipment was severe enough that the centre was down for about 10 hours. During that time, customers were unable to conduct online transactions, communicate with the bank online or conduct transactions at ATMs around Bucharest. Some transactions already in progress when the outage occurred were simply lost. The bank's website was also down for a time.

Bank officials say they brought in an extra 70 staff members to help recover the system and restore data. Although describing the incident as 'exceptional' and 'unprecedented', ING Bank maintains that the service interruptions were merely a matter of inconvenience. The bank has not said whether all systems are up and running yet; however, at the time of writing, it does not appear that any critical data was lost or compromised.

Unfortunate but Important

ING Bank's misfortunes aside, the fire suppression test and subsequent shut-down are important events for the data centre community. Why? Because it has long been assumed that loud noises creating substantial shock waves could damage data centre equipment, but no one has known for sure because it has never happened before. Now that it has, we have a working example we can use to address what we now know is a possibility.

In the months ahead, we can expect testing and research designed to figure out what happened in Bucharest over the weekend. The more we learn about the incident, the better able we will be to protect data centres from similar events in the future. This is good for the data centre community despite the fact that the outage inconvenienced ING Romania customers.

Making the best use of the information collected on the outage will, of course, depend on ING Bank being willing to be forthcoming with their findings. Hopefully they will, for the good of the entire data centre industry.




Thursday, 11 August 2016

Delta Airlines Data Centre Fails – The Reason Why Is Still a Mystery

The second-largest airline carrier in the US is still struggling to regain normal operations after a data centre failure that grounded hundreds of flights and stranded thousands of passengers worldwide. Somewhere around 2.30am EDT on Monday, August 8, Delta staff in Atlanta were unable to access computer networks for some unknown reason. Operations around the country and, eventually, the world soon suffered the same fate.

The US-based company, which is part of the SkyTeam alliance that also includes Air France-KLM, has not offered any concrete answers about what caused the problem. But, in the days following the outage, it has struggled to get its computer systems back online and all the data synced across its worldwide network. The airline says it is doing everything it can to return service to normal.

A Power Switch Problem

Initial reports suggest that Delta technicians were running a routine test of backup power procedures when a piece of equipment was inadvertently tripped. That failure ostensibly locked airline computers out of access to both Georgia Power and their own reserve backup generators. With no power, the system shut down.

However, another rumour has emerged suggesting a fire might have taken out the airline's main data centre in Atlanta. Some sources say that as technicians were attempting to switch computer networks to a backup generator, a fire broke out, destroying two generators in the process. In either case, Delta's computer networks went down due to a data centre failure related to a lack of power.

As of Wednesday, August 10th 2016, things were still not back to normal. A few thousand of Delta's flights were back on schedule, but airport information boards were not necessarily correct. Information on the company's website pertaining to arrivals and departures could also not be entirely trusted. Delta Airlines continues to investigate what went wrong.

Computer Networks Vulnerable Everywhere

Delta Airlines is sure to take a PR beating as a result of its data centre failure. And although there will be new strategies put in place to prevent future outages, the company's networks were already operating up to standards as far as we know. Their data centre had backup power in place for purposes of redundancy, just as would be expected, but the perfect storm occurred in just the right way to cause a big problem.

The lesson to be learned here is that no network is invulnerable. No matter how much technology we put in place, no matter how much redundancy we include, computer networks will always be at risk of failure. It is something we have to learn to live with. That does not help the thousands of Delta passengers stranded around the world, but it is the reality in which we live. Computer networks are not perfect.

Hopefully, Delta will be more forthcoming in the future about what caused the failure. Its willingness to share information will help others avoid similar problems.

Tuesday, 19 July 2016

Smart Cities and the SSD-Driven Data Centre

We have smartphones, smart cars, and smart homes filled with dozens of smart devices. So, are you now ready for "smart cities"? They may once have been a fanciful notion reserved for futurists and dreamers, but smart cities are now here. They are beginning to emerge thanks to billions of devices across the globe able to communicate via the internet. And yes, data centres are playing a big part.

The data centre of the future is likely to be the bedrock of the smart city for obvious reasons. But, before we get to discussing what that might look like, let us first consider where we are right now. ITProPortal's Laurence James recently wrote a very timely blog post in which he cited data suggesting that upwards of 1.6 billion devices will be connected to smart city infrastructure before 2016 is out. He mentions things such as smart transport, traffic management systems via connected cars and even the local rubbish bin that is capable of sending a message that it needs to be emptied.

James used the 2012 Olympics in London as an example of how smart cities are already working. Officials at Transport for London (TfL) had to put a system in place to manage traffic that could support up to 18 million journeys per day. The system they settled on used data analytics to predict traffic patterns so that trains, buses and other options could move through London as efficiently as possible.

Data Centres at the Heart of Smart

At the heart of smart is the data centre. But here's the thing: in order to make smart cities a reality, we are going to need a lot more local data centres that are capable of processing tremendous volumes of data extremely quickly. Relying on regional data centres will simply not be enough.

This presents a problem, especially in an era when we are trying to reduce our carbon footprint while consuming less energy. As we already know, data centres are hungry consumers of power. We need to find a way to reduce power consumption if we are going to build enough data centres to support smart cities without completely obliterating our energy goals. The solution appears to be the solid-state drive (SSD), or 'flash' drive.

In his post, James explains that experts predict mechanical hard drives will be capable of supporting 40 TB of data by 2020. As tremendous as that number is, it is insufficient. The good news is that SSDs should be able to support 128 TB at 10% of the power and 6% of the volume required by mechanical hard drives. In other words, SSDs can handle more data at faster speeds, at a lower cost, and with a smaller footprint requirement.
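
Taking the projections James cites at face value, a quick calculation shows what those percentages mean once you normalise for capacity. The figures below are only the ones quoted above, not measured values:

```python
# Quick comparison using only the projections quoted above: a mechanical
# drive at 40 TB versus an SSD at 128 TB, with the SSD assumed to draw 10%
# of the power and occupy 6% of the volume of the mechanical drive.

HDD_TB, SSD_TB = 40, 128
SSD_POWER_FRACTION, SSD_VOLUME_FRACTION = 0.10, 0.06

capacity_ratio = SSD_TB / HDD_TB                            # 3.2x capacity
power_per_tb_ratio = SSD_POWER_FRACTION / capacity_ratio    # ~3% per TB
volume_per_tb_ratio = SSD_VOLUME_FRACTION / capacity_ratio  # ~2% per TB

print(f"Capacity per drive: {capacity_ratio:.1f}x")
print(f"Power per terabyte: ~{power_per_tb_ratio:.0%} of a mechanical drive")
print(f"Volume per terabyte: ~{volume_per_tb_ratio:.0%} of a mechanical drive")
```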

Smart cities are here now. In the future, they will be driven by local data centres that rely on SSDs to handle the massive data flow. Who knew the technology behind the flash drive in your pocket would be so integral to powering the future?


Wednesday, 13 July 2016

UK Solar Power Reaches New Milestone

Most of us are fully aware that the UK is a world leader in clean energy, particularly in the area of solar, so it should be no surprise that a new analysis from the Solar Trade Association (STA) reveals producers have hit the latest milestone in solar energy production, generating nearly 24% of total energy demand during the afternoon of 5 June 2016.

According to the STA, the UK is now home to almost 12 GW of solar power capacity that, at peak generation, can produce up to 25% of the nation's total energy needs. The STA is firmly behind solar as the best way to provide clean energy and reduce dependence on fossil fuels. Chief executive Paul Barwell was quoted by E&T magazine as saying, "This is what the country and the world needs to decarbonise the energy sector at the lowest price to the consumer."

Solar Farms and Rooftop Installations

The popularity of solar power in the UK is evident in the rapid uptake of both solar farms and rooftop installations. According to E&T magazine, one particular rooftop installation in Telford consists of 14,000 solar panels on top of a commercial building operated by Lyreco. The magazine goes on to say that all of the clean energy sources currently in use in the UK combine to provide more than 25% of the UK's total power generation.

Across the UK, more and more homes are being fitted with solar panels for two purposes. Consumers are utilising PV systems to generate direct electrical current and solar thermal systems for hot water and space heat. Commercial and industrial enterprises are also embracing solar for space heat, process heat and hot water.

The STA says that all the solar industry needs at this point is one more "push from the government" to reach its goal of being subsidy-free sometime early in the next decade. The government seems like it is on board, for now.

Solar for Data Centre Requirements

We are thrilled that solar and other clean energy sources are doing so well, and having UK solar capacity reach this most recent milestone is certainly encouraging. It leads us to wonder whether we will ever see a viable solar application for powering data centres. Finding some sort of renewable solution is critical given that data centres are among the most prolific power consumers in the world. If we can find a way to get data centres off fossil fuels, doing so would have a tremendous impact on meeting clean energy goals.

Solar isn't adequate for data centre needs in its present form. But we can envision a day when highly efficient solar thermal systems with sufficient storage capacity could meet the power requirements of a data centre operating 24/7. A development like that would certainly be exciting, and one that all of us in the data centre industry would be absolutely thrilled to see.

Thursday, 30 June 2016

Lack of Security Taints EU Re-Vote Petition

The Brexit votes had barely been tallied and made official when opponents of the outcome established an online petition calling for a second vote. That much was expected in the days and weeks leading up to the vote, given that polling showed things to be extremely close. What was not expected was an almost ridiculous lack of security that has allowed the petition to be tainted by auto bots.

According to the BBC, the House of Commons petitions committee has said it has already removed 77,000 invalid signatures coming from people allegedly living in Antarctica, North Korea, the Sandwich Islands and even the Vatican. Although officials say that most of the remaining signatures now appear to be from people living in the UK, there is no way to know how many of those signatures were added legitimately as opposed to being placed on the petition through auto bots.

An Appalling Lack of Security

The re-vote petition is already the most active petition ever placed on the Parliamentary website. The BBC says it currently has 3.6 million signatures. However, one computer security expert told the BBC that any site like the House of Commons petition site needs to have security measures in place to defeat intrusions. We clearly agree.

What's most appalling about the lack of security in this case is the fact that stopping auto bots is relatively simple. It's not as if we are talking about encrypted malware or tough-to-detect rootkits that go to the heart of computer networking systems. Auto bots are nothing more than computer scripts that log onto a website and submit or retrieve data without any human intervention. They can be stopped with something as simple as a captcha script.
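
For illustration, here is a minimal sketch of two such simple checks, with hypothetical field names and thresholds; a CAPTCHA service would sit alongside checks like these rather than replace them:

```python
# Minimal sketch of two basic anti-bot checks (hypothetical form fields and
# thresholds): a hidden "honeypot" field that humans never see but scripts
# tend to fill in, and a minimum plausible time between the form being
# served and being submitted.
import time

MIN_SECONDS_TO_COMPLETE_FORM = 3.0

def looks_like_a_bot(form_data, form_served_at):
    """Return True if the submission trips either basic check."""
    if form_data.get("website"):          # hidden honeypot field was filled in
        return True
    if time.time() - form_served_at < MIN_SECONDS_TO_COMPLETE_FORM:
        return True                       # submitted implausibly fast
    return False

# A script that fills every field and posts instantly trips both checks.
served = time.time()
print(looks_like_a_bot({"name": "A N Other", "website": "spam.example"}, served))  # True
```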

Because whoever designed the petition site was so careless, there is no way of knowing how many of the signatures on the petition calling for a second EU vote are legitimate. But it goes beyond just this petition. How many other petitions have been affected by the site's lack of security?

The BBC references the group that runs the 4chan message board as one of the primary attackers of the re-vote petition. According to its report, one of the message board's members claims to have signed the petition some 33,000 times simply by running an auto bot.

Things Must Change Now

For the record, the House of Commons petitions committee says it will continue to monitor the situation for any additional evidence of auto bot activity. Meanwhile, Prime Minister David Cameron has said there would be no second vote, regardless of the petition and its signatures.

That's all well and good, but something must be done to improve the security of the petition site now. If we cannot trust something as simple as online petitions as being secure, we are left to wonder how many other government websites are equally vulnerable. Shame on the House of Commons and their web developer for such a stunning lack of security.



Tuesday, 14 June 2016

Apple Signs Deal to Make Energy from Methane Gas

We all know Apple as a maker of computers, smartphones, tablets and wearables. Now it appears that the California company is getting into the renewable energy business thanks to a deal signed with a US landfill to utilise methane gas. This could be a precursor to other similar projects around the country…

According to various news sources, Apple has reached an agreement with Catawba County in North Carolina, one of the southern states along the US East Coast. Catawba County will lease 3.7 acres to Apple for 16 years. At the conclusion of the lease, Apple will have an opportunity to vacate the premises or sign for an additional five years.

Apple has not detailed what it plans to do with the renewable energy that it creates at the Blackburn Resource Recovery Facility. It could be used to generate green electricity or be sold as-is to customers who need gas fuel.

How It Works

Landfills in the US typically deal with the methane produced via waste decomposition by simply venting it into the air. But a growing number of operators are now installing energy plants to trap the methane gas, process it and then use it for other purposes. This is exactly what Apple will be doing.

Catawba County plans to harness 40% of the methane produced by the landfill and sell it to Quadrogen Power Systems for treatment and processing. They will then pass the processed gas along to Apple. The remaining 60% of the methane will be used by the county to supply some of its energy requirements.

Speculation abounds that Apple will use the methane gas to produce electricity for a data centre it also operates in the county, but that remains to be seen. Such a use would make complete sense given Apple's commitment to eventually powering as many of its facilities as possible with renewable sources. However, the amount of methane harvested from the county landfill may not amount to much in the grand scheme of things; it may simply be too little in the end.

Another Piece of the Puzzle

How much power Apple actually generates from the new deal is less important than the fact that its plans are yet another piece of the puzzle. As the world's data centre needs expand, the amount of energy consumed by bigger and more robust facilities will only increase. We have to find ways to power the data centres of the future without relying on fossil fuels. That may mean a combination of renewable sources that include sun, wind, water and biomass.

Harnessing methane is a particularly exciting prospect because we are already producing the gas anyway. Just by burying our rubbish and letting it decompose, we are creating a gas that can be harnessed for multiple purposes. Indeed, methane is one of the greenest biomass energy sources available to us. Apple's decision just helps it take one step closer to eventually using only renewables.

Monday, 6 June 2016

Are Data Centres Able to Operate in Tropical Environments?

Given that a large percentage of the power used to run the average data centre goes directly to cooling, builders and designers do their best to locate new facilities in locales with cooler climates and lower humidity. The idea is to save money by reducing the amount of power used for temperature and humidity control. Even so, the curious among us want to know whether a data centre could still operate at peak performance under conditions roughly twice the current norms.

We are about to find out thanks to a test to get under way shortly in Singapore. News reports say the world's first tropical data centre is now in the planning stages and involves a number of big-name partners including Dell, Hewlett-Packard Enterprise, Intel, ERS, Fujitsu and others. The consortium will set up a controlled test environment in an existing Keppel data centre for the test.

Current standards dictate that data centres not be allowed to reach temperatures in excess of 20°C, with a relative humidity of no more than 50-60%. Those numbers will be nearly doubled for the test: the test centre will be allowed to reach 38°C, with relative humidity upwards of 90%.

Researchers appear to be at least somewhat optimistic that their test will prove data centres do not have to be kept under such tight controls. If they are proven correct, the test will open the door to a much larger geographic area in which data centres could be built without compromising performance.

Temperature, Humidity or Contaminants?

Current standards for temperature and humidity at data centres have not really been questioned over the last 30 to 40 years. As with so many other things in the digital arena, there is even considerable debate as to how the industry arrived at the current standards and whether they are scientific at all. Indeed, a number of studies several years ago suggested that airborne contaminants were more damaging to sensitive data centre equipment than ambient temperature and humidity.

Some researchers have gone as far as to speculate that purifying the air circulating through data centres would do far more to achieve maximum performance than tightly controlling temperature and humidity. Whether that is true or not is a matter for future tests. But if the tropical data centre being established in Singapore does turn out to be successful, it would be worth repeating the test under identical circumstances that would also include air purification controls.

Building Greener Data Centres

The Singapore test is, at the end of the day, all about learning how we can build greener data centres that do not consume nearly as much power. As the digital world grows, more and more of our energy resources will have to be put toward powering the data centres that make modern life possible. If data centres can truly operate at nearly twice the current standards for temperature and humidity, imagine how much money we could save by not having to control the data centre environment so tightly.
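
As a purely illustrative, back-of-the-envelope sketch, and not a figure from the Singapore project, the potential saving can be framed in terms of PUE (power usage effectiveness, the ratio of total facility power to IT power). Every number below is a hypothetical assumption.

```python
# Hypothetical arithmetic: how much energy a facility might save if relaxed
# temperature and humidity limits allowed cooling overhead to fall.
# All figures are assumptions, not data from the article.

it_load_kw = 1000          # assumed 1 MW of IT equipment
pue_tight = 1.7            # assumed PUE with tightly controlled cooling
pue_relaxed = 1.3          # assumed PUE with relaxed environmental limits
hours_per_year = 24 * 365
price_per_kwh = 0.10       # assumed electricity price in GBP

saving_kwh = it_load_kw * (pue_tight - pue_relaxed) * hours_per_year
print(f"Annual saving: {saving_kwh:,.0f} kWh "
      f"(roughly GBP {saving_kwh * price_per_kwh:,.0f})")
```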

Monday, 23 May 2016

Myth-Busting: Why Water Can Be Used to Suppress Fires in Data Centres

The risk of an electrical fire in data centres is an ever-increasing concern for owners and operators, and deciding what type of fire suppression system should be utilised in their facilities has been an ongoing battle. A water-based system risks destroying the electronics housed in the facility, resulting in thousands of dollars of damage. CO2-based systems release a toxic gas that can be detrimental to the health of your employees and is considered a greenhouse pollutant.

The idea that data centre owners have to choose between water and gas systems to protect their space is now a myth. Unique hybrid fire extinguishing systems, which use a mix of water and nitrogen gas to put out fires, have arrived in the fire protection and suppression market. This technology uses the best characteristics of both water mist and inert gas to extinguish a fire. The benefits of this type of system include life safety, enclosure integrity, environmental safety, cooling capacity and no costly clean-up or equipment replacement.

Because the extinguishing agents are entirely non-toxic, all personnel are safe even during activation; oxygen in the space is reduced only to levels within safe breathing tolerances. Hybrid systems are designed specifically for information technology spaces. Providing the best capabilities of both water mist and inert gas systems, the technology is also 100% environmentally safe. There is no costly clean-up or equipment replacement after the system is activated. Immediately after a fire, the system rapidly recharges and is ready for use again the same day, which is extremely important for information technology facilities such as data centres.
           
The success of hybrid technology lies in its unique ability to extinguish fires through heat absorption and oxygen deprivation, with minimal water. The system combines nitrogen and water into a homogeneous suspension of nitrogen and sub-10-micron water droplets that penetrates vented enclosures to extinguish a fire without leaving significant water residue. When the mixture enters the enclosure, the nitrogen and the water attack the fire simultaneously: the water cools the space and generates steam, while the nitrogen reduces the oxygen content.

By installing a hybrid fire extinguishing system, you will no longer have to worry about damaged property, lost money or the health and safety of your personnel.


This post was written by Cedric Verstrynge, Victaulic Sales Engineer Vortex, Victaulic Company http://www.victaulic.com/ or http://www.victaulicfire.com

Thursday, 19 May 2016

Faster Wi-Fi May Be On Its Way

If Ofcom has its way, you could be in store for faster wi-fi in your UK home or office within the next couple of years. The independent regulator and competition authority recently unveiled plans to improve wi-fi speeds by opening up additional channels through which routers and wireless devices can communicate. Now it is up to the regulator to implement those plans in a sensible way.

Right now, most wi-fi routers in the UK use the 2.4 GHz band of radio frequencies for wireless communications. While this band has been reliable for more than a decade, it is becoming increasingly congested due to the high demands of modern internet use. Between streaming, cloud computing and other forms of high-capacity networking, the 2.4 GHz band is having trouble keeping up, and that means slower speeds between the router and wireless device.

The Ofcom plan calls for opening up additional sub-bands within the 5 GHz band, which is significantly less congested and offers a wider frequency range. This band was chosen because Ofcom believes it can be utilised without interfering with other technologies, such as satellite television.
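
To see why having more channels helps, here is a rough, illustrative sketch, not Ofcom's analysis, that simulates how many neighbouring networks end up contending for the same channel. The channel counts are commonly quoted approximations rather than figures from the proposal.

```python
# Illustrative simulation: with more non-overlapping channels available,
# each network shares its channel with fewer neighbours on average.
# Channel counts are rough approximations.

import random

def average_cochannel_neighbours(n_channels, n_networks=10, trials=10000):
    """Simulate n_networks access points picking channels at random and
    return the average number of other networks sharing each one's channel."""
    total = 0.0
    for _ in range(trials):
        picks = [random.randrange(n_channels) for _ in range(n_networks)]
        total += sum(picks.count(c) - 1 for c in picks) / n_networks
    return total / trials

print("2.4 GHz, ~3 usable non-overlapping channels:",
      round(average_cochannel_neighbours(3), 2))
print("5 GHz, ~19 channels (more if new sub-bands open):",
      round(average_cochannel_neighbours(19), 2))
```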

Faster Wi-Fi Means Better Broadband

Opening up additional wi-fi frequencies may not mean much to the average broadband user who has no idea how wi-fi routers work. That being the case, we believe it is appropriate to offer a brief explanation.

When data communications enter a property through a wi-fi router, that router must then pass signals to mobile devices using radio frequencies. Seldom do routers transmit data as quickly as it is received. Comparing this to how we access water in the home is very helpful.

The typical kitchen sink fixture does not dispense water nearly as quickly or powerfully as the municipal water supply feeding the home. Instead, pressure and volume are scaled down by a number of devices between the municipal connection and the home. Wi-fi routers work in a similar fashion, which is why advertised data transfer rates rarely line up with reality: wi-fi technology is simply not fast enough to keep up with current broadband speeds.

If you were to do a data transfer test on your home network using a modern laptop, you would likely find that both download and upload speeds do not match the speeds advertised by your broadband provider. Some of the slowdown may actually occur as data is moving across networking channels to reach your property, but not all of it. Much of the slowdown has to do with the wi-fi connection between your router and your laptop.
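
For readers who want to try this themselves, the sketch below uses the third-party speedtest-cli Python package as one way of running such a test; running it once over a wired connection and once over wi-fi gives a rough idea of how much of the shortfall comes from the wireless link. The choice of package is ours, not something recommended by Ofcom.

```python
# A minimal speed test sketch using the third-party 'speedtest-cli' package
# (pip install speedtest-cli). Results are reported by the library in bits
# per second, so we convert to megabits per second.

import speedtest

st = speedtest.Speedtest()
st.get_best_server()
download_mbps = st.download() / 1_000_000
upload_mbps = st.upload() / 1_000_000

print(f"Download: {download_mbps:.1f} Mbps")
print(f"Upload:   {upload_mbps:.1f} Mbps")
```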

As Ofcom works to implement its proposals for faster wi-fi networking, we can assume router manufacturers will be getting on board to figure out how to make the best use of the 5 GHz band and its sub-bands. By the time Ofcom is ready to implement its proposals officially, there should be an ample supply of equipment capable of performing up to standard. Finally, it looks as though faster wi-fi might be on the way.

Wednesday, 11 May 2016

Is It Time to Rethink Password Protocols?

For more than a decade, networking and security experts have recommended that people with access to online accounts use long and complicated passwords that make hacking difficult. In light of those recommendations, government organisations and businesses the world over have built into their password protection systems a requirement for users to change their passwords on a regular basis. Now there are questions from government security experts about whether forced password changes are a wise idea.

A recent publication released by the government's CESG group suggests that forced password expiration is outdated and counterproductive to security. The group is recommending against forcing account holders to change their passwords regularly, offering the following reasons for the new guidance:

·        New Password Selection – With the average consumer now having access to dozens of online accounts all requiring separate passwords, CESG experts say that forcing users to select new passwords too often will likely result in many choosing less complicated passwords so they do not forget them. Less complicated usually means more vulnerable.

·        Easy to Hack – Experts say that users are more likely to choose passwords similar to the ones they are replacing when forced by expiration to do so. They say that, in effect, this makes the new passwords no more secure than the old ones. If a hacker gets hold of an old password, it is relatively easy to figure out the new one.

·        Little Security Value – CESG also claims that there is little security value in changing passwords as long as users make their original choices lengthy, with a random combination of letters, numbers and symbols (a short sketch of generating such a password follows this list). For the limited benefit on offer, it is simply not worth forcing users to change passwords and hoping that those passwords will be remembered.

·        Help Desk Support – Lastly, forgetting passwords is one of the more common reasons for contacting help desk support. Help desk professionals have to spend time resetting passwords, knowing that those same users will be contacting them 30 days down the road for another reset. This is simply not a wise use of resources given the little benefit that forced password changes offer.
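
As a small illustration of the kind of password CESG's guidance favours, the sketch below generates a long, random mix of letters, numbers and symbols using Python's standard secrets module; the length and character set are arbitrary choices, not part of the guidance.

```python
# Minimal sketch: generate a long, random password of the sort favoured
# over frequent forced changes. Length and character set are arbitrary.

import secrets
import string

def generate_password(length=20):
    """Build a password from a random mix of letters, digits and symbols."""
    alphabet = string.ascii_letters + string.digits + string.punctuation
    return "".join(secrets.choice(alphabet) for _ in range(length))

print(generate_password())
```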

In the modern world of networking and security, CESG says there are other ways to accomplish what password expiry used to accomplish a decade ago. The group offers the example of system monitoring tools that present users with their previous login details every time they access one of their accounts. This information can alert a user to the possibility that someone else has tried to access the account.
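
A minimal sketch of that "show the previous login" idea might look like the following; the storage, field names and wording are hypothetical rather than anything CESG prescribes.

```python
# Illustrative sketch: record each user's last successful login and show it
# back at the next login so the user can spot suspicious activity.
# The in-memory dictionary stands in for a real user database.

from datetime import datetime, timezone

last_logins = {}  # username -> datetime of previous successful login

def record_login(username):
    """Return the previous login time (if any) and store the current one."""
    previous = last_logins.get(username)
    last_logins[username] = datetime.now(timezone.utc)
    return previous

previous = record_login("alice")
if previous:
    print(f"Your last login was at {previous:%Y-%m-%d %H:%M} UTC. Not you? Report it.")
else:
    print("This looks like your first login on this system.")
```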

CESG says it is time for us to rethink our password protocols. Is password management better left to the preferences of individual users, with administrators finding other ways to keep accounts secure, or do we need to stick with the way we have been doing things for so long? It will be interesting to see how security experts and system administrators respond to the new guidance.

Thursday, 28 April 2016

Logging in with Your Head May Not Be Far Away

Security continues to be among the top concerns of the information age thanks to creative hackers who are always finding new ways to break into systems. In response, security experts are aggressively pursuing an extensive list of new measures, including using biometric information to log in to accounts. The use of such information would eliminate the need for usernames and passwords that can be easily hacked. If all works out well for German researchers, logging in to your computer with your head and a sound file may not be far off.

The biometric security protocols we have seen thus far rely on things such as fingerprints and iris scans. Now, researchers at the University of Stuttgart, Saarland University and the Max Planck Institute for Informatics are working on a new way to use the architecture of the human skull for biometric identification. They have modified a Google Glass device so that it can detect minute structural differences in the skull, enabling software to tell one human being from another.

Researchers explain that the shape, size and other features of the human skull differ from one person to the next. So unique are we in this regard that sound resonates within the skull in a way that is distinct to every person. The German system takes advantage of this by playing a sound through a headset and measuring how it resonates in the skull. Resonance data can then be stored and compared at a later time. So far, the researchers say their system works with an accuracy of about 97%, which is good but, of course, will need to be honed before it is feasible for commercial use.
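
Purely as an illustration of the general approach, and not the researchers' actual implementation, the sketch below compares a stored resonance "template" with a fresh measurement and grants access only if the two are similar enough; the similarity measure, threshold and toy data are all assumptions.

```python
# Illustrative sketch only, not SkullConduct itself: compare an enrolled
# frequency-response template against a fresh measurement and accept the
# user if they are similar enough. Threshold and data are placeholders.

import numpy as np

def cosine_similarity(a, b):
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def authenticate(enrolled_response, measured_response, threshold=0.95):
    """Accept the login if the measured skull response closely matches
    the enrolled template."""
    return cosine_similarity(enrolled_response, measured_response) >= threshold

# Toy data standing in for per-frequency resonance measurements.
enrolled = np.array([0.9, 0.4, 0.7, 0.2, 0.8])
fresh = enrolled + np.random.normal(0, 0.02, size=enrolled.shape)
print("Access granted" if authenticate(enrolled, fresh) else "Access denied")
```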

Blurring the Lines of Integration

Should the German researchers manage to bring their project to commercial fruition, it will change basic computing and networking forever. Gone will be the days of remembering passwords that have to be changed every 4 to 6 months for security purposes. Access to networks will be granted to specific people, and those people only, based on unique markers present in their biometric information.

What we are now witnessing is a blurring of the line between human and machine. As so many science fiction films have predicted in the past, we could be moving ever-closer to that time when humans and computers are so tightly integrated that identifying any line between them is nigh on impossible.

As for the German system, dubbed ‘SkullConduct’, there really are no practical limits once it is perfected and made marketable. Any function for which traditional usernames and passwords are used can easily be adapted to the new biometric solution. It can be used locally, in the cloud, across international networks and so on.

What will they think of next? No one knows, but logging into your computer using your head and a resonating sound wave is certainly intriguing. The only thing the rest of us ask is that the chosen sound be somewhat pleasant. No loud sirens, shrieks, bells, or whistles, please!

Thursday, 21 April 2016

Cambridge Looking to Lead Superconductor Research


The University of Cambridge in Cambridgeshire, England, has announced an ambitious plan to become a world leader in superconductor research, with the aim of putting Britain at the forefront of developing the computing technologies of the future. The Cambridge plan involves a £2.7 million investment from the Engineering and Physical Sciences Research Council.

In an official news release from the University, Cambridge officials explain that their ground-breaking project is focused squarely on a technology known as 'spintronics'. This technology is based on using a quantum property of electrons known as 'spin' to process large volumes of information at ever-increasing speeds. The biggest challenge is overcoming the magnetism associated with spin: right now, magnetism interferes with data conduction, negating any gains made by manipulating spin to increase speed.

Previous research conducted by the University in 2010 demonstrated that, at least in theory, it is possible to power a spintronic device using a superconductor. Actually doing so is the main priority of Cambridge's new project. But the university hopes to go above and beyond that as well. University officials say the scope of their project is larger than anything else currently being worked on. They have plans to solve the magnetism problem, devise ways the technology can be used for future computing, address storage needs as they relate to supercomputing and, ultimately, build workable spintronic devices that deliver the desired results.

Energy Savings a Priority

Driving the computing world toward spintronics technology is the need to reduce the power needs of data centres while still increasing data transfer speeds and storage capabilities. Cambridge experts say that almost 3% of all the power now produced in Europe goes directly to data centre operations. And with every data centre build, more power is needed to keep data flowing.

Superconductor design is such that when properly deployed, it can increase, or at least maintain, data transfer speeds at low power. Combining current superconductors with spintronic devices makes it possible to radically reduce the amount of power that data centres need to do what they do. Cambridge researchers believe that successful development of their spintronics technology could be the most important thing to shape the future of computing worldwide.

Officials at Cambridge say the difference between their research and what is happening in other places around the world boils down to the scope of the research. Apparently, other projects are mainly focused on discovering the intricacies of the spin phenomenon as an isolated course of study. They say Cambridge is the only institution looking beyond the basics of spin to develop a comprehensive plan for utilising its properties in a real-world setting.

The Cambridge project is certainly ambitious if nothing else. We look forward to seeing what researchers are able to come up with over the next several years of study. If their theories are correct, Cambridge University could be on the verge of initiating a brand new era of supercomputing with real-world implications for the average consumer, business and data centre.

Tuesday, 12 April 2016

Protect Your Electronics from Damaging Fires

Have you ever wondered what should be considered when deciding which fire suppression system to install in your information technology space? This post lays out exactly what to look for and how hybrid fire extinguishing systems are changing the landscape of fire suppression for data centres.

Today’s data centre and electronics industries look for fire suppression systems that are safe, cost-effective and mindful of the environment. Traditional water mist, gaseous and inert gas technologies can present design, performance and maintenance challenges. A hybrid combination of water mist and nitrogen gas, however, extinguishes fires with little to no wetting. A hybrid-designed fire suppression system avoids excess water damage, maintains room integrity, is 100% environmentally safe and is easy to maintain.

Avoiding excess water damage is a main concern for data centre and electronics facilities. These facilities house thousands of dollars' worth of electronics and, should a fire occur, their operators want to preserve as many of those resources as possible. Traditional single-agent water mist systems, if discharged, would cause extreme wetting and damage to the equipment housed in the facility. Hybrid systems use as little water as possible to put out flames, sparing the electronics present.

Keeping it green is something that should definitely be considered when choosing a fire protection system. Many traditional gaseous systems present problems in today’s sustainability-focused marketplace because they rely on chemical agents and halocarbons to achieve fire suppression, raising environmental concerns. Hybrid fire extinguishing systems are environmentally safe and use a non-toxic gas.

Following a fire event, or during regular maintenance, data centres and electronics facilities should look for a fire suppression system that is easy to maintain. Traditional agent storage cylinders may need to be returned to the manufacturer for refilling with proprietary agents, or removed from the system for weighing, both of which cost time and money. You need a system that recharges rapidly so it can be used again almost immediately after a fire incident.

A unique solution for data centres uses hybrid technology to deliver a high-velocity, low-pressure blend of water and nitrogen that both cools the hazard area and removes the oxygen that sustains the fire. Electronics are kept dry, and there are absolutely no toxic agents or chemicals involved. This type of system has proven effective against small, smouldering fires in enclosures and large, high-heat-release fires in open spaces.

A hybrid fire extinguishing system provides an innovative, safe, and effective fire suppression solution for installations that contain electronic equipment. Facility owners and operators benefit from the technology’s minimal wetting operation, simple maintenance, environmentally-friendly design, and rapid return to normal operations after system discharges.


This post was written by Cedric Verstrynge, Victaulic Sales Engineer, Vortex, Victaulic Company, http://www.victaulic.com/