Tuesday, 17 March 2020

HOW POST-CONSTRUCTION INNOVATIONS CAN PREVENT COSTLY DAMAGE


The Internet of Things (IoT) is showing no signs of slowing down. With technology constantly evolving and improving, one challenge businesses face is how to keep up. In the data centre industry, one new technology that is set to improve services and prevent downtime is sensors.

Transmitting data to a 3G plug source every fifteen minutes, these minuscule sensors have an impressive battery life of 15 years. In the final, post-construction stage in particular, introducing sensors can drastically reduce downtime by alerting teams to issues in real time.

A recent case study of a data centre in Holland highlighted the importance of humidity monitoring by sensors. The onsite teams were unaware of rising humidity levels, caused by increased footfall in an area that had not yet been fitted with doors. The humidity sensors alerted the team, the issue was investigated immediately and the problem was resolved. Had the rise in humidity gone unnoticed, the consequence could have been significant cost to the business through damaged equipment and delays to the construction schedule.

So, how exactly do sensors prevent downtime?

The power of data

In the wrong environment, data centres stand little chance of survival, so testing for external contaminants is key to preventing downtime. The main applications of sensors are temperature, humidity, proximity, leak detection and touch.

During the post-construction phase, new - and expensive - equipment is being installed. If equipment is placed in a contaminated area, it could lead to system failure or major damage – something no business wants to face. The same rule applies to humidity, as demonstrated at the data centre in Holland.

Data received from proximity sensors has additional security benefits, as it can alert teams to doors left open near cleared or restricted rooms, exposing the space to contaminants. If doors are accidentally left open - even for five minutes – the space will require an audit, possibly followed by an additional clean. While leaks are few and far between and depend heavily on the location of the data centre, introducing leak detection sensors to a critical space acts as a further safeguard against downtime.
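To make this concrete, below is a minimal sketch of the kind of alert rules described above, written in Python. The humidity threshold and field names are illustrative assumptions rather than any vendor's actual implementation; the five-minute door rule follows the example given above.

```python
from datetime import datetime, timedelta

# Illustrative thresholds - real limits would come from the site's commissioning spec.
HUMIDITY_LIMIT_RH = 60.0                 # % relative humidity (assumed value)
DOOR_OPEN_LIMIT = timedelta(minutes=5)   # audit trigger, per the example above

def check_reading(sensor_type, value, opened_at=None, now=None):
    """Return an alert message if a reading breaches its rule, otherwise None."""
    now = now or datetime.utcnow()
    if sensor_type == "humidity" and value > HUMIDITY_LIMIT_RH:
        return f"Humidity {value}% RH exceeds {HUMIDITY_LIMIT_RH}% RH - investigate immediately"
    if sensor_type == "proximity" and value == "door_open" and opened_at is not None:
        if now - opened_at > DOOR_OPEN_LIMIT:
            return "Door open for more than 5 minutes - audit (and possibly a re-clean) required"
    return None

# Example: a door left open for six minutes triggers the audit rule.
print(check_reading("proximity", "door_open",
                    opened_at=datetime.utcnow() - timedelta(minutes=6)))
```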

Touch sensors are used mainly for fire walks around a critical space. Checks are usually signed off on paper or confirmed verbally, whereas touch sensors provide proof of presence that the check has been completed. Again, this safeguarding measure ensures teams are following protocol and that downtime is avoided. This method also saves time and admin work, as teams can generate reports from the live dashboard at the touch of a button.

Fail to prepare, prepare to fail

Sensors act as a core element of a risk mitigation strategy. By taking a proactive approach, customers can rest a little easier knowing everything is being done to reduce the probability of downtime or damage to installed equipment. Introducing sensors into a critical space requires in-depth strategic planning from start to finish.

To get the most value from sensors, strategic placement and careful management of the information they produce are paramount. Information is only useful when it prompts an action. The thickness of walls in data centre infrastructure can make it difficult for signals to get through; to resolve this, signal-boosting equipment is used, and fortunately it is cheap and easy to install.

Getting the timing right is another consideration for critical teams. Working alongside hundreds of contractors moving at different paces can prove very challenging, so introducing sensors helps critical teams to foresee any delays and report back to the customer. Using this collaborative approach, critical teams can work closely with the customer and contractors to ensure work is completed to an optimum standard.

Sensors have undoubtedly made a positive impact on the way critical teams monitor and prevent issues that result in downtime. As we know, system failure can impact a business considerably by putting customer loyalty at risk, causing loss of earnings and adding the cost of repairing equipment. The bottom line is that sensors ensure the correct precautions are in place and are a valuable efficiency tool for teams to utilise.

Looking ahead, the ability to test air quality and energy usage would be welcomed by critical teams. The future looks bright for sensors, and as new measurements are introduced we will be able to further optimise the service available to customers.

Monday, 20 January 2020

NEW YEAR, NEW IMPETUS – WHY WE NEED 2020 VISION ON REDUCING OUR INDUSTRY’S ENVIRONMENTAL IMPACT

As we rang in another New Year – and the start of a new decade – the bushfires spreading across Australia showed little sign of fizzling out.
Shortly after, meteorological organisations including the UK Met Office and NASA confirmed that the decade up to the end of 2019 was the hottest the world has ever seen. In fact, 2019 was the second warmest year globally since records began in 1850.
While we shouldn’t jump to conclusions and automatically link the two statements above, it’s indisputable that the debate about our environment has reached a tipping point. Many institutions and governments now accept we’re in the midst of a climate emergency.
There’s no place to hide for any of us working in the data centre industry. Despite significant progress in recent years improving efficiency across the entire server room, from power through to cooling, we’re already seen as something of a “bogeyman”.
Data centres across the world burn through 400-500 terawatt-hours (TWh) of electricity a year. That’s roughly the same as the whole of the UK.
It’s a much-repeated statistic, but it’s predicted the industry will account for a fifth of global energy use by 2025, while processing and storing data produces 3-4% of all CO2 emissions – more than global aviation (2-3%).
Energy Use On The Rise?
According to the Uptime Institute, 65% of the power used by data centre IT delivers just 7% of the processing work, due to the inefficiencies of ageing equipment. Recent gains in mechanical and electrical efficiency have stalled.
As we enter the 5G era, the already stratospheric demands placed on data centres will continue to grow. Just think, every time football superstar Cristiano Ronaldo posts an update on Instagram, his 195 million-plus followers consume 30 megawatt-hours (MWh) of electricity to view the content.
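Taking the figures above at face value, a quick back-of-the-envelope calculation shows what that works out to per follower:

```python
# Using only the numbers quoted above: 195 million followers, 30 MWh per post.
followers = 195_000_000
energy_per_post_mwh = 30

wh_per_view = energy_per_post_mwh * 1_000_000 / followers   # convert MWh to Wh
print(f"Roughly {wh_per_view:.2f} Wh per follower per post")  # ~0.15 Wh each
```

Tiny per person, but multiplied across billions of daily interactions it soon adds up.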
The Uptime Institute warns that this growing demand will “substantially outpace the gains from efficiency over the next five years, resulting in steadily increasing energy use” across the sector.
You might think this article has taken something of a “glass half empty” approach so far, so it’s important to acknowledge the many steps our sector has taken to try to minimise our environmental impact. There have been huge technological gains with uninterruptible power supplies alone.
There’s the growth of more efficient transformerless models. Then there’s the rise in modular UPS systems that reduce the risk of wasteful oversizing. In addition, many modern UPS now incorporate several special operating modes and features designed to minimise energy use.
We shouldn’t forget the positive role UPS can also play in smart grids and energy storage. It’s up to businesses like us to convince data centre operators that embracing battery storage will deliver environmental and performance benefits.
Small Changes Add Up To Big Gains
When it comes to the environment, much of the focus inevitably goes on the bigger picture – the ground-breaking innovations with the potential to have a huge impact.
But it’s often the lower-level adjustments that drive the fundamental behavioural change that we as a society must embrace.
Here at Riello UPS, we take our environmental responsibilities seriously and commit to cutting our carbon footprint wherever it is practical to do so.
Before Christmas, we banned plastic drinks bottles and gave all our 70-plus team reusable stainless steel bottles as an alternative.
We’ve also swapped our milk supplier and now have a special chiller that holds fresh milk in a recyclable cardboard box lined with a Low-Density Polyethylene (LDPE) bag that can be used to generate energy from waste. This means we don’t need to buy plastic cartons or bottles. It also helps to cut milk waste.
Based on our current milk consumption, we’ll save nearly 600 bottles and 30 kg of plastic waste a year.
Our efforts to eliminate single-use plastics stretch to us ditching disposable plastic cups in favour of ceramic containers, swapping plastic water bottles for visitors with a water dispenser and glasses and replacing plastic stirrers with an organic bamboo alternative.
Moving forward, we’ll also withdraw advertising support from any magazine publishers who don’t switch from plastic polybags to sustainable alternatives such as potato starch-based wrapping or biodegradable paper.
In isolation, these measures might appear small. But it’s a start. A single plastic bottle takes up to 450 years to fully decompose. It’s simply not an option to sit on our hands and do nothing.
And if 10% of the data centre industry, or 20%, or even 50% follow our example and make practical changes to their day-to-day operations, then we’re talking about something that’ll add up and make a massive difference.
From the biggest technological innovations to the smallest changes in daily routines, 2020 is the year for the data centre sector to take our fair share of the responsibility for safeguarding our planet for many years to come.

Guest blog by Leo Craig, Managing Director of Riello UPS Ltd


Tuesday, 17 September 2019


A DATA-CENTRIC APPROACH TO MANAGING DATA CENTRES

“Data centres owned and operated by data-centre landlords, cloud services and other technology firms is expected to increase to roughly 9,100 this year, up from 7,500 last year, and are expected to reach 10,000 by 2020, IDC estimates.”  Source: The Wall Street Journal.

If all other data centres, including hyperscale and enterprise facilities, were added, the total figure would be far higher still.  Businesses around the world rely upon data centres being available.  There is also more focus on the environment and climate change, so there is more focus on efficiency and carbon-neutral designs – ergo yet more complexity to manage.

There is a reason that DCIM hasn’t been replaced by something new.  It has had a bad rep for many reasons, but it is necessary to help us manage ever more complex, hybrid environments, and so it has to evolve.  It needs to connect to facilities systems, network systems and IT systems, and to orchestrate changes as they are required.  No longer can the M in DCIM represent mere monitoring.  Perhaps the metamorphosis of DCIM should more accurately be to DNIO: Data centre, Network and Infrastructure Orchestration.

DCIM is now moving into the IT stack, and integrating with systems, such as ITAM, CMDB and cloud-based systems.  It now offers the ability to analyse data across sites, and provide AI based solutions to controlling the data centre throughout the IT stack – from the BMS, through to application performance.

One of the hardest elements in a DCIM implementation has been integration, and figuring out how processes and procedures should work, and then how to automate them.  This integration piece – in the past - has either been technically challenging, or financially challenging, or seen as scope creep, or it has been something that a vendor or stakeholder has discouraged.

What is really required is an open integration suite that would allow enterprises to pull their own bespoke solutions together, without racking up expensive development bills. It seems this vision is slowly becoming a reality after some M&A activity in the DCIM space, and clients and vendors steadfastly staying the course behind the DCIM vision.

This brings with it a different way of looking at managing the data centre: it’s a data-centric view.  Instead of worrying about whether an integration is possible, it’s reasonable nowadays to assume that it is. Therefore, it is possible to design the system in the most efficient way and make use of automation where it makes sense.

Here are six encouraging areas of progression where more integration is enabling positive leaps forward:

Broader scope of infrastructure managed by DCIM:

The links to CMDB, ITAM and other systems on the IT side are bringing more data analysis opportunities, with a broader scope of data points.

Use of Artificial Intelligence:

AI is being used more readily in a number of areas within the DC – for example, cooling optimisation and security.  AI can learn normal network behaviour and detect cyber threats based on deviations from that behaviour.
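As a rough illustration of that last point, the sketch below learns a simple statistical baseline of “normal” traffic and flags large deviations. Real security products use far more sophisticated models; the sample values and the three-sigma threshold here are purely illustrative.

```python
import statistics

def build_baseline(samples):
    """Learn a simple mean/standard-deviation baseline from historical samples,
    e.g. packets per second seen on a management VLAN."""
    return statistics.mean(samples), statistics.stdev(samples)

def is_anomalous(value, baseline, z_threshold=3.0):
    """Flag values more than z_threshold standard deviations from the mean."""
    mean, stdev = baseline
    return abs(value - mean) > z_threshold * stdev

history = [1200, 1180, 1250, 1210, 1190, 1230, 1175, 1260]   # illustrative traffic samples
baseline = build_baseline(history)
print(is_anomalous(1215, baseline))   # False - within the learned normal range
print(is_anomalous(9500, baseline))   # True  - possible threat or misbehaving device
```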

Open platform approach:

Instead of a siloed approach both internally and externally, the data-centric view of the DC should take priority, which means that IT, Facilities and vendors are all working together.

SDK / Open API:

A number of vendors are providing SDKs or open APIs, which is a good step forward in making integrations between systems work, and it shows that they are open to working with other companies.
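As a sketch of what such an integration might look like, the snippet below pulls an asset list from a hypothetical DCIM open API and pushes it into a CMDB. The endpoints and JSON shape are assumptions for illustration only – they do not describe any particular vendor's API.

```python
import json
import urllib.request

# Hypothetical endpoints - substitute whatever your DCIM open API and CMDB actually expose.
DCIM_ASSETS_URL = "https://dcim.example.com/api/assets"
CMDB_IMPORT_URL = "https://cmdb.example.com/api/import"

def fetch_dcim_assets():
    """Pull the asset list from the DCIM platform's open API (assumed to return JSON)."""
    with urllib.request.urlopen(DCIM_ASSETS_URL) as response:
        return json.load(response)

def push_to_cmdb(assets):
    """Post the same records to the CMDB so both tools share a single source of truth."""
    payload = json.dumps(assets).encode("utf-8")
    request = urllib.request.Request(
        CMDB_IMPORT_URL, data=payload,
        headers={"Content-Type": "application/json"}, method="POST")
    with urllib.request.urlopen(request) as response:
        return response.status

if __name__ == "__main__":
    push_to_cmdb(fetch_dcim_assets())
```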

CMDB and Asset Management:

There is a recent move to focus on asset management and to align assets in ERP systems too, to provide a single source of truth.  From a data centre perspective, having assets managed well is an essential building block for DCIM and data centre management.

Processes and Procedures:

Data centre operators are viewing the system as a whole and are finding areas where technology can automate processes.  For example, adds, moves and changes can be streamlined, saving around 30% of resource time by using accurate DCIM data and integrated workflows.

In a world where IT systems are becoming more distributed and IoT is making its mark, data centres must take a data-centric approach to managing the system of systems housed under their roofs.  Siloed thinking no longer has a place in the modern data centre: DC and IT managers need to work together, alongside a multitude of vendors who also need to align and integrate their offerings to clients’ needs.

This open platform approach to integration brings many benefits to life.  An integrated workflow capability facilitates automation, reducing the resource time required for operational tasks.  With more visibility of systems, capacity management from the CRAC unit through to ports in the meet-me rooms becomes a reality, allowing the DCIM to assist with intelligent commissioning of new assets and patching routes.  Energy optimisation now involves data from the servers themselves, making it possible to shift workloads when compute requirements are low so that a server can potentially stand down.
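The workload-shifting idea above can be illustrated with a simplified consolidation calculation. This is only a sketch with assumed utilisation figures; in practice the decision would be taken by the orchestration layer using far richer data (SLAs, thermal constraints, redundancy policies).

```python
import math

def plan_consolidation(server_loads, capacity_per_server=100):
    """Given the current compute load per server (arbitrary units), pack the work
    onto as few servers as possible and report which ones could stand down."""
    total_load = sum(server_loads.values())
    servers_needed = max(1, math.ceil(total_load / capacity_per_server))
    ranked = sorted(server_loads, key=server_loads.get, reverse=True)
    return ranked[:servers_needed], ranked[servers_needed:]

loads = {"srv-01": 35, "srv-02": 20, "srv-03": 15, "srv-04": 10}   # illustrative low-demand period
active, idle = plan_consolidation(loads)
print("Keep running:", active)    # enough capacity for the combined load
print("Can stand down:", idle)    # candidates for a low-power state
```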

With this data-centric approach, the return on investment should not only be better, it should come in sooner as well.  The software-defined data centre is now in view. 


Guest Blog written by:



Assaf Skolnik, CEO, RiT Tech



Venessa Moffat, Head of Product Marketing, RiT Tech

Marketing, Strategy and Growth Hacking specialist, with 20 years’ experience in the Data Centre and tech industries. Venessa holds a BSc in Computer Science, a Post Grad Diploma in Business Administration, as well as an MBA from Essex University, where she specialised in agile IT architectures for maximum business value. She has successfully led strategy development and implementation programmes in multiple international data centre organisations. 



Tuesday, 16 July 2019


LEFT IN THE DARK – WHAT IS THE CHANCE OF A UK-WIDE ELECTRICITY BLACKOUT?

In the middle of June, nearly 50 million people across South America were plunged into darkness after a massive power failure wiped out supplies across virtually all of Argentina, Paraguay and Uruguay.  Could something similar ever happen here in the UK and, if so, what’s likely to cause such a fundamental failure?

The source of the blackout was said to be an issue with two 500 kV transmission lines that disrupted electricity from the Yacyretá hydroelectric plant.  Alleged system design flaws then turned what should have been merely a localised problem into a complete grid failure branded as “unprecedented” by Mauricio Macri, the President of Argentina.

Our new investigation, The Blackout Report, explores the likelihood of a UK-wide electricity network failure and what the consequences of such a severe incident could be. While data centres are probably as well prepared as any business, with built-in redundancy and backup supplies in the form of UPS systems and generators, they certainly wouldn’t be immune to severe disruption.

We discovered that high-level contingency planning puts the chance of a complete power grid shutdown within the next five years at 1 in 200. That sounds very unlikely, but consider that the average Brit has a 1-in-240 chance of dying in a road accident during the course of their lifetime – so a nationwide blackout is certainly not out of the question.

So, what are the biggest threats to the electricity supply here in the UK?

• Climate Change & Extreme Weather

The top 10 hottest years recorded in the UK have taken place since 1990, while sea levels around the coast rise by 3mm a year as warm water expands and ice caps melt.
In the coming years, the effects of climate change mean we’re likely to experience more weather at the extreme ends of the spectrum – torrential rain, storm-force winds, scorching heatwaves and prolonged cold snaps.

Such weather events can cause significant harm to the network.  Winds bring down trees that take out transmission lines. Floods damage crucial infrastructure and make it harder for engineers to fix faults.

There are numerous examples of such severe weather here in the UK: the Great Storm of October 1987; the 2013 St Jude Storm, which left 850,000 homes without power; and the floods caused by Storm Desmond in winter 2015-16.

We’re likely to experience far more of these sorts of incidents in the future.

• Space Weather

“Space weather” collectively describes a range of naturally occurring phenomena in space, including solar flares, geomagnetic storms, asteroids and meteors.

Because of modern society’s reliance on GPS and other satellite signals, the potential impact of any space weather incident is huge – even a weak solar flare can knock satellites out of action.
The biggest ever incident of space weather recorded on Earth took place in 1859. Named after astronomer Richard Carrington, the Carrington Event was a massive magnetic storm that disrupted telegraph systems and electrical equipment.

Today, there’s a 1% annual probability of a repeat occurrence of such an event.
Back in 1989, a smaller storm took down the Hydro-Québec electricity network in Canada, leaving nine million people in the dark for up to nine hours.
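Taking that 1% annual figure at face value, the cumulative likelihood of at least one repeat over longer horizons is easy to work out:

```python
# Cumulative chance of at least one Carrington-scale event, assuming the
# 1% annual probability quoted above holds independently each year.
annual_p = 0.01

for years in (10, 25, 50):
    cumulative = 1 - (1 - annual_p) ** years
    print(f"Over {years} years: {cumulative:.1%}")   # ~9.6%, ~22.2%, ~39.5%
```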

• Accidents & Systems Failures

A wide range of events could fall under this category: a component failure or software crash, basic human error, or accidental fires and explosions.

In reality, most of these incidents will produce an impact limited to a specific location. However, even these events could cause disruption to significant numbers of businesses, services and people.

• Infrastructure Attacks

The threat of terrorism – in its many forms – is something the UK is all too familiar with. Various state and non-state agents could deliberately target a country’s power supplies using explosives or other means to destroy essential infrastructure such as transmission lines or electricity substations.

In recent years terrorists have carried out major attacks on energy infrastructure in places such as Algeria and Yemen while, this spring, anti-government forces were said to have taken out one of Venezuela’s hydroelectric plants, which contributed to a blackout that left 30 million residents without electricity.

• Cyber-Attacks

You’re probably aware of the incident just before Christmas 2015, when Russian hackers used special malware to shut down 30 substations in Ukraine, leaving 250,000 people without electricity. But did you know the network here in the UK was also compromised on 7 June 2017 – the day before the General Election?

This spring also saw the first US case of electricity-related cyber-hacking, with the control systems of grids in California and Wyoming penetrated.

These days, it’s not just an elite band of state-sponsored hackers that pose a threat. Anyone armed with a laptop and a degree of know-how could use high-grade malware to launch a potentially harmful attack.

The UK’s energy network is shifting fundamentally to smart grids, while our day-to-day lives are dominated by supposedly ‘smart’ devices such as virtual assistants, smart phones, or energy meters.

These trends offer hackers many more vulnerabilities to exploit. Could hackers gain access to thousands – potentially millions – of smart devices, powering them up in the middle of the night when the grid isn’t prepared for such a power surge?  Or, more subtly, could incorrect data be fed back into smart grids, either inflating or understating the real demand for electricity?

The Blackout Report is free to download from www.theblackoutreport.co.uk

Guest blog by Leo Craig, General Manager of Riello UPS Ltd



Tuesday, 15 January 2019

LOOK FORWARD TO 2019 BUT DON’T LOSE THE LESSONS OF THE PAST


The beginning of every new year is the time for predictions, and NTT Group has been sharing its thoughts on what will affect the business world over the next year or so.  In particular, it has focused on digital transformation and the impact this is having on how we work, live and play.

However, we mustn’t lose sight of the basics as we build our resilient cyber defence architecture. The digital agenda is a pressing one for all businesses and one that they cannot afford to ignore – the customer is king, and the General Data Protection Regulation (GDPR) puts increased pressure on the board to ensure that not only business data but personal data too is secure.

So, while we stand by our predictions, it is also advisable to reflect on some of the basics that we continually see overlooked by organisations as they try and protect their business from constantly evolving cyber threats:

1. Assess the baseline

With an increasing focus on “platforms”, it is crucial that these fit into a resilient cybersecurity architecture and work efficiently to reduce potential threats and vulnerabilities. Performing a baseline assessment will ensure the correct security foundations are in place and help you get the best from your security investments.

2. Scan the environment 

One of the most important basic practices is vulnerability scanning, but running a vulnerability scan on its own is not enough. The results should be analysed and assessed against your critical assets. This approach ensures that risks are put in context and that valuable resources are focused on mitigating the right risks.
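As a simple illustration of assessing scan results against critical assets, the sketch below weights raw severity scores by an assumed asset-criticality rating so that remediation effort goes where it matters most. The hostnames, scores and vulnerability IDs are entirely illustrative.

```python
# Illustrative only: weight scanner findings by how critical the affected asset is.
findings = [
    {"host": "web-frontend", "vuln": "VULN-001", "severity": 9.8},
    {"host": "dev-test-box", "vuln": "VULN-002", "severity": 9.1},
    {"host": "payment-db",   "vuln": "VULN-003", "severity": 6.5},
]
asset_criticality = {"payment-db": 1.0, "web-frontend": 0.8, "dev-test-box": 0.2}

for f in findings:
    f["risk"] = f["severity"] * asset_criticality.get(f["host"], 0.5)

# A severe finding on a throwaway test box now ranks below a moderate one on the payment database.
for f in sorted(findings, key=lambda f: f["risk"], reverse=True):
    print(f"{f['host']:<14} {f['vuln']}  risk={f['risk']:.1f}")
```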

3. Plan for a breach

Incident response plans are critical for minimising the impact of a breach. Complex cyber threats are difficult and time-consuming to unpick and may require specialist knowledge and resources to resolve comprehensively. By having a well-defined plan, testing it regularly and recognising that security incidents will happen, organisations will be better prepared to handle incidents in an effective and consistent way.

4. Collaboration 

Most businesses recognise the shortage of cybersecurity skills, and the industry as a whole is collaborating more. We work closely with our technology partners and with industry and government bodies to share intelligence, and we now focus on prediction and prevention to get ahead of potential threats. Collaboration allows businesses to actively manage threats before they impact them.

5. Support the basics 

Clearly cybersecurity is now on the board’s agenda, but we need to ensure that everyone is aware of the risks. In our digital economy, cybersecurity is everyone’s responsibility.  This is why we support training and education programmes to ensure that everyone supports the basics of cybersecurity.

6. Reduce the noise

There is the potential for huge amounts of data to be collated and analysed across the enterprise. Focus should be on the quality of this data and on reducing false positives. Too often organisations are drowning under a wealth of un-actionable security data: technologies aren’t configured correctly or are simply too complex to manage effectively. Configuring, tuning and managing security technology, either directly or through a trusted partner, is another basic requirement that many organisations are failing to master.

So, while we always start to look forward at this time of year, we should not lose the lessons of the past, and we should make sure we get the basics right.

About NTT Security:

NTT Security is the specialised security company and the centre of excellence in security for NTT Group.  With embedded security we enable NTT Group companies (Dimension Data, NTT Communications and NTT DATA) to deliver resilient business solutions for clients’ digital transformation needs.  NTT Security has 10 SOCs, seven R&D centres, over 1,500 security experts and handles hundreds of thousands of security incidents annually across six continents.

Guest Blog written by Garry Sidaway, SVP Security Strategy & Alliances, NTT Security

Friday, 16 November 2018

LOOKING BEYOND INITIAL SPEND

The draft withdrawal agreement in relation to Brexit - set out earlier this week (14 November 2018) by UK Prime Minister Theresa May - has been approved by the UK Cabinet and is now awaiting the stamp of approval from MPs, followed by the 27 other European Union member states.

However, with still no trade deal in place, the strength of the UK economy is under serious threat. Businesses are having to remain vigilant when it comes to capital investment, and realising long-term strategies in this extremely volatile market is becoming untenable.

Unstable trading stocks, global economic slowdown and the prediction of a dramatic drop in the pound are making it inherently difficult to clearly establish true investment values, especially in the case of total cost of ownership (TCO). Defining the TCO for a capital investment must take into consideration all environmental market factors but with very few reassurances from the government on the energy climate, it is unsurprising that considerable caution is being taken where any type of investment is concerned.

As a prominent and highly influential power protection specialist, Power Control Ltd knows only too well how the cloudy outlook of the country’s economic future can impact businesses. Commenting on this subject, Power Control’s managing director Martin Hanson said: “Buying behaviours towards UPS investment have changed significantly over recent years. It has become apparent that owner/operators are having to account for more complex physical environments in terms of sophisticated data storage, whilst also considering much longer-term financial impacts of their investments.

“The approach to initial spending has changed. It seems that decision makers are becoming more shrewd when it comes to investing and forecasting TCO. Looking at UPS investment in particular, business owners cannot afford to be flippant. The number of power disturbances continues to rise making mains power sources more volatile. This inherently leads to data loss and can cost companies £millions in lost revenue.

“So not only are there pressures to select the most technologically suitable solutions but the need to make the best long-term commercial decisions is becoming increasingly crucial. Despite the economic pressures, resilience must remain the top priority when it comes to selecting UPS.”

Leading UPS manufacturers have anticipated the need for resilience, greater efficiency and more flexibility and have responded with advanced technologies that achieve the highest criteria levels.

Take solid-state UPS, for example – these systems have been at the root of power protection for many years, and where once their efficiencies were poor, advances in technology now mean these models boast ultra-high efficiencies combined with unfailing power protection.

It is the evolution of modular UPS that has muddied the waters further when it comes to power protection selection. In recent years the term ‘modular’ has been making big waves in the UPS industry, and modular systems offer a flexible and scalable approach to UPS investment.

Modular UPS systems also present reduced operating costs and easier overall maintenance. Engineering works can be quickly undertaken and can mean a more reliable power supply.

Additionally, the modular approach offers a smaller footprint, greater flexibility, easy manageability, inherently greater availability, and scalability throughout its operational life.

The outlook for modular UPS is glowing so far, but this would not be a fair evaluation without considering resilience – a subject that is all too often oversimplified, to the detriment of the end user.

Modular UPS allow for redundancy through spare modules, so it is important that the system is prudently monitored to ensure spare capacity is available at all times; if every module is needed to carry the load, the redundancy is lost and there is nothing left in reserve. This simplistic view of the protective nature of modular UPS might make many question how resilient a modular solution can be and whether it is worth the risk.
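The monitoring arithmetic behind that point is straightforward. The sketch below checks whether an N+1 modular configuration still has a spare module for the current load; the module rating and load figures are illustrative assumptions.

```python
import math

def redundancy_status(load_kw, module_kw, modules_installed):
    """Check whether an N+1 modular UPS still has at least one spare module."""
    modules_needed = math.ceil(load_kw / module_kw)   # N: modules required to carry the load
    spare = modules_installed - modules_needed
    return "N+1 redundancy intact" if spare >= 1 else "WARNING: no spare module - redundancy lost"

# Illustrative figures: five 50 kW modules supporting a growing load.
print(redundancy_status(load_kw=180, module_kw=50, modules_installed=5))   # 4 needed, 1 spare
print(redundancy_status(load_kw=220, module_kw=50, modules_installed=5))   # 5 needed, 0 spare
```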

It is important to remember that UPS manufacturers design, develop and manufacture power protection solutions to do exactly that – deliver reliable resilience. Other features such as industry leading efficiency, operational performance and flexibility are all additional benefits that come with investing in leading edge technology.

Specialists in the industry are urging businesses to approach UPS investment judiciously, by looking at the complete power protection landscape, environmental factors and physical infrastructure. This will deliver a solution that is exactly what a business needs, not just now but in the future, with a clear TCO outlook.

Guest blog written by Rob Mather, Solutions Director, Power Control. 

For more information please visit www.powercontrol.co.uk, email info@powercontrol.co.uk or contact Becky Duffield on bduffield@powercontrol.co.uk / +44 (0) 7402 113222


Alternatively please visit https://powercontrol.co.uk/product-category/ups-systems/ for specific product information or email Power Control’s solutions director direct at rather@powercontrol.co.uk

Thursday, 9 August 2018

Tokyo Data Centre Fire Kills 5 and Injures Dozens

A fire that filled the Tokyo sky with thick, black smoke in late July 2018 has tragically resulted in five fatalities and dozens of injuries. The blaze extensively damaged a building believed to be a data centre, possibly belonging to Amazon Web Services (AWS).

Fire officials were unable to confirm ownership of the building due to confidentiality restrictions, but numerous Japanese news outlets are claiming they had been told by industry insiders that Amazon is the owner. Construction on the incomplete building began in 2017 and was expected to be finished by October 2018.

A Devastating Fire

The fire, which occurred in the Tokyo suburb of Tama, began during the early afternoon hours on 26th July. It is believed that the blaze started in the third of four basement levels. The building has a total of seven floors – four underground and three above.

Reports say that some 300 workers were on site when the fire broke out. Unfortunately, four bodies were found in the basement and a fifth on the third-above-ground floor. In addition to the deaths, a total of 50 workers were treated for injuries. Nearly two dozen are said to be in serious condition.

As fires go, this one was particularly devastating in that it raged for eight hours. Reports say that one-third of the building suffered damage. However, assessments are still ongoing more than a week after the blaze. As for Amazon's ownership of the building, it has still not been confirmed. Amazon has been contacted by both Japanese and American news organisations but has yet to respond.

Fire officials have still not released the exact cause of the blaze pending the outcome of their investigation. However, initial reports suggest that workers cutting steel beams in the third basement level may have ignited urethane insulation materials. One news report out of Tokyo indicated that fire investigators are considering professional negligence among steelworkers as the main cause of the fire.

Amazon in Japan

Speculation of Amazon's ownership of the damaged building is fuelled in part by the success the company has enjoyed in Japan. AWS first entered the Japanese market with a data centre built in 2011. They followed that with a second installation in Singapore. According to The Stack, AWS maintains a concentration of four ‘availability zones’ in the greater Tokyo area, rivalling their operations in northern Virginia.

From a business standpoint, AWS is doing very well in Japan. The number of customers accessing AWS services has increased some 500% over the last five years. Experts attribute the company's success to deals with Sony and a number of big names in the Japanese financial sector.

The fire in Tokyo is truly a tragedy for the dozens of families affected by it. Investigators will hopefully pinpoint the exact cause of the blaze and make recommendations as to how future incidents can be avoided. In the meantime, all eyes are on Amazon to see if they will offer any kind of official response.
