Tuesday, 11 April 2017

Keeping Sensitive Data Hidden

Network troubleshooting, performance monitoring, and security are daily tasks in the data centre. Add data privacy and other regulations in the healthcare, government, education, finance and other sectors and you add another level of complexity to your network monitoring. Network visibility solutions that recognise data patterns can help reduce business risk: they inspect the packet payload, provide insight into specific data patterns, mask data to improve data privacy and support compliance with HIPAA1, PCI2 and internal best practices, and recognise patterns that trigger security alerts.

Pattern matching uses regular expressions to define search patterns. These patterns can then be used to find strings of characters in files, databases and network traffic. One of the earliest uses for pattern matching was text editing. A user could use a regular expression to search and replace a particular string throughout an entire document using a single command.

An example of a regular expression is “\b\d{5}\b”. This expression matches any five-digit US zip code, such as 49017. It can be expanded to match a nine-digit zip code like 49017-3822; the expanded version of the expression is “\b\d{5}-\d{4}\b”.
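To make the example concrete, here is a minimal Python sketch of the two zip code expressions above (the sample text is invented purely for illustration):

```python
import re

# The two zip code patterns described above.
ZIP5 = re.compile(r"\b\d{5}\b")        # five-digit zip, e.g. 49017
ZIP9 = re.compile(r"\b\d{5}-\d{4}\b")  # nine-digit zip, e.g. 49017-3822

text = "Ship to Battle Creek, MI 49017-3822 or to PO Box 81, 49017."

print(ZIP9.findall(text))  # ['49017-3822']
print(ZIP5.findall(text))  # ['49017', '49017']
```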

After a desired string of characters is matched by a regular expression, several types of actions can be taken. Depending on the system, these actions can include:

·        Generate an alert message
·        Highlight the data
·        Mask the data by replacing each of its characters with a different character
·        Remove the data altogether
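As a brief sketch of the masking and removal actions in Python, assuming an illustrative US social security number pattern and 'X' as the mask character (neither is a prescribed choice):

```python
import re

SSN = re.compile(r"\b\d{3}-\d{2}-\d{4}\b")  # illustrative US social security number pattern

def mask(match: re.Match) -> str:
    # Replace each character of the matched data with the mask character.
    return "X" * len(match.group(0))

record = "Patient 12345, SSN 123-45-6789, admitted 2017-04-11."

print(SSN.sub(mask, record))  # masking: the SSN becomes a run of X characters of equal length
print(SSN.sub("", record))    # removal: the matched data is deleted altogether
```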

An example use for masking data is complying with privacy regulations such as HIPAA and PCI. These regulations require companies and organizations to protect private information, such as social security numbers, credit card numbers, and protected health information (PHI).


Pattern Matching Applications:

Today, pattern matching is used in numerous applications like text editing, compiling computer programs, and protecting private data during network monitoring activities.

Protecting private data while monitoring networks represents one of the growing uses for pattern matching. In order to solve a network problem, a troubleshooter must monitor network traffic and examine its packet headers (e.g. the Ethernet header, the IP header). However, the payload portion of a packet may include a person’s personal information that needs to be protected.

Pattern matching can be used to mask personal data in the payload portion of each packet before the packet is examined. This capability helps organizations comply with regulations like HIPAA and PCI.
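As a rough illustration of this idea (the patterns, the mask_payload helper, and the sample payload are all hypothetical and greatly simplified):

```python
import re

# Illustrative patterns only; a production library would be far stricter.
PATTERNS = [
    re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),   # US social security numbers
    re.compile(r"\b(?:\d[ -]?){13,16}\b"),  # credit-card-like digit runs
]

def mask_payload(payload: bytes, mask: str = "X") -> bytes:
    """Mask sensitive patterns in a packet payload before it is examined."""
    text = payload.decode("utf-8", errors="replace")
    for pattern in PATTERNS:
        text = pattern.sub(lambda m: mask * len(m.group(0)), text)
    return text.encode("utf-8")

# The troubleshooter then inspects the masked copy rather than the raw payload.
print(mask_payload(b"POST /pay card=4111 1111 1111 1111 ssn=123-45-6789"))
```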

Another use for pattern matching is filtering. When a match occurs, the action can be to either drop the packet or pass it. This is useful when a virus or other malware is identified in a packet. In some cases, the action may extend to dropping the entire network session.
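A minimal sketch of such a filtering decision, assuming a single hypothetical signature (the harmless, well-known EICAR anti-virus test string is used here as the example pattern):

```python
import re

# The EICAR test string is a harmless pattern used to exercise malware
# detection; a real deployment would use a full signature library.
SIGNATURE = re.compile(rb"EICAR-STANDARD-ANTIVIRUS-TEST-FILE")

def filter_packet(payload: bytes) -> str:
    """Return 'drop' when the payload matches the signature, otherwise 'pass'."""
    return "drop" if SIGNATURE.search(payload) else "pass"

print(filter_packet(b"GET /index.html HTTP/1.1"))                     # pass
print(filter_packet(b"...EICAR-STANDARD-ANTIVIRUS-TEST-FILE!$H+H*"))  # drop
```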


Typical Regular Expressions:

A typical regular expression library could include the ability to search for the following types of data:

·        Credit Card Numbers
·        Phone Numbers
·        Zip Code Numbers
·        Email Addresses
·        Postal Addresses
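As an illustration, a simplified version of such a library might look like the following in Python. These expressions are deliberately loose; production-grade patterns, and postal addresses in particular, are considerably more elaborate and are omitted here:

```python
import re

# A small illustrative library of commonly requested patterns.
PATTERN_LIBRARY = {
    "credit_card":  re.compile(r"\b(?:\d[ -]?){13,16}\b"),
    "phone_number": re.compile(r"\b\d{3}[-.]\d{3}[-.]\d{4}\b"),
    "zip_code":     re.compile(r"\b\d{5}(?:-\d{4})?\b"),
    "email":        re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.-]+\b"),
}

for name, pattern in PATTERN_LIBRARY.items():
    print(name, bool(pattern.search("jane.doe@example.com, 616-555-0100, 49017")))
```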


Typical Pattern Matching Features:

A user should easily be able to perform the following functions with a pattern matching system (the last two are sketched below):

·        Have commonly used regular expressions available in a library.
·        Add regular expressions to the library, for example by copying them from the many expressions published on the Internet.
·        Test whether a regular expression matches a particular string without having to configure a network to send the string through the system.
·        Mask matched data with a user-selectable character.
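A minimal sketch of those last two capabilities, using hypothetical helper names (test_expression and mask_matches) purely for illustration:

```python
import re

def test_expression(expression: str, sample: str) -> bool:
    """Check offline whether an expression matches a sample string."""
    return re.search(expression, sample) is not None

def mask_matches(expression: str, text: str, mask_char: str = "*") -> str:
    """Mask every match with a user-selectable character."""
    return re.sub(expression, lambda m: mask_char * len(m.group(0)), text)

print(test_expression(r"\b\d{5}-\d{4}\b", "49017-3822"))        # True
print(mask_matches(r"\b\d{5}-\d{4}\b", "ZIP 49017-3822", "#"))  # ZIP ##########
```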

APCON delivers a pattern matching feature as part of its network and security visibility solution. It inspects the packet payload for specific data patterns and masks the matched data, improving data privacy and supporting compliance with HIPAA, PCI and internal best practices. For an example of a network pattern matching system, check out APCON’s new pattern matching feature on the HyperEngine packet processor blade, or contact Kevin Copestake, UK & Ireland Sales Manager, at kevin.copestake@apcon.com / +44 (0) 7834 868628 for more information.


Compliance Regulations
1Health Insurance Portability and Accountability Act (HIPAA)
2Payment Card Industry Data Security Standard (PCI)

Guest blog by APCON.  For a link to the original blog plus related diagrams, please visit https://www.apcon.com/blog-entry/keeping-sensitive-data-hidden

Wednesday, 5 April 2017

Edge Data Centres have arrived but how resilient are they?

The massive migration of critical applications from traditional data centres to the cloud has garnered much attention from analysts, industry observers, and data centre stakeholders.  However, as the great cloud migration transforms the data centre industry, a smaller, less noticed revolution has been taking place around the non-cloud applications that have been left behind. These “edge” applications have remained on-premise and, because of the nature of the cloud, the criticality of these applications has increased significantly.

Let me explain:  The centralized cloud was conceived for applications where timing wasn’t absolutely crucial.  As critical applications shifted to the cloud, it became apparent that latency, bandwidth limitations, security, and other regulatory requirements were placing limits on what could be moved to the cloud.  It was deemed, on a case-by-case basis, that certain existing applications (e.g. factory floor processing), and indeed some new emerging applications (like self-driving cars, smart traffic lights, and other high-bandwidth “Internet of Things” apps), were better suited to remain at the edge.

Considering the nature of these rapid changes, it is easy for some data centre planners to misinterpret the cloud trend and equate the decreased footprint and capacity of the on-premise data centre with a lower criticality.  In fact, the opposite is true.  Because of the need for a greater level of control, adherence to regulatory requirements, low latency, and connectivity, these new edge data centres need to be designed with criticality and high availability in mind.

The issue is that many downsized on-premise data centres are not properly designed to assume their new role as critical data outposts.  Most are organized as one or two servers housed within a wiring closet.  As such, these sites, as currently configured, are prone to system downtime and physical security risks, and therefore, require some rethinking.

Systems redundancy is also an issue.  With most applications living in the cloud, employees cannot be productive when access to the cloud is down.  Edge systems that are kept up and running during these downtime scenarios help to bolster business continuity.


Steps that enhance edge resiliency:

In order to enhance critical edge application availability, several best practices are recommended:

Enhanced security – When you enter some of these server rooms and closets, you typically see unsecured entry doors and open racks (no doors). To enhance security, equipment should be moved to a locked room or placed within a locked enclosure.  Biometric access control should be considered.

For harsh environments, equipment should be secured in an enclosure that protects against dust, water, humidity, and vandalism.  Deploy video surveillance and 24 x 7 environmental monitoring.

Dedicated cooling – Traditional small rooms and closets often rely on the building’s comfort cooling system. This may no longer be enough to keep systems up and running.  Reassess cooling to determine whether proper cooling and humidification requires a passive airflow, active airflow, or a dedicated cooling approach.

DCIM management – These rooms are often left alone with no dedicated staff or software to manage the assets and to ensure downtime is avoided. Take inventory of the existing management methods and systems.  Consolidate to a centralized monitoring platform for all assets across these remote sites.  Deploy remote monitoring when human resources are constrained.

Rack management – Cable management within racks in these remote locations is often an afterthought, causing cable clutter, obstructed airflow within the racks, and increased human error during adds/moves/changes. Modern racks equipped with easy cable management options can lower unanticipated downtime risks.

Redundancy – Power systems (UPS, distribution) are often 1N in traditional environments, which decreases availability and eliminates the ability to keep systems up and running while maintenance is performed. Consider redundant power paths for concurrent maintainability in critical sites.  Ensure critical circuits are on an emergency generator.  Consider adding a second network provider for critical sites.  Organize network cables with cable management devices (raceways, routing systems, and ties).  Label and color-code network lines to avoid human error.

A systematic approach to evaluating small remote data centres is necessary to ensure the greatest return on edge investments.  To learn more, download Schneider Electric White Paper 256, “Why Cloud Computing is Requiring us to Rethink Resiliency at the Edge”.  This paper reviews a simple method for organizing a scorecard that allows executives and managers to evaluate the resiliency of their edge environments.

Guest blog by Wendy Torell, Senior Research Analyst at Schneider Electric’s Data Center Science Centre