10
May 16

IT Business Edge – Setting the Right Tone for Risk Management

Without one person in an organization responsible for managing third-party risk, companies face a serious barrier to achieving effective third-party risk management, according to a new study. The study, “Tone at the Top and Third-Party Risk,” was conducted by the Ponemon Institute and sponsored by Shared Assessments, a member-driven, industry-standard body specializing in third-party risk assurance. “Tone at the Top” describes an organization’s environment, as established by its board of directors, audit committee and senior management. It is set by all levels of management and trickles down to all employees. “If management is committed to a culture and environment that embraces honesty, integrity and ethics, employees are more likely to uphold those same values,” according to the report.

More of the IT Business Edge post by Karen Frenkel


09
May 16

Continuity Central – Expanded NIST disaster and failure data repository aims to improve resilience

NIST has announced that data from the February 27, 2010, Chile earthquake has now been added to the NIST Disaster and Failure Studies Data Repository, providing a great deal of useful information for regional and global resilience planning.

The repository was established in 2011 to provide a place where data collected during and after a major disaster or structural failure, as well as data generated from related research, could be organized and maintained to facilitate study, analysis and comparison with future events. Eventually, NIST hopes that the repository will serve as a national archival database where other organizations can store the research, findings and outcomes of their disaster and failure studies.

Initially, the NIST Disaster and Failure Studies Data Repository was established to house data from the agency’s six-year investigation of the collapses of three buildings at New York City’s World Trade Center (WTC 1, 2 and 7) as a result of the terrorist attacks on Sept. 11, 2001. With the addition of the 2010 Chile earthquake dataset, NIST is broadening the scope of the repository to begin making it a larger collection of information on hazard events such as earthquakes, hurricanes, tornadoes, windstorms, community-scale fires in the wildland urban interface, storm surges and man-made disasters (accidental, criminal or terrorist).

More of the Continuity Central article


03
May 16

Continuity Central – The top mistakes that businesses make in a disaster

When the unexpected happens to a business, delayed action – or the wrong action – can cause as much harm as the initial incident itself. That’s the message of John Bresland, former board member and chairman of the US Chemical Safety and Hazard Investigation Board, who will be a keynote presenter at the 2016 World Conference on Disaster Management, to be held June 7th-8th at The International Centre in Toronto.

“The last thing you want to do is be taken by surprise,” says Bresland, who now consults for large organizations on chemical process safety. “There are practical steps every business should take to effectively learn, communicate and plan for future disasters to which the organization may be vulnerable.”

Bresland cites the following as the five top mistakes businesses make when preparing to respond to, mitigate and move forward from disaster:

Failing to define worst-case scenarios

What might be considered a relatively small incident can quickly become a very expensive one if a company fails to look beyond the immediate safety issues and consider business impacts. For example, even a small event like a fire can lead to significant loss of production and profits long after the fire is extinguished. “Ask yourself what’s the worst possible scenario and prepare for that,” advises Bresland.

More of the Continuity Central article


20
Apr 16

TechTarget – Google cloud outage highlights more than just networking failure

Google Cloud Platform went dark this week in one of the most widespread outages to ever hit a major public cloud, but the lack of outcry illustrates one of the constant knocks on the platform.

Users in all regions lost connection to Google Compute Engine for 18 minutes shortly after 7 p.m. PT on Monday, April 11. The Google cloud outage was tied to a networking failure and was a black eye for a vendor trying to shed the perception that it can’t compete for enterprise customers.

Networking appears to be the Achilles’ heel for Google, as problems with that layer have been a common theme in most of its cloud outages, said Lydia Leong, vice president and distinguished analyst at Gartner. What’s different this time is that it didn’t just affect one availability zone, but all regions.

“What’s important is customers expect multiple availability zones as reasonable protection from failure,” Leong said.
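
That expectation is worth making concrete. Below is a minimal Python sketch of client-side failover across regional endpoints; the URLs are hypothetical, and a real deployment would lean on the provider’s own load balancing rather than hand-rolled probing. The pattern only helps when regions fail independently, which is exactly what broke down in this outage.

```python
import urllib.request
import urllib.error
from typing import Optional

# Hypothetical regional endpoints -- a real deployment would use the
# provider's actual regional URLs or a load balancer in front of them.
ENDPOINTS = [
    "https://us-east.example-app.com/health",
    "https://us-west.example-app.com/health",
    "https://eu-west.example-app.com/health",
]

def first_healthy_endpoint(timeout_s: float = 2.0) -> Optional[str]:
    """Return the first endpoint whose health check answers 200, else None."""
    for url in ENDPOINTS:
        try:
            with urllib.request.urlopen(url, timeout=timeout_s) as resp:
                if resp.status == 200:
                    return url
        except (urllib.error.URLError, OSError):
            continue  # this region is unreachable; try the next one
    return None  # every region failed, which is the scenario described above

if __name__ == "__main__":
    healthy = first_healthy_endpoint()
    print(healthy or "all regions down: queue work or fail over to another provider")
```

The design caveat is the point of the story: when a single networking layer fronts every region, the real fallback has to live outside the provider entirely, whether that is queued work, cached reads or a second provider.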

More of the TechTarget post from Trevor Jones


19
Apr 16

Continuity Central – Dealing with the risk of DDoS ransom attacks

We are all familiar with the disruptive consequences of a distributed denial of service (DDoS) attack when a website is forced offline because it has been swamped with massive levels of traffic from multiple sources. The cost in terms of lost business to companies while their website is offline can be significant.

Cyber criminals are now taking the process a step further by tying ransom demands to their DDoS attacks, threatening to keep company websites offline until they pay up. In effect, DDoS attacks are coming with an invoice attached.

What are DDoS ransom attacks?

Given the stakes, it makes sense for organizations to learn as much as they can about DDoS ransom demands: what they look like, how businesses can work out whether their site is at genuine risk, and how they can protect their online presence.

DDoS ransom attacks, usually carried out by criminal groups, start with a test attack on a website or service. The preferred method is to send increasing levels of traffic to the site to ascertain whether it is vulnerable. Sometimes the site can be knocked out with a small attack (1-2 Gbps of traffic); other times it takes a much larger onslaught (10-100 Gbps), depending on the robustness of the security technology the service provider hosting the site has in place.
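
As a rough illustration of how a site operator might spot that probing phase, here is a minimal Python sketch that counts requests in a sliding window and flags an abnormal ramp. The threshold is purely illustrative; in practice it would be derived from the site’s normal traffic baseline.

```python
import time
from collections import deque
from typing import Optional

# Illustrative values -- real thresholds come from your normal traffic baseline.
WINDOW_SECONDS = 60
ALERT_THRESHOLD = 50_000  # requests per window that suggest a volumetric probe

class RateMonitor:
    """Track request counts in a sliding time window and flag spikes."""

    def __init__(self, window_s: int = WINDOW_SECONDS) -> None:
        self.window_s = window_s
        self.timestamps: deque = deque()

    def record_request(self, now: Optional[float] = None) -> bool:
        """Record one request; return True if the window exceeds the alert threshold."""
        now = time.monotonic() if now is None else now
        self.timestamps.append(now)
        # Evict requests that have aged out of the window.
        while self.timestamps and now - self.timestamps[0] > self.window_s:
            self.timestamps.popleft()
        return len(self.timestamps) > ALERT_THRESHOLD
```

Detection is only half the battle: absorbing a 10-100 Gbps flood still requires upstream capacity, which is why the robustness of the hosting provider’s security technology matters more than anything the origin server can do alone.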

More of the Continuity Central post from Jake Madders


27
Jan 16

Continuity Central – Six tips for successful IT continuity

Andrew Stuart offers some IT-focused, experience-based business continuity tips:

1. Understand the threat landscape

Storms, ransomware and fires are only some of the many real threats for which all businesses should proactively prepare. Your IT department needs a full understanding of all of the threats likely to hit your building, communications room or servers in order to prepare for the worst. This can be done by assessing risks based on the location and accessibility of your data centers, as well as the malicious attacks that could occur. When planning to mitigate a disaster, treat every incident as unique: a local fire may affect one machine, whereas human error may lead to the deletion of entire servers.

2. Set goals for recovery

While some companies assume that duplicating their data protects them in the wake of a disaster, many learn the hard way that their backup stopped functioning during a disaster or that their data is inaccessible afterwards. The IT team needs to define criteria for recovery time objectives (RTO), meaning how long the business can continue to run without access to its data, and recovery point objectives (RPO), meaning the maximum acceptable age of the data restored from backup, which determines how much recent work could be lost. The IT team will also need to identify critical systems and prioritise recovery tasks.
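
A minimal sketch of what those criteria look like in code, assuming backup timestamps are available; the four-hour RPO and two-hour RTO below are purely illustrative, since real values come out of a business impact analysis:

```python
from datetime import datetime, timedelta, timezone

# Illustrative objectives -- real values come from the business impact analysis.
RPO = timedelta(hours=4)  # maximum tolerable age of the newest usable backup
RTO = timedelta(hours=2)  # maximum tolerable time to restore service

def rpo_met(last_backup_at: datetime) -> bool:
    """True if the newest backup is recent enough to satisfy the RPO."""
    return datetime.now(timezone.utc) - last_backup_at <= RPO

def rto_met(restore_started: datetime, restore_finished: datetime) -> bool:
    """True if a timed restore drill finished within the RTO."""
    return restore_finished - restore_started <= RTO

# Example: a backup taken six hours ago fails a four-hour RPO, meaning
# two hours of data beyond the objective would be lost in a disaster.
six_hours_ago = datetime.now(timezone.utc) - timedelta(hours=6)
print(rpo_met(six_hours_ago))  # False
```

Checks like these only prove anything when run against real restore drills, which is precisely the “backup stopped functioning” failure mode described above.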

More of the Continuity Central article from Andrew Stuart


14
Jan 16

Continuity Central – More than half of organizations have had data-related incidents in the past 12 months: AIIM report

51 percent of organizations have had data-related incidents in the past 12 months, including 16 percent suffering a data breach, according to new AIIM research.

The new report, ‘Information Governance – too important for humans’, revealed that 45 percent of respondents feel a lack of information governance leaves their organization wide open to litigation and data protection risks. Furthermore, 41 percent of respondents admit that their email management is ‘chaotic’ and 22 percent are reporting a negative financial impact from cases around electronic records.

“The sheer volume of data in business is a major asset for most organizations,” said Doug Miles, chief analyst, AIIM. “But without effective information governance, that data also carries a potentially huge risk, both in terms of reputation and the bottom line. Lots of organizations are talking about information governance, but far less are actually doing it properly – that has to change in 2016.”

The severity and frequency of the data incidents reported in the research mean that interest in information governance has never been higher. For 28 percent of organizations, information governance is very high on the senior management agenda, and more than half (53 percent) have recently launched new information governance initiatives.

More of the Continuity Central post


05
May 14

Net-Security.org – 44% of companies don’t have a cloud app policy in place

Netskope has announced the results of a survey on information security professionals’ use of cloud apps, based on interviews with 120 RSA Conference attendees.

Netskope found that despite widespread adoption of cloud apps in the enterprise, most IT security professionals are either unaware of their company’s cloud app policy or don’t have one. In the absence of cloud app policies, more than two-thirds of attendees surveyed said they would consider their company’s privacy policy before downloading an app.

As cloud apps proliferate in the enterprise, the security and privacy risks associated with their use at work are on the rise. According to the recent Netskope Cloud Report, the typical enterprise is using 397 apps, as many as 10 times the number that IT typically has within its purview. Furthermore, 77 percent of cloud apps are not enterprise-ready, leaving IT with the challenge of securing these apps and putting policies in place to guide their use.
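
One crude way to quantify that visibility gap is to compare the apps discovered in network logs against the sanctioned list. The sketch below uses hypothetical app names and hard-coded sets; in practice the inventories would come from proxy or firewall log analysis, or from a cloud access security broker.

```python
# Hypothetical inventories -- in practice these come from proxy/firewall log
# analysis or a cloud access security broker, not hard-coded sets.
SANCTIONED_APPS = {"office365", "salesforce", "box"}
discovered_apps = {"office365", "salesforce", "box", "pastebin",
                   "filedropper", "convert-online", "freepdftool"}

# Apps in use that IT has never vetted, and the share IT actually sees.
unsanctioned = sorted(discovered_apps - SANCTIONED_APPS)
visibility = len(SANCTIONED_APPS & discovered_apps) / len(discovered_apps)

print(f"Unsanctioned apps in use: {', '.join(unsanctioned)}")
print(f"Share of discovered apps under IT's purview: {visibility:.0%}")
```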

More of the Net-Security.org post