09
Aug 17

Continuity Central – To BIA or not to BIA is not the question…

Continuity Central recently conducted a survey to seek the views of business continuity professionals on whether it is feasible to omit the business impact analysis (BIA) from the BC process. Mel Gosling, FBCI, explains why he believes this is the wrong question to ask…

The Big Picture

It’s always useful to step back and see the big picture, and with the question of ‘To BIA or not to BIA?’ this bigger picture is that the BIA is an integral part of the business continuity management (BCM) process specified in ISO 22301 and promoted by business continuity professional associations such as the BCI in its Good Practice Guidelines. Rather than looking closely at the detailed question, we should look at the bigger picture and ask ourselves whether we should use this specific BCM process at all.

More of the Continuity Central article


08
Aug 17

IT Business Edge – Second Quarter Reported DDoS Attacks Lasting Days, Not Minutes

What is your DDoS strategy?

What would you do if your company was hit with a DDoS attack that lasted 11 days? Perhaps a large organization could withstand that kind of outage, but it could be devastating to an SMB, especially one that relies on web traffic for business transactions.

That 11-day attack – 277 hours, to be more exact – did happen in the second quarter of 2017. Kaspersky Lab said it was the longest attack of the year, and 131 percent longer than the longest attack in the first quarter. And unfortunately, the company’s latest DDoS intelligence report said we should expect to see these long attacks more frequently, as they are coming back into fashion. This is not the news businesses want to hear.

More of the IT Business Edge post from Sue Marquette Poremba


04
Aug 17

CIO Insight – A Practical Alternative to Two-Speed IT (Part 2)

In part one of this series, we explored a pair of competing requests many modern IT leaders receive from their stakeholders: produce new, innovative, strategic technology-based capabilities, and do so with reduced resources.

We investigated one “buzzwordy” solution—two-speed IT—and how implementing this solution often creates more problems than it solves. We proposed an alternate five-step framework for handling these requests. In steps one and two of this framework, we revealed how the above two competing requests are old problems, best solved with an old, proven solution—and not buzzwords.

In part two of this series, we will walk you through the remaining steps in our practical framework and lead you down a path toward implementing this proven solution: the technology lifecycle.

Step 3: Think technology lifecycle, not “innovation” vs. “operations.”

To better understand why the good-on-paper “two-speed IT” approach often produces problems when implemented in the real world, look at the two speeds (or modes) into which Gartner shoehorns all technology systems and services:

Mode 1: Development projects related to core system maintenance, stability or efficiency. These require highly specialized programmers and traditional, slow-moving development cycles. There is little need for business involvement.

Mode 2: Development projects that help innovate or differentiate the business. These require a high degree of business involvement, fast turnaround and frequent updates. Mode 2 requires a rapid path (or IT fast lane) to transform business ideas into applications.

More of the CIO Insight post from Lee Reese


03
Aug 17

CIO Insight – Two-Speed IT: Juggling Competing Agendas (Part 1)

How can IT leaders juggle seemingly competing agendas: to meet the business’ demands for increased innovation, while cutting costs and slashing budgets?

With the ever-increasing interest in technology solutions, IT’s stakeholders are making two competing demands:
1. Produce new innovative, strategic technology-based capabilities.
2. Do so with reduced resources.

How can IT leaders step up to the plate and juggle these seemingly competing agendas: to meet the business’ demands for increased innovation, including new digital systems and services, all while cutting costs and slashing budgets?

One popular solution has emerged within IT thought leadership. Often called “two-speed IT,” this approach proposes that the IT organization not attempt to resolve the tension between these two demands. Instead, IT lumps all of its technology into one of two broad buckets: operational technology and innovative technology. Do this, and operations won’t slow down innovation, and expensive innovation investments won’t inflate operations’ budgets.

More of the CIO Insight post from Lee Reese


02
Aug 17

IT World – 7 things your IT disaster recovery plan should cover

Enterprise networks and data access can be knocked out without warning, thanks to natural and man-made disasters. You can’t stop them all from happening, of course, but with a good disaster recovery plan you can be better prepared for the unexpected.

Hurricanes. Tornadoes. Earthquakes. Fires. Floods. Terrorist attacks. Cyberattacks. You know any of these could happen to your business at any time. And you’ve probably got a disaster recovery (DR) plan in place to protect your enterprise’s data, employees and business.

But how thorough is your DR plan? When was it last updated and tested? Have you taken into account new technologies and services that can make it easier to recover from disaster? The following are 7 things your IT disaster recovery plan should include.

1. An analysis of all potential threats and possible reactions to them

Your DR plan should take into account the complete spectrum of “potential interrupters” to your business, advises Phil Goodwin, research director of data protection, availability and recovery for research firm IDC. (IDC is part of IDG, which publishes CSO.)
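As a rough, hypothetical illustration (mine, not from the article), a simple likelihood-impact matrix is one way to rank those “potential interrupters” so the DR plan addresses the biggest risks first. The threat names and scores below are made up for the sketch.

```python
# Hypothetical likelihood/impact scoring to rank "potential interrupters"
# for a DR plan. Threats, scores, and the 1-5 scales are illustrative only.
threats = {
    # name: (likelihood 1-5, impact 1-5)
    "regional power outage":     (3, 4),
    "ransomware attack":         (4, 5),
    "hurricane/flood":           (2, 5),
    "key SaaS provider outage":  (3, 3),
}

def risk_score(likelihood, impact):
    """Simple multiplicative risk score; higher means plan for it first."""
    return likelihood * impact

ranked = sorted(threats.items(), key=lambda kv: risk_score(*kv[1]), reverse=True)
for name, (lik, imp) in ranked:
    print(f"{name}: likelihood={lik}, impact={imp}, risk={risk_score(lik, imp)}")
```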

More of the IT World post from James A Martin


01
Aug 17

HBR – Research: Being in a Group Makes Us Less Likely to Fact-Check

Since the 2016 U.S. Presidential election, concerns over the circulation of “fake” news and other unverified digital content have intensified. As people have grown to rely on social media as a news source, there has been considerable debate about its role in aiding the spread of misinformation. Much recent attention has centered around putting fact-checking filters in place, as false claims often persist in the public consciousness even after they are corrected.

We set out to test how the context in which we process information affects our willingness to verify ambiguous claims. Results across eight experiments reveal that people fact-check less often when they evaluate statements in a collective setting (e.g., in a group or on social media) than when they do so alone. Simply perceiving that others are present appeared to reduce participants’ vigilance when processing information, resulting in lower levels of fact-checking.

Our experiments surveyed over 2,200 U.S. adults via Amazon Mechanical Turk. The general paradigm went as follows: As part of a study about “modes of communication on the internet,” respondents logged onto a simulated website and evaluated a series of statements.

More of the Harvard Business Review article from Rachel Meng, Youjung Jun, and Gita V. Johar


28
Jul 17

The Register – Healthcare dev fined $155 MEEELLION for lying about compliance

A health records software company will have to pay $155m to the US government to settle accusations it was lying about the data protection its products offered.

The Department of Justice said that eClinicalWorks (eCW), a Massachusetts-based software company specializing in electronic health records (EHR) management, lied to government regulators when applying to be certified for use by the US Department of Health and Human Services (HHS).

According to the DoJ, eCW and its executives lied to the HHS about the data protections its products use. At one point, it is alleged that the company configured the software specially to beat testing tools and trick the HHS into believing the products were far more robust and secure than they actually were.

More of The Register article from Shaun Nichols


27
Jul 17

SearchDataCenter – Distributed data centers boost resiliency, but IT hurdles remain

Distributed data center architectures increase IT resiliency compared to traditional single-site models, with networking, data integrity and other factors all playing critical roles.

Architectures that span distributed data centers can reduce the risk of outages, but enterprises still must take necessary steps to ensure IT resiliency.

Major data center outages continue to affect organizations and users worldwide, most recently and prominently at Verizon, Amazon Web Services, Delta and United Airlines. Whether it’s an airline or cloud provider that suffers a technical breakdown, its bottom line and reputation can suffer.
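One small example of the kind of step the article alludes to (my sketch, not from the piece): a client or load balancer probing each site’s health endpoint and sending traffic to the first responsive one. The site URLs and the /health path below are placeholders.

```python
# Sketch: pick the first healthy site from a list of distributed data centers.
# The endpoint URLs and the /health path are placeholders for illustration.
import urllib.request

SITES = [
    "https://dc-east.example.com",
    "https://dc-west.example.com",
    "https://dc-eu.example.com",
]

def first_healthy(sites, timeout=2.0):
    """Return the first site whose /health endpoint answers with HTTP 200."""
    for base in sites:
        try:
            with urllib.request.urlopen(f"{base}/health", timeout=timeout) as resp:
                if resp.status == 200:
                    return base
        except OSError:
            continue  # site unreachable, slow, or returning an error; try the next one
    return None

active = first_healthy(SITES)
print(f"Routing traffic to: {active or 'no healthy site found'}")
```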

More of the SearchDataCenter article from Tim Culverhouse


26
Jul 17

ZDNet – Overspending in the cloud: lessons learned

One of the reasons virtualization (the precursor to cloud computing) gained popularity in the early 2000s is that companies had too many servers running at low utilization. The prevailing wisdom was that every box needed a backup and under-utilization was better than maxing out compute capacity and risking overload.

The vast amounts of energy and money wasted on maintaining all this hardware finally led businesses to datacenter consolidation via virtual machines, and those virtual machines began migrating off-premises to various clouds.

The problem is, old habits die hard. And the same kinds of server sprawl that plagued physical datacenters 15 years ago are now appearing in cloud deployments, too.
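One practical way to spot that kind of cloud sprawl (my sketch, not the article’s) is to flag instances whose average CPU utilization stays low over a few weeks. The region, lookback window, and 5 percent threshold below are assumptions, and the script needs boto3 plus AWS credentials.

```python
# Sketch: flag running EC2 instances with low average CPU over the past two weeks.
# Region, threshold, and lookback are arbitrary assumptions for illustration.
import datetime
import boto3

REGION = "us-east-1"          # assumption
CPU_THRESHOLD = 5.0           # percent; assumption
LOOKBACK = datetime.timedelta(days=14)

ec2 = boto3.client("ec2", region_name=REGION)
cw = boto3.client("cloudwatch", region_name=REGION)
end = datetime.datetime.utcnow()
start = end - LOOKBACK

reservations = ec2.describe_instances(
    Filters=[{"Name": "instance-state-name", "Values": ["running"]}]
)["Reservations"]

for res in reservations:
    for inst in res["Instances"]:
        datapoints = cw.get_metric_statistics(
            Namespace="AWS/EC2",
            MetricName="CPUUtilization",
            Dimensions=[{"Name": "InstanceId", "Value": inst["InstanceId"]}],
            StartTime=start,
            EndTime=end,
            Period=3600,
            Statistics=["Average"],
        )["Datapoints"]
        if datapoints:
            avg = sum(d["Average"] for d in datapoints) / len(datapoints)
            if avg < CPU_THRESHOLD:
                print(f"{inst['InstanceId']}: avg CPU {avg:.1f}% -- candidate for rightsizing")
```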

More of the ZDNet article from Michael Steinhart


25
Jul 17

IT Business Edge – AMD and Intel Declare War on the Data Center: Why This Is a Good Thing

This month, anything that doesn’t have me looking up to see if North Korea has lobbed a missile at the West Coast is a positive event. But this week, Intel responded to AMD’s Epyc launch with an epic launch of its own: the Purley version of its Xeon processor architecture. It clearly has come to play hardball. Years ago, because things tended to be more generic, the processor played a far bigger role in servers and workstations. Today, a server can rely more heavily on the GPU than the CPU, more often bottlenecks on memory, storage, or internal transport than on the processor, and just as often must be purpose-built for the task it is being positioned for.

More of the IT Business Edge post from Rob Enderle