19
Apr 16

Continuity Central – Dealing with the risk of DDoS ransom attacks

We are all familiar with the disruptive consequences of a distributed denial of service (DDoS) attack when a website is forced offline because it has been swamped with massive levels of traffic from multiple sources. The cost in terms of lost business to companies while their website is offline can be significant.

Cyber criminals are now taking the process a step further by tying ransom demands to their DDoS attacks, threatening to keep company websites permanently offline until they pay up. In effect, DDoS attacks are coming with an invoice attached.

What are DDoS ransom attacks?

Given the stakes, it makes sense for organizations to try and learn as much as they can about DDoS ransom demands: what do they look like, how can businesses work out if their site is at genuine risk and how can they protect their online presence?

Ransom DDoS attacks, usually carried out by criminal groups, start with a test attack on a website or service. The preferred method is to send increasing levels of traffic to the site to ascertain whether it is vulnerable. Sometimes the site can be knocked out with a small attack (1-2 Gbps of traffic), or it may require a much larger onslaught (10-100 Gbps), depending on the robustness of the security technology the service provider hosting the site has in place.
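
As a rough illustration of how a defender might notice that kind of probing, here is a minimal monitoring sketch; the window size, baseline, alert threshold and the way bandwidth samples arrive are assumptions for illustration, not anything from the article:

```python
# Minimal sketch: flag a sustained ramp-up in inbound traffic that could be
# a DDoS test attack. The window size, baseline and alert factor are
# illustrative assumptions, as is how the bandwidth samples are collected.
from collections import deque

WINDOW = 5           # number of recent samples to examine (e.g. one per minute)
BASELINE_GBPS = 0.2  # assumed normal inbound bandwidth for the site
ALERT_FACTOR = 5     # alert when every recent sample is 5x the baseline

recent = deque(maxlen=WINDOW)

def record_sample(inbound_gbps: float) -> bool:
    """Store a bandwidth sample; return True if the ramp looks like a probe."""
    recent.append(inbound_gbps)
    return len(recent) == WINDOW and all(
        s > BASELINE_GBPS * ALERT_FACTOR for s in recent
    )

# A ramp from normal traffic toward the 1-2 Gbps range trips the alert.
for sample in [0.2, 0.3, 0.9, 1.2, 1.5, 1.7, 1.9, 2.1]:
    if record_sample(sample):
        print(f"Possible DDoS probe: sustained high inbound traffic ({sample} Gbps)")
```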

More of the Continuity Central post from Jake Madders


14
Apr 16

HBR – Which industries are the most digital and why?

When business leaders talk about going digital, many are uncertain about what that means beyond buying the latest IT system. Companies do need assets like computers, servers, networks, and software, but those purchases are just the start. Digital leaders stand out from their competitors in two ways: how they put digital to work, especially in engaging with clients and suppliers, and how intensively their employees use digital tools in every aspect of their daily activities.

Recent research from the McKinsey Global Institute (MGI) looked at the state of digitization in sectors across the U.S. economy and found a large and growing gap between sectors, and between companies within those sectors. The most digital companies see outsized growth in productivity and profit margins. But what are the key attributes of a digital leader? And how can companies benchmark themselves against competitors? We looked at 27 indicators that fall into three broad categories: digital assets, digital usage, and digital workers. Our research shows that the latter two categories make the crucial difference.

More of the Harvard Business Review article from Prashant Gandhi, Somesh Khanna, and Sree Ramaswamy


12
Apr 16

CIO Insight – How CIOs Can Lead a Digital Transformation

The majority of CIOs now see themselves as their organization’s primary driver of business and IT transformation, according to a recent survey from Cognizant. The report, titled “Being Digital: How and Why CIOs Are Reinventing Themselves for a New Age,” indicates that CIOs must make key contributions to strategic planning, while emerging as a “chief inclusion officer” who fosters an open culture of innovation. CIOs also need to take a hands-on role in leading digital programs and initiatives, while articulating clearly how these efforts contribute to company goals. And to ensure successful transformations, CIOs have to build a strong relationship of trust with their CEO. “Being digital will require a strong push from the CIO; it won’t happen on its own,” according to the report.

More of the CIOInsight post from Dennis McCafferty


08
Apr 16

Baseline – Many IT Pros Ignore Corporate Security Policies

One of the inescapable realities of enterprise cyber-security is that a huge gulf exists between what companies should do to protect their IT systems and data and what actually takes place. A recent research report released by Absolute Software, “IT Confidential: The State of Security Confidence,” illustrates the extent of the problem. The endpoint security and data risk management firm polled more than 500 U.S. employees who work in an IT or information security role and asked them about their security practices. The study found that, among other things, a shockingly high percentage of IT professionals admitted that they did not follow the same security protocols that they enforce on other employees. Many said they also intentionally circumvent key security policies. Consequently, many organizations—while placing a premium on security—expose themselves to significant risks.

More of the Baseline article from Samuel Greengard


07
Apr 16

HBR – When Was the Last Time You Asked, “Why Are We Doing It This Way?”

During a time when many retailers are struggling, business is booming at Target. But it wasn’t too long ago that the discount retailer’s future didn’t glow so bright. When CEO Brian Cornell took the reins two years ago, he inherited a company that had been struggling for years, taking far too few risks, and sticking too close to the core.

Since then the world has fallen in love with a far edgier Target, which has expanded its offerings through collaborations with such power brands as Lilly Pulitzer, Toms, Neiman Marcus, and SoulCycle, and updated product lines that break the status quo, like its latest gender-neutral kids home brand Pillowfort. But Cornell didn’t start right out of the gate making any big changes like these. Instead, he took time to carefully contemplate his approach, listen to his team, and ask questions.

At the MIT Leadership Center, I recently spoke with another leader, Guy Wollaert, chief exploration officer at Loggia Strategy & Design, about similar experiences he encountered at another highly visible brand, Coca-Cola. During his 20-plus year tenure with the global beverage brand, most recently serving as its chief technical and innovation officer, Wollaert made it a point to seek — and surround himself with — new ideas and people who challenged him to reflect and question first, then act later.

More of the Harvard Business Review post from Hal Gregersen


06
Apr 16

The Register – Successful DevOps? You’ll need some new numbers for that

Dark launches, feature flags and canary launches: They sound like something from science fiction or some new computer game franchise bearing the name of Tom Clancy.

What they are is the face of DevOps – processes that enable projects to run successfully.

And their presence is set to be felt by a good many, as numerous industry surveys attest.

With DevOps on the rise, then, the question becomes one of not just how to implement DevOps but also how to measure the success of that implementation.

Before I get to the measurement, what about how to roll out DevOps? That brings us back to that Tom Clancy trio.

Let’s start with dark launches. This is a technique to which a new generation of enterprises has turned, and it is relatively commonplace among startups and giants like Facebook alike.

It’s the practice of releasing new features to a particular section of users to test how the software will behave in production conditions. Key to this process is that the software is released without any UI features.

Canary releases (really another name for dark launches) and feature flags (or feature toggles) work by building conditional “switches” into the code using Boolean logic, so different users see different code with different features. The principle is the same as with dark launches: companies can get an idea of how the implementation is handled without running full production.
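
To make the “switch” idea concrete, here is a minimal feature-flag sketch; the flag names, rollout percentage and hashing scheme are illustrative assumptions rather than anything prescribed in the article:

```python
# Minimal feature-flag / canary sketch: a Boolean switch decides per user
# whether the new code path runs, so only a slice of traffic exercises the
# new feature in production. Flag names and rollout percentages are invented.
import hashlib

FLAGS = {
    "new_checkout": {"enabled": True, "rollout_percent": 5},   # canary: 5% of users
    "dark_search": {"enabled": False, "rollout_percent": 0},   # dark-launched, off for now
}

def is_enabled(flag: str, user_id: str) -> bool:
    """Return True if this user falls inside the flag's rollout bucket."""
    cfg = FLAGS.get(flag)
    if not cfg or not cfg["enabled"]:
        return False
    # Hash the user id so each user lands in a stable bucket from 0 to 99.
    bucket = int(hashlib.sha256(user_id.encode()).hexdigest(), 16) % 100
    return bucket < cfg["rollout_percent"]

def checkout(user_id: str) -> str:
    if is_enabled("new_checkout", user_id):
        return "new checkout flow"   # only the canary slice sees this path
    return "existing checkout flow"  # everyone else keeps the old behaviour

print(checkout("user-1234"))
```

Because the switch is evaluated per request, a misbehaving feature can be turned off instantly by flipping the flag, without a redeploy.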

More of The Register article from Maxwell Cooter


05
Apr 16

IT Business Edge – Diverse Infrastructure Requires Diverse Efficiency Metrics

Achieving data center efficiency is challenging not only on a technology level but also as a matter of perspective. With no clear definition of “efficient” to begin with, matters are only made worse by the lack of consensus as to how to even measure efficiency and place it into some kind of quantifiable construct.

At best, we can say that one technology or architecture is more efficient than another and that placing efficiency as a high priority within emerging infrastructural and architectural solutions at least puts the data industry on the path toward more responsible energy consumption.

The much-vaunted PUE (Power Usage Effectiveness) metric is an unfortunate casualty of this process. The Green Grid most certainly overreached when it designated PUE as the defining characteristic of an efficient data center, but this was understandable given that it is a simple ratio between total energy consumed and the portion devoted to data resources rather than ancillary functions like cooling and lighting. And when implemented correctly, it does in fact provide a good measure of energy efficiency. The problem is that it is easy to game and does not take into account the productivity of the data that low-PUE facilities provide nor the need for some facilities to shift loads between resources and implement other practices that could drive up their ratings.
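
The ratio itself is simple to state; here is a quick sketch, with invented energy figures purely for illustration:

```python
# PUE = total facility energy / energy consumed by the IT equipment itself.
# A value of 1.0 would mean every watt reaches compute; anything above that
# is overhead such as cooling and lighting. The kWh figures are invented.
def pue(total_facility_kwh: float, it_equipment_kwh: float) -> float:
    return total_facility_kwh / it_equipment_kwh

print(pue(1_500_000, 1_000_000))  # 1.5 - a third of the energy is overhead
```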

More of the IT Business Edge article from Arthur Cole


04
Apr 16

Baseline – How Shadow IT Can Generate Huge Savings

The majority of organizations are allowing—and some are even encouraging—employees to create mobile business apps without any involvement from the IT department, according to a survey from Canvas. The company’s “3rd Annual Mobile Business Application” survey reveals that corporate and IT executives no longer fear such shadow IT practices, especially when they’ve demonstrated the ability to boost productivity and innovation, while driving down operating costs. Many company decision-makers, in fact, are comfortable with this emerging trend and are investing in tablet acquisitions to encourage work teams to expand such efforts. “Innovation is occurring at such a rapid pace in the enterprise that employees do not want to wait around for overwhelmed IT departments, so plug-and-play cloud services are transforming everyday employees into citizen developers,” said James Robins, CMO at Canvas. “Business decision-makers and IT departments recognize this evolution, and are shifting their perspective of shadow IT from a perceived liability to an invaluable tool for rapid innovation and cost management.” Nearly 400 business and IT decision-makers took part in the research.

More of the Baseline article from Dennis McCafferty


01
Apr 16

The Register – SMBs? Are you big enough to have a serious backup strategy?

One of the TLAs (three-letter acronyms) we come across all the time in IT is CIA. It’s not, in this context, a shady American intelligence agency: as far as we’re concerned it stands for Confidentiality, Integrity and Availability – the three strands you need to consider as part of your security and data management policies and processes.

Most organisations tend to focus on confidentiality. And that’s understandable, because a guaranteed way for your company to become super-famous is for confidential data to be made publicly available and for the press to find out – just ask TalkTalk. On the other hand, site outages will often make the news (particularly if you’re a prominent company like Dropbox or Microsoft), but they’re generally forgotten the moment the owner puts out a convincing statement saying that their data centre fell into a sinkhole or they were the subject of a type of DDoS attack never previously seen – as long as that statement says: “… and there was never any risk of private data being exposed”.

Internally, though, you care about the integrity and availability of your data. By definition, the data you process needs to be available and correct – otherwise you wouldn’t need it to do your company’s work. And guaranteeing this is a pain in the butt – for companies of all sizes.

More of The Register post from Dave Cartwright


31
Mar 16

Data Center Knowledge – How to Avoid the Outage War Room

Most IT pros have experienced it: the dreaded war room meeting that starts immediately after an outage to a critical application or service. But how do you avoid it? The only reliable way is to avoid the outage in the first place.

First, you need to build in redundancy. Most enterprises have already done much of this work. Building redundancy and disaster recovery into systems has been a best practice for decades. Avoiding single points of failure (SPOF) is simply mandatory in mission-critical, performance-sensitive, highly distributed and dynamic environments.

Next, you need to assess spikes in load. Most organizations have put methods in place to “burst” capacity. This most often takes the form of a hybrid cloud, where the base system runs on premises and the extra capacity is rented as needed. It can also take the form of hosting the entire application on a public cloud like Amazon, Google or Microsoft, but that carries many downsides, including the need to re-architect the applications to be stateless so they can run on an inherently unreliable infrastructure.
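
As a rough sketch of that bursting decision, assuming made-up capacity figures and a placeholder provisioning call rather than any real cloud API:

```python
# Hybrid-cloud burst sketch: the base system runs on premises; when demand
# exceeds on-prem capacity, extra nodes are rented from a public cloud.
# Capacity figures and provision_cloud_nodes() are hypothetical placeholders.
ON_PREM_CAPACITY_RPS = 10_000    # requests/sec the on-prem footprint can serve
CLOUD_NODE_CAPACITY_RPS = 1_000  # requests/sec each rented node can absorb

def provision_cloud_nodes(count: int) -> None:
    # Stand-in for a call into the organization's actual cloud tooling.
    print(f"Provisioning {count} cloud nodes for burst capacity")

def handle_load(current_rps: int) -> None:
    overflow = current_rps - ON_PREM_CAPACITY_RPS
    if overflow <= 0:
        return  # on-prem capacity is enough; rent nothing
    nodes = -(-overflow // CLOUD_NODE_CAPACITY_RPS)  # ceiling division
    provision_cloud_nodes(nodes)

handle_load(12_500)  # a spike of 2,500 rps over capacity rents 3 extra nodes
```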

More of the Data Center Knowledge article from Bernd Harzog