19 Apr 16

Continuity Central – Dealing with the risk of DDoS ransom attacks

We are all familiar with the disruptive consequences of a distributed denial of service (DDoS) attack when a website is forced offline because it has been swamped with massive levels of traffic from multiple sources. The cost in terms of lost business to companies while their website is offline can be significant.

Cyber criminals are now taking the process a step further by tying ransom demands to their DDoS attacks, threatening to keep company websites permanently offline until they pay up. In effect, DDoS attacks are coming with an invoice attached.

What are DDoS ransom attacks?

Given the stakes, it makes sense for organizations to learn as much as they can about DDoS ransom demands: what they look like, how businesses can work out whether their site is at genuine risk, and how they can protect their online presence.

DDoS ransom attacks, usually carried out by criminal groups, start with a test attack on a website or service. The preferred method is to send increasing levels of traffic to the site to ascertain whether it is vulnerable. Sometimes the site can be knocked out with a small attack (1-2 Gbps of traffic); other times it takes a much larger onslaught (10-100 Gbps), depending on the robustness of the security technology the service provider hosting the site has in place.
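
As a rough illustration of how that kind of ramping probe might show up in monitoring, here is a minimal sketch in plain Python. The function name, sample traffic figures and threshold are illustrative assumptions on my part, not anything from the article:

```python
# Illustrative only: flag sustained spikes in inbound traffic that could
# indicate a DDoS "test" probe. The sample data and threshold are made up.

def flag_traffic_spikes(samples_gbps, baseline_gbps, multiplier=5.0):
    """Return indices of samples exceeding `multiplier` times the baseline."""
    return [i for i, gbps in enumerate(samples_gbps)
            if gbps > baseline_gbps * multiplier]

# Normal traffic around 0.2 Gbps, then a ramping probe that peaks near 2 Gbps.
samples = [0.2, 0.25, 0.2, 0.8, 1.5, 2.0, 0.3]
print(flag_traffic_spikes(samples, baseline_gbps=0.2))  # [4, 5]
```

In practice this sort of detection sits in the hosting provider's DDoS mitigation layer, which is exactly the point the article makes: whether 1-2 Gbps is enough to knock a site over depends on what is watching for, and absorbing, that traffic.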

More of the Continuity Central post from Jake Madders


06 Apr 16

The Register – Successful DevOps? You’ll need some new numbers for that

Dark launches, feature flags and canary launches: They sound like something from science fiction or some new computer game franchise bearing the name of Tom Clancy.

What they are is the face of DevOps – processes that enable projects to run successfully.

And their presence is set to be felt by a good many organizations, as numerous industry surveys attest.

With DevOps on the rise, then, the question becomes one of not just how to implement DevOps but also how to measure the success of that implementation.

Before I get to the measurement, what about how to roll out DevOps? That brings us back to that Tom Clancy trio.

Let’s start with dark launches. This is a technique to which a new generation of enterprises has turned, and one that is relatively commonplace among startups and giants like Facebook alike.

It’s the practice of releasing new features to a particular section of users to test how the software will behave in production conditions. Key to this process is that the software is released without any UI features.

Canary releases (really another name for dark launches) and feature flags (or feature toggles) work by building conditional “switches” into the code using Boolean logic, so different users see different code with different features. The principle is the same as with dark launches: companies can get an idea of how the implementation behaves without running it in full production.
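
To make the “conditional switch” idea concrete, here is a minimal feature-flag sketch in Python. The flag name and the percentage-based canary rollout are illustrative assumptions, not taken from the article:

```python
# Minimal feature-flag sketch: a Boolean switch plus a percentage-based
# canary rollout. The flag name and rollout logic are illustrative.
import hashlib

FLAGS = {
    "new_checkout_flow": {"enabled": True, "rollout_percent": 10},
}

def is_enabled(flag_name, user_id):
    """True if the flag is switched on for this user (stable per-user bucketing)."""
    flag = FLAGS.get(flag_name)
    if not flag or not flag["enabled"]:
        return False
    # Hash the user into a bucket from 0-99 so each user gets a consistent answer.
    bucket = int(hashlib.md5(f"{flag_name}:{user_id}".encode()).hexdigest(), 16) % 100
    return bucket < flag["rollout_percent"]

# Different users see different code paths without a separate deployment.
if is_enabled("new_checkout_flow", user_id="alice"):
    print("serve the dark-launched feature")
else:
    print("serve the existing behaviour")
```

Ramping rollout_percent from 1 towards 100 is effectively the canary release described above; flipping enabled to False acts as the kill switch.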

More of The Register article from Maxwell Cooter


05 Apr 16

IT Business Edge – Diverse Infrastructure Requires Diverse Efficiency Metrics

Achieving data center efficiency is not only challenging on a technology level, but as a matter of perspective as well. With no clear definition of “efficient” to begin with, matters are only made worse by the lack of consensus as to how to even measure efficiency and place it into some kind of quantifiable construct.

At best, we can say that one technology or architecture is more efficient than another and that placing efficiency as a high priority within emerging infrastructural and architectural solutions at least puts the data industry on the path toward more responsible energy consumption.

The much-vaunted PUE (Power Usage Effectiveness) metric is an unfortunate casualty of this process. The Green Grid most certainly overreached when it designated PUE as the defining characteristic of an efficient data center, but this was understandable given that it is a simple ratio between total energy consumed and the portion devoted to data resources rather than ancillary functions like cooling and lighting. And when implemented correctly, it does in fact provide a good measure of energy efficiency. The problem is that it is easy to game and does not take into account the productivity of the data that low-PUE facilities provide nor the need for some facilities to shift loads between resources and implement other practices that could drive up their ratings.
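
For reference, the ratio the article describes works out as total facility energy divided by the energy that actually reaches the IT equipment. A quick sketch, with purely illustrative kWh figures:

```python
# PUE = total facility energy / energy delivered to IT equipment.
# Cooling, lighting and other ancillary loads make up the difference.
# The kWh figures below are purely illustrative.

def pue(total_facility_kwh, it_equipment_kwh):
    return total_facility_kwh / it_equipment_kwh

print(pue(total_facility_kwh=1_800_000, it_equipment_kwh=1_000_000))  # 1.8
```

A PUE of 1.0 would mean every watt goes to IT gear; the gaming the article mentions stems from the fact that the ratio says nothing about whether that IT energy is doing useful work.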

More of the IT Business Edge article from Arthur Cole


01 Apr 16

The Register – SMBs? Are you big enough to have a serious backup strategy?

One of the TLAs (three-letter acronyms) we come across all the time in IT is CIA. It’s not, in this context, a shady American intelligence agency: as far as we’re concerned it stands for Confidentiality, Integrity and Availability – the three strands you need to consider as part of your security and data management policies and processes.

Most organisations tend to focus on confidentiality. And that’s understandable because a guaranteed way for your company to become super-famous is for confidential data to be made publicly available and for the Press to find out – just ask TalkTalk. On the other hand, site outages will often make the news (particularly if you’re a prominent company like Dropbox or Microsoft) but they’re generally forgotten the moment that the owner puts out a convincing statement saying that their data centre fell into a sinkhole or they were the subject of a type of DDoS attack never previously seen – as long as that statement says: “… and there was never any risk of private data being exposed”.

Internally, though, you care about the integrity and availability of your data. By definition, the data you process needs to be available and correct – otherwise you wouldn’t need it to do your company’s work. And guaranteeing this is a pain in the butt – for companies of all sizes.

More of The Register post from Dave Cartwright


29 Mar 16

Baseline – Data Center Outages Result in Shocking Expenses

The average cost of data center outages has increased by tens of thousands of dollars in recent years, according to research published by the Ponemon Institute and Emerson Network Power. The accompanying report, “2016 Cost of Data Center Outages,” reveals that unplanned outages usually last longer than a typical two-hour movie and cost organizations thousands of dollars for every minute of downtime. Uninterruptible power supply (UPS) system failures and, of course, hackers account for most of these incidents, causing business disruption, lost revenue and a slowdown in productivity. With continued growth in cloud computing and the Internet of Things (IoT)—which is expected to grow to a $1.7 trillion market by 2020, up from about $656 billion in 2014—the data center will continue to be crucial in leveraging business-benefiting opportunities. So IT departments are under pressure to reduce these outages. “As organizations … invest millions in data center development, they are exploring new approaches to data center design and management to both increase agility and reduce the cost of downtime,” according to the report.
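
To put “thousands of dollars for every minute” in perspective, a back-of-the-envelope calculation. The figures below are illustrative placeholders of my own, not the report’s numbers:

```python
# Back-of-the-envelope downtime cost. These figures are illustrative
# placeholders, not the actual numbers from the Ponemon/Emerson report.
cost_per_minute = 5_000   # "thousands of dollars for every minute"
outage_minutes = 130      # "longer than a typical two-hour movie"
print(f"${cost_per_minute * outage_minutes:,}")  # $650,000
```

Even with conservative placeholder figures, a single unplanned outage lands comfortably in six-figure territory, which is why the report frames downtime reduction as a design and management problem rather than a maintenance nuisance.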

More of the Baseline article from Dennis McCafferty


23 Mar 16

SearchDataCenter – The right infrastructure for fast and big data architectures

The newer fast data architectures differ significantly from big data architectures and from the tried-and-true online transaction processing tools that fast data supplements. Understanding how big data’s and fast data’s requirements differ will inform your hardware and software choices.

Big data architectures

Big data is about analyzing and gaining deeper insights from much larger pools of data than enterprises typically gathered in the past. Much of the data (e.g., social-media data about customers) is accessible in public clouds. Processing this data, in turn, emphasizes speedy access and deemphasizes consistency, which has led to a wide array of Hadoop big data tools. Thus, the following changes in architecture and emphasis are common:

More of the SearchDataCenter article from Wayne Kernochan


18 Mar 16

SearchCloudComputing – Verizon Cloud joins casualty list amid public IaaS exodus

Why do YOU think the big guys are shutting down their cloud operations?

Verizon is the latest large-scale IT vendor to quietly shutter its public cloud after its splashy entry to the market several years ago.

Customers this week received a letter informing them that Verizon’s public cloud, reserved performance and marketplace services will be closed on April 12. Any virtual machines running on the public Verizon Cloud will be shut down and no content on those servers will be retained.

The move isn’t particularly surprising. Despite once-lofty ambitions, Verizon acknowledges its public cloud offering is not a big part of its cloud portfolio and, a year ago, the firm began to emphasize its private cloud services even before its public cloud became generally available. Other large vendors such as Dell and Hewlett Packard Enterprise similarly have been shutting down their public clouds.

More of the SearchCloudComputing article from Trevor Jones


04 Mar 16

Baseline – Hybrid Clouds: The Long Road Ahead

When it comes to enterprise IT these days, just about everything involves some form of hybrid cloud computing. The challenge is that there are so many forms of hybrid clouds that many IT leaders don’t realize just how extended a journey their organization may be on.

The typical IT organization usually embraces cloud computing first with a few software-as-a-service (SaaS) applications. In that regard, the hybrid cloud scenario that emerges is relatively simple: IT leaders need to find ways to share data between existing on-premise applications and SaaS applications that are most often servicing the needs of a specific department or line of business.

IT leaders also find themselves trying to manage infrastructure-as-a-service (IaaS) environments. IT usually starts out with a few developers taking advantage of platforms such as Amazon Web Services to build and test applications.

More of the Baseline article from Mike Vizard


02 Mar 16

CloudExpo – Hybrid Cloud Versus Hybrid IT: What’s the Hype?

Once again, the boardroom is in a bitter battle over what edict its members will now levy on their hapless IT organization. On one hand, hybrid cloud is all the rage. Adopting this option promises all the cost savings of public cloud with the security and comfort of private cloud. This environment would not only check the box for meeting the cloud computing mandate, but also position the organization as innovative and industry-leading. Why wouldn’t a forward-leaning management team go all in with cloud?

On the other hand, hybrid IT appears to be the sensible choice for leveraging traditional data center investments. Data center investment business models always promise significant ROI within a fairly short time frame; if not, they wouldn’t have been approved. Shutting down such an expensive initiative early would be an untenable decision. Is this a better option than the hybrid cloud?

Hybrid Cloud Versus Hybrid IT

The difference between hybrid cloud and hybrid IT is more than just semantics. The hybrid cloud model is embraced by newer entities and startups that don’t need to worry about past capital investments. These companies have more flexibility in exploring newer operational options.

More of the CloudExpo blog post from Kevin Jackson


25 Feb 16

CIOInsight – Why IT spending is all about the business

Dennis McCafferty does it again. He nails the key topics associated with getting IT spending right in this slideshow from CIOInsight.

While most CIOs, IT execs and senior managers agree that it’s important to link tech investments to key business outcomes, relatively few feel their organization does a very good job of doing so, according to a recent survey from Datalink. The report, titled “The Importance of Linking Business Outcomes to IT Investment Strategy,” indicates that these leaders clearly believe that tech spending is all about supporting the business these days—as opposed to merely benefiting IT operations. To get more bang for their buck here, companies are looking to streamline operational processes and launch standardization initiatives.

More of the CIOInsight slideshow from Dennis McCafferty