24
Feb 16

Baseline – What Worries IT Organizations the Most?

IT employees and leaders have a lot to worry about these days, according to a recent survey from NetEnrich. For starters, findings show they’re spending too much money on technology that either doesn’t get used or fails to deliver on its promises. They devote too many hours to “keeping the lights on” rather than innovating. And the rise in tech acquisition decisions made outside the IT department (shadow IT) heightens existing concerns about cyber-security and business app performance. Meanwhile, tech departments are still struggling with a lack of available talent to support agility and business advances. “Corporate IT departments are in a real bind,” said Raju Chekuri, CEO at NetEnrich.

More of the Baseline slideshow from Dennis McCafferty


23
Feb 16

The Register – Add ‘Bimodal IT’ to your buzzword bingo card: Faster… more stable… faster. But stable

Thanks to Gartner, we have a new buzzword: bimodal IT. It’s nothing special, actually: just a new way to describe common sense, and the fact that the world – the IT world in this case – is not black and white.

In practice, in modern IT organisations it is better to find a way to integrate different environments instead of trying to square the circle all the time. This means that you can’t apply DevOps methodology to everything, nor can you deny its benefits if you want to deploy cloud-based applications efficiently. (Gartner discovers great truths sometimes, doesn’t it?)

But here is my question: does bimodal IT need separate infrastructures?

Bimodal IT doesn’t mean two different infrastructures
In the past few weeks I published quite a few articles talking about Network, Storage, Scale-out, and Big Data infrastructures. Most of them address a common problem: how to build flexible and simple infrastructures that can serve legacy and cloud-like workloads at the same time.

From the storage standpoint, for example, I would say that a unified storage system is no longer synonymous with multi-protocol per se; what matters far more is its capability to serve as many workloads as possible at the same time – say, a bunch of Oracle databases, hundreds of VMs and thousands of containers accessing shared volumes concurrently. The protocol used is just a consequence.

To pull it off, you absolutely need the right back-end architecture and, at the same time, APIs, configurability and tons of flexibility. Integration is another key part, and the storage system has to be integrated with all the different hypervisors, cloud platforms and now orchestration tools.
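To make that concrete, here is a minimal sketch, assuming a hypothetical programmable storage front end (the class, workload names and protocol mapping below are invented for illustration, not any vendor’s actual SDK): the same provisioning call serves an Oracle database LUN, a VM datastore and a shared container volume, and the protocol falls out as a consequence of the workload.

```python
# Hypothetical sketch of a unified, workload-agnostic storage control plane.
# The classes, workload names and protocol mapping are invented for
# illustration; they do not correspond to any specific vendor API.
from dataclasses import dataclass


@dataclass
class VolumeRequest:
    name: str
    size_gb: int
    workload: str            # e.g. "oracle-db", "vm-datastore", "container-shared"
    concurrent_clients: int = 1


# The access protocol is derived from the workload, rather than being the
# starting point of the design.
PROTOCOL_BY_WORKLOAD = {
    "oracle-db": "fc",             # block access for the database
    "vm-datastore": "nfs",         # datastore mounted by the hypervisor
    "container-shared": "nfs",     # shared (RWX) volume for many containers
}


def provision(req: VolumeRequest) -> dict:
    """Return a provisioning plan the back end could execute."""
    protocol = PROTOCOL_BY_WORKLOAD.get(req.workload, "iscsi")
    return {
        "volume": req.name,
        "size_gb": req.size_gb,
        "protocol": protocol,
        "qos_profile": "high-iops" if req.workload == "oracle-db" else "general",
        "multi_attach": req.concurrent_clients > 1,
    }


if __name__ == "__main__":
    for r in (VolumeRequest("erp-db", 500, "oracle-db"),
              VolumeRequest("vmfarm-01", 2000, "vm-datastore", 300),
              VolumeRequest("ci-cache", 100, "container-shared", 1000)):
        print(provision(r))
```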

More of The Register post from Enrico Signoretti


17
Feb 16

VMTurbo – What’s the Promise of Orchestration?

In my conversations over 2015, I found that one of the top-of-mind goals for many Directors and CIOs this year is fully automating the orchestration of the environment. The lack of agility and automation when it comes to provisioning new workloads is a pain felt across the IT staff.

Whether the plan is to expand the VMware suite through vRealize Automation, pursue a third-party technology like Chef, Puppet or CloudForms, or move into a full IaaS or PaaS environment through OpenStack or Cloud Foundry, the objective is to speed up the auto-provisioning capabilities of the data center to meet the rapidly growing demand for faster, more responsive applications delivered more quickly. However, the move to automated orchestration also creates new challenges.

Why Orchestrate?

To answer this question, let me throw out a scenario that many can probably relate to today. An administrator logs into his Outlook first thing Friday morning, and at the top of his inbox is a request for a new VM from a coworker, who plans to begin testing a new application in the next couple of weeks per the CIO’s initiative.
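For contrast, here is a hedged sketch of what that request could look like once orchestration is in place: instead of an email waiting in an inbox, the coworker submits a request that is validated against pre-approved sizes and dispatched automatically. The OrchestratorClient class and its deploy method are placeholders invented for illustration, not the actual API of vRealize, Chef, CloudForms or OpenStack.

```python
# Hypothetical self-service provisioning flow. OrchestratorClient stands in for
# whatever automation layer is in use; its methods are invented for illustration.
from dataclasses import dataclass


@dataclass
class VMRequest:
    requester: str
    purpose: str
    cpu: int
    memory_gb: int
    disk_gb: int


# Pre-approved "t-shirt" sizes that can be provisioned without human review.
APPROVED_SIZES = {(2, 4, 50), (4, 8, 100), (8, 16, 200)}


class OrchestratorClient:
    """Placeholder for a real orchestration API client."""

    def deploy(self, req: VMRequest) -> str:
        # A real client would call the platform's provisioning API here.
        return f"vm-{abs(hash((req.requester, req.purpose))) % 10000:04d}"


def handle_request(req: VMRequest, orch: OrchestratorClient) -> str:
    size = (req.cpu, req.memory_gb, req.disk_gb)
    if size not in APPROVED_SIZES:
        raise ValueError(f"Size {size} falls outside policy and needs manual approval")
    return orch.deploy(req)


if __name__ == "__main__":
    req = VMRequest("coworker", "app-testing", cpu=4, memory_gb=8, disk_gb=100)
    print("Provisioned:", handle_request(req, OrchestratorClient()))
```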

More of the VMTurbo post from Matt Vetter


16
Feb 16

Data Center Knowledge – How Data Center Trends Are Forcing a Revisit of the Database

Ravi Mayuram is Senior Vice President of Products and Engineering at Couchbase.

Data centers are like people: no two are alike, especially now. A decade of separating compute, storage, and even networking services from the hardware that runs them has left us with x86 pizza boxes stacked next to, or connected with, 30-year-old mainframes. And why not? Much of the tough work is done by software tools that define precisely how and when hardware is to be used.

From virtual machines to software-defined storage and network functions virtualization, these layers of abstraction fuse hardware components into something greater and easier to control.

More of the Data Center Knowledge post from Ravi Mayuram


15
Feb 16

ZDNet – A call for more cloud computing transparency

In a recent research note, Gartner argued that the revenue claims of cloud vendors are increasingly hard to digest. Gartner said enterprises shouldn’t take vendor cloud revenue claims at face value, but should instead evaluate vendors based on strategy and services (naturally using tools from the research firm).

A week ago, I argued that Google should provide some kind of cloud run rate just so customers can get a feel for scale and how it compares to Amazon Web Services, Microsoft’s Azure and IBM. Oh well. Unlike Gartner, I think the revenue figures matter somewhat, but are far from the deciding factor.

But debating revenue run rates and nuances between the private and public cloud variations misses the point. What’s missing from the cloud equation today is better transparency.

With that issue in mind, here’s where I think we need to go in terms of cloud transparency:

PUBLIC FACING
Revenue reporting from cloud vendors. Amazon Web Services breaks out its results, and they’re straightforward earnings and revenue. IBM has an “as-a-service” run rate. Microsoft has a commercial cloud run rate. And Oracle, to its credit, has line-by-line breakdowns of the various flavors of as-a-service sales: infrastructure, platform and software.

More of the ZDNet post from Larry Dignan


03
Feb 16

Baseline – Why IT Pros Give Tech Transformation a Weak Grade

Few front-line technology workers give their companies high marks for adapting to new, transformative tech, according to a recent survey from Business Performance Innovation (BPI) and Dimension Data. The resulting report, “Bringing Dexterity to IT Complexity: What’s Helping or Hindering IT Tech Professionals,” indicates that most organizations haven’t even begun to transform IT—or are just getting started. A major sore spot: A lack of collaboration and/or alignment with the business side, as most tech staffers said business teams wait too long to bring IT into critical planning processes. This, combined with a lack of funding and other resources, results in tech departments spending too much time on legacy maintenance and far too little on essential advances that bring value to the business. “Instead of ushering their companies into a new age of highly agile innovation, IT workers are hindered by a growing list of maintenance tasks, staff cutbacks and aging infrastructure,” according to the report.

More of the Baseline Magazine article from Dennis McCafferty


02
Feb 16

Arthur Cole – Weighing the Pros and Cons of Commodity Infrastructure

Data infrastructure built on commodity hardware has a lot going for it: lower costs, higher flexibility, and the ability to rapidly scale to meet fluctuating workloads. But simply swapping out proprietary platforms for customizable software architectures is not the end of the story. A lot more must be considered before we even get close to the open, dynamic data environments that most organizations are striving for.

The leading example of commodity infrastructure is Facebook, which recently unveiled plans for yet another massive data center in Europe – this time in Clonee, Ireland. The facility will utilize the company’s Open Compute Project framework, which relies on advanced software architectures atop low-cost commodity hardware and is now available to the enterprise community at large in the form of a series of reference architectures that are free for the asking. The idea is that garden variety enterprises and cloud providers will build their own modular infrastructure to support the kinds of abstract, software-defined environments needed for Big Data, the Internet of Things and other emerging initiatives.

More of the IT Business Edge post from Arthur Cole


27
Jan 16

Continuity Central – Six tips for successful IT continuity

Andrew Stuart offers some IT-focused, experience-based business continuity tips:

1. Understand the threat landscape

Storms, ransomware and fires are only some of the many real threats for which all businesses should proactively prepare. Your IT department needs a full understanding of all of the threats likely to hit your building, communications room or servers in order to help prepare for the worst. This can be done by assessing risks based on the location and accessibility of your data centres, as well as any malicious attacks that could occur. When planning to mitigate a disaster, treat every incident as unique: a local fire may affect one machine, whereas human error may lead to the deletion of entire servers.

2. Set goals for recovery

While some companies assume that duplicating their data protects them in the wake of a disaster, many learn the hard way that their backup had stopped functioning or that their data is inaccessible afterwards. The IT team needs to define criteria for recovery time objectives (RTO), or how long your business can continue to run without access to your data, and recovery point objectives (RPO), the maximum age of data you can afford to lose – in other words, how old the most recent usable backup is allowed to be. The IT team will also need to identify critical systems and prioritise recovery tasks.
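To make the two objectives concrete, here is a small illustrative check (the four-hour RPO, eight-hour RTO and the timestamps below are made-up example values, not figures from the article):

```python
# Illustrative RPO/RTO check; all targets and timestamps are example values.
from datetime import datetime, timedelta

RPO = timedelta(hours=4)    # maximum tolerable age of the last good backup
RTO = timedelta(hours=8)    # maximum tolerable time to restore the service

last_backup = datetime(2016, 2, 16, 3, 0)     # last successful backup
disaster_at = datetime(2016, 2, 16, 9, 30)    # moment the outage starts
estimated_restore = timedelta(hours=6)        # restore-and-verify estimate

data_loss_window = disaster_at - last_backup
print(f"RPO met: {data_loss_window <= RPO} (would lose {data_loss_window} of data)")
print(f"RTO met: {estimated_restore <= RTO} (estimated recovery {estimated_restore})")
```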

More of the Continuity Central article from Andrew Stuart


25
Jan 16

CloudExpo blog – Cloud and Shadow IT – An Inevitable Pairing?

You can’t seem to have a conversation about cloud technology and its impact on the business without the topic of Shadow IT coming up. The two concepts at times seem so tightly intertwined, one would think there is a certain inevitability, almost a causal linkage between them. Shadow IT tends to be an emotional topic for many, dividing people into one of two camps. One camp tends to see Shadow IT as a great evil putting companies, their data and systems at risk by implementing solutions without oversight or governance. Another camp sees Shadow IT as the great innovators that are helping the company succeed by allowing the business to bypass a slow and stagnant IT organization. Does going to the cloud inherently mean there will be Shadow IT? If it does, is that necessarily a bad or good thing?

More of the CloudExpo blog post by Ed Featherston


22
Jan 16

About Virtualization – The Network is Agile, but Slower with SDN and Microservices

Have you ever moved something in your kitchen because it fits better, only to find that you spend more time going to get it than when it was closer at hand? This is a simple analogy, but it does relate to some of the confusion happening around SDN and microservices implementations.

As new methodologies and technologies come into your organization, you assess what they are meant to achieve. You work out a list of requirements you want to see, and from that wish list you check off which are met by the product of choice. As we look towards microservices architectures, which I fully agree we should, we have one checklist for the applications. As we look at the challenges that SDN solves, which again I fully agree we should, we have another checklist.

Let’s first approach this by dealing with a couple of myths about SDN and microservices architectures:

More of the About Virtualization post