03
Aug 16

ZDNet – Today’s cloud computing projects are missing something – a strategy

At some level, everyone is exploring or considering public cloud options for a range of purposes, from automating IT functions to enhancing business processes.

The survey of 500 executives, published by Softchoice, finds a lack of strategic thinking when it comes to cloud implementations. A majority, 54 percent, report their teams struggle to form an effective cloud strategy, and 52 percent lack a formalized cloud strategy altogether.

Having a cloud strategy makes a big difference, the survey suggests. Compared to IT leaders with no public cloud strategy in place, those with a formal strategy are less likely to grapple with cloud skills gaps, the cloud procurement model, and cloud budgeting. Seventy-five percent of companies without strategies say they struggle to find the right skills, for example, compared to 41 percent of those with strategies. Fifty-eight percent of companies without strategies have experienced cloud failures, versus only 22 percent of strategy-minded organizations. And while 70 percent of companies without strategies ran over budget, only 52 percent of those with strategies have had such issues. If anything, transitioning to public cloud remains a slow-moving process for most businesses: 61 percent of respondents are “still experimenting with or making limited use of public cloud”.

More of the ZDNet article from Joe McKendrick


02
Aug 16

The Server Side – Managed services model addresses cloud-based analysis paralysis

It can be a tad disconcerting when a popular trend pushes its way through the industry and you and your organization have yet to jump on the bandwagon. For enterprises that haven’t yet moved their applications onto the Azure, Google or Amazon cloud, it would be understandable for managers and C-level executives to question both why it hasn’t happened yet and when it actually will. But according to Jordan Jacobs, vice president of products at SingleHop, the Azure, Amazon and Google cloud models are being oversold, and for many core business functions, a managed services approach to application hosting is often a better model.

Public clouds vs. managed services model

“The thing that wows a lot of people is the market share discrepancy between public clouds and managed services, especially when compared to the press each one gets,” said Jacobs. “Amazon, Azure and Google get all of the press, but they’re actually only about a third of the managed services and managed hosting market.”

Unfortunately, the press’s love affair with the dominant cloud providers is causing a great deal of consternation among decision makers. On the one hand, they feel the need to catch up with the latest trend; on the other, they are having a hard time rationalizing, in terms of cost efficiencies, security, and business benefits, the porting of their core business applications into the public cloud. The result is a sort of analysis paralysis, in which decision makers are unsure whether the public cloud is the right move, whether the managed services model makes more sense, or whether they should just keep everything on premises.

More of The Server Side post from Cameron McKenzie


01
Aug 16

TheWHIR – Nearly Half of All Corporate Data is Out of IT Department’s Control

Many organizations are not responding to the continuing spread of “Shadow IT” and cloud use with appropriate governance and security measures, and more than half do not have a proactive approach, according to research released Tuesday. The 2016 Global Cloud Data Security Study, compiled by the Ponemon Institute on behalf of Gemalto, shows that nearly half of all cloud services (49 percent) and nearly half of all corporate data stored in the cloud (47 percent) are beyond the reach of IT departments.

The report is drawn from a survey of more than 3,400 IT and IT security practitioners from around the world. It shows that only 34 percent of confidential data on SaaS is encrypted, and that members of the security team are involved in only one-fifth of decisions about cloud applications and platforms.

IT departments are making gains in visibility, with 54 percent saying the department is aware of all cloud applications, platforms, and infrastructure services in use, up from 45 percent two years ago. The number of respondents saying it is more difficult to protect data using cloud services also fell, from 60 percent to 54 percent. However, those gains were offset by more broadly reported challenges in controlling end-user access.

More of the WHIR post from Chris Burt


27
Jul 16

ITWorld – Disaster recovery in a DevOps world

Organizations that are adopting DevOps methodologies are realizing actual benefits from taking that approach.

According to a 2015 survey by IT Revolution Press in conjunction with Puppet Labs, organizations using DevOps deploy code 30 times faster than others, doing deployments multiple times per day. Moreover, the change failure rate is cut in half with DevOps, and services are restored up to 168 times faster than they are at non-DevOps organizations.

Let’s focus on those last two points for a moment. One thing is for certain: Embracing DevOps also pays off from a disaster recovery standpoint, because the tools and procedures that you use to move applications from development to testing to production and back to development again can also be applied to failing over and recovering from disasters and service interruptions. The same tools that automate the entire DevOps life cycle can also help you make the best use of the resources you already have for recovery purposes.

There are indeed plenty of open-source tools to help with this automation, like Chef and Puppet, which automatically create, launch, deploy, and configure new virtual machine instances. They even work across security boundaries, deploying on your private laptop, in your own data center, or even up in the public cloud — Amazon Web Services and Microsoft Azure are two major public cloud providers that support Chef and Puppet.
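To make the failover idea concrete, here is a minimal sketch of the kind of provisioning call such tools automate, using Python and the AWS boto3 SDK. The region, AMI ID, instance type, and bootstrap script are illustrative assumptions, not details from the article:

```python
import boto3

# Launch and bootstrap a replacement instance in one automated step --
# the same pattern a Chef or Puppet provisioner drives during failover.
ec2 = boto3.client("ec2", region_name="us-east-1")  # assumed region

response = ec2.run_instances(
    ImageId="ami-0abcdef1234567890",  # hypothetical AMI, for illustration only
    InstanceType="t2.micro",
    MinCount=1,
    MaxCount=1,
    TagSpecifications=[{
        "ResourceType": "instance",
        "Tags": [{"Key": "role", "Value": "dr-failover"}],
    }],
    # Bootstrap script standing in for the configuration step
    # that Chef or Puppet would normally perform.
    UserData="#!/bin/bash\nyum -y install nginx\nsystemctl enable --now nginx\n",
)

print("Launched", response["Instances"][0]["InstanceId"])
```

Because the same script runs unchanged in testing and in production, the recovery path gets exercised on every deployment rather than only during an actual disaster.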

More of the ITWorld article from Jonathan Hassell


26
Jul 16

Arthur Cole – Navigating the Challenges in IoT Infrastructure

The speed at which the enterprise has embraced IoT infrastructure has been impressive. But with most deployments still in a nascent stage, many organizations are only just now starting to encounter some of the challenges associated with scaling up to production levels.

According to Strategy Analytics, nearly 70 percent of businesses have deployed IoT solutions, and that is expected to increase to 80 percent within the year. But as Datamation’s Pedro Hernandez points out, many organizations are struggling with the analytics side of the equation. While gleaning insight into complex environments is the main driver of IoT, it isn’t always easy to determine exactly how the analytics should be done. As the data coming into the enterprise mounts, so too will the complexity of the analytics process, which can deliver vastly different results based not only on what data is collected and how it is conditioned, but also on what questions are asked and even how they are phrased.

Perhaps not altogether surprisingly, the most effective use of IoT is happening not in the enterprise or in commercial operations but on the manufacturing floor, says tech journalist Chris Neiger. Recent research from BI Intelligence shows that industrial manufacturers are well ahead of verticals like banking, telecom and energy in their deployment of IoT solutions. The field is being led by General Electric, which is leveraging IoT for everything from industrial assembly lines to navigation and fuel management systems. Company executives say an IoT-supported Industrial Internet could contribute $10 trillion to $15 trillion to global GDP over the next two decades.

More of the IT Business Edge article from Arthur Cole


22
Jul 16

ManageEngine – Bimodal IT: Double the action, twice as fun

Christopher Reeve, Brandon Routh, and Henry Cavill are all big names with one thing in common: the fictional bimodal superhuman they have all embodied. And who doesn’t love that character? He’s Superman. He can do it all.

In one mode, he falls well within most conventional norms and fits perfectly into a world of indifference and acceptance. In his other mode, though, he’s a symbol of change. He’s something the world has never seen before, and something the world agrees with. His kind of change is good. His kind of change brings hope.

Now let’s bring IT into this picture. What can IT folks learn from him? And how can they harness that hope? It’s simple: go bimodal. Stability is a must and change is unavoidable, but that doesn’t mean the two can’t coexist. In fact, Gartner predicts that by 2017, 75 percent of IT organizations will have a bimodal approach. Mode one is about legacy and predictability, leading to stability and accuracy. Mode two is about innovation and exploration, which lead to agility and speed.

More of the ManageEngine article from Ravi Prakash


19
Jul 16

Baseline – Cloud-First—Except When Performance Matters

Many companies have a cloud-first policy that puts as many applications as possible in the cloud, but latency-sensitive apps are staying on premises.

In the name of achieving increased IT agility, many organizations have implemented a cloud-first policy that requires as many application workloads as possible to be hosted in the cloud. The thinking is that it will be faster to deploy and provision IT resources in the cloud.

While that’s true for most classes of workloads, applications that are latency-sensitive are staying home to run on premises.

Speaking at a recent Hybrid Cloud Summit in New York, Tom Koukourdelis, senior director for cloud architecture at Nasdaq, said there are still whole classes of high-performance applications that need to run in real time. Trying to access those applications across a wide area network (WAN) simply isn’t feasible. The fact of the matter, he added, is that there is no such thing as a one-size-fits-all cloud computing environment.
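A quick back-of-the-envelope calculation shows why WAN access is infeasible for these workloads. The figures below are illustrative assumptions, not numbers from the article:

```python
# Why chatty, latency-sensitive apps struggle over a WAN:
# sequential round trips multiply the per-hop latency.
ROUND_TRIPS_PER_TRANSACTION = 50  # assumed: a chatty app making sequential calls
LAN_RTT_MS = 0.5                  # typical same-data-center round trip
WAN_RTT_MS = 40.0                 # typical cross-region round trip

for label, rtt_ms in (("LAN", LAN_RTT_MS), ("WAN", WAN_RTT_MS)):
    wait_ms = ROUND_TRIPS_PER_TRANSACTION * rtt_ms
    print(f"{label}: {wait_ms:,.0f} ms of network wait per transaction")

# LAN:    25 ms  -> fine for near-real-time use
# WAN: 2,000 ms  -> two full seconds gone before any computation happens
```

It is the accumulated round-trip latency, not raw bandwidth, that keeps these workloads on premises.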

More of the Baseline article from Mike Vizard


12
Jul 16

ZDNet – Cloud computing pushes enterprise vendors closer to their customers

Cloud computing may help make running enterprises a little bit easier (allegedly), but it has not made running an enterprise software business any easier. If anything, things have gotten more difficult for vendors lately.

The most challenging piece of the rapidly accelerating migration to cloud for enterprise software providers is delivering a superior customer experience.

That’s the gist of a recent analysis produced by Bain and Company, which points out that in the era of cloud connectivity, the era of shoddy releases and so-so customer service is coming to an end. “For many years, enterprise technology companies got along fine with pretty low customer experience ratings – just about the lowest, in fact, of the industries we measured,” the report’s authors, Chris Brahm, James Dixon and Rob Markey, state. But it never seemed to matter, they continue: “Once software or hardware was installed and running, companies were reluctant to go through the expense and hassle of changing vendors, even if the technology wasn’t delivering a superior experience.”

More of the ZDNet article from Joe McKendrick


06
Jul 16

CIOInsight – The Heavy Cost of System Downtime

IT system outages have emerged as fairly routine issues for companies today, and the resulting downtime amounts to a five-figure financial hit every day, according to recent research from CloudEndure. The resulting “2016 Disaster Recovery Survey” report reveals that while the majority of IT professionals say they’ve set service availability goals of 99.9% (a.k.a. the industry-standard “three nines” mark), far fewer say they’re capable of achieving this “most of the time.” As for the culprits? Human error and network failures are usually to blame, along with app bugs, storage failures and (of course) the ever-troublesome hacker. Disaster recovery solutions would help, yet only a minority of businesses use disaster recovery for the majority of their servers.
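For context on what “three nines” actually permits, here is a quick downtime-budget calculation (the arithmetic is ours, not from the survey):

```python
# What an availability target allows in downtime per year.
HOURS_PER_YEAR = 24 * 365  # 8,760

for label, availability in (("three nines", 0.999),
                            ("four nines", 0.9999),
                            ("five nines", 0.99999)):
    downtime_hours = HOURS_PER_YEAR * (1 - availability)
    print(f"{label} ({availability:.3%}): "
          f"{downtime_hours:.2f} hours of downtime per year")

# three nines -> ~8.76 hours/year
# four nines  -> ~0.88 hours/year (about 53 minutes)
# five nines  -> ~0.09 hours/year (about 5 minutes)
```

At a five-figure cost per day of downtime, the gap between a three-nines goal and actual performance adds up quickly.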

More of the CIO Insight slideshow from Dennis McCafferty


30
Jun 16

Baseline – IT Struggles to Meet Network Capacity Demands

An insatiable need for access to data and digital technologies is causing organizations to expand their network capacity to staggeringly high levels, according to a recent survey from Viavi Solutions. The resulting “Ninth Annual State of the Network Global Study” indicates that most companies will soon be running the majority of their apps in the cloud, seeking to lower expenses while provisioning network resources more effectively. In addition, most enterprises are deploying some form of software-defined networking (SDN). At the same time, they’re investing in state-of-the-art unified communications (UC) tools, including VoIP and Web collaboration apps, all of which are contributing to a need for more bandwidth. “Data networks of all types around the globe are being strained by an explosion of traffic, from bandwidth-hungry video today to the Internet of things tomorrow,” said Oleg Khaykin, president and CEO at Viavi Solutions.

More of the Baseline slideshow from Dennis McCafferty