13
Oct 16

CIO Insight – Why Adaptability Is Critical for State CIOs

To keep up with tech shifts and changing business demands, today’s state government CIOs must constantly redefine the way they manage a wide range of IT systems and applications, according to a recent survey from the National Association of State Chief Information Officers (NASCIO), Grant Thornton LLP and CompTIA. The accompanying report, titled “The Adaptable State CIO,” indicates that most state CIOs, for example, are moving toward outsourcing, managed services and shared services models for IT infrastructure and operations. Most are exploring or adopting agile software development approaches. They’re also looking to modernize the wealth of legacy systems that account for a substantial portion of their overall tech portfolios. In addition, many are focusing on ongoing innovations in mobility and the internet of things (IoT). In other words, our nation’s state CIOs face challenges—and opportunities—very similar to those in private industry. “(State government) CIOs are adapting to changing circumstances and expectations,” according to the report. “This requires agility to respond quickly to the unexpected, but also the strategic vision to anticipate and to plan for a future that cannot be easily predicted.”

More of the CIO Insight slide show from Dennis McCafferty


11
Oct 16

The Register – Inside the Box thinking: People want software for the public cloud

Analysis: On-premises file sync, share and collaboration is yesterday’s story. The future is the public cloud with dedicated software service suppliers, like Box.

File sync, share and collaboration is not a feature but a product, best expressed as a service (SaaS) through Box’s three data centres and the public cloud, rather than subsumed into an on-premises storage array offering. The company says it is now a content platform for the modern enterprise.

That’s the Box message, and it’s working, if not dramatically: Box is growing and expanding its services.

Box has grown its base service with specific offerings for IBM, Salesforce, Microsoft Office and Google Android for Work, among others. It has also announced its Box Platform, an open API set for authentication, user management and content access.
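As an illustration of what “content access” through such an API set looks like in practice, here is a minimal sketch against Box’s public v2.0 REST endpoints. The folder-listing call is documented by Box, but the access token below is a placeholder and the OAuth 2.0 setup that produces it is omitted.

    # Minimal sketch: list the items in a Box folder via the v2.0 REST API.
    # ACCESS_TOKEN is a placeholder; obtaining one via OAuth 2.0 is omitted.
    import requests

    ACCESS_TOKEN = "YOUR_OAUTH_ACCESS_TOKEN"  # placeholder, not a real token
    BASE_URL = "https://api.box.com/2.0"

    def list_folder_items(folder_id="0"):
        """Folder '0' is the account root in Box's API."""
        resp = requests.get(
            f"{BASE_URL}/folders/{folder_id}/items",
            headers={"Authorization": f"Bearer {ACCESS_TOKEN}"},
        )
        resp.raise_for_status()
        return resp.json()["entries"]

    for item in list_folder_items():
        print(item["type"], item["name"])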

More of The Register post from Chris Mellor


10
Oct 16

ZDNet – Is the IT budget ready to power digital transformation? The journeys of four CIOs

The digital transformation is upon us, with many CIOs expected to lead the charge. These technology leaders must determine how much of next year’s budget will drive internal and external innovation to meet staff and customer needs — and we’ve found wide variation in investment levels across industries.

While 72 percent of CXOs report that it is ‘critical’ or ‘very important’ for an organization to turn to a digital business model, only 15 percent said their company is agile enough to build such a system, according to an August survey from Unisys and IDG Research.

Another recent study found that 52 percent of companies surveyed looked to their CIO and CTO to lead their organization’s digital transformation, but only half said they actually had a business-wide digital transformation strategy.

More of the ZDNet post from Alison DeNisco


06
Oct 16

AFCOM – Dissecting the Data Center: What Can – and Can’t – Be Moved to the Cloud

Practical approaches to cloud migration from the AFCOM folks. Re-platforming is a prime opportunity for the move, but there are others as well, including staff changes, entry into new lines of business, and financial drivers.

According to the results of a recent survey of IT professionals, 43 percent of organizations estimate half or more of their IT infrastructure will be in the cloud in the next three to five years. The race to the cloud is picking up steam, but all too often companies begin implementing hybrid IT environments without first considering which workloads make the most sense for which environments.

The bottom line is that your business’s decision to migrate workloads and/or applications to the cloud should not be arbitrary. So how do you decide what goes where?

The best time to consider migrating to the cloud is when it’s time to re-platform an application. You should not need to over-engineer any application or workload to fit the cloud. If it’s not broken, why move it? For the purposes of this piece, let’s assume your organization is in the process of re-platforming a number of applications and you are now deciding whether to take advantage of the cloud for these applications. There are a few primary considerations you should think through to determine if moving to the cloud or remaining on-premises is best.

Evaluating What Belongs on the Ground or in the Cloud

First, ask yourself: Is our application or workload self-contained, or does it have multiple dependencies? Something like the company blog is a self-contained workload that can easily be migrated to the cloud. At the other extreme, an in-house CRM, for example, requires connectivity to your ERP system and other co-dependent systems; moving that workload to the cloud would introduce more risk in terms of latency and potential points of failure.
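To make the self-contained-versus-dependent test concrete, here is a hypothetical triage sketch. It is not from the AFCOM article; the workload entries, fields and the simple pass/fail rule are illustrative assumptions.

    # Hypothetical triage: flag workloads with no external dependencies and
    # no latency sensitivity as the lowest-risk cloud migration candidates.
    WORKLOADS = [
        {"name": "company blog", "dependencies": [], "latency_sensitive": False},
        {"name": "in-house CRM", "dependencies": ["ERP", "billing", "LDAP"],
         "latency_sensitive": True},
    ]

    def cloud_candidate(workload):
        # Self-contained, latency-tolerant workloads migrate with least risk.
        return not workload["dependencies"] and not workload["latency_sensitive"]

    for w in WORKLOADS:
        verdict = "cloud candidate" if cloud_candidate(w) else "keep on-premises for now"
        print(f"{w['name']}: {verdict}")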

More of the AFCOM article from Gerardo Dada


04
Oct 16

Continuity Central – The IT DR program: a crucial, but not well understood, aspect of disaster recovery

This is the shortest, most complete treatment I’ve ever seen of what it takes to be successful with IT Disaster Recovery. Worth the read.

While the hardware and software costs for disaster recovery are well understood, many organizations do not fully realize that, in order to be assured of successfully executing the plan in the event of an outage or disaster, a comprehensive IT DR program must first be in place. An organization can have all the right IT DR hardware and software, but without a properly managed program, its efforts will fail.

Even the organizations that do have this understanding often underestimate the complexities involved in creating an IT DR program and the associated costs.

The DR program consists of the people, processes and tools necessary to implement the IT DR solution and manage its lifecycle. Because this implementation process requires considerable expertise and experience, organizations must carefully consider the costs of developing their in-house skill sets, as well as those of purchasing, implementing, and maintaining their own hardware and software in house. They should then compare this expertise and these hardware and software costs to what they could access from a third-party managed recovery provider that specializes in IT disaster recovery services.

Only by understanding what goes into a full IT DR program and the total cost of ownership (TCO) of both an in-house and a ‘selectively outsourced’ solution can organizations make the right choice.

The DR program consists of five processes: application mapping; developing disaster recovery procedures; test planning and execution; post-test analysis; and recovery lifecycle management. The discussion below will address what each step involves.
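As a rough illustration of how those five processes might be tracked per application, here is a sketch of an ordered checklist. The class, field names and the example application are assumptions for illustration, not anything prescribed by the post.

    # Illustrative only: model the five DR program processes as an ordered
    # checklist so each application's progress through the cycle is visible.
    from dataclasses import dataclass, field

    DR_PROCESSES = [
        "application mapping",
        "developing disaster recovery procedures",
        "test planning and execution",
        "post-test analysis",
        "recovery lifecycle management",
    ]

    @dataclass
    class DrProgramStatus:
        application: str
        completed: set = field(default_factory=set)

        def complete(self, process):
            if process not in DR_PROCESSES:
                raise ValueError(f"unknown DR process: {process}")
            self.completed.add(process)

        def next_step(self):
            for process in DR_PROCESSES:
                if process not in self.completed:
                    return process
            return None  # cycle complete; lifecycle management repeats it

    status = DrProgramStatus("order-entry system")  # hypothetical application
    status.complete("application mapping")
    print(status.next_step())  # -> developing disaster recovery procedures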

More of the Continuity Central post


28
Sep 16

CIO Insight – Why Enterprise Still Matters

In today’s economy, executives must account for market pressure while staying focused on the evolution of technology innovation. This new reality presents both challenges and opportunities for business and IT to align on IT strategy and to find a balance between the desire to seek value and the need to manage risk. Because this balance is difficult to strike, business leaders are increasingly contracting with cloud-based service providers for the creation of applications, integrations and custom development, with or without the support of enterprise IT. These leaders are essentially acting as CIOs by providing their own technology-led business solutions, which leads to fragmentation and delays in accomplishing business initiatives.

More of the CIO Insight article from Mike Sommer


19
Sep 16

ZDNet – 5 ways cloud computing is transforming software vendors

It’s never easy being a software vendor. Demanding users, incredibly smart competitors, and rapidly evolving technology mean constantly being on top of one’s game. Now, cloud and Software as a Service have added a whole new dimension to what it means to be a software vendor.

For starters, it means more, much more, than simply shifting the delivery model from on-premises installation to online download. A new report from PwC — its Global 100 Software Leaders report — states “cloud computing changes how software vendors run their companies. Sure, there are technical issues such as reliability and security. But there are also business and cultural issues affecting all phases of a company, from product development to marketing and sales, extending to customer service and support.”

This shift has accelerated since PwC issued a similar report two years ago. At that time, the report’s authors state, “it was clear that cloud computing was already starting to change the software industry. It wasn’t clear how much it was going to change the industry.”

This year, cloud is sweeping into every corner of the industry. “SaaS/PaaS revenues of the Top 50 software vendors now approaches 10% of their total,” PwC reports. The cloud model, of course, means lower revenues, and perhaps cannibalizing existing business. But market realities are pushing this transition. “Software vendors who’ve made the transition are well on their way to restructuring their operations to the new realities of lower average sales prices and margins,” according to Mark McCaffrey, PwC global software leader. “The companies that haven’t done so may not be on the 100 list anymore — and we haven’t seen the effects shake out yet.”

More of the ZDNet article from Joe McKendrick


15
Sep 16

ComputerWeekly – The pros and cons of cloud bursting

It’s fun to think about the possibilities of bursting and brokering, but countless barriers stand in the way of enterprise customers. Dynamic porting of workloads is an interesting concept, but not yet an agenda item.

Brokering refers to the dynamic relocation of cloud workloads to whichever platform is lowest-cost at the time, whereas cloud bursting looks to optimise the cost and performance of an application at any given time. For its average load, an enterprise can pay for persistent usage in its own virtual machine (VM) environment, then draw on public cloud resources for additional capacity.
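A back-of-envelope comparison shows the arithmetic behind bursting: own just the baseline and rent the peaks, rather than provisioning for peak in-house. All capacities and prices below are made-up placeholders, not quotes from any provider.

    # Placeholder numbers throughout; the point is the shape of the math.
    BASELINE_VMS = 20          # steady-state demand, owned on-premises
    PEAK_VMS = 50              # demand during bursts
    PEAK_HOURS_PER_MONTH = 60  # how long the bursts last each month

    ONPREM_VM_MONTHLY = 120.0  # assumed all-in cost per owned VM per month
    CLOUD_VM_HOURLY = 0.40     # assumed on-demand price per cloud VM-hour

    provision_for_peak = PEAK_VMS * ONPREM_VM_MONTHLY
    burst_to_cloud = (BASELINE_VMS * ONPREM_VM_MONTHLY
                      + (PEAK_VMS - BASELINE_VMS) * CLOUD_VM_HOURLY
                      * PEAK_HOURS_PER_MONTH)

    print(f"provision on-prem for peak:   ${provision_for_peak:,.0f}/month")  # $6,000
    print(f"own baseline, burst to cloud: ${burst_to_cloud:,.0f}/month")      # $3,120

With these assumed numbers bursting wins easily; brokering, by contrast, only pays off when prices across providers diverge enough to cover the cost of moving workloads, which is exactly the barrier the article describes.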

In 2011, the idea of dynamically sourcing and brokering cloud services based on real-time changes in cost and performance was the future vision of cloud’s pay-as-you-go pricing – and it remains a vision.

The first tools are only just emerging and the use cases are limited, especially since costs for public clouds don’t vary enough to drive significant brokerage demand.

More of the ComputerWeekly article from Lauren Nelson


12
Sep 16

IT Business Edge – The Cloud: Not Just Better IT, All-New IT

It’s fair to say that the cloud is fast approaching the tipping point as the dominant means of deploying enterprise infrastructure. But while the broad outlines are coming into view, the exact architecture and the host’s location are still very much “up in the air.”

The latest estimate on cloud deployments came from 451 Research this week, which pegged the current cloud workload at about 41 percent of the enterprise total with a likely rise to 60 percent by the middle of 2018. In breaking down the numbers, the firm noted that the majority of deployments are taking place on private clouds and public SaaS infrastructure, and that going forward the private side will see largely flat growth while SaaS will jump by 23 percent. As well, IaaS deployments, currently only 6 percent of the total, will double to 12 percent in the next two years.

More of the IT Business Edge post from Arthur Cole


05
Sep 16

Baseline – IT Pros Can’t Keep Up With the Onslaught of Alerts

While the vast majority of IT professionals agree that it’s important to monitor the performance of their networks and systems, few of them are “very satisfied” with their approach to this critical issue, according to a recent survey from BigPanda. The accompanying “State of Monitoring 2016” report reveals that technology organizations struggle to quickly remediate service disruptions, and they are overwhelmed by an excess of alert “noise” from monitoring tools. Of those receiving 100 or more alerts a day, only a small minority can investigate and resolve those alerts within a day. What would help is the adoption of defined, strategic monitoring, which often boosts agility and the potential to rapidly identify the root cause of problems. “IT teams are receiving an onslaught of alerts, [but] few are able to convert those alerts into insight, and the inability to quickly remediate service disruptions is a pain felt across the board,” according to the report.
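One common way teams cut alert noise is to collapse alerts that share a fingerprint (say, host plus check) within a time window into a single incident. The sketch below illustrates that general technique; it is not BigPanda’s product logic, and the window and fields are assumptions.

    # Collapse repeated (host, check) alerts within a 5-minute window into
    # one incident; only genuinely new incidents need human attention.
    WINDOW_SECONDS = 300

    def dedupe(alerts):
        """alerts: time-sorted iterable of (timestamp, host, check) tuples."""
        last_seen = {}
        incidents = {}
        for ts, host, check in alerts:
            key = (host, check)
            if key not in last_seen or ts - last_seen[key] > WINDOW_SECONDS:
                incidents[key] = incidents.get(key, 0) + 1  # a new incident
            last_seen[key] = ts
        return incidents

    alerts = [(0, "db01", "disk"), (30, "db01", "disk"),
              (50, "web02", "cpu"), (400, "db01", "disk")]
    print(dedupe(alerts))  # four raw alerts collapse to three incidents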

More of the Baseline slideshow from Dennis McCafferty