12
Apr 16

CIO Insight – How CIOs Can Lead a Digital Transformation

The majority of CIOs now perceive themselves as their organization’s primary driver of business and IT transformation, according to a recent survey from Cognizant. The report, titled “Being Digital: How and Why CIOs Are Reinventing Themselves for a New Age,” indicates that CIOs must make key contributions to strategic planning, while emerging as a “chief inclusion officer” who fosters an open culture of innovation. CIOs also need to take a hands-on role in leading digital programs and initiatives, while articulating clearly how these efforts contribute to company goals. And to ensure successful transformations, CIOs have to build a strong relationship of trust with their CEO. “Being digital will require a strong push from the CIO; it won’t happen on its own,” according to the report.

More of the CIOInsight post from Dennis McCafferty


08
Apr 16

Baseline – Many IT Pros Ignore Corporate Security Policies

One of the inescapable realities of enterprise cyber-security is that a huge gulf exists between what companies should do to protect their IT systems and data and what actually takes place. A recent research report released by Absolute Software, “IT Confidential: The State of Security Confidence,” illustrates the extent of the problem. The endpoint security and data risk management firm polled more than 500 U.S. employees who work in an IT or information security role and asked them about their security practices. The study found that, among other things, a shockingly high percentage of IT professionals admitted that they did not follow the same security protocols that they enforce on other employees. Many said they also intentionally circumvent key security policies. Consequently, many organizations—while placing a premium on security—expose themselves to significant risks.

More of the Baseline article from Samuel Greengard


07
Apr 16

HBR – When Was the Last Time You Asked, “Why Are We Doing It This Way?”

During a time when many retailers are struggling, business is booming at Target. But it wasn’t too long ago that the discount retailer’s future didn’t glow so bright. When CEO Brian Cornell took the reins two years ago, he inherited a company that had been struggling for years, taking far too few risks, and sticking too close to the core.

Since then, the world has fallen in love with a far edgier Target, which has expanded its offerings through collaborations with such power brands as Lilly Pulitzer, Toms, Neiman Marcus, and SoulCycle, and has updated product lines that break the status quo, like its latest gender-neutral kids’ home brand, Pillowfort. But Cornell didn’t make any big changes like these right out of the gate. Instead, he took time to carefully contemplate his approach, listen to his team, and ask questions.

At the MIT Leadership Center, I recently spoke with another leader, Guy Wollaert, chief exploration officer at Loggia Strategy & Design, about similar experiences he encountered at another highly visible brand, Coca-Cola. During his 20-plus year tenure with the global beverage brand, most recently serving as its chief technical and innovation officer, Wollaert made it a point to seek — and surround himself with — new ideas and people who challenged him to reflect and question first, then act later.

More of the Harvard Business Review post from Hal Gregersen


06
Apr 16

The Register – Successful DevOps? You’ll need some new numbers for that

Dark launches, feature flags and canary launches: They sound like something from science fiction or some new computer game franchise bearing the name of Tom Clancy.

In fact, they are the face of DevOps – processes that enable projects to run successfully.

And their presence is set to be felt by a good many, as numerous industry surveys attest.

With DevOps on the rise, then, the question becomes one of not just how to implement DevOps but also how to measure the success of that implementation.

Before I get to the measurement, what about how to roll out DevOps? That brings us back to that Tom Clancy trio.

Let’s start with dark launches. This is a technique to which a new generation of enterprises has turned, and one that is relatively commonplace among startups and giants like Facebook alike.

It’s the practice of releasing new features to a particular section of users to test how the software will behave in production conditions. Key to this process is that the software is released without any UI features.

Canary releases (really another name for dark launches) and feature flags (or feature toggles) work by building conditional “switches” into the code using Boolean logic, so different users see different code with different features. The principle is the same as with dark launches: companies can get an idea of how the implementation is handled without running full production.
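
To make the switch idea concrete, here is a minimal sketch of a percentage-based feature flag in Python. The flag name, rollout percentage and helper functions are all invented for illustration; real deployments typically pull flag state from a flag-management service rather than a hard-coded table.

```python
import hashlib

# Hypothetical flag table: flag name -> percent of users in the canary.
FLAGS = {
    "new_checkout_flow": 10,
}

def is_enabled(flag_name: str, user_id: str) -> bool:
    """Return True if this user falls inside the flag's rollout bucket.

    Hashing the user id gives each user a stable bucket from 0 to 99,
    so the same user always sees the same variant across requests.
    """
    rollout_percent = FLAGS.get(flag_name, 0)
    bucket = int(hashlib.sha256(user_id.encode()).hexdigest(), 16) % 100
    return bucket < rollout_percent

def checkout(user_id: str) -> str:
    # The Boolean "switch": route this user to the dark-launched path
    # or to the proven legacy path.
    if is_enabled("new_checkout_flow", user_id):
        return new_checkout(user_id)
    return legacy_checkout(user_id)

def new_checkout(user_id: str) -> str:
    return f"new checkout for {user_id}"

def legacy_checkout(user_id: str) -> str:
    return f"legacy checkout for {user_id}"

if __name__ == "__main__":
    # Roughly 10 percent of users land in the canary bucket.
    hits = sum(is_enabled("new_checkout_flow", f"user-{i}") for i in range(1000))
    print(f"{hits} of 1000 users see the new flow")
```

Because the bucket is derived from a hash of the user id, widening the rollout is just a matter of raising the percentage; users already in the canary stay in it.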

More of The Register article from Maxwell Cooter


05
Apr 16

IT Business Edge – Diverse Infrastructure Requires Diverse Efficiency Metrics

Achieving data center efficiency is challenging not only on a technology level but also as a matter of perspective. With no clear definition of “efficient” to begin with, matters are only made worse by the lack of consensus on how to even measure efficiency and place it into some kind of quantifiable construct.

At best, we can say that one technology or architecture is more efficient than another and that placing efficiency as a high priority within emerging infrastructural and architectural solutions at least puts the data industry on the path toward more responsible energy consumption.

The much-vaunted PUE (Power Usage Effectiveness) metric is an unfortunate casualty of this process. The Green Grid most certainly overreached when it designated PUE as the defining characteristic of an efficient data center, but this was understandable given that it is a simple ratio between total energy consumed and the portion devoted to data resources rather than ancillary functions like cooling and lighting. And when implemented correctly, it does in fact provide a good measure of energy efficiency. The problem is that it is easy to game, and it accounts for neither the productivity of the data that low-PUE facilities provide nor the need for some facilities to shift loads between resources and implement other practices that could drive up their ratings.
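
As a reminder of how simple the ratio is, here is a one-function worked example in Python; the kWh figures are invented for illustration.

```python
def pue(total_facility_kwh: float, it_equipment_kwh: float) -> float:
    """Power Usage Effectiveness: total facility energy over IT energy.

    A value of 1.0 would mean every watt reaches the IT equipment;
    real facilities land higher because cooling, lighting and power
    distribution losses consume part of the total.
    """
    return total_facility_kwh / it_equipment_kwh

# A facility drawing 1,500 kWh overall while its IT gear consumes
# 1,000 kWh has a PUE of 1.5 (illustrative numbers only).
print(pue(1500, 1000))  # 1.5
```

Note that nothing in the ratio says whether the IT energy is doing useful work, which is exactly the gaming problem the article describes.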

More of the IT Business Edge article from Arthur Cole


04
Apr 16

Baseline – How Shadow IT Can Generate Huge Savings

The majority of organizations are allowing—and some are even encouraging—employees to create mobile business apps without any involvement from the IT department, according to a survey from Canvas. The company’s “3rd Annual Mobile Business Application” survey reveals that corporate and IT executives no longer fear such shadow IT practices, especially when they’ve demonstrated the ability to boost productivity and innovation, while driving down operating costs. Many company decision-makers, in fact, are comfortable with this emerging trend and are investing in tablet acquisitions to encourage work teams to expand such efforts. “Innovation is occurring at such a rapid pace in the enterprise that employees do not want to wait around for overwhelmed IT departments, so plug-and-play cloud services are transforming everyday employees into citizen developers,” said James Robins, CMO at Canvas. “Business decision-makers and IT departments recognize this evolution, and are shifting their perspective of shadow IT from a perceived liability to an invaluable tool for rapid innovation and cost management.” Nearly 400 business and IT decision-makers took part in the research.

More of the Baseline article from Dennis McCafferty


01
Apr 16

The Register – SMBs? Are you big enough to have a serious backup strategy?

One of the TLAs (three-letter acronyms) we come across all the time in IT is CIA. It’s not, in this context, a shady American intelligence agency: as far as we’re concerned it stands for Confidentiality, Integrity and Availability – the three strands you need to consider as part of your security and data management policies and processes.

Most organisations tend to focus on confidentiality. And that’s understandable, because a guaranteed way for your company to become super-famous is for confidential data to be made publicly available and for the press to find out – just ask TalkTalk. Site outages, on the other hand, will often make the news (particularly if you’re a prominent company like Dropbox or Microsoft), but they’re generally forgotten the moment the owner puts out a convincing statement saying that their data centre fell into a sinkhole or that they were the subject of a type of DDoS attack never previously seen – as long as that statement says: “… and there was never any risk of private data being exposed”.

Internally, though, you care about the integrity and availability of your data. By definition, the data you process needs to be available and correct – otherwise you wouldn’t need it to do your company’s work. And guaranteeing this is a pain in the butt – for companies of all sizes.
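
As one small, concrete example of what guaranteeing integrity can look like in practice, here is a sketch that records and later verifies SHA-256 checksums for a backup set. The paths, manifest format and function names are invented for illustration; dedicated backup tools build in equivalent verification.

```python
import hashlib
from pathlib import Path

def sha256_of(path: Path) -> str:
    """Hash the file in chunks so large backups don't exhaust memory."""
    digest = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(1024 * 1024), b""):
            digest.update(chunk)
    return digest.hexdigest()

def record_manifest(backup_dir: Path, manifest: Path) -> None:
    """Write a checksum line for every file in the backup set."""
    with manifest.open("w") as out:
        for file in sorted(backup_dir.rglob("*")):
            if file.is_file():
                out.write(f"{sha256_of(file)}  {file}\n")

def verify_manifest(manifest: Path) -> bool:
    """Re-hash each file and compare against the recorded checksum."""
    intact = True
    for line in manifest.read_text().splitlines():
        expected, _, name = line.partition("  ")
        if sha256_of(Path(name)) != expected:
            print(f"integrity failure: {name}")
            intact = False
    return intact
```

Running record_manifest after each backup and verify_manifest before each restore turns “is the data correct?” from a hope into a check.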

More of The Register post from Dave Cartwright


31
Mar 16

Data Center Knowledge – How to Avoid the Outage War Room

Most IT pros have experienced it: the dreaded war room meeting that starts immediately after an outage to a critical application or service. How do you avoid it? The only reliable way is to avoid the outage in the first place.

First, you need to build in redundancy. Most enterprises have already done much of this work. Building redundancy and disaster recovery into systems has been a best practice for decades. Avoiding single points of failure (SPOF) is simply mandatory in mission-critical, performance-sensitive, highly distributed and dynamic environments.

Next, you need to assess spikes in load. Most organizations have put in place methods to “burst” capacity. This most often takes the form of a hybrid cloud, where the base system runs on premises and the extra capacity is rented as needed. It can also take the form of hosting the entire application on a public cloud like Amazon, Google or Microsoft, but that carries many downsides, including the need to re-architect the applications to be stateless so they can run on an inherently unreliable infrastructure.
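
To make the bursting decision concrete, here is a toy sketch in Python; the capacity figure, threshold and pool names are invented for illustration and stand in for whatever hooks a given platform provides.

```python
# Invented figures: what the on-premises pool can absorb, and the
# utilisation level at which rented cloud capacity kicks in.
ON_PREM_CAPACITY_RPS = 100
BURST_THRESHOLD = 0.80

def should_burst(current_rps: float) -> bool:
    """True once on-premises utilisation passes the burst threshold."""
    return current_rps / ON_PREM_CAPACITY_RPS > BURST_THRESHOLD

def route_request(current_rps: float) -> str:
    # The base system serves normal load; overflow goes to the cloud.
    return "public-cloud pool" if should_burst(current_rps) else "on-premises pool"

print(route_request(70))   # on-premises pool
print(route_request(95))   # public-cloud pool
```

The hard part the article alludes to is not this arithmetic but making the workload stateless enough that the overflow pool can actually serve it.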

More of the Data Center Knowledge article from Bernd Harzog


30
Mar 16

CIO Insight – Do You Know Where Your Critical Data Lives?

In an era of continuous business operations, being offline has become unacceptable. Yet this drive for high availability, although exciting, also poses serious risks to the security of your data. Your data may be among the most important assets of your business, and any form of downtime can be detrimental to its livelihood because downtime affects reputation and revenue, said Derek Brost, director of engineering at Bluelock. “Don’t wait until a disaster strikes to take action,” he warned. “If you’re experiencing pressure to improve your current IT program, don’t fret. [These tips] should set you on the right path to a secure business environment, one with optimized recovery.” Bluelock provides Disaster Recovery-as-a-Service for complex environments and sensitive data to help companies mitigate risk with confidence. Confidence begins with a plan that works, Brost said. These tips should help the always-on business proceed with confidence in the face of an intrusion.

Among those tips: engage with others to assess needs from differing perspectives – business operations, customers, regulators/auditors and shareholders – and keep that list updated, because it evolves.

More of the CIO Insight post from Karen A. Frenkel


29
Mar 16

Baseline – Data Center Outages Result in Shocking Expenses

The average cost of data center outages has increased by tens of thousands of dollars in recent years, according to recent research published by the Ponemon Institute and Emerson Network Power. The accompanying report, “2016 Cost of Data Center Outages,” reveals that unplanned outages usually last longer than a typical two-hour movie and cost organizations thousands of dollars for every minute of downtime. Uninterruptible power supply (UPS) system failures and, of course, hackers account for most of these incidents, causing business disruption, lost revenue and a slowdown in productivity. With continued growth in cloud computing and the Internet of Things (IoT)—which is expected to grow to a $1.7 trillion market by 2020, up from about $656 billion in 2014—the data center will continue to be crucial in capturing business opportunities. So IT departments are under pressure to reduce these outages. “As organizations … invest millions in data center development, they are exploring new approaches to data center design and management to both increase agility and reduce the cost of downtime,” according to the report.

More of the Baseline article from Dennis McCafferty