07
Apr 17

CustomerThink – Do CMOs Really Spend More on MarTech Than CIOs? A New Study Says No

Like many people in the marketing technology industry, I was tickled in 2011 when Gartner predicted that CMOs would soon have bigger tech budgets than CIOs, and even more tickled when Gartner said in 2016 that it had happened. But my recent pondering of the relationship between marketing and IT departments had me rethinking the question. On an anecdotal level, I’ve never seen or heard of a company where the marketing technology group was anywhere near the size of the IT department. And from a revenue perspective, there’s no way that marketing technology companies make up half the total revenue of the software industry.

But just as I was working myself up for some back-of-the-envelope calculations, the good people at International Data Corporation (IDC) announced a report with authoritative figures on the topic. Actually, the study estimates spending on 20 technologies and 12 corporate functional areas across 16 enterprise industries in eight regions and 53 countries, comparing the amounts funded by IT departments and by business departments.

More of the CustomerThink article from David Raab


06
Apr 17

The Register – Researchers steal data from CPU cache shared by two VMs

A group of researchers say they can extract information from an Amazon Web Services virtual machine by probing the cache of a CPU it shares with other cloudy VMs.

A paper titled Hello from the Other Side: SSH over Robust Cache Covert Channels in the Cloud (PDF) explains the challenges of extracting data from CPU cache, a very contested resource in which the OS, the hypervisor and applications all conduct frequent operations. All that activity makes a lot of noise, defying attempts to create a persistent communications channel.

Until now, that is: the researchers claim they’ve built “a high-throughput covert channel [that] can sustain transmission rates of more than 45 KBps on Amazon EC2”. They’ve even encrypted it: the technique establishes a TCP network within the cache and transmits data using SSH.

The results sound scarily impressive: a Black Hat Asia session detailing their work promised to peer into a host’s cache and stream video from VM to VM.

The paper explains that this approach is not entirely new, but notes that earlier attempts have not been entirely successful because they rested on two assumptions: that “error-correcting code can be directly applied”, and that noise effectively eliminates covert channels.
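The role of error correction in surviving cache noise can be shown with a toy simulation. This is purely illustrative, not the paper's actual protocol: the "channel" here is a deterministic bit-flipper standing in for cache contention, and a simple repetition code with majority-vote decoding stands in for the robust coding the researchers use.

```python
def noisy_channel(bits):
    """Toy stand-in for a contended CPU cache: noise flips every 4th bit."""
    return [b ^ 1 if i % 4 == 3 else b for i, b in enumerate(bits)]

def send_naive(bits):
    """Send bits directly through the noisy channel -- errors go uncorrected."""
    return noisy_channel(bits)

def send_repetition(bits, n=5):
    """Repetition code: transmit each bit n times, decode by majority vote.
    With at most 2 of every 5 copies flipped, the majority is always right."""
    received = noisy_channel([b for b in bits for _ in range(n)])
    return [1 if sum(received[i:i + n]) > n // 2 else 0
            for i in range(0, len(received), n)]

message = [1, 0, 1, 1, 0, 0, 1, 0]
print(send_naive(message) == message)       # False: two bits corrupted in transit
print(send_repetition(message) == message)  # True: majority vote recovers all bits
```

The trade-off is throughput: the repetition code spends five transmitted bits per payload bit, which is why sustaining 45 KBps over a genuinely noisy shared cache is the notable result.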

More of The Register article from Simon Sharwood


05
Apr 17

CIO Insight – Despite the Cloud’s Value, Funds Are Often Wasted

On average, the IT pros surveyed said their organization wastes 30% of its cloud spend.

With its days as an emerging technology behind it, the cloud is now firmly established in the fabric of modern companies: Nearly all organizations are investing in the cloud in some way, according to a recent survey report, “State of the Cloud,” from RightScale. The hybrid cloud has emerged as the most preferred option, followed by the public cloud. Regardless of the chosen cloud pathway, companies are reaping the rewards of faster access to infrastructure, greater scalability, higher availability, quicker time to market and more assured business continuity. Challenges linger, however, especially in the form of security concerns and a lack of needed staffing expertise.

More of the CIO Insight slideshow from Dennis McCafferty


04
Apr 17

Continuity Central – IT disaster recovery failures: why aren’t we learning from them?

The news of an IT outage impacting a large company seems to appear in the headlines more and more frequently these days, and often the root cause is an out-of-date approach to IT disaster recovery and compliance. Common mistakes businesses make include failing to test the recovery process on a recurring basis and relying on data backups instead of continuous replication. Businesses are also still putting all their data protection eggs in one basket: it is always better to keep your data safe in multiple locations.

C-level leaders are now realizing the need for IT resilience, whether they’re creating a disaster recovery strategy for the first time, or updating an existing one. IT resilience enables businesses to power forward through any IT disaster, whether it be from human error, natural disasters, or criminal activities such as ransomware attacks. However, many organizations are over-confident in what they believe to be IT resilience; in reality they have not invested enough in disaster recovery planning and preparation. The resulting high-profile IT failures can be used as a lesson for business leaders to ensure their disaster recovery plan is tough, effective, and allows true recovery to take place.

If it ain’t broke… test it anyway

Virtualization and cloud-based advancements have actually made disaster recovery quite simple and more affordable. But it doesn’t stop there: organizations need to commit to testing disaster recovery plans consistently, or else the entire strategy is useless.

More of the Continuity Central post


03
Apr 17

HBR – Why CIOs Make Great Board Directors

According to unpublished Korn Ferry data, the number of CIOs serving on Fortune 100 boards has increased 74% in the past two years.

It’s no wonder CIOs are the fastest-growing addition to the boardroom: They can help address a host of issues of crucial importance to boards, including using technologies to create operational efficiencies and competitive advantage; identifying opportunities related to cloud computing, digitization, and data; addressing threats and risks associated with information security; and using their experience and judgment to oversee, question, and provide input on technology budgets.

But there’s room for growth. Only 31% of Fortune 100 boards currently have a director who is a CIO, even though technology is at the core of every business today. As Sheila Jordan, CIO at Symantec and director at FactSet, put it, “All companies are technology companies today. Technology is a lever to run the business, but also to change and grow.”

More of the Harvard Business Review article from Craig Stephenson and Nels Olson


15
Mar 17

The Register – It’s time for our annual checkup on the circus that is the Internet Governance Forum

Unaccountable? Check. Pointlessly bureaucratic? Check. Blocking reform? Check

It’s March again so it must be time for an annual checkup on the Internet Governance Forum – the United Nations body that is tasked with working through the complex social, technological and economic issues associated with a global communications network, and runs an annual conference to that end.

Around this time every year, the IGF’s organizing group, the Multistakeholder Advisory Group (MAG), meets in Geneva to decide how the annual conference will be structured and what topics it will cover, and to set the rules for how sessions and the conference itself will be run.

And we are pleased to announce that, for another year, the IGF remains a circus: an unaccountable and pointlessly bureaucratic organization that goes to great lengths to pretend it is open to everyone’s input, and even greater lengths to make sure it isn’t.

At the two-day meeting, the IGF’s three core issues again took pride of place:

  • Fantasy of democratic representation
  • Opaque decision-making and finances
  • Bureaucratic blocking of any efforts at reform

Let’s take a look at each:

More of The Register article from Kieren McCarthy


03
Mar 17

Customer Think – Why Your Customer Research is Flawed

U.S. pollsters got quite a surprise in the early morning hours of November 9, 2016.

That’s when it became apparent that their sophisticated voter research had completely failed to predict the outcome of the U.S. Presidential election. Longtime Republican political strategist Mike Murphy went so far as to assert that “data died” that night.

Yes, the 2016 U.S. Presidential election was a highly visible casualty for data-driven research, but far from the only one.

In 1985, Coca-Cola announced the rollout of “New Coke,” an updated formulation of the venerable soft drink, designed to appeal to changing consumer tastes.

More of the Customer Think article from Jon Picoult


02
Mar 17

ITWorld – Why DRaaS is a better defense against ransomware

Recovering from a ransomware attack doesn’t have to take days

It’s one thing for a user’s files to get infected with ransomware; it’s quite another to have a production database or mission-critical application infected. But restoring these databases and apps from a traditional backup solution (appliance, cloud or tape) can take hours or even days, which can cost a business tens or hundreds of thousands of dollars. Dean Nicolls, vice president of marketing at Infrascale, shares some tangible ways disaster recovery as a service (DRaaS) can pay big dividends and quickly restore systems in the wake of a ransomware attack.

Quickly pinpointing the time of infection

With a cloud backup, it takes a while to determine if your application has been corrupted. Admins must download the application files from the cloud (based on your most recent backup), rebuild, and then compile the database or application.
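That "pinpointing" step amounts to walking recovery points from newest to oldest until one passes an infection check. A simplified sketch follows; the `looks_infected` heuristic (matching known ransom-note filenames) and the marker names are purely illustrative, and real tooling would use far richer signals such as entropy spikes or mass-rename detection.

```python
from datetime import datetime

# Illustrative ransom-note filenames only -- not a real indicator list.
RANSOM_MARKERS = {"readme_decrypt.txt", "how_to_recover.html"}

def looks_infected(snapshot_files):
    """Crude heuristic: flag a snapshot that contains a known ransom note."""
    return any(name.lower() in RANSOM_MARKERS for name in snapshot_files)

def last_clean_snapshot(snapshots):
    """snapshots: list of (timestamp, file_list) pairs, newest first.
    Return the timestamp of the newest snapshot showing no sign of
    infection, or None if every recovery point is compromised."""
    for ts, files in snapshots:
        if not looks_infected(files):
            return ts
    return None

snapshots = [
    (datetime(2017, 3, 2, 12), ["db.bak", "readme_decrypt.txt"]),  # infected
    (datetime(2017, 3, 2, 6),  ["db.bak", "readme_decrypt.txt"]),  # infected
    (datetime(2017, 3, 1, 18), ["db.bak"]),                        # clean
]
print(last_clean_snapshot(snapshots))  # 2017-03-01 18:00:00
```

The finer-grained the recovery points (continuous replication versus nightly backups), the less data is lost between that last clean point and the moment of infection.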

More of the ITWorld post from Ryan Francis


01
Mar 17

Continuity Central – Security policies matter for disaster recovery

Replicating the production security infrastructure at a disaster recovery site can be a problem: Professor Avishai Wool looks at how organizations should approach security policy management in their disaster recovery planning.

When it comes to downtime and cybersecurity attacks, despite many high profile incidents in the past year many businesses are still stuck in the mind-set of ‘it won’t happen to me’ and are ill-prepared for IT failures. And with IT teams facing a broad range of unpredictable challenges while maintaining ‘business as usual’ operations, this mind-set places organizations at serious risk of a damaging, costly outage. Therefore, it’s more important than ever to have plans for responding and recovering as quickly as possible when a serious incident strikes. As the author Franz Kafka put it, it’s better to have and not need than to need and not have. In short, effective disaster recovery is a critical component of a business’ overall cybersecurity posture.

Most large organizations do have a contingency plan in place in case their primary site is hit by a catastrophic outage – which, remember, could just as easily be a physical or environmental problem like a fire or flood as a cyberattack. This involves having a disaster recovery site in another city or even another country, which replicates all the infrastructure that is used at the primary site. However, a key piece of this infrastructure is often overlooked – network security – which must also be replicated at the disaster recovery site in order for the applications to function yet remain secure when that site is activated.

More of the Continuity Central post


28
Feb 17

TheWHIR – 3 Steps to Ensure Cloud Stability in 2017

We’re reaching a point of maturity when it comes to cloud computing. Organizations are solidifying their cloud use-cases, understanding how cloud impacts their business, and building entire IT models around the capabilities of cloud.

Cloud growth will only continue; Gartner recently said that more than $1 trillion in IT spending will, directly or indirectly, be affected by the shift to cloud during the next five years.

“Cloud-first strategies are the foundation for staying relevant in a fast-paced world,” said Ed Anderson, research vice president at Gartner. “The market for cloud services has grown to such an extent that it is now a notable percentage of total IT spending, helping to create a new generation of start-ups and ‘born in the cloud’ providers.”

More of TheWHIR post from Bill Kleyman