06
Apr 17

The Register – Researchers steal data from CPU cache shared by two VMs

A group of researchers say they can extract information from an Amazon Web Services virtual machine by probing the cache of a CPU it shares with other cloudy VMs.

A paper titled Hello from the Other Side: SSH over Robust Cache Covert Channels in the Cloud (PDF) explains the challenges of extracting data from CPU cache, a very contested resource in which the OS, the hypervisor and applications all conduct frequent operations. All that activity makes a lot of noise, defying attempts to create a persistent communications channel.

Until now, that is: the researchers claim they’ve built “a high-throughput covert channel [that] can sustain transmission rates of more than 45 KBps on Amazon EC2”. They’ve even encrypted it: the technique establishes a TCP connection within the cache and transmits data over it using SSH.

The results sound scarily impressive: a Black Hat Asia session detailing their work promised to peer into a host’s cache and stream video from VM to VM.

The paper explains that this stuff is not entirely new, but it has hitherto not been entirely successful, owing to the assumption that “error-correcting code can be directly applied” and the assumption that noise effectively eliminates covert channels.

More of The Register article from Simon Sharwood
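A reader’s note, not from the paper or The Register piece: channels like this rest on one measurable effect, namely that a load from a cached line is far faster than one that must go to memory, and the receiver decodes bits from that timing gap. The sketch below, in C for x86/Linux, shows only that primitive in a single process; the buffer name, the 150-cycle threshold and the single-process setup are illustrative assumptions, and the researchers’ actual cross-VM channel works over the shared last-level cache rather than shared memory, adding synchronisation, error handling and the TCP/SSH layers described above.

/* cache_bit_demo.c -- illustrative sketch of the cached vs. uncached timing
 * difference that cache covert channels decode into bits. NOT the paper's
 * protocol; single process, x86 only. Build: cc -O2 cache_bit_demo.c */
#include <stdint.h>
#include <stdio.h>
#include <x86intrin.h>          /* _mm_clflush, _mm_mfence, __rdtscp */

/* One cache-line-sized, cache-line-aligned buffer standing in for the
 * monitored line. */
static volatile uint8_t shared_line[64] __attribute__((aligned(64)));

/* Time a single load of the line, in TSC cycles. */
static uint64_t probe(void)
{
    unsigned aux;
    _mm_mfence();
    uint64_t start = __rdtscp(&aux);
    (void)shared_line[0];                    /* the timed load */
    uint64_t end = __rdtscp(&aux);
    _mm_mfence();
    return end - start;
}

int main(void)
{
    /* "Sender" signals 0 by leaving the line flushed out of the cache. */
    _mm_clflush((const void *)shared_line);
    uint64_t slow = probe();

    /* "Sender" signals 1 by touching the line so it sits in the cache. */
    (void)shared_line[0];
    uint64_t fast = probe();

    printf("uncached load (bit 0): %llu cycles\n", (unsigned long long)slow);
    printf("cached load   (bit 1): %llu cycles\n", (unsigned long long)fast);

    /* A receiver compares against a calibrated threshold; 150 cycles is only
     * a placeholder, the real cutoff is machine dependent. */
    printf("decoded: %d %d\n", slow < 150 ? 1 : 0, fast < 150 ? 1 : 0);
    return 0;
}

On a typical machine the first probe reports several hundred cycles and the second well under a hundred; a real channel repeats that measurement once per agreed time slot, and that raw bitstream is what the error correction, TCP framing and SSH ride on.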


15
Mar 17

The Register – It’s time for our annual checkup on the circus that is the Internet Governance Forum

Unaccountable? Check. Pointlessly bureaucratic? Check. Blocking reform? Check

It’s March again so it must be time for an annual checkup on the Internet Governance Forum – the United Nations body that is tasked with working through the complex social, technological and economic issues associated with a global communications network, and runs an annual conference to that end.

Around this time every year, the IGF’s organizing group, the Multistakeholder Advisory Group (MAG), meets in Geneva to decide how the annual conference will be structured and what topics it will cover, and to set the rules for how sessions and the conference itself will be run.

And we are pleased to announce that, for another year, the IGF remains a circus: an unaccountable and pointlessly bureaucratic organization that goes to great lengths to pretend it is open to everyone’s input and even greater lengths to make sure it isn’t.

At the two-day meeting, the IGF’s three core issues again took pride of place:

  • Fantasy of democratic representation
  • Opaque decision-making and finances
  • Bureaucratic blocking of any efforts at reform

Let’s take a look at each:

More of The Register article from Kieren McCarthy


28
Feb 17

TheWHIR – 3 Steps to Ensure Cloud Stability in 2017

We’re reaching a point of maturity when it comes to cloud computing. Organizations are solidifying their cloud use cases, understanding how cloud impacts their business, and building entire IT models around the capabilities of cloud.

Cloud growth will only continue; Gartner recently said that more than $1 trillion in IT spending will, directly or indirectly, be affected by the shift to cloud during the next five years.

“Cloud-first strategies are the foundation for staying relevant in a fast-paced world,” said Ed Anderson, research vice president at Gartner. “The market for cloud services has grown to such an extent that it is now a notable percentage of total IT spending, helping to create a new generation of start-ups and ‘born in the cloud’ providers.”

More of TheWHIR post from Bill Kleyman


17
Feb 17

Washington Post – Weather Service suffered ‘catastrophic’ outage; website stopped sending forecasts, warnings

On a day when a blizzard was pasting Maine and Northern California faced a dire flooding threat, several of the National Weather Service’s primary systems for sending out alerts to the public failed for nearly three hours.

Between 1:08 p.m. and 3:44 p.m. Eastern time Monday, products from the Weather Service stopped disseminating over the Internet, including forecasts, warnings, radar and satellite imagery, and current conditions.

Updates to the Weather Service’s public-facing website, Weather.gov, ceased.

In an email to staff on Tuesday, David Michaud, the director of the Weather Service’s Office of Central Processing, said a power outage had triggered the failure and characterized the impacts as “significant”. The cause was under review, a Weather Service spokesperson said.

“[I] want to ensure you that everyone involved is working hard to avoid these outages in the future and find ways to better communicate to employees across the agency in real time when outages occur,” Michaud’s email said.

More of the Washington post article from Jason Samenow


13
Feb 17

TheWHIR – Why Does It Seem Like Airline Computers Are Crashing More?

Another week, another major airline is crippled by some kind of software glitch.

If you feel as if you’re hearing about these incidents more often, you are—but not necessarily because they’re happening more frequently.

Delta Air Lines Inc. suffered an IT outage that led to widespread delays and 280 flight cancellations on Jan. 29 and 30, a problem the carrier said was caused by an electrical malfunction. A week earlier, United Continental Holdings Inc. issued a 2 1/2-hour ground stop for all its domestic flights following troubles with a communication system pilots use to receive data.

These two shutdowns were the latest in what’s been a series of computer crack-ups over the past few years, including major system blackouts that hobbled Southwest Airlines Co. as well as Delta for several days last summer—affecting tens of thousands of passengers.

More of the WHIR post from Bloomberg


09
Feb 17

CIO Insight – Deep Insecurities: Things Just Keep Getting Worse

Ninety-three percent of companies’ security operations centers admit they’re not keeping up with the volume of threat alerts and incidents, putting them at risk.

Despite a growing focus on cyber-security—along with gobs of money and staff time thrown at the task—things just seem to get worse. According to a December 2016 report from McAfee Labs, 93 percent of organizations’ security operations centers admit that they are not keeping up with the volume of threat alerts and incidents, putting them at significant risk of moderate to severe breaches.

Altogether, 67 percent of the survey respondents (more than 400 security practitioners spanning multiple countries, industries and company sizes) reported an increase in security breaches. Yet, on average, organizations are unable to sufficiently investigate 25 percent of security alerts.

More of the CIO Insight article from Samuel Greengard


31
Jan 17

The Register – Suffered a breach? Expect to lose cash, opportunities, and customers – report

More than a third of organisations that experienced a breach last year reported substantial customer, opportunity and revenue loss.

The finding is one of the key takeaways from the latest edition of Cisco’s annual cybersecurity report, which also suggests that defenders are struggling to improve defences against a growing range of threats.

The vast majority (90 per cent) of breached organisations are improving threat defence technologies and processes following attacks by separating IT and security functions (38 per cent), increasing security awareness training for employees (38 per cent), and implementing risk mitigation techniques (37 per cent). The report surveyed nearly 3,000 chief security officers (CSOs) and security operations leaders from 13 countries. CSOs cite budget constraints, poor compatibility of systems, and a lack of trained talent as the biggest barriers to advancing their security policies.

More than half of organisations faced public scrutiny after a security breach. Operations and finance systems were the most affected, followed by brand reputation and customer retention. For organisations that experienced an attack, the effect can be substantial: 22 per cent of breached organisations lost customers and 29 per cent lost revenue, with 38 per cent of that group losing more than 20 per cent of revenue. A third (33 per cent) of breached organisations lost business opportunities.

More of The Register article from John Leyden


18
Jan 17

The Register – Just give up: 123456 is still the world’s most popular password

Data diggers’ dumpster dive demonstrates dumb and dumberer defences

The security industry’s ongoing efforts to educate users about strong passwords appear to be for naught, with a new study finding the most popular passwords last year were 123456 and 123456789.

Keeper Security wonks perused breached data dumps for the most popular passwords when they made the despondent discovery.

Some 1.7 million accounts used the password “123456”, or 17 per cent of the 10 million hacked accounts the firm studied.

More of The Register post from Darren Pauli


16
Jan 17

Harvard Business Review – How to Prioritize Your Company’s Projects

From the article: “The problem I see more often is that leaders don’t make decisions at all. They don’t clearly signal their intent about what matters. In short, they don’t prioritize.” Is your IT staff clear on priorities?

Every organization needs what I call a “hierarchy of purpose.” Without one, it is almost impossible to prioritize effectively.

When I first joined BNP Paribas Fortis, for example, two younger and more dynamic banks had just overtaken us. Although we had been a market leader for many years, our new products had been launched several months later than the competition — in fact, our time to market had doubled over the previous three years. Behind that problem was a deeper one: We had more than 100 large projects (each worth over 500,000 euros) under way. No one had a clear view of the status of those investments, or even the anticipated benefits. The bank was using a project management tool, but the lack of discipline in keeping it up to date made it largely fruitless. Capacity, not strategy, was determining which projects launched and when. If people were available, the project was launched. If not, it stalled or was killed.

Prioritization at a strategic and operational level is often the difference between success and failure. But many organizations do it badly.

More of the Harvard Business Review article from Antonio Nieto-Rodriguez


05
Jan 17

Continuity Central – Survey finds that US companies are struggling with remote and branch office IT disaster recovery

Riverbed Technology has published the results of a survey that looks at the challenges that organizations are facing when managing IT at remote and branch offices. The survey asked IT professionals about the various challenges they face in provisioning and managing remote and branch offices (ROBOs) and found supporting ‘the IT edge’ was expensive, resource-intensive and full of potential data security risks.

ROBO IT continues to be provisioned and managed largely as it has been for the past 20 years, with distributed IT spread out across potentially hundreds of remote and branch locations. However, this approach can bring data risk and operational penalties to companies at an extremely high cost, and in today’s increasingly distributed enterprise with a primary focus on data and security, past approaches may not be ideal for business success. Given the various challenges associated with managing remote sites, organizations have their hands full in supporting the edge.

More of the Continuity Central post