25
Sep 18

TechTarget – What is multi-access edge computing, and how has it evolved?

Multi-access edge computing provides the processing capacity needed to support the increase of ‘things’ at the network edge. But for all its promise, MEC has challenges to face.

Multi-access edge computing is based on the principle that processing capacity at the edge of the network will provide significant application benefits in terms of responsiveness, reliability and security. Despite the increasing number of vendor options, multi-access edge computing is in its early stages, with many potential buyers in the investigation or pilot phases of deployment.

Multi-access edge computing (MEC) is a network architecture that supports compute and storage capacity at the network edge, rather than in a central data center or cloud location.
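The responsiveness benefit comes largely from physics: a request cannot travel faster than light in fiber, so the round trip to a distant data center has a hard floor that an edge site avoids. A back-of-envelope sketch in Python (the distances are illustrative assumptions, not figures from the article):

```python
# Illustrative only: propagation delay alone puts a lower bound on
# round-trip time, so compute placed close to the device wins on latency.

SPEED_IN_FIBER_KM_PER_MS = 200  # light travels roughly 200 km/ms in optical fiber

def min_round_trip_ms(distance_km: float) -> float:
    """Lower bound on round-trip time from propagation delay alone."""
    return 2 * distance_km / SPEED_IN_FIBER_KM_PER_MS

print(min_round_trip_ms(5))      # edge site ~5 km away: 0.05 ms floor
print(min_round_trip_ms(2000))   # distant cloud region: 20.0 ms floor
```

Real round trips add queuing, routing, and processing time on top of this floor, which is why the edge advantage grows for latency-sensitive workloads.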

More of the TechTarget article from Lee Doyle


24
Sep 18

Forrester – Beyond Moore’s Law: How Exponential Technology Will Drive Disruption

I’ve been thinking a lot about exponential technology and asking myself: “Is disruptive change due to ‘Moore’s Law’ done?” Newsweek recently proclaimed, “The Future Is Uncertain As Moore’s Law Comes To An End.” However, while most experts agree that silicon transistors will stop shrinking around 2021, this doesn’t mean Moore’s Law is dead in spirit — even though, technically, it might be.

Chip makers have to find other ways to increase power. For example, there are germanium and III-V technologies, and, at some point, carbon nanotubes, that provide new ways of increasing power. There is also the “gate-all-around” transistor design, extreme-ultraviolet lithography, directed self-assembly techniques, and so on. But how will more powerful CPUs drive future disruptions? I don’t know for sure, but I suspect not as much as other exponential trends and the technologies that exploit them. Here is what I’m thinking:
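Moore’s Law in its classic form is just a doubling rule, which makes the stakes easy to quantify. A minimal sketch, assuming the textbook two-year doubling period (the function and numbers are illustrative, not from the post):

```python
# Illustrative only: Moore's observation modeled as a simple doubling rule,
# with transistor counts doubling roughly every two years.

def projected_transistors(base_count: int, base_year: int, target_year: int,
                          doubling_period_years: float = 2.0) -> float:
    """Project a transistor count forward under a simple doubling model."""
    elapsed = target_year - base_year
    return base_count * 2 ** (elapsed / doubling_period_years)

# Example: a chip with 1 billion transistors in 2011, projected ten years out.
print(round(projected_transistors(1_000_000_000, 2011, 2021)))  # → 32000000000
```

Five doublings in a decade is a 32x gain; losing that compounding, rather than any single process node, is what the end of transistor scaling would actually cost.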

More of the Forrester blog post from Brian Hopkins


08
Jun 18

CIO.com – 8 IT management productivity killers

From neglecting to prioritize key strategic initiatives to failing to adjust project estimates, weak IT management practices are threatening IT’s ability to get the job done.

There are two types of productivity killers in the modern workplace: small distractions that sap your focus and big productivity killers that push you into applying time and effort in all the wrong places. Like it or not, weak IT management practices are what cause the more significant productivity killers.

Following is a look at eight such practices that are derailing your IT department — and how to adjust for success.

1. Neglecting to prioritize strategic projects
IT has to put out fires on occasion. When the online banking servers go down, it’s an emergency. But panic situations tend to be rare. Instead, the steady stream of ad hoc questions and change requests from users is the more significant problem. Making users happy is a worthy goal, but you can easily fall victim to short-term thinking.

More of the CIO.com article from Bruce Harpham


07
Jun 18

InformationWeek – Why IT Costs Keep Rising (and How to Resist the Climb)

It will take a multi-pronged approach for IT organizations to stop the escalation of IT costs.

IT departments have gone through several fundamental changes over the past couple of decades. Today’s technology seems space-age compared to what was available just 10 years ago, and IT professionals everywhere are just trying to keep up.

Many businesses are seeing their IT expenses rise. They’re being forced to invest more in their technological infrastructure and, in many cases, the growing demand for superior technology is driving budgets through the roof. IT costs are expected to maintain this upward trajectory for years to come, and for businesses with already-tight budgets, this seems like an insurmountable challenge.

So why is it that IT costs keep climbing, and what can you do to resist those increases?

More of the InformationWeek article from Larry Alton


06
Jun 18

WSJ – Cloud, Not Tax Cuts, Drives IT Spending: Survey

Corporate information-technology budgets are expected to inch up over last year, as large firms continue to shift more workloads to the cloud, according to Morgan Stanley.

Despite recent federal tax cuts aimed at boosting corporate spending, most chief information officers say their IT spending plans haven’t changed, the bank said in a report Wednesday.

The results are based on a survey of 75 U.S. and 25 European CIOs at companies in a range of industries, most with more than $1 billion in annual revenue. The survey was conducted online and by phone between February and March.

More of the Wall Street Journal article from Angus Loten


05
Jun 18

On stones, clay and rubber balls: why business continuity is not a risk management discipline

Mark Armour explains why he believes that we need to agree on our definitions and change our thinking around risk management, business continuity and resilience.

First, this is not about where the responsibility for business continuity should reside within an organization. It is about the responsibilities of the business continuity profession and its practitioners. Lately, I’ve witnessed the practice of risk management begin to take over that of business continuity. Many practitioners promote this alignment and foster the perception that business continuity is simply a part of the practice of risk management. I say this is bad for both disciplines and the organizations they serve.

For the sake of clarity, let’s start with some simple definitions:

The Institute of Risk Management states ‘Risk management involves understanding, analysing and addressing risk to make sure organisations achieve their objectives.’ The International Risk Management Institute describes its work as ‘The practice of identifying and analyzing loss exposures and taking steps to minimize the financial impact of the risks they impose.’

More of the Continuity Central post


04
Jun 18

InfoWorld – When being cloud-native is a bad idea

Although cloud-native is being pushed as the end game for most cloud-based applications, there are trade-offs to consider.

It’s good to be cloud-native, or at least that’s what everyone is telling you. The idea is that you refactor (meaning partially recode) your applications to take advantage of the native features of the host cloud, such as its native APIs, storage systems, database systems, or security systems, depending on what services that host cloud offers.

The promise you’re being given is that being cloud-native will provide enhanced performance, lower operational costs for your applications, easier operations, and a bunch of other benefits as the cloud platform improves over time.
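The trade-off can be sketched in code. In the hypothetical example below, an application talks to storage through a provider-neutral interface; a cloud-native refactor would replace the stand-in implementation with direct calls to the host cloud’s storage SDK, gaining native features at the cost of portability. All names here are invented for illustration:

```python
# Hypothetical sketch of the cloud-native trade-off. Nothing here is a real
# SDK: BlobStore and InMemoryBlobStore are invented names for illustration.

from abc import ABC, abstractmethod

class BlobStore(ABC):
    """Provider-neutral interface: portable, but forgoes native features."""
    @abstractmethod
    def put(self, key: str, data: bytes) -> None: ...
    @abstractmethod
    def get(self, key: str) -> bytes: ...

class InMemoryBlobStore(BlobStore):
    """Stand-in implementation. A cloud-native refactor would swap this for
    a class calling the host cloud's storage SDK directly, picking up its
    native durability, security, and lifecycle features in the bargain."""
    def __init__(self):
        self._objects = {}
    def put(self, key: str, data: bytes) -> None:
        self._objects[key] = data
    def get(self, key: str) -> bytes:
        return self._objects[key]

# Application code depends only on the interface, so the backing store
# can change without touching it.
store: BlobStore = InMemoryBlobStore()
store.put("report.csv", b"q1,q2\n1,2\n")
print(store.get("report.csv"))
```

The abstraction layer is exactly what a fully cloud-native design gives up: skipping it means less code and more native capability, but every call site is then tied to one provider.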

More of the InfoWorld article from David Linthicum


07
May 18

Computer Weekly – How a new ISO standard helps you take control of your IT assets

The updated ISO standard 19770-1:2017 offers IT managers a way to bring their hardware and software assets under a single management standard.

You must have control over your software and hardware – not just because you should, but because it makes perfect sense and it is good for your business.

The updated ISO standard 19770-1:2017 promises to help you do just that. ISO 19770-1:2017 is not really a new standard, but an update of ISO 19770-1:2012.

But it is not a minor update. It feels more like an overhaul in that it now meets the requirements of a “real” management systems standard, such as ISO 27001. In relation to IT asset management (ITAM), the standard helps to address some significant problems when it comes to reducing risk and establishing a best practice for managing your IT assets.

The 19770-x family covers all the essential areas, such as lifecycle processes and best practices, software tagging and usage rights (entitlements).
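For example, ISO/IEC 19770-2 in that family defines software identification (SWID) tags: small XML documents installed alongside software so that ITAM tools can discover what is deployed. A minimal illustrative tag (the product name and identifiers below are made up):

```xml
<?xml version="1.0" encoding="utf-8"?>
<!-- Illustrative SWID tag per ISO/IEC 19770-2:2015; all values are invented. -->
<SoftwareIdentity xmlns="http://standards.iso.org/iso/19770/-2/2015/schema.xsd"
                  name="ExampleApp"
                  tagId="example.com/ExampleApp-2.1.0"
                  version="2.1.0">
  <Entity name="Example Corp" regid="example.com"
          role="softwareCreator tagCreator"/>
</SoftwareIdentity>
```

Because tags like this are machine-readable and vendor-supplied, discovery tooling can inventory installed software without guessing from file paths or registry entries.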

More of the Computer Weekly article from Jesper Østergaard


25
Apr 18

TechTarget – Serverless technology obfuscates workflows, performance data

I’m hearing that IT infrastructure is dead. And who needs it anymore, really? The future is about moving up the stack to microservices and serverless technology, as we continue to abstract, embed and automate away all the complexities of explicit infrastructure layers, such as storage arrays and physical servers.

On-premises, Capex-style IT is shrinking, while rented and remotely managed hardware and cloud transformation set new standards for modern IT. All the cool kids use end-to-end orchestration, advanced machine learning, real-time management data streams, microservices architecture and insanely scalable container environments. And now we even have serverless computing, sometimes called function as a service (FaaS).
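In the FaaS model, the deployable unit shrinks to a single function; everything below it, servers included, is the platform’s problem. A minimal sketch, using the (event, context) handler convention popularized by AWS Lambda, with a local call standing in for the platform’s trigger:

```python
# Minimal FaaS-style sketch: you supply only a handler function; the
# platform owns provisioning, scaling, and the servers themselves. The
# (event, context) signature follows the AWS Lambda convention, but the
# invocation below is a local stand-in, not a real serverless runtime.
import json

def handler(event, context):
    """Single-purpose function: the entire deployable unit in FaaS."""
    name = event.get("name", "world")
    return {"statusCode": 200,
            "body": json.dumps({"message": f"hello, {name}"})}

# Locally simulate what a platform trigger (HTTP request, queue message,
# timer) would do: call the handler with an event payload.
response = handler({"name": "edge"}, None)
print(response["body"])  # → {"message": "hello, edge"}
```

This granularity is the source of the obfuscation the article's title points to: with no server or container to inspect, workflow and performance visibility depend entirely on what the platform chooses to expose.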

More of the TechTarget article from Mike Matchett


24
Apr 18

Continuity Central – Vast majority of organizations are unable to identify anomalous behaviour in cloud applications

Bitglass has published the findings of a survey for its ‘Cloud Hard 2018: Security with a Vengeance’ report, which includes insights from over 570 cybersecurity and IT professionals on their approach to cloud security.

Visibility and compliance challenges continue to trouble organizations, with fewer than half of respondents claiming they have visibility into external sharing and DLP policy violations in their cloud applications and environments. Even more worrying was the finding that 85 percent of organizations were unable to identify anomalous behaviour across cloud applications.

More of the Continuity Central article