Four key techniques of Cognitive Operations

Forrester recently published the report: “Vendor Landscape: Cognitive Operations”, defined as the “AI version of IT ops management and application performance management.” While cognitive ops solutions can help IT manage increasingly complex and dynamic environments with less effort, Forrester says, “the power of Cognitive Operations depends on the technology within.”

As a pioneer in the use of artificial intelligence in IT operations, having launched our AI-powered platform over three years ago, we couldn’t agree more.

Why we can support all aspects of cognitive ops, while others can’t

When we designed our platform, we realized that traditional approaches would no longer work for managing modern, dynamic, web-scale applications. The complexity, scale and rate of change are simply more than humans can keep up with using traditional tools.

This blog discusses four fundamental changes in approach that we identified as necessary, and around which we purpose-built the Dynatrace platform.

Full stack monitoring

For AI-powered analysis, everything starts with data. The better the data, the better the insights.

Many Cognitive Ops platforms don’t provide any data themselves. Instead, they rely on events and time-series inputs from various monitoring sources. There are two problems with this approach.

First, the data is siloed and lacks transactional context. Most monitoring tools still focus on a particular domain, and those that do cover multiple domains still treat them as individual silos of data. Cognitive Ops platforms may ingest and attempt to correlate all this data, but the data sources are still fundamentally disconnected without the proper semantics.

The second problem is the sampling approach used by most monitoring tools. Just as a self-driving car can't rely on one scan every ten seconds, sampled snapshots aren't enough for self-driving IT either. Fidelity of the data is essential.
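To make the fidelity point concrete, here is a toy illustration (invented numbers, not real monitoring data) of how coarse polling can miss a short-lived spike that per-transaction capture would see:

```python
# Toy illustration: a 10-second polling interval misses a 3-second
# latency spike entirely, while full-fidelity capture sees it.

def sample_every(series, interval):
    """Keep only one observation per `interval` ticks (coarse polling)."""
    return [v for i, v in enumerate(series) if i % interval == 0]

# Simulated per-second response times (ms): a spike at t=4..6.
per_second = [20, 21, 19, 20, 900, 950, 880, 21, 20, 19, 20, 22]

polled = sample_every(per_second, 10)   # samples only at t=0 and t=10
print(max(polled))      # 20  - the spike is invisible to the poller
print(max(per_second))  # 950 - full-fidelity capture sees it
```

The poller's view looks perfectly healthy, which is exactly the kind of blind spot that undermines downstream AI analysis.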

That’s why full stack is core to our approach. When we say full stack, we mean:

  • Seeing every transaction
  • across the vertical stack, i.e. user to app to infrastructure – even within containers and log files
  • across every topology tier regardless of technology stack, end-to-end
  • in context with deep code-level visibility
  • automatically, by deploying a single agent

Dynatrace delivers full stack monitoring with a single agent

Real-time dependency and change detection

Understanding, in real-time, how everything in your environment is connected is fundamental for effectively leveraging AI. Without it, your AI engine can only provide insights based on correlation, which is fraught with problems. More on that in a bit.

Traditional approaches to instrumentation are laborious and require significant time to understand the relationships between components. I recently visited a large financial services company that relied on discovery products and custom scripting to learn their application dependencies. The map updated on a weekly basis, and was out of date the moment the mapping process was complete. With the advent of DevOps and CI/CD this is simply too slow: you can't keep up with daily and hourly application and infrastructure changes.

That’s why we took a fundamentally different approach with Dynatrace. Our OneAgent installs at the host level, discovers every process on the host, automatically defines virtual and physical relationships, and detects changes in real-time at the granularity of individual transaction flows. Our AI engine relies on this topology map, called Smartscape, to analyze dependencies in real-time so we can go beyond correlation and get to true causation.

Dynatrace Smartscape maps relationships in real time
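As a rough mental model of what such a topology map holds (a minimal sketch with invented names, not the actual Smartscape data model), consider a graph that records observed call dependencies and can answer "what does this service depend on, transitively?":

```python
# Minimal sketch of a live topology graph: record which services call
# which, then walk the graph to find everything downstream of a service.
# Service names and the class shape are invented for illustration.

from collections import defaultdict

class Topology:
    def __init__(self):
        self.calls = defaultdict(set)   # service -> directly called services

    def observe(self, caller, callee):
        """Record a dependency seen on a live transaction."""
        self.calls[caller].add(callee)

    def downstream(self, service):
        """Everything a service depends on, transitively."""
        seen, stack = set(), [service]
        while stack:
            for nxt in self.calls[stack.pop()]:
                if nxt not in seen:
                    seen.add(nxt)
                    stack.append(nxt)
        return seen

topo = Topology()
topo.observe("web", "checkout")
topo.observe("checkout", "payments-db")
print(topo.downstream("web"))  # contains 'checkout' and 'payments-db'
```

Because the graph is updated from observed transactions rather than a scheduled scan, it stays current as the environment changes, which is what makes causal analysis (rather than correlation) possible.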

Intelligent anomaly and pattern detection

Most monitoring tools rely on baselines and thresholds that are derived from simple averages and standard deviations using sampled metrics. While some might believe this is better than no visibility, this approach results in a high number of false positives and missed issues when applied at scale.

With Dynatrace, we take a more sophisticated, multidimensional approach to automatic baselining. To determine baselines, we consider factors like user actions, service methods, geolocation, and browser and operating system types. Then we use different algorithms to analyze specific behaviors, such as application and service response time, error rates and load, for every discrete transaction. These smart baselines automatically learn behavior to cope with dynamic changes, and eliminate the error-prone results of generic baselines (e.g. real-time errors need to be analyzed differently than seasonal load).

The net result is a much more accurate and intelligent view of what's working well and what's not, one that requires no manual configuration and adapts to changing patterns.
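A toy contrast shows why a single global mean-and-standard-deviation baseline breaks down on mixed traffic (this is invented data and a deliberately simple model, not Dynatrace's actual algorithm): a genuine slowdown in one region hides inside the wide global band, while a per-dimension baseline catches it.

```python
# Toy contrast: one global mean +/- 2*sigma baseline vs. a per-region
# baseline, on invented response-time data from two regions.

from statistics import mean, stdev

samples = [  # (region, response_time_ms) - invented data
    ("us", 100), ("us", 110), ("us", 95), ("us", 105),
    ("apac", 300), ("apac", 310), ("apac", 290), ("apac", 305),
]

times = [t for _, t in samples]
global_hi = mean(times) + 2 * stdev(times)   # ~415 ms: very wide band

def flags_global(t):
    """One baseline over all traffic, regardless of dimension."""
    return t > global_hi

def flags_per_region(region, t):
    """A separate baseline per region (one dimension of many)."""
    rt = [v for r, v in samples if r == region]
    return t > mean(rt) + 2 * stdev(rt)

# A genuine US-region slowdown at 250 ms:
print(flags_global(250))            # False - lost in the mixed baseline
print(flags_per_region("us", 250))  # True  - clearly anomalous for US
```

The bimodal regional data inflates the global standard deviation so much that a 2.5x slowdown in one region goes unnoticed, while the per-dimension baseline flags it immediately. Real baselining would, of course, use more dimensions and more robust statistics than this sketch.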

Dynatrace multidimensional baselining approach

Domain specific AI-powered causation

Without these building blocks, you can’t get to causation. And that’s really the name of the game, isn’t it? When something goes wrong, you want to immediately know the root cause.

Other Cognitive Ops solutions rely on correlation. They ingest data from different sources and look for anomalies that occur around the same time, then assume that the two things are related.

But that leads to all sorts of false conclusions. There are many examples of this, my favorite being the correlation between Nicolas Cage movies and people drowning in pools. If only we could stop Nicolas Cage from making movies, think how many lives could be spared!

Image credit: tylervigen.com

Dynatrace, on the other hand, relies on a deterministic AI causation engine. As input we use not only metrics, but also anomalies and violations, actual dependencies, event sequence, natural events like code deploys, and we even incorporate expert knowledge from our own experience.

Our AI algorithms then deliver a weighted graph of all incidents that are part of the same problem and the specific incident that is causing the overall problem. This means we replace hundreds of alerts with a single problem notification pointing to the exact cause.
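The core idea can be sketched in a few lines (a simplified illustration with invented names, not the actual Dynatrace engine): among the services currently in violation, a service whose own dependencies are all healthy is a root-cause candidate; everything anomalous above it in the call chain is a symptom.

```python
# Simplified sketch of topology-aware root-cause ranking: given a
# dependency graph and a set of anomalous services, keep only the
# anomalous services with no anomalous direct dependency of their own.

deps = {                     # caller -> callees (invented topology)
    "web": ["checkout"],
    "checkout": ["inventory", "payments-db"],
    "inventory": [],
    "payments-db": [],
}
anomalous = {"web", "checkout", "payments-db"}  # three separate alerts

def root_causes(deps, anomalous):
    """Anomalous services whose direct dependencies are all healthy."""
    return {
        s for s in anomalous
        if not any(d in anomalous for d in deps.get(s, []))
    }

print(root_causes(deps, anomalous))  # {'payments-db'} - one problem, not three
```

Three alerts collapse into one root-cause candidate, which is the essence of replacing an alert storm with a single problem notification. The production engine additionally weighs event sequence, deployment events and expert knowledge, as described above.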

We even present this in a problem evolution viewer, which lets you replay the problem to see how it evolved over time and identify which failed service calls or infrastructure health issues triggered other failures and ultimately caused the problem that affected your customers' experience.

Realize the benefits of cognitive ops today

Forrester describes four benefits of cognitive ops:

  • Reduce the effort of owners of performance and availability
  • React and resolve problems faster
  • Predict and prevent problems before they affect the customer
  • Give meaning relative to the business impact

Dynatrace can deliver on all of these benefits today thanks to the four key capabilities outlined here. Try it for yourself – I’m confident you’ll agree.

The post Four key techniques of Cognitive Operations appeared first on Dynatrace blog – monitoring redefined.
