
Four key techniques of Cognitive Operations

Forrester recently published the report “Vendor Landscape: Cognitive Operations,” which it defines as the “AI version of IT ops management and application performance management.” While cognitive ops solutions can help IT manage increasingly complex and dynamic environments with less effort, Forrester says, “the power of Cognitive Operations depends on the technology within.”

As a pioneer in the use of artificial intelligence in IT operations, having launched our AI-powered platform over three years ago, we couldn’t agree more.

Why we can support all aspects of cognitive ops while others can’t

When we designed our platform, we realized that traditional approaches would no longer work for managing modern, dynamic, web-scale applications. The complexity, scale, and rate of change are simply more than humans can keep up with using traditional tools.

This blog discusses four fundamental changes in approach that we identified as necessary, and around which we purpose-built the Dynatrace platform.

Full stack monitoring

For AI-powered analysis, everything starts with data. The better the data, the better the insights.

Many Cognitive Ops platforms don’t provide any data themselves. Instead, they rely on events and time-series inputs from various monitoring sources. There are two problems with this approach.

First, the data is siloed and lacks transactional context. Most monitoring tools still focus on a particular domain, and those that do cover multiple domains still treat them as individual silos of data. Cognitive Ops platforms may ingest and attempt to correlate all this data, but the data sources are still fundamentally disconnected without the proper semantics.

The second problem is the sampling approach used by most monitoring tools. Just as one sensor scan every ten seconds wouldn’t be enough for a self-driving car, it isn’t enough for self-driving IT either. Data fidelity is essential.
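To make the fidelity point concrete, here is a small illustrative sketch (the numbers are invented, not real monitoring data): a two-second latency spike that per-second capture records but ten-second sampling never sees.

```python
# Full-fidelity capture: one response-time reading per second (ms).
full_capture = [120] * 60
full_capture[31] = 4800  # spike at t=31s
full_capture[32] = 5200  # spike at t=32s

# Ten-second sampling only observes every tenth reading.
sampled = full_capture[::10]  # t = 0, 10, 20, 30, 40, 50

print(max(full_capture))  # 5200 -> full capture sees the spike
print(max(sampled))       # 120  -> the sampled view misses it entirely
```

Any analysis, AI-powered or not, built on the sampled series would conclude nothing unusual happened.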

That’s why full stack is core to our approach. When we say full stack, we mean:

  • Seeing every transaction
  • across the vertical stack, i.e. user to app to infrastructure – even within containers and log files
  • across every topology tier regardless of technology stack, end-to-end
  • in context with deep code-level visibility
  • automatically, by deploying a single agent

Dynatrace delivers full stack monitoring with a single agent
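As a rough sketch of what transactional context across the stack means (the field names here are hypothetical, not Dynatrace’s actual data model), a single trace record can tie every tier to the user action that triggered it, instead of leaving each tier a disconnected silo of metrics:

```python
# One transaction record spanning the vertical stack, linked by a shared trace id.
transaction = {
    "trace_id": "tx-1001",
    "user_action": "checkout.click",
    "tiers": [
        {"tier": "browser",  "component": "checkout-page", "duration_ms": 40},
        {"tier": "service",  "component": "order-service", "duration_ms": 180},
        {"tier": "database", "component": "orders-db",     "duration_ms": 95},
    ],
}

# Because every tier shares the same trace_id, a slow back-end call can be
# attributed to the exact user action it affected.
slowest = max(transaction["tiers"], key=lambda t: t["duration_ms"])
print(slowest["component"])  # order-service
```

With siloed tools, the browser, service, and database readings would exist as three unrelated time series with no shared identifier to join them.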

Real-time dependency and change detection

Understanding, in real-time, how everything in your environment is connected is fundamental for effectively leveraging AI. Without it, your AI engine can only provide insights based on correlation, which is fraught with problems. More on that in a bit.

Traditional approaches to instrumentation are laborious and require significant time to understand the relationships between components. I recently visited a large financial services company that relied on discovery products and custom scripting to learn its application dependencies. The map was refreshed weekly, and it was out of date the moment the mapping process completed. With the advent of DevOps and CI/CD this is simply too slow – you can’t keep up with daily and hourly application and infrastructure changes.

That’s why we took a fundamentally different approach with Dynatrace. Our OneAgent installs at the host level, discovers every process on the host, automatically defines virtual and physical relationships, and detects changes in real-time at the granularity of individual transaction flows. Our AI engine relies on this topology map, called Smartscape, to analyze dependencies in real-time so we can go beyond correlation and get to true causation.

Dynatrace Smartscape maps relationships in real time
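A toy sketch of the idea behind a live topology map (illustrative only, not Smartscape’s implementation): maintain a directed dependency graph that is updated the moment a transaction reveals a relationship, and keep reverse edges so impact questions can be answered instantly.

```python
from collections import defaultdict

depends_on = defaultdict(set)   # component -> components it calls
depended_by = defaultdict(set)  # reverse edges, for impact analysis

def observe_call(caller, callee):
    """Record a dependency the moment an observed transaction reveals it."""
    depends_on[caller].add(callee)
    depended_by[callee].add(caller)

# Relationships discovered from live transaction flows.
observe_call("web-frontend", "order-service")
observe_call("order-service", "orders-db")
observe_call("billing-service", "orders-db")

def impacted_by(component):
    """All components that transitively depend on `component`."""
    seen, stack = set(), [component]
    while stack:
        for parent in depended_by[stack.pop()]:
            if parent not in seen:
                seen.add(parent)
                stack.append(parent)
    return seen

print(sorted(impacted_by("orders-db")))
# ['billing-service', 'order-service', 'web-frontend']
```

Because the graph is rebuilt from observed calls rather than a weekly discovery scan, it never drifts from reality the way the scripted mapping described above does.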

Intelligent anomaly and pattern detection

Most monitoring tools rely on baselines and thresholds that are derived from simple averages and standard deviations using sampled metrics. While some might believe this is better than no visibility, this approach results in a high number of false positives and missed issues when applied at scale.

With Dynatrace, we take a more sophisticated, multidimensional approach to automatic baselining. To determine baselines, we consider factors like user actions, service methods, geolocation, and browser and operating system types. Then we apply different algorithms to analyze specific behaviors – such as application and service response time, error rates, and load – for every discrete transaction. These smart baselines automatically learn behavior to cope with dynamic changes and eliminate the error-prone results of generic baselines (e.g. real-time errors need to be analyzed differently than seasonal load).

The net result is a much more accurate and intelligent view of what’s working well and what isn’t – one that requires no manual configuration and adapts to changing patterns.

Dynatrace multidimensional baselining approach
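A simplified sketch of why per-dimension baselines beat a global average (the dimensions, data, and 1.5× threshold are all illustrative, not Dynatrace’s actual algorithms): the same 250 ms reading is clearly anomalous for EU logins yet invisible to a single global baseline.

```python
from collections import defaultdict
from statistics import median

observations = [
    # (user_action, geolocation, response_time_ms)
    ("login", "EU", 80),  ("login", "EU", 90),  ("login", "EU", 85),
    ("login", "US", 200), ("login", "US", 210), ("login", "US", 190),
    ("checkout", "EU", 400), ("checkout", "EU", 420), ("checkout", "EU", 410),
]

# Learn one baseline per dimension combination, not one global average.
baselines = defaultdict(list)
for action, geo, ms in observations:
    baselines[(action, geo)].append(ms)

def is_anomalous(action, geo, ms, factor=1.5):
    """Flag a reading that exceeds its own dimension's median by `factor`."""
    return ms > factor * median(baselines[(action, geo)])

# 250 ms is abnormal against the EU-login baseline (~85 ms median)...
print(is_anomalous("login", "EU", 250))    # True

# ...but a single global baseline (200 ms median) never flags it.
global_baseline = median(ms for _, _, ms in observations)
print(250 > 1.5 * global_baseline)         # False
```

The same logic run in reverse produces the false positives the text mentions: a perfectly normal 410 ms checkout would trip a global threshold tuned to logins.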

Domain-specific AI-powered causation

Without these building blocks, you can’t get to causation. And that’s really the name of the game, isn’t it? When something goes wrong, you want to immediately know the root cause.

Other Cognitive Ops solutions rely on correlation. They ingest data from different sources, look for anomalies that occur around the same time, and assume those anomalies are related.

But that leads to all sorts of false conclusions. There are many examples of this, my favorite being the correlation between Nicolas Cage movies and people drowning in pools. If only we could stop Nicolas Cage from making movies, think how many lives could be spared!

Image credit: tylervigen.com

Dynatrace, on the other hand, relies on a deterministic AI causation engine. As input we use not only metrics, but also anomalies and violations, actual dependencies, event sequence, natural events like code deploys, and we even incorporate expert knowledge from our own experience.

Our AI algorithms then deliver a weighted graph of all incidents that are part of the same problem and the specific incident that is causing the overall problem. This means we replace hundreds of alerts with a single problem notification pointing to the exact cause.

We even present it in a problem evolution viewer that lets you replay the problem: see how it evolved over time, and identify which failed service calls or infrastructure health issues triggered the failure of other service calls and ultimately led to the problem that affected your customers’ experience.
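A toy illustration of causation over a dependency graph (this is not Dynatrace’s actual engine, just the core idea): anomalies are grouped by walking real dependencies, and a root cause is an anomalous component with no anomalous dependency of its own – rather than whatever merely misbehaved at the same time.

```python
# Service dependency edges: caller -> callees (a failing callee can cause its callers to fail).
calls = {
    "web-frontend": ["order-service"],
    "order-service": ["orders-db"],
    "orders-db": [],
    "email-service": [],  # also anomalous, but unrelated by topology
}

# Components currently showing anomalies (e.g. response-time or error spikes).
anomalies = {"web-frontend", "order-service", "orders-db", "email-service"}

def root_causes(anomalies, calls):
    """Anomalous components with no anomalous dependency that could explain them."""
    return {
        c for c in anomalies
        if not any(dep in anomalies for dep in calls.get(c, []))
    }

print(sorted(root_causes(anomalies, calls)))  # ['email-service', 'orders-db']
```

Note the result: three topologically connected anomalies collapse into one problem rooted at orders-db, while email-service is correctly kept as a separate problem – pure time-based correlation would have lumped all four together.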

Realize the benefits of cognitive ops today

Forrester describes four benefits of cognitive ops:

  • Reduce the effort of performance and availability owners
  • React and resolve problems faster
  • Predict and prevent problems before they affect the customer
  • Give meaning relative to the business impact

Dynatrace can deliver on all of these benefits today thanks to the four key capabilities outlined here. Try it for yourself – I’m confident you’ll agree.

The post Four key techniques of Cognitive Operations appeared first on Dynatrace blog – monitoring redefined.
