
Top 10 End-User Experience Monitoring Trends 2018

It was a big year for end-user experience monitoring in 2017, with more businesses baselining, detecting, identifying, escalating, and fixing performance issues that can disrupt their customers’ or end users’ experience. With the rise of high-profile performance incidents such as Amazon’s $150 million S3 outage or Macy’s Black Friday and Cyber Monday “mini-outages,” the negative impact goes beyond revenue to damaged reputation and loyalty.

According to Forrester, 40% of consumers have a high willingness and ability to shift spend, with an additional 25% building that mindset. Customers can reward or punish companies based on a single experience – a single moment in time. It’s no wonder that, according to one study in Forbes, 75% of companies said their number one objective was to improve customer experience.

Here’s my take on what this means for end-user experience monitoring in 2018:

1. Reset of the Customer-Centric CIO

The difference between the customer-centric CIO of 2007 and today can be summed up in one word: digitalization. Cloud and mobility are accelerating a pace of unprecedented business disruption in which 70% of the top 10 global companies are new. Companies like Amazon, Southwest Air, Apple, Disney, TD Bank, and others are fanatically focusing on customers and raising the stakes; according to Walker Information’s Customers 2020 report, customer experience will overtake price and product as the key brand differentiator by 2020.

In 2018, CIOs will embed customer-centricity into their organization’s DNA, dramatically shifting IT’s mindset from internal management to delivering amazing customer experiences. CIOs will (and must) become change leaders, building customer-centric IT capabilities to attain far broader business objectives such as customer experience and revenue. This means both IT innovation and renovation initiatives will be geared towards and measured against these outcomes, expanding end-user experience monitoring as the “central nervous system” of digital performance management.

2. Rise of Modern Synthetic Monitoring Technology 

Synthetic monitoring technology is as old as the World Wide Web. However, its role in end-user experience monitoring has never been more important. According to a Research and Markets study, the enterprise synthetic application monitoring tool market is expected to grow to $2.1 billion by 2021, or over 18% annually, growing significantly faster than code-level or infrastructure monitoring technology.

In 2018, the critical need to simulate users’ interactions with increasingly complex digital services running on increasingly dynamic, distributed, and heterogeneous environments is spurring the rise of modern synthetic monitoring technology. But the reasons for this comeback are more about what it can do for end-user experience monitoring than its “website monitoring” ancestor. Modern synthetic monitoring technology can:

  1. Proactively identify performance issues before customers or users are impacted
  2. Test almost any element traversing the internet, including third-party services and network protocols
  3. Analyze multiple factors affecting speed, availability, and reliability in real time at painstaking granularity, and automatically guide troubleshooting and diagnosis
  4. Eliminate the “noise” and false alerts associated with older synthetic technology
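The core mechanic behind the capabilities above — a scripted probe that measures a transaction and alerts proactively — can be sketched in a few lines. This is a minimal illustration, not a real product’s API; the URL, latency budget, and the stubbed `fetch` transport are all hypothetical stand-ins for a real HTTP client and endpoint.

```python
import time
from dataclasses import dataclass

@dataclass
class CheckResult:
    url: str
    status: int
    latency_ms: float

def evaluate(result, latency_budget_ms=2000, ok_statuses=frozenset({200})):
    """Return a list of alert strings; an empty list means the check passed."""
    alerts = []
    if result.status not in ok_statuses:
        alerts.append(f"{result.url}: unexpected status {result.status}")
    if result.latency_ms > latency_budget_ms:
        alerts.append(f"{result.url}: {result.latency_ms:.0f} ms exceeds "
                      f"{latency_budget_ms} ms budget")
    return alerts

def synthetic_check(url, fetch, latency_budget_ms=2000):
    """Run one scripted probe. `fetch` performs the request and returns a
    status code, so the transport can be swapped for a stub when testing."""
    start = time.perf_counter()
    status = fetch(url)
    latency_ms = (time.perf_counter() - start) * 1000
    return evaluate(CheckResult(url, status, latency_ms), latency_budget_ms)

# Demo with a stub transport standing in for a real HTTP client.
alerts = synthetic_check("https://example.com/login", fetch=lambda u: 200)
print(alerts)  # []
```

The proactive part is simply that this runs on a schedule from controlled vantage points, so a breached budget or bad status fires before a real user ever hits the page.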

Further reading: 5 Reasons Synthetic Monitoring is More Important than Ever; Synthetic Monitoring Wherever you need it

3. SaaS Monitoring Gains Big Traction

SaaS adoption is becoming a business imperative – as a recent Forrester report put it, “if you aren’t using SaaS broadly, your business risks falling behind.” In fact, a recent study conducted by IDG Research shows that 90% of all organizations today either have apps running in the cloud or are planning to use cloud apps within 12 months. But moving to SaaS does not relieve IT of delivering business value. Just because you didn’t monitor your hosted Exchange before does not mean you don’t need to monitor Office 365 now. While you’re no longer on the hook for code maintenance, who do your users call when your Office 365 service is hindering business productivity in your San Francisco office, or your Salesforce service experiences mini-outages in some of your call centers? Adding to this storm, over 90% of the delivery paths of SaaS services are beyond your firewall and outside of your control.

For 2018: Business demand to monitor users’ experience of SaaS applications will become an end-user experience monitoring imperative. Monitoring your SaaS providers’ speed, availability, and reliability in your physical locations, or wherever employees are located, will be the new normal. This includes telemetry and analytics to drill down and troubleshoot a host of moving parts that can degrade SaaS performance and availability, including end-to-end path visibility starting with a user, traveling through the network, and to the application. As an aside, this is where modern synthetic monitoring technology has a huge advantage. While APM and other traditional monitoring technologies can only monitor systems within your infrastructure, modern synthetic monitoring technology can see how your SaaS apps are performing through your users’ lens, almost always before there is a widespread impact.

Further reading:  State of SaaS Report

4. RUM and Synthetic Unite! 

The problem is that the dominant discussion about these two technologies has been largely binary: either synthetic monitoring or RUM, synthetic monitoring vs. RUM. But according to Gartner’s Innovation Insight for Digital Experience Monitoring report, “Traditionally, the various end-user experience monitoring data ingestion mechanisms have been deployed separately from one another and sometimes heated arguments have been had as to which mechanism is the most optimal. The truth is that each ingestion mechanism has something to contribute to the observation and understanding of how users, customers, and others interact with an enterprise application portfolio.”

The combination of synthetic monitoring and RUM will gain traction in 2018 as businesses learn how the two complement each other in their end-user experience monitoring strategy. Most notably, synthetic monitoring allows you to simulate and test real-world interactions of your users, helping you preemptively drill down into causal factors instead of waiting for shopping carts to be abandoned. At the same time, RUM lets you see how your website responds to actual users, helping validate and/or tune your synthetic test results and telemetry.
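One concrete way the two validate each other is comparing lab (synthetic) numbers against field (RUM) percentiles: when the real-user p75 drifts well above the synthetic median, the scripts probably no longer reflect real conditions. This is a minimal sketch with made-up latency samples and an arbitrary 1.5× drift threshold, not any vendor’s actual methodology.

```python
from statistics import median, quantiles

def rum_drifted(synthetic_ms, rum_ms, factor=1.5):
    """Flag when real-user (RUM) p75 latency exceeds the synthetic
    baseline median by more than `factor` — a hint that synthetic
    tests should be retuned (new geography, network, or page weight)."""
    baseline = median(synthetic_ms)
    p75 = quantiles(rum_ms, n=4)[2]  # third quartile of field data
    return p75 > factor * baseline

# Synthetic probes say ~800 ms; real users are seeing far worse.
print(rum_drifted([800, 820, 790], [900, 1500, 2100, 2400]))  # True
```

In the healthy case the two sources agree and each one’s alerts corroborate the other; in the drifted case RUM is telling you where the synthetic coverage has a blind spot.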

Further reading: Monitor All That Matters: How Synthetic + RUM Provide Comprehensive Web Performance Insight

5. Monitoring the API Economy 

Forbes called 2017 the Year of the API Economy, with shifts occurring in how APIs are consumed, integrated into platforms, and enriched with greater potential to provide contextual intelligence for customers. According to API University’s directory, there are over 14,000 public APIs that can be used to deliver new workflows, products, and business models such as omnichannel selling. As APIs gain traction as the “enabler of high-speed digital business innovation and renovation,” the complexity of orchestrating them in real time with third-party data aggregators, mobile service providers, social media, and so on, increases.

With the rise of modern end-user experience monitoring and the demands of digital business, modern API monitoring will multiply in 2018. While frameworks like Flask and Express can enable developing APIs in minutes, monitoring third-party web services is another matter. Triggering alerts on API performance degradation and SLA breaches will become table stakes in monitoring the user’s experience – businesses will bulk up on API monitoring capabilities to get the granular analytics to determine which web service (yours or a third party’s) is causing performance issues before they degrade user experience and hurt business outcomes.
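Attributing a slow transaction to your service versus a third party’s starts with per-dependency timing. The sketch below, with hypothetical service names and `time.sleep` standing in for real API calls, shows the basic idea: wrap each downstream call, accumulate latencies per service, and name the worst offender.

```python
import time
from contextlib import contextmanager

timings = {}

@contextmanager
def timed(service):
    """Record wall-clock latency (ms) for each downstream service call."""
    start = time.perf_counter()
    try:
        yield
    finally:
        timings.setdefault(service, []).append(
            (time.perf_counter() - start) * 1000)

def slowest(timings):
    """Return the service with the worst average latency."""
    return max(timings, key=lambda s: sum(timings[s]) / len(timings[s]))

# Simulated transaction touching our own API and a third-party payments API;
# the sleeps stand in for the actual network calls.
with timed("our-api"):
    time.sleep(0.01)
with timed("payments-3p"):
    time.sleep(0.05)

print(slowest(timings))  # payments-3p
```

With per-service timings in hand, SLA-breach alerting is just a threshold comparison on each service’s numbers instead of on the end-to-end total.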

Further reading: Web Performance 101: Monitoring APIs; Ensuring API Reliability; API Monitoring Primer Handbook

6. SLA Management Comes of Age


Digitalization, cloud, and mobility (and IoT is around the corner) are bringing a torrent of third-party service dependencies such as SaaS, DNS, CDN, and even APIs, changing the nature of end-user experience monitoring from managing monolithic applications to governing such services in the context of the end-user experience. For example, when your application is experiencing micro-outages, do you have the monitoring telemetry to identify the problem as a specific third-party service and NOT your web page? How do you effectively measure the performance of third-party services and implement that measurement in SLAs? Do you have the ability to promptly alert and accurately report on SLA compliance?

An external or internal SLA breach brings lost revenue, productivity, and legal penalties – add any loss of brand goodwill and repeat customers, and total costs can easily reach into the millions. For 2018, demand for modern end-user experience monitoring tooling that includes comprehensive third-party provider instrumentation will rise, but that is only the first step toward tackling modern SLA management. IT ops leaders will also build the requisite skills and processes to hold third-party providers accountable when they breach your service level agreements (SLAs).
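Accurate SLA compliance reporting ultimately reduces to arithmetic over monitoring intervals. A minimal sketch, assuming one up/down check per interval and a hypothetical 99.9% availability target (real SLAs often carve out maintenance windows and define “down” more precisely):

```python
def sla_compliance(checks, target=0.999):
    """checks: one boolean per monitoring interval (True = service was up).
    Returns (measured availability, whether the SLA target was met)."""
    availability = sum(checks) / len(checks)
    return availability, availability >= target

# 1440 one-minute checks in a day; two failed intervals is already a breach
# of a three-nines daily target (which allows ~1.4 minutes of downtime).
checks = [True] * 1438 + [False] * 2
availability, met = sla_compliance(checks, target=0.999)
print(f"{availability:.4%} met={met}")  # 99.8611% met=False
```

The hard part in practice is not this calculation but the evidence behind `checks`: independent, third-party-visible measurements that both sides of the SLA accept.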

Further reading: A Practical Guide to SLAs; The Dos and Don’ts of SLA Management;  SLA Monitoring Handbook

7. EUEM Overshadows APM

APM was supposed to be the darling of end-user experience monitoring until everything started moving to the cloud (SaaS, APIs, network paths, microservices, etc.). APM solutions are designed to monitor application services where they have direct access to the code – they are not built to monitor services where the majority of the service infrastructure lies outside the IT periphery. And according to Forbes, “While current monitoring tools typically rely on application performance monitoring (APM), these metrics aren’t dynamic or granular enough to provide line-of-business value…to understand software issues experienced by end users.”

The adoption of modern end-user or digital experience monitoring (DEM) tools will surge in 2018 as more digital services move outside firewalls and APM becomes less tenable in terms of cost. Modern end-user experience monitoring solutions are specifically designed to monitor performance and availability from the user’s perspective, filling the growing gap (due to the cloud) left by traditional application discovery, tracing, and diagnostics. Through active and passive monitoring of the service infrastructure outside the application code, businesses can “deep dive” into troubleshooting and root-cause analysis with granular visibility into issues that impact user experience and business outcomes.

Further reading: Closing the End-User Experience Gap in APM; Closing Costly Visibility Gaps in Application Performance Management

8.  The “New Network” Monitoring

Relying on the Internet and cloud to deliver applications significantly changes user traffic patterns. Traditional network traffic, which was generated by an end user accessing a centralized data center, is now generated by an extremely diverse set of network protocols going to and from a diverse set of locations where data is accessed. And poorly performing network protocols or DNS can provoke widespread end-user dissatisfaction and negatively impact business outcomes. Unfortunately, according to Gartner, “the vast majority of (traditional packet and flow) network monitoring technologies deployed today leave significant visibility gaps.”

For 2018, end-user experience monitoring strategies that include monitoring cloud-centric network elements will gain solid traction, especially for public cloud or SaaS environments and the monitoring of non-office-based user traffic. As more business moves to the cloud and user mobility increases, businesses will augment their user or digital experience monitoring instrumentation to effectively troubleshoot new network elements such as route health, Border Gateway Protocol (BGP), TCP connectivity, DNS resolution, IPv4, IPv6, and Network Time Protocol (NTP).
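Of the network elements above, DNS resolution is the simplest to instrument from a user vantage point. A minimal sketch using only the standard library (route-health and BGP checks need specialized tooling well beyond this):

```python
import socket
import time

def resolve_time_ms(hostname):
    """Measure how long name resolution takes for one hostname, as seen
    from this vantage point. Covers only the resolver step of the user's
    path; raises socket.gaierror if the name cannot be resolved."""
    start = time.perf_counter()
    socket.getaddrinfo(hostname, None)
    return (time.perf_counter() - start) * 1000

# "localhost" resolves without leaving the machine, so the demo runs offline;
# a real check would target your production hostnames from user locations.
ms = resolve_time_ms("localhost")
print(f"localhost resolved in {ms:.1f} ms")
```

Run from many geographies on a schedule, a check like this separates “DNS is slow for users in region X” from “the application itself is slow,” which is exactly the attribution question packet and flow tools inside the data center cannot answer.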

Further reading: Is It Time to Rethink Your Network Monitoring Strategy?; Troubleshooting Network Protocols in a Complex Digital Environment; The Network’s Impact on End-User Experience

9. AI: It’s the Data, S&^$!

This is not just another tabloid opinion on how AI (Artificial Intelligence) is redefining monitoring. Yes, the use of AI is gaining traction in nearly every area of IT operations, where Gartner forecasts that by 2019, “25% of global enterprises will have strategically implemented an AIOps platform supporting two or more major IT operations functions.” AIOps platforms such as Sumo Logic and Splunk use AI to discover patterns from very large data sets from log files, service desk and, increasingly, various monitoring practices.

By contrast, the use of AI for end-user experience monitoring in 2018 is more about the quality of the data ingested, since modern synthetic monitoring is itself a predictive technology, using “robots” to simulate users’ interactions (including the location and network from which they access your services) to identify potential issues before your users are widely disrupted. Bad and/or noisy data (often found in legacy “web monitoring” systems) means a deluge of false positives, false negatives, and endless war-room hours and finger-pointing. Add the growing complexity of digital services running on increasingly dynamic, distributed, and heterogeneous environments, and it’s easy to see why AI for end-user experience monitoring is more about the hard stuff: the data.

Further reading: Actionable Insights with Guided Intelligence; Reducing MTTR; Guided Intelligence

10. The Amazon Effect

The new normal for customer experience is digital anything, instant everything. The Amazon effect is causing customers (and increasingly employees) to expect the same experience regardless of what they buy, even healthcare. According to Ingrid Lindberg, president of loyalty marketing and customer experience consultancy at Kobie Marketing and former chief experience officer at Cigna (CI), “Consumers are not comparing their experience between health care providers or insurance companies. Instead, they’re measuring customer experience everywhere they go. In effect, the experience at CVS and Aetna is being compared to that of Zappos, Marriott (MAR) and Nordstrom (JWN).”

The Amazon Effect has everything to do with modern end-user experience monitoring in 2018 as IT ops fundamentally shifts from an inward mindset to maniacally focusing on delivering successful customer (or, for internal services, employee) experiences. Customer-centric CIOs will dispel the “I don’t need:

  • modern end-user experience monitoring. I have APM and infrastructure monitoring.”
  • synthetic monitoring. I have RUM.”
  • RUM. I have synthetic monitoring.”
  • to monitor my users’ experience of my SaaS providers. I have a guaranteed SLA from them.”
  • to monitor the new network. I use packet and flow monitoring.”

and so on. And if there is still doubt about the importance of modern end-user experience monitoring, Gartner has some sobering insights for 2020: 50% of CEOs say their industries will be digitally transformed; more than 50% of enterprises will replace core IT operations management tools entirely; and 30% of global enterprises will have strategically implemented end-user or digital experience monitoring technologies (up from fewer than 5% today).

Further reading: Keeping Up with Amazon; EMA Research: Taming IT Complexity with End-User Experience Monitoring;  The Ultimate Guide to End-User Experience Monitoring; Gartner: How to Start an IT Monitoring Initiative.

The post Top 10 End-User Experience Monitoring Trends 2018 appeared first on Catchpoint's Blog - Web Performance Monitoring.


More Stories By Mehdi Daoudi

Catchpoint radically transforms the way businesses manage, monitor, and test the performance of online applications. Truly understand and improve user experience with clear visibility into complex, distributed online systems.

Founded in 2008 by four DoubleClick / Google executives with a passion for speed, reliability and overall better online experiences, Catchpoint has now become the most innovative provider of web performance testing and monitoring solutions. We are a team with expertise in designing, building, operating, scaling and monitoring highly transactional Internet services used by thousands of companies and impacting the experience of millions of users. Catchpoint is funded by top-tier venture capital firm, Battery Ventures, which has invested in category leaders such as Akamai, Omniture (Adobe Systems), Optimizely, Tealium, BazaarVoice, Marketo and many more.
