
The Dos and Don'ts of SLA Management
By Craig Lowell

The past few years have seen a huge increase in the number of critical IT services that companies outsource to SaaS/IaaS/PaaS providers, be it security, storage, monitoring, or operations. Of course, along with any outsourcing to a service provider comes a Service Level Agreement (SLA) to ensure that the vendor is held financially responsible for any lapses in service that affect the customer's end users and, ultimately, their bottom line.

SLAs can be tricky to manage for a number of reasons: discrepancies over the measurement period, the source of the performance metrics, and the accuracy of the data can all lead to legal disputes between vendor and customer. However, there are several things both sides can do to obtain accurate, verifiable performance data as it pertains to their SLAs.

The first and most critical step is to define the parameters around the data: the method of collection (often an agreed-upon neutral third party) and the times and locations from which performance will be measured. The collection method matters most. If the vendor and the customer use different monitoring tools to measure the Service Level Indicators (SLIs), there will inevitably be disagreements over the validity of the data and whether the Service Level Objective (SLO) was met.
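
To make the SLI/SLO relationship concrete, here is a minimal sketch of an SLO compliance check over success/failure samples from an agreed-upon monitor. The names, targets, and sample counts are illustrative assumptions, not any particular product's API:

```python
from dataclasses import dataclass

@dataclass
class Slo:
    # Hypothetical SLO definition; name and target are illustrative.
    name: str
    target: float  # e.g., 0.999 means 99.9% of checks must succeed

def sli_from_samples(samples: list[bool]) -> float:
    """Compute the SLI as the fraction of successful checks."""
    return sum(samples) / len(samples) if samples else 0.0

def slo_met(slo: Slo, samples: list[bool]) -> bool:
    """Compare the measured SLI against the agreed SLO target."""
    return sli_from_samples(samples) >= slo.target

if __name__ == "__main__":
    availability = Slo(name="ad-server availability", target=0.999)
    # One boolean per synthetic check from the agreed third-party monitor.
    checks = [True] * 9995 + [False] * 5       # 99.95% success rate
    print(slo_met(availability, checks))       # True: 0.9995 >= 0.999
```

The key point is that both vendor and customer run this comparison against the same sample set from the same neutral source, so the pass/fail verdict cannot be disputed on data-provenance grounds.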

Selecting that monitoring vendor depends a great deal on the number of users being served and where they are located. For a company such as Flashtalking, an ad serving, measurement, and technology company delivering ad impressions throughout the US, Europe, and other international markets, a monitoring tool that can accurately measure performance and user experience in many different regions around the world is critical to its SLA management efforts.

Flashtalking agrees upon the external monitoring tool with each of its clients as part of the SLA, using Catchpoint as the unbiased third party because of its number of monitoring locations and the accuracy of its data. Customers naturally want the most accurate view of the user experience and the impressions garnered, and monitoring from as close to the end user as possible is the best way to achieve that. In that sense, the more locations from which the product is tested, the more accurate the data from an end user's perspective.

Those measurement locations should include backbone and last mile nodes, as well as any cloud provider from which the ads are served. This diversity of locations ensures visibility and reporting even if the cloud provider itself experiences an outage; backbone tests eliminate network noise and are therefore the cleanest for validating the SLO, while last mile tests best replicate the end-user experience.
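
As a rough illustration of how such a test matrix might be declared, here is a small sketch; the tier names, locations, and URL are hypothetical and do not reflect a real monitoring product's configuration schema:

```python
# Hypothetical test matrix: tiers map to the location types described
# above; location names are illustrative placeholders.
TEST_LOCATIONS = {
    "backbone":  ["new-york", "london", "frankfurt", "tokyo"],  # clean SLO validation
    "last_mile": ["comcast-chicago", "bt-manchester"],          # end-user realism
    "cloud":     ["aws-us-east-1", "gcp-europe-west1"],         # provider-outage visibility
}

def schedule_checks(url: str) -> list[tuple[str, str, str]]:
    """Expand the matrix into (tier, location, url) check definitions."""
    return [(tier, loc, url)
            for tier, locations in TEST_LOCATIONS.items()
            for loc in locations]

for check in schedule_checks("https://ads.example.com/impression"):
    print(check)
```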

Once both sides have agreed on the SLA and its parameters, each of Flashtalking's products is set up with a single test that captures the performance of its clients' ads through every stage of the IT architecture, whether that means a single site, a single server, or multiple databases, networks, and other components.
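
One way to capture per-stage timings in a single test is to read the timing counters a single HTTP fetch exposes. The sketch below uses the pycurl library (which must be installed separately); it mirrors the idea of staged measurement, not Flashtalking's or Catchpoint's actual setup:

```python
import pycurl
from io import BytesIO

def staged_timings(url: str) -> dict[str, float]:
    """Fetch a URL once and report per-stage timings in milliseconds.

    Each value is cumulative from the start of the request, so e.g.
    tls_ms is the time until the TLS handshake completed, not the
    handshake duration alone.
    """
    buf = BytesIO()
    c = pycurl.Curl()
    c.setopt(pycurl.URL, url)
    c.setopt(pycurl.WRITEDATA, buf)
    c.perform()
    timings = {
        "dns_ms":     c.getinfo(pycurl.NAMELOOKUP_TIME) * 1000,
        "connect_ms": c.getinfo(pycurl.CONNECT_TIME) * 1000,
        "tls_ms":     c.getinfo(pycurl.APPCONNECT_TIME) * 1000,
        "ttfb_ms":    c.getinfo(pycurl.STARTTRANSFER_TIME) * 1000,
        "total_ms":   c.getinfo(pycurl.TOTAL_TIME) * 1000,
    }
    c.close()
    return timings

print(staged_timings("https://ads.example.com/impression"))  # hypothetical URL
```

Breaking one request into these stages is what lets a report point at DNS, network, or origin as the slow link rather than reporting a single opaque total.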

Of course, establishing criteria and setting up the tests is only part of the SLA management battle. To stay on top of its SLAs, a provider must also be able to rely on alerting features to warn when it is in danger of a breach, and on accurate, detailed reporting to help identify the root cause of any issue. In many cases, an ad serving company such as Flashtalking relies on other third parties, such as DNS resolvers, cloud providers, and content delivery networks, to deliver the ads to end users, which means a disruption in service is not necessarily its fault. Still, it must be able to share its performance data with those vendors in order to resolve issues as quickly as possible for its own customers. In cases such as these, being able to cleanly separate first- and third-party architecture components is what lets the provider show where a service disruption originated and hold its own vendors accountable.
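
A common way to implement "in danger of a breach" alerting is the error-budget burn-rate technique from the SRE literature; the sketch below shows the core arithmetic. The thresholds are illustrative assumptions, not a specific monitoring product's feature:

```python
# Hypothetical burn-rate alert; thresholds and windows are illustrative.
def burn_rate(error_rate: float, slo_target: float) -> float:
    """How fast the error budget is being consumed relative to plan.

    A burn rate of 1.0 spends the budget exactly over the SLO window;
    anything sustained above 1.0 eventually breaches the SLO.
    """
    budget = 1.0 - slo_target          # e.g., 0.001 for a 99.9% SLO
    return error_rate / budget

def should_alert(error_rate: float, slo_target: float,
                 threshold: float = 14.4) -> bool:
    # A 14.4x burn sustained for an hour consumes ~2% of a 30-day
    # budget, a common fast-burn paging threshold in SRE practice.
    return burn_rate(error_rate, slo_target) >= threshold

print(should_alert(error_rate=0.02, slo_target=0.999))  # True: 20x burn
```

The appeal of this approach is that the alert fires while the SLO can still be saved, rather than after the contractual threshold has already been crossed.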

To learn more about SLA management and how both customers and vendors can ensure continuous service delivery, check out our SLA handbook.

