Tales from the Field: Spectre, Meltdown, and Patching Performance

By now you’ve certainly heard of – or perhaps been impacted by – Meltdown and Spectre, two newly discovered vulnerabilities that affect nearly every modern processor. If not, you might want to take a moment to visit Meltdownattack.com for a good overview, and Ars Technica for some good examples of how the vulnerabilities affect processors.

As with any highly visible and impactful public vulnerability, companies often shift quickly into an all-hands-on-deck operational motion once vendor patches are available. This particular set of patches was advertised as likely to degrade system performance once applied, because the patch code must compensate for a vulnerability in the hardware itself. That degradation can lead to a host of negative consequences, including a worse user experience, which typically translates into lost revenue or other harm to the business. As a result, teams should spend additional time on performance testing to ensure the viability of their systems post-patch.

As a leader of our SaaS team, I have been heavily involved in our own evaluation of the impact of patching Meltdown and Spectre. This post is meant to share our experience and how you might apply it to your own patch process.

Our SaaS platform serves customers running mission-critical applications. It is therefore very important for us to understand the impact these patches might have on our environment, so we can make any operational adjustments needed post-patch and our customers can continue to rely on our software to run their businesses. We have been testing the Meltdown and Spectre patches in-house in pre-production, leveraging our own platform to better gauge potential impacts on both our SaaS infrastructure and a typical on-premises environment.

We use AppD to identify all of the business transactions that touch various components of our infrastructure (databases, message queues, caches, etc.), see the code where transactions in the application are negatively impacted, and identify adjustments that might help mitigate the performance degradation.
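
To make this concrete, here is a minimal sketch of pulling the list of registered business transactions from an AppDynamics Controller over its REST API, so you know which transactions to watch before and after patching. The endpoint shown is the Controller’s documented business-transactions resource; the controller URL, application name, and credentials below are placeholders.

import requests

CONTROLLER = "https://controller.example.com:8090"  # placeholder Controller URL
APPLICATION = "my-app"                              # placeholder application name
AUTH = ("apiuser@customer1", "secret")              # user@account, password

# List every business transaction registered for the application.
resp = requests.get(
    f"{CONTROLLER}/controller/rest/applications/{APPLICATION}/business-transactions",
    params={"output": "JSON"},
    auth=AUTH,
)
resp.raise_for_status()

for bt in resp.json():
    # Each entry includes the transaction name and its entry-point tier,
    # a first hint at which infrastructure components it touches.
    print(bt["name"], "->", bt["tierName"])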

A key to identifying these performance degradations is AppDynamics’ dynamic baselining, which records how business transactions and various system metrics behave throughout the day and week. This means that once we apply a patch, we are able to compare the new performance to what we’ve seen prior – something you cannot do when relying on static thresholds to tell you if there is a problem.
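
To illustrate the idea only (AppDynamics’ baselining is far more sophisticated than this toy), here is a sketch of comparing a post-patch sample against the history recorded for the same hour of the week, rather than against a static threshold; all numbers are invented:

from statistics import mean, stdev

def is_degraded(history_ms, current_ms, sigmas=3.0):
    # history_ms: response times recorded pre-patch at this hour of the week.
    mu, sd = mean(history_ms), stdev(history_ms)
    return current_ms > mu + sigmas * sd

# Tuesday 10:00 samples from the weeks before patching (illustrative)
pre_patch = [112, 98, 105, 120, 101, 99, 108]
print(is_degraded(pre_patch, 240))  # True -> flag this transaction for review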

We aren’t just applying baselines to our technical performance metrics, either. We’re leveraging our own custom dashboards from Business iQ to track the key transactions and data that drive our business KPIs. This allows us to easily identify which patching-related slowdowns should be addressed first, as sketched below. Since we may need to roll out patches before all slowdowns are addressed, we want to minimize business impact while doing the right thing, security-wise.
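
As a sketch of that prioritization, imagine scoring each slowdown by how much business value is exposed to it; the transaction names and revenue figures here are invented purely for illustration:

slowdowns = [
    {"bt": "checkout",      "latency_increase_pct": 40, "revenue_per_hour": 50_000},
    {"bt": "search",        "latency_increase_pct": 90, "revenue_per_hour": 2_000},
    {"bt": "order-history", "latency_increase_pct": 25, "revenue_per_hour": 500},
]

# Rank by exposure: a modest slowdown on a high-revenue transaction can
# outrank a large slowdown on a low-revenue one.
for s in sorted(slowdowns,
                key=lambda s: s["latency_increase_pct"] * s["revenue_per_hour"],
                reverse=True):
    print(s["bt"])  # checkout, search, order-history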

As you might imagine, the environment underlying the AppDynamics platform is extremely complex. We have a private data center with software running on bare metal servers, and we leverage public cloud providers for many of our services. There are different cost implications to updating these environments that we consider, and you may be in a similar situation.

In the interest of rolling out patches as soon as possible, a short-term solution to any performance impact may be to change the computing environment itself until you’ve had a chance to update any code. For instance, if the average response time of a critical business transaction doubles due to resource contention during testing, you may decide it’s time to increase the size of your cloud instances, or add instances to help carry the load until you’ve redesigned your implementation.
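
As a hedged sketch of that short-term mitigation (assuming an AWS Auto Scaling group; the group name, thresholds, and instance count are placeholders), the decision might look like this:

import boto3

def scale_out_if_degraded(avg_rt_ms, baseline_rt_ms,
                          asg_name="web-tier-asg", factor=2.0, add=2):
    # Only touch the environment when the customer-facing metric has
    # actually degraded past tolerance (here: average response time doubled).
    if avg_rt_ms < factor * baseline_rt_ms:
        return
    asg = boto3.client("autoscaling")
    group = asg.describe_auto_scaling_groups(
        AutoScalingGroupNames=[asg_name])["AutoScalingGroups"][0]
    desired = min(group["DesiredCapacity"] + add, group["MaxSize"])
    asg.set_desired_capacity(AutoScalingGroupName=asg_name,
                             DesiredCapacity=desired)

scale_out_if_degraded(avg_rt_ms=480, baseline_rt_ms=210)  # doubled -> scale out

Note that the trigger here is a customer-facing metric rather than raw CPU utilization, a point expanded on below.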

In our data center, our bare-metal servers offer a fixed compute environment; this means the only way to boost compute power in response to performance degradation is to increase the number of servers allocated to a given task, which has obvious cost implications. In the cloud, our systems run on virtual machines, allowing us to increase instance size or simply allocate more instances. There may be cost trade-offs between increasing instance size and running more instances, as everyone’s situation is unique. We rely on our own software to help us understand how our servers and instances are being utilized, which assists with our capacity planning updates.

A note on the example above: I used average response time as the decision point for whether to change our environment, not just a technical metric like CPU utilization. It’s important to focus on the actual customer impact of system performance, and then use system metrics to help understand root cause. You don’t want to rush into environment changes for things that may not impact your business.

After extensive testing in our development environments, we are currently deploying canary releases – applying the upgrades to a small percentage of isolated production systems – and monitoring the results, which helps us gauge the impact of patches on the performance of our applications. This has proven a useful method for identifying potential performance impacts without affecting the entire production environment.
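
A simplified sketch of that canary selection follows; the host names and the five-percent slice are invented for illustration, and the monitoring comparison is described in comments rather than a real API:

import hashlib

def in_canary(hostname, percent=5):
    # Deterministically assign roughly `percent`% of hosts to the canary
    # group, so the same hosts are chosen on every run.
    h = int(hashlib.sha256(hostname.encode()).hexdigest(), 16)
    return h % 100 < percent

hosts = [f"app-{i:03d}" for i in range(200)]
canary = [h for h in hosts if in_canary(h)]
control = [h for h in hosts if not in_canary(h)]
print(f"{len(canary)} canary hosts, {len(control)} control hosts")

# After patching only the canary hosts, compare canary vs. control latency
# in your monitoring tool; if the canary group degrades well beyond the
# control group, pause the rollout before it reaches everyone.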

Conclusion

To recap, a few things you should consider when preparing to apply patches:

– Considering the potential impacts of patching on the user experience.

– Using an APM solution with dynamic baselining to monitor and compare performance before and after patching.

– Understanding the business impact of patching, which may influence the short-term trade-offs you make before addressing issues in the long term.

– Using canary releases to gauge the impact of any patches on application performance, without the risk of a full rollout.

When the inevitable happens and you find yourself preparing to apply patches, remember to keep your users’ experience – and the application performance driving that experience – front of mind. Better yet, start preparing now and experience all the other benefits that AppDynamics has to offer with a 15-day free trial!
