Tales from the Field: Spectre, Meltdown, and Patching Performance

By now you’ve certainly heard of – or perhaps been impacted by – Meltdown and Spectre, two newly discovered vulnerabilities that affect nearly every modern processor. If not, you might want to take a moment and visit Meltdownattack.com for an overview, and Ars Technica for some concrete examples of how the vulnerabilities affect processors.

As with any highly visible and impactful public vulnerability, companies often quickly shift into an all-hands-on-deck operational motion once vendor patches are available. This particular set of patches was expected to noticeably degrade system performance, because the patch code must compensate in software for a flaw in the hardware. Degraded performance carries a host of negative consequences, including a worse user experience, which typically translates into lost revenue or other business impact. As a result, teams should spend additional time on performance testing to ensure the viability of their systems post-patch.

As a leader of our SaaS team, I have been heavily involved in our own evaluation of the impact of patching Meltdown and Spectre. This post is meant to share our experience and how you might apply it to your own patch process.

Our SaaS platform services customers are running mission-critical applications. It is therefore very important for us to understand the impact these patches might have on our environment so we can make any operational adjustments needed post-patch and our customers can continue to rely on our software to run their businesses. We have been testing the Meltdown and Spectre patches in-house in pre-production, leveraging our own platform to better gauge potential impacts on both the SaaS infrastructure, and in a typical on-premises environment.

We use AppD to identify all of the business transactions that touch the various components of our infrastructure (databases, message queues, caches, etc.), see the code paths where transactions in the application are negatively impacted, and identify adjustments that might help mitigate the performance degradation.

A key to identifying these performance degradations is AppDynamics’ dynamic baselining, which records how business transactions and various system metrics behave throughout the day and week. This means that once we apply a patch, we are able to compare the new performance to what we’ve seen prior – something you cannot do when relying on static thresholds to tell you if there is a problem.
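AppDynamics doesn’t publish the details of its baselining algorithm in this post, so as an illustration only, here is a minimal sketch of the idea: build a per-hour-of-week baseline (mean and standard deviation) from historical response times, then flag post-patch measurements that fall outside that hour’s normal range rather than outside a single static threshold. The function and bucket names are hypothetical, not AppDynamics APIs.

```python
from collections import defaultdict
from statistics import mean, stdev

def build_baseline(samples):
    """Group historical (hour_of_week, response_ms) samples and compute
    a mean/stddev baseline per hour-of-week bucket (0..167)."""
    buckets = defaultdict(list)
    for hour_of_week, response_ms in samples:
        buckets[hour_of_week].append(response_ms)
    return {h: (mean(v), stdev(v)) for h, v in buckets.items() if len(v) > 1}

def deviates(baseline, hour_of_week, response_ms, n_sigma=3.0):
    """Flag a post-patch measurement that exceeds the dynamic baseline
    for that hour, instead of comparing against one static threshold."""
    mu, sigma = baseline[hour_of_week]
    return response_ms > mu + n_sigma * sigma

# Historical pre-patch data: busy daytime hours vs. quiet overnight hours.
history = [(9, 100 + i) for i in range(10)] + [(3, 40 + i) for i in range(10)]
baseline = build_baseline(history)
print(deviates(baseline, 9, 130))  # well above the daytime norm -> True
print(deviates(baseline, 3, 50))   # normal for the overnight bucket -> False
```

The point of the hour-of-week keying is that 130 ms might be an anomaly during a quiet window but entirely normal at peak load; a static threshold cannot capture both.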

We aren’t just applying baselines to our technical performance metrics, either. We’re leveraging our own custom dashboards from Business iQ to track the key transactions and data that drive our business KPIs. This allows us to easily identify which patching-related slowdowns should be addressed first. Since we may need to roll out patches before all slowdowns are addressed, we want to minimize business impact while doing the right thing, security-wise.

As you might imagine, the environment underlying the AppDynamics platform is extremely complex. We have a private data center with software running on bare metal servers, and we leverage public cloud providers for many of our services. There are different cost implications to updating these environments that we consider, and you may be in a similar situation.

In the interest of rolling out patches as soon as possible, a short-term solution to any performance impact may be to change the computing environment itself until you’ve had a chance to update any code. For instance, if the average response time of a critical business transaction doubles due to resource contention during testing, you may decide it’s time to increase the size of your cloud instances, or add instances to help carry the load until you’ve redesigned your implementation.

For AppD, our bare metal servers offer a fixed compute environment; this means the only way to boost compute power in response to a performance degradation is to increase the number of servers allocated to a given task, which has obvious cost implications. In the cloud, our systems run on virtual machines, allowing us to increase instance size or simply allocate more instances. There may be cost trade-offs between increasing instance size and running more instances, as everyone’s situation is unique. We rely on our own software to help us understand how our servers and instances are being utilized, which assists with our capacity planning updates.

A note on the example above: I used average response time as the decision point for whether to change our environment, not just a technical metric like CPU utilization. It’s important to focus on the actual customer impact of system performance, and then use system metrics to help understand root cause. You don’t want to rush into environment changes for things that may not impact your business.
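That decision rule can be sketched in a few lines. This is an illustrative toy, not an AppDynamics feature: the function name and the 80% CPU and 2x slowdown thresholds are assumptions chosen for the example, and real gates should come from your own baselines.

```python
def scaling_recommendation(baseline_ms, current_ms, cpu_util, slowdown_factor=2.0):
    """Decide on a short-term environment change from the customer-facing
    latency first, then use CPU utilization only for root-cause context.
    All thresholds here are illustrative."""
    if current_ms < baseline_ms * slowdown_factor:
        return "no change"           # users are not meaningfully impacted
    if cpu_util > 0.80:
        return "add instances"       # latency doubled and compute is saturated
    return "investigate code path"   # slow but not CPU-bound: scaling won't help

print(scaling_recommendation(100, 250, 0.92))  # -> add instances
```

Note the ordering: a saturated CPU with healthy response times returns "no change", because the business transaction, not the system metric, drives the decision.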

After extensive testing in our development environments, we are currently deploying canary releases – applying the upgrades to a small percentage of isolated production systems – and monitoring the results, which helps us gauge the impact of patches on the performance of our applications. This is a useful method for identifying potential performance impacts without affecting the entire production environment.
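A minimal sketch of that canary workflow, under stated assumptions (the host-naming scheme, the 5% canary fraction, and the 10% regression gate are all hypothetical choices for illustration, not our production values):

```python
import random
from statistics import mean

def assign_canary(host_ids, canary_fraction=0.05, seed=42):
    """Deterministically pick a small subset of hosts to receive the
    patch first; the rest of the fleet stays unpatched as a control."""
    rng = random.Random(seed)
    n = max(1, int(len(host_ids) * canary_fraction))
    return set(rng.sample(sorted(host_ids), n))

def canary_regression(control_ms, canary_ms, max_ratio=1.10):
    """Compare mean response time on patched canaries to the unpatched
    fleet; flag if canaries are more than 10% slower (illustrative gate)."""
    return mean(canary_ms) / mean(control_ms) > max_ratio

hosts = {f"web-{i:03d}" for i in range(100)}
canaries = assign_canary(hosts)
print(len(canaries))  # 5 of 100 hosts get the patch first
```

If `canary_regression` fires, you roll back five hosts instead of a hundred, and you still have a clean before/after comparison against the control group.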

Conclusion

To recap, a few things you should consider when preparing to apply patches:

– Considering the potential impacts of patching on the user experience.

– Using an APM solution with dynamic baselining to monitor and compare performance before and after patching.

– Understanding the business impact of patching, which may influence the short-term trade-offs you make before addressing issues in the long term.

– Using canary releases to gauge the impact of any patches on application performance, without the risk of a full rollout.

When the inevitable happens and you find yourself preparing to apply patches, remember to keep your users’ experience – and the application performance driving that experience – front of mind. Better yet, start preparing now and experience all the other benefits that AppDynamics has to offer with a 15-day free trial!

The post Tales from the Field: Spectre, Meltdown, and Patching Performance appeared first on Application Performance Monitoring Blog | AppDynamics.
