Tales from the Field: Debugging Applications in Production with Information Points

Production debugging is one of the most difficult parts of the job for any software engineer and one of the most underrated problems faced by IT. Developers usually rely on logs to troubleshoot production issues. They go through hundreds of lines of logs, sorting through complex logic as their stress levels rise, acutely aware that the bug they are looking for could be crippling the business. It’s painstaking, laborious work at best, and all too often the relevant logs are not available.

Developers need better tools to debug production issues faster. At stake is not only lost revenue but often something even more valuable: the trust of their customers.

I’ve spent many late nights staring at my computer trying to find the root cause of an issue in production environments. The general lack of insights into what is happening in the production environment makes this hard enough, but if the code has been inherited (legacy code) or involves timed elements such as background processes or cron jobs, things get even more difficult to track down. Engineers (myself included) often don’t have a clear understanding of the entire application, so it’s hard to know if the issues we’re tracking down are related to a change we made, or something that changed in another part of the codebase. We look for answers in the logs, but it feels like we are hunting for a needle in a haystack. Adding additional logs in search of more relevant data only increases the amount of proverbial straw and is probably not an option if the issue we are dealing with is time-sensitive.

During these late nights, I used to wish that production environments were more like a local developer environment where debugging is relatively easy, thanks to tools like the debugger in my local IDE or browser. Wouldn’t it be nice to have a debugger or a dynamic logging device in production? Developers need a magic wand (or magnet) to get all the needles from the haystack of production logs! Well, it turns out that the Information Points feature of AppDynamics was the magic wand I was looking for.

What are information points?

If you are not familiar with AppDynamics, think of an information point as a tool that lets you inspect the input parameters or return value of every invocation of a method, along with metrics about the execution time of each invocation. If you are already familiar with AppDynamics, information points are similar to data collectors in business transactions. However, while data collectors show application data in the context of a business transaction, information points reflect data state across all invocations of a method, independent of business transactions. They also let you apply computations to the collected values, for example, the sum or average of a method's return value or an input parameter.
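To make the idea concrete, here is a rough conceptual sketch in Java of what an information point measures. This is purely illustrative, not AppDynamics code, and the class and method names are my own: the agent effectively wraps a method and aggregates execution time and return values across all invocations.

```java
import java.util.concurrent.atomic.AtomicLong;
import java.util.function.IntSupplier;

// Conceptual sketch only -- not the AppDynamics implementation. An
// information point behaves roughly like a wrapper that records the
// execution time and return value of every invocation of a method,
// then aggregates them (count, sum, average) independently of any
// business transaction.
public class InformationPointSketch {
    private final AtomicLong invocationCount = new AtomicLong();
    private final AtomicLong returnValueSum = new AtomicLong();
    private final AtomicLong totalTimeNanos = new AtomicLong();

    // Wraps a method that returns an int, recording metrics per invocation.
    public int record(IntSupplier method) {
        long start = System.nanoTime();
        int result = method.getAsInt();
        totalTimeNanos.addAndGet(System.nanoTime() - start);
        invocationCount.incrementAndGet();
        returnValueSum.addAndGet(result);
        return result;
    }

    public long count() { return invocationCount.get(); }

    public long sum() { return returnValueSum.get(); }

    // Average return value across all invocations so far.
    public double averageReturnValue() {
        long n = invocationCount.get();
        return n == 0 ? 0.0 : (double) returnValueSum.get() / n;
    }
}
```

In the real product the wrapping is done by the agent with no code changes; this sketch only illustrates what "a custom metric on a return value" means.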

Below is an example of how I have used information points as a production debugging tool. You will notice the debug flow is very similar to the way developers find and fix issues using an IDE.

One of our customers was reaching the data limits for one of the metadata items we collect, and the customer was adamant that stale/old data was not getting purged. Our operations engineers increased the limits a couple of times, and the issue got escalated to my engineering team. The system was designed so that stale/old data was deleted by a background cron task. The same background task was used for all similar data, so it was hard to diagnose what was going on and whether the cleanup task was even being invoked for that particular account and those metadata records. There were no relevant logs available to debug this issue further.
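The post doesn't show how the cron task was actually wired up, but a recurring purge like the one described might be scheduled along these lines. This is a sketch under assumed names, using a ScheduledExecutorService as a stand-in for whatever scheduler the real system used:

```java
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;

// Illustrative only: names and scheduling mechanism are assumptions,
// not the actual system described in the post.
public class PurgeSchedulerSketch {
    // Stand-in for the real purge; returns the number of entities deleted.
    static int deleteStaleEntries(int accountId, String entityType) {
        return 0; // real delete logic elided
    }

    // Runs the purge at a fixed period. The period is a parameter so the
    // same wiring covers the 10-minute production cadence.
    static ScheduledExecutorService schedulePurge(Runnable purge,
                                                  long period, TimeUnit unit) {
        ScheduledExecutorService scheduler =
                Executors.newSingleThreadScheduledExecutor();
        scheduler.scheduleAtFixedRate(purge, 0, period, unit);
        return scheduler;
    }
}
```

Because one shared task like this serves many accounts and entity types, nothing in the logs ties a given run to a given account, which is exactly why an information point scoped to one account was so useful.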

To resolve the issue, I created a new information point on the Information Point page with a few clicks, as shown in Figure 1 (below). The information point was created on the delete method of the background task for that particular account and metadata record. I also created a custom metric on the return value, which is the number of deleted entities.

Code block for the delete background task:

public class DeleteBackgroundTask {

    // Purges stale entries for the given account and entity type and
    // returns the number of entities deleted.
    public int deleteStaleEntries(int accountId, String entityType) {
        // Delete code…
    }
}

Figure 1: Information points created with custom metric.

Within the next few minutes, I was able to confirm that the background job to delete stale entries was triggered every 10 minutes and was working as per design. I also was able to see how many records were deleted.

Next, I created another information point for this particular account, on the method that was creating the stale entries. This information point had a custom metric that counted the number of records being created. From this information, I was able to determine that the customer was creating records at a rate higher than the documented limits, so our delete task could not keep up. We conveyed this to the customer, they adjusted their usage, and the entire issue was resolved in less than an hour.
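The diagnosis comes down to simple rate arithmetic on the two custom metrics. The numbers below are hypothetical (the actual rates aren't shown here); the point is that a positive net growth rate means the purge task can never catch up and the data limit will eventually be hit no matter how often it is raised.

```java
public class RateCheck {
    // Net number of stale records accumulating per hour when creation
    // outpaces deletion. Positive means the backlog grows without bound.
    static long netGrowthPerHour(long createdPerHour, long deletedPerHour) {
        return createdPerHour - deletedPerHour;
    }

    public static void main(String[] args) {
        // Hypothetical rates read off the two information points.
        long created = 12_000; // custom metric on the create method
        long deleted = 9_000;  // custom metric on the delete method
        System.out.println(netGrowthPerHour(created, deleted)); // prints 3000
    }
}
```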

The Information Points feature in AppDynamics has truly changed my life. Information points help me understand what is going on in real time, and I use them regularly to debug production issues. They are also used by our quality engineers to test complex background tasks. With information points, problems can be easily isolated to a particular method or segment of code.

I have just one word of caution: there is a limit on the number of information points that can be added to the system, because collecting too many can impact your application's performance. Make sure you delete the information points you create during a debugging session so you are prepared for the next one. Happy debugging!

For more details on information points, check out the AppDynamics documentation. You can also learn more about AppDynamics with our guided tour or by scheduling a demo today.

The post Tales from the Field: Debugging Applications in Production with Information Points appeared first on Application Performance Monitoring Blog | AppDynamics.
