
Bringing Deeper Monitoring to DevOps

The point of continuous integration is to automate builds and tests, bringing efficiency and quality to the pipeline. However, the faster pace of development and more frequent updates that come with continuous integration mean things do sometimes go wrong.

When a major incident strikes, there's a lot of panic. That's where incident management comes into the picture. But does it always have to come after something goes wrong? Integrating incident management from the beginning, into your continuous integration process itself, will take accountability, visibility, and transparency to a whole new level.

In this post, we’ll discuss how incident management brings deeper monitoring to DevOps, and how it can transform your application development.

Accountability starts at the continuous integration phase

The goal of DevOps is to facilitate collaboration between Development and Operations teams so they understand each other's needs and don't just point fingers at each other when things go wrong. Uptime does not always have to be the Ops team's burden to bear. With DevOps, even a new developer should feel responsible for uptime and should be able to chip in during downtime.

One of the big advantages of implementing continuous integration is that Dev and QA teams are also accountable for shipping quality code. Each time new code is committed, the resulting build is automatically verified by a series of automated unit tests. If incident management is implemented at this level, when something does break, your teams are ready with the right data at hand to resolve the issue effectively. This way, they can quickly troubleshoot without panic and without having to blame anyone. Incident management automatically enforces a culture of quality and makes Dev and QA teams accountable for availability.
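One way to wire this up is to have the CI test stage open an incident automatically when the suite fails, so the team gets the failure data immediately instead of discovering it later. The sketch below assumes a PagerDuty-style events endpoint; the routing key is a placeholder, and the script shape (pytest as the test runner) is an assumption, not a prescription.

```python
"""Sketch: let the CI test stage open an incident on failure.

Assumes a PagerDuty-style events endpoint; the routing key below is a
placeholder, and pytest is just an example test runner.
"""
import subprocess

ROUTING_KEY = "YOUR_ROUTING_KEY"  # hypothetical integration key


def run_tests() -> subprocess.CompletedProcess:
    # Run the project's test suite; any non-zero exit code means failure.
    return subprocess.run(["pytest", "-q"], capture_output=True, text=True)


def build_incident_event(build_id: str, test_output: str) -> dict:
    # Shape the failure into an event the responder can act on:
    # what broke, where it came from, and the raw output to troubleshoot.
    return {
        "routing_key": ROUTING_KEY,
        "event_action": "trigger",
        "payload": {
            "summary": f"CI build {build_id} failed its test stage",
            "source": "ci-pipeline",
            "severity": "error",
            "custom_details": {"test_output": test_output[-2000:]},
        },
    }
```

In a real pipeline, the event dict would be POSTed to your incident management tool's events endpoint when `run_tests()` returns a non-zero exit code.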

Like real-life emergency teams, it’s also good to have a first-response engineer, or on-call engineer, who acts first during an incident before someone with higher responsibility can arrive on scene. To enable this culture of accountability, you need monitoring and on-call management systems that respectively make monitoring data visible across teams, and divide unplanned work based on equitable shifts.
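The "equitable shifts" idea is easy to make concrete: a fixed-length rotation that cycles through the team in order. A minimal sketch, assuming simple back-to-back shifts with no overrides or escalation tiers:

```python
from datetime import datetime, timedelta


def on_call_engineer(engineers: list, rotation_start: datetime,
                     shift_days: int, now: datetime) -> str:
    """Return who is on call at 'now', given equitable fixed-length shifts.

    engineers: ordered list of names; rotation_start: when the first
    engineer's shift began; shift_days: length of each shift.
    A deliberately simplified model: no overrides, no escalation tiers.
    """
    elapsed = now - rotation_start
    # Floor division of one timedelta by another yields the shift count.
    shift_index = elapsed // timedelta(days=shift_days)
    return engineers[shift_index % len(engineers)]
```

Real on-call tools layer overrides, escalation policies, and follow-the-sun schedules on top of this basic round-robin, but the fairness property, everyone taking the same share of unplanned work, comes from the same modular arithmetic.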

Visibility across Dev & Ops teams

A good overview of what the entire team is working on and the progress made helps everyone focus their efforts. Many businesses let the Ops team in on any new code implementations only when things go wrong or when an incident occurs. As a result, Ops teams are sometimes blamed for holding back on changes due to mistrust, which results in slower updates.

If the Dev team is transparent with Ops about new changes even at the planning phase, Ops can be more open to those changes and understand how they benefit the entire business. Letting the Ops team know of new ideas, upcoming features, and possible risks even at the development phase will do wonders for the awareness of the entire team. The Ops team can rest assured that even if something breaks, the entire team is always ready and prepared.

Implementing incident management in the earlier phases helps everyone understand the health of the application and what they ought to do when issues arise. Everyone is aware of the big picture and can troubleshoot more quickly.

Transparency requires unified metrics

The more the entire team is aware of each other’s responsibilities during a crisis, the more effectively they can work and the quicker things can get back to normal.

Too often, Dev and Ops use completely different sets of metrics and monitoring tools without unifying the data into one centralized hub to surface patterns, anomalies, and dependencies. A car cannot be driven without a windshield; in the same way, it's crucial to centralize all your monitoring data to give everyone a proactive, holistic view of what's going on.

Collecting, correlating, and analyzing data from multiple sources gives Dev and Ops continuous insight. But that data is only valuable if it’s made actionable. With an incident management solution, you can provide an overview of the churning gears to the right people, and even empower them to zero in on the things that might eventually break your app.
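The "centralized hub" can be pictured as a merge of per-source metric snapshots into one namespaced view, with simple rules that flag the values worth zeroing in on. A minimal sketch, assuming ratio-style metrics normalized to [0, 1] and a single static threshold (real correlation engines are far richer):

```python
def correlate(sources: dict, threshold: float = 0.9):
    """Merge per-source metric snapshots into one view and flag anomalies.

    sources: {source name: {metric name: value in [0, 1]}}, where values
    are utilization-style ratios (a simplifying assumption).
    Returns (unified_view, alerts): the centralized hub, plus the
    metrics that crossed the threshold and deserve attention.
    """
    unified = {}
    alerts = []
    for source, metrics in sources.items():
        for metric, value in metrics.items():
            key = f"{source}.{metric}"  # namespace by source: "app.cpu"
            unified[key] = value
            if value >= threshold:
                alerts.append(key)
    return unified, alerts
```

The point of the unified keyspace is that Dev and Ops are looking at the same data under the same names, so a conversation about "app.cpu" means the same thing to both teams.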

Finally, make sure your incident management tools are actually helping by providing real-time notifications when an issue is simmering or hits. It's crucial to define a process for how issues of different severities should be routed; while you don't want to throw away data, you also don't want to be notified on vanity metrics that do not contribute to solving the issue at hand.
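A severity-routing policy can be stated in a few lines: every event is retained, but only actionable severities reach a human. The channel names below are illustrative, not tied to any particular tool:

```python
# Sketch of a severity-based routing policy: keep all the data,
# but only page a human for severities that demand immediate action.
# Channel names are illustrative placeholders.
ROUTES = {
    "critical": "page-on-call",   # wake someone up immediately
    "error":    "team-chat",      # notify the team in working hours
    "warning":  "ticket-queue",   # track and fix in the normal course of work
    "info":     "log-only",       # keep the data, never notify
}


def route(event: dict) -> str:
    # Unknown or missing severities fall back to the quietest channel,
    # so vanity metrics never page anyone.
    return ROUTES.get(event.get("severity"), "log-only")
```

The fallback is the important design choice: when in doubt, the policy errs toward keeping data without generating noise, which is exactly the trade-off described above.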

For a successful DevOps transformation, continuous integration and incident management must go hand-in-hand. This will provide huge relief across the entire team and much quicker responses to downtime. Incident management makes the DevOps engine function smoothly, without breakdowns.


The post Bringing Deeper Monitoring to DevOps appeared first on PagerDuty.

