How Feedback Loops Make It Safe to Deploy to Production

Almost every organization is being pressured to respond more quickly to changes in the market. As development and operations professionals, supporting this business imperative requires that we get new code into production quickly or risk falling behind our competitors. But whether we work in Dev or Ops, pushing the deploy button can be intimidating. After all, who wants to be the person responsible for bringing down production?

Fear of deploying code by both Dev and Ops is not unusual. In fact, Mike Bland described deploying code at Google in 2005 like this:

Fear became the mind-killer. Fear stopped new team members from changing things because they didn’t understand the system. But fear also stopped experienced people from changing things because they understood it all too well.

By providing faster and more frequent feedback to engineers performing deployments, and by reducing the batch size of their work, we can create a safe system of work, integrating the deployment of changes into production as a part of our daily work and elevating everyone’s productivity. Doing this also promotes a better working relationship between Dev and Ops by reinforcing shared goals, responsibilities and empathy.

Let’s look at what we can do to create the feedback mechanisms needed to take the fear out of deploying to production and keep our companies at the top of the market.

Use Telemetry to Make Deployments Safer

Feedback loops depend on understanding how our systems behave as a whole. For that, we need telemetry: the collection of measurements and other data from our applications and environments (both in production and pre-production) and from our deployment pipeline.

Telemetry provides the intelligence we need to make fact-based decisions about how to improve the health of the value stream at every stage of the service life cycle, ensuring that our services are “production ready,” even at the earliest stages of the project. The information that telemetry yields empowers us to integrate what we learn from each release and production problem into our future work, resulting in better safety and productivity for everyone.

Using telemetry, we can actively monitor the metrics associated with features during deployment. This enables whoever is doing the deployment, whether Dev or Ops, to catch errors in our deployment pipeline before our features reach production, to quickly determine whether features are operating as designed once they get there, and to quickly restore service in the event of errors that we did not detect.
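As an illustration of the kind of deployment gate that telemetry makes possible, here is a minimal Python sketch. The error-rate threshold, the number of check windows, and the fetch_error_rate stub are all hypothetical stand-ins for a real query against your monitoring system; they are not part of the article.

```python
import random

# Hypothetical thresholds -- real values come from your baseline telemetry.
ERROR_RATE_THRESHOLD = 0.05   # abort if more than 5% of requests fail post-deploy
CHECK_WINDOWS = 3             # consecutive healthy windows required to proceed

def fetch_error_rate() -> float:
    """Stand-in for a real telemetry query (e.g., to Prometheus or Datadog)."""
    return random.uniform(0.0, 0.02)  # simulated healthy service

def deployment_is_healthy() -> bool:
    """Gate a deployment on the observed error rate across several windows."""
    for window in range(CHECK_WINDOWS):
        rate = fetch_error_rate()
        print(f"window {window}: error rate {rate:.3f}")
        if rate > ERROR_RATE_THRESHOLD:
            return False  # signal to roll back and page the deployer
    return True

if __name__ == "__main__":
    print("healthy" if deployment_is_healthy() else "roll back")
```

The point of the sketch is that whoever presses the deploy button, Dev or Ops, gets an immediate, automated verdict instead of waiting for a customer complaint.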

Have Dev Share Pager Rotation Duties with Ops    

Even when a production deployment and release goes flawlessly, we may still experience unexpected problems. Left unfixed, those problems can recur and cause suffering for Ops engineers downstream. And even when issues are assigned to a feature team, they may be treated as low priority, which can cause chaos and disruption in Operations and degrade performance across the entire value stream.

To prevent this upheaval, we can put developers, development managers, and architects on pager rotation so that everyone in the value stream shares the downstream responsibilities of handling operational incidents. In this way, Operations no longer struggles alone with code-related production issues. Instead, everyone works together to find the proper balance between fixing production defects and developing new functionality.
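To make such a shared rotation concrete, here is a minimal Python sketch of a round-robin on-call schedule that mixes Dev and Ops engineers. The roster names and the one-week cadence are hypothetical examples, not prescriptions.

```python
from datetime import date, timedelta
from itertools import cycle

# Hypothetical roster: developers and Ops engineers share the same rotation.
roster = ["alice (dev)", "bob (ops)", "carol (dev)", "dave (ops)"]

def build_rotation(start: date, weeks: int):
    """Assign one week of on-call duty per person, round-robin over the roster."""
    people = cycle(roster)
    return [(start + timedelta(weeks=w), next(people)) for w in range(weeks)]

if __name__ == "__main__":
    for week_start, person in build_rotation(date(2017, 7, 3), 6):
        print(week_start, person)
```

Because the roster interleaves Dev and Ops, every code-related incident eventually lands on a developer's pager, which is exactly the feedback loop the practice is meant to create.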

Have Developers Follow Work Downstream

Observing customers using an application in their natural environment often uncovers startling ways that they struggle with the application. For developers who choose to participate in the observation, it can be a difficult thing to watch, but it almost always results in significant learning and a fervent desire to improve the situation for the customer.

We can use this same technique to observe how our work affects internal customers. Developers follow their work downstream so they can see how downstream work centers must interact with their product to get it running in production. This kind of observation enables the creation of quality at the source and helps developers make more informed decisions in their daily work. It also results in far greater empathy for fellow team members in the value stream, which is important for creating a strong DevOps work culture.

Have Developers Initially Self-Manage Their Production Service

Even when developers write and run their code in production-like environments in their daily work, Operations may still experience disastrous production releases, because a release is the first time we see how our code behaves under true production conditions. Operational learnings often come too late in the software life cycle, frequently because there are not enough Ops engineers to support all the product teams and the services already in production.

As a countermeasure, Development can self-manage their services in production before they go to a centralized Ops group to manage. By making developers responsible for deployment and production support, we are far more likely to see a smooth transition to Operations.

Defining launch requirements can help prevent the possibility of problematic, self-managed services going into production and creating organizational risk. Services would need to meet these requirements before interacting with real customers or being exposed to real production traffic. Launch guidance allows every product team to benefit from the cumulative and collective experience of the entire organization, especially Operations.
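A launch-requirements gate can be as simple as a checklist that a service descriptor must pass before it takes real traffic. The following Python sketch shows one way to express that; the specific checks and the service fields are hypothetical examples, not a definitive list.

```python
# Hypothetical launch requirements; a real list would be set by your Ops group.
LAUNCH_CHECKS = {
    "has_health_endpoint": lambda svc: svc.get("health_endpoint") is not None,
    "has_runbook": lambda svc: bool(svc.get("runbook_url")),
    "alerts_configured": lambda svc: len(svc.get("alerts", [])) > 0,
    "load_tested": lambda svc: svc.get("load_tested", False),
}

def launch_ready(service: dict) -> list:
    """Return the names of any failed checks; an empty list means the service
    is ready to be exposed to real production traffic."""
    return [name for name, check in LAUNCH_CHECKS.items() if not check(service)]

service = {
    "health_endpoint": "/healthz",
    "runbook_url": "https://wiki.example.com/runbook",
    "alerts": ["error_rate", "latency_p99"],
    "load_tested": True,
}
print(launch_ready(service))
```

Keeping the checklist in code rather than on a wiki page means every product team runs the same gate, which is how the whole organization's operational experience gets encoded and reused.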

Ops engineers, acting as consultants, can help the feature team resolve issues or even re-engineer a service if necessary, so that it can be easily deployed and managed in production. For services already in production, creating a “handback” mechanism helps ensure that Operations can return production support responsibility back to Development when a service becomes sufficiently fragile.

To Learn More

Creating fast and continuous feedback from Operations to Development is part of the “Second Way,” which is the second of the three major principles underpinning DevOps. To learn more about the Second Way and the other DevOps principles, see The DevOps Handbook and The Phoenix Project: A Novel About IT, DevOps, and Helping Your Business Win.

(Portions of this article were excerpted with permission from The DevOps Handbook.)

The post How Feedback Loops Make It Safe to Deploy to Production appeared first on XebiaLabs Blog.
