
Expedite the Handoff from Dev to Test: An Interview with Gary Gruver (Part 2)

In the second part of our conversation with Gary Gruver (read part 1 here), we continue to discuss delivery pipeline inefficiencies of enterprise releases. As Gary describes in his book Starting and Scaling DevOps:

“The biggest inefficiencies in most large organizations exist in the large, complex, tightly coupled systems that require coordination across large numbers of people from the business all the way out to Operations.”

Quite a broad topic, so for our discussion we are drilling down on the dev-to-test handoff as well as test environments. We left off part 1 of the interview with this awesome soundbite from Gary: “Lots of orgs spend more time and effort getting the big, complex enterprise environments up and ready for testing than they do actually writing the code.”

In your book Starting and Scaling DevOps you state:

“For these orgs, it is much more important to push the majority of testing and defect fixing into smaller, less complex test environments with quality gates to keep defects out of the bigger more complex environments. This helps to reduce the cost and complexity of the testing and also helps with the triage process because the issues are localized to the subsystem or application that created them.”

Can you provide some additional insight on gating code?

GG: “For organizations with tightly coupled architectures, it’s important to build up stable enterprise systems using a well-structured deployment pipeline with good quality gates. Large test environments are incredibly expensive and hard to manage. As a result, they are not a very efficient approach for finding defects. You have to establish quality gates, where release teams are not allowed to move code further along the pipeline until they pass specific tests.”
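
Gary's gating idea can be sketched in a few lines of Python. This is a hypothetical illustration, not any real tool's API: stage names, `can_promote`, and the test results are all invented for the example.

```python
# Hypothetical sketch of a quality gate: code may only move to the next,
# more complex environment once it passes the current stage's tests.
# Stage names and functions are illustrative, not from any real pipeline tool.

PIPELINE = ["unit", "subsystem", "integration", "staging"]

def can_promote(stage, test_results):
    """Return True if every gating test for this stage passed."""
    return all(test_results.get(stage, {}).values())

def next_stage(stage, test_results):
    """Promote only through a passing gate; otherwise stay and fix defects."""
    if not can_promote(stage, test_results):
        return stage  # gate closed: defects stay localized to this stage
    i = PIPELINE.index(stage)
    return PIPELINE[min(i + 1, len(PIPELINE) - 1)]

results = {"unit": {"test_login": True, "test_cart": True},
           "subsystem": {"test_checkout": False}}
print(next_stage("unit", results))       # promotes to "subsystem"
print(next_stage("subsystem", results))  # gate closed, stays at "subsystem"
```

The point of the sketch is the one-way valve: a failing stage never hands defects to the bigger, more expensive environment downstream.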

What’s the process for introducing gated code into a delivery pipeline?

GG: “Find out which apps or subsystems are breaking most frequently. You’ll want to start by gating those apps to find and fix defects before moving code onto the next, more complex test environment.

Create a Pareto chart of why tests are failing. When you have those metrics for each environment, use different tags for the various defects. Then you can see all the waste that can potentially come out of the system. This sounds easy, but it’s so rare that people step back and look at how their organization works in order to put metrics on it. There’s never time to step back and look at everything.”
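
The Pareto analysis Gary describes reduces to a tally: tag each test failure by cause, then rank causes so the biggest sources of waste stand out. The tags and counts below are made up for illustration.

```python
# Minimal sketch of a Pareto tally of test-failure causes.
# Failure tags here are invented example data, not real metrics.
from collections import Counter

failures = ["environment", "code", "environment", "deployment",
            "environment", "test-script", "code", "environment"]

pareto = Counter(failures).most_common()  # causes ranked by frequency
total = len(failures)
cumulative = 0
for cause, count in pareto:
    cumulative += count
    print(f"{cause:12} {count:2}  {100 * cumulative / total:5.1f}% cumulative")
```

With tags like these in place per environment, the chart usually shows a small number of causes (often environment and deployment issues) accounting for most of the failed runs.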

How do you identify areas for improvement in the dev to test handoff?

GG: “First, I tell teams to answer this question: What percentage of defects are you finding in each of your environments? For example, if 90% of defects are found in the initial test environment, the system is healthy, meaning the dev team is receiving quality feedback. If you are finding a majority of your issues on the right side of the pipeline, closer to production, then feedback to dev is not as valuable, because triage and root-cause analysis are much more complex there. By the time you are in a complex environment, you should only be testing the interfaces.

The next question to ask is: What are the different types of issues found in the different test environments? Are you finding environment issues, deployment issues, problems with automated tests, the code…? We’ll look at test results from two perspectives to understand the best approach to triage. First, we look at whether the same test has passed from build to build, then we’ll look at how well the same build is performing as the test environment complexity increases from release phase to phase. To do this, each stage of the deployment pipeline needs a stable test environment for gating code.”
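
Gary's two triage perspectives can be sketched as set operations over a pass/fail grid. The build names, environment names, and failure data below are invented for illustration.

```python
# Sketch of the two triage perspectives: (1) did the same test pass from
# build to build, and (2) how does the same build fare as environment
# complexity increases? results[build][environment] = set of failed tests.
# All names and data here are hypothetical example values.

results = {
    "build-101": {"component": set(), "subsystem": {"test_checkout"}},
    "build-102": {"component": {"test_login"}, "subsystem": {"test_checkout"}},
}

def regressed_tests(old_build, new_build, env):
    """Perspective 1: tests that newly fail in this environment."""
    return results[new_build][env] - results[old_build][env]

def new_failures_by_env(build, simple_env, complex_env):
    """Perspective 2: failures that only appear as complexity increases."""
    return results[build][complex_env] - results[build][simple_env]

print(regressed_tests("build-101", "build-102", "component"))   # {'test_login'}
print(new_failures_by_env("build-101", "component", "subsystem"))  # {'test_checkout'}
```

A failure that regresses between builds points at the code change; a failure that only appears in the more complex environment points at interfaces, deployment, or the environment itself.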

Besides incorrect configurations, what other problems have you run into with test environments?

GG: “I work with many organizations that are on a journey to test automation, meaning they still have manual testing left in the system. It takes work to make sure tests are triageable, maintainable, and able to adapt as the app changes. One organization I’m working with now runs 3,000 automated tests on a weekly basis. Recently a release train was held up not because there was a problem with the tests, but because a test environment didn’t have enough memory and CPU power to run the automated tests.”

In your book, you state:

“For large tightly coupled systems, developers often don’t understand the complexities of the production environments. Additionally, the people that understand the production environments don’t understand well the impact of the changes that developers are making. There are also frequently different end points in different test environments at each stage of the deployment pipeline. No one person understands what needs to happen all the way down the deployment pipeline. Therefore, managing environments for complex systems requires close collaboration from every group between dev and ops.”

What is your advice for managing test environments in tightly coupled systems?

GG: “The DevOps journey towards efficiency must involve getting test environment configurations under version control, so that everyone can see exactly who changed what and when. Then there’s no need to hold big meetings like a Scrum of Scrums because everyone can see code progress down the pipeline to understand status. Also, developers need faster access to early stage test environments, so they can validate their changes and catch their own defects. Success really requires being able to provide environments with cloud-like efficiencies on demand.”
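
Once environment changes are version-controlled records, "who changed what and when" becomes a query rather than a meeting. A toy sketch, with an invented record format standing in for real commit history:

```python
# Hypothetical sketch: environment config changes as commit-like records,
# so anyone can query who changed what and when. The record format and
# example entries are invented for illustration.
from datetime import datetime

history = [
    {"when": datetime(2018, 1, 10), "who": "alice",
     "env": "subsystem-test", "change": "bumped JVM heap to 4G"},
    {"when": datetime(2018, 1, 12), "who": "bob",
     "env": "staging", "change": "pointed payments endpoint at stub"},
]

def changes_for(env):
    """Audit trail for one environment: who changed what, and when."""
    return [f"{h['when']:%Y-%m-%d} {h['who']}: {h['change']}"
            for h in history if h["env"] == env]

print(changes_for("staging"))
```

In practice this is exactly what putting configs in Git buys you: the audit trail falls out of `git log` instead of a custom data structure.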

Stable test environments, under version control, and on demand. Sage advice.

Plutora Environments integrates with Jenkins to trigger builds and track component versions of test environments.

Please join us on Wednesday, January 24th for a webinar with Gary Gruver: Continuous Delivery Pipelines: Metrics, Myths, and Milestones.


The post Expedite the Handoff from Dev to Test: An Interview with Gary Gruver (Part 2) appeared first on Plutora.


