Continuous Testing at the Speed of DevOps

"Even if you've got a Maserati, you need a good driver who knows how to drive it. Speed is important, but safety and accuracy are also key" – Bob Aiello

Parasoft recently hosted a "Continuous Testing in the DevOps World" webinar featuring Bob Aiello, Technical Editor for CM Crossroads, and Wayne Ariola, Parasoft Chief Strategy Officer and co-author of Continuous Testing. Since the webinar generated such strong attendance and response, we thought we'd highlight some of the key points here.
 

What is DevOps?

DevOps is a set of principles and practices that helps you communicate and collaborate more effectively. Why? Because developers know a lot of very important information. So do QA testers… and so does Operations. If your organization is going to be successful, you need to be able to harness all of that knowledge and experience. A key part of DevOps is creating the "automated deployment pipeline": the ability to deploy changes (bug fixes, new features, infrastructure changes, etc.) as often as needed, with absolute reliability. Getting it right each and every time is key. Companies that master this gain a very important strategic advantage. Those that don't incur a huge amount of risk.
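To make the idea of an automated deployment pipeline a bit more concrete, here is a minimal Python sketch of a pipeline gate. The stage names, commands, and the run_stage helper are hypothetical; they simply stand in for whatever build, test, and deployment steps your own pipeline runs.

    # pipeline.py - minimal sketch of an automated deployment pipeline gate.
    # Each stage must succeed before the next one runs; any failure stops the
    # pipeline, which is what gives it the "get it right every time" property.
    import subprocess
    import sys

    STAGES = [
        ("build",         ["python", "-m", "compileall", "src"]),
        ("unit tests",    ["python", "-m", "pytest", "tests/unit"]),
        ("static checks", ["python", "-m", "flake8", "src"]),
        # A real pipeline would continue with integration tests, performance
        # tests, and the actual deployment step.
    ]

    def run_stage(name, command):
        """Run one pipeline stage; return True only if it exits cleanly."""
        print(f"--- {name} ---")
        result = subprocess.run(command)
        return result.returncode == 0

    if __name__ == "__main__":
        for name, command in STAGES:
            if not run_stage(name, command):
                print(f"Stage '{name}' failed; halting the pipeline.")
                sys.exit(1)
        print("All stages passed; the change is safe to promote.")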

What is Continuous Testing?

Per the Continuous Testing Wikipedia page:

"Continuous testing is the process of executing automated tests as part of the software delivery pipeline to obtain immediate feedback on the business risks associated with a software release candidate. For Continuous Testing, the scope of testing extends from validating bottom-up requirements or user stories to assessing the system requirements associated with overarching business goals."

Are we done testing? Wrong question!

When assessing the risk of a release candidate, most software teams ask the question: "Are we done testing?" Fundamentally, this is the wrong question.

Especially with DevOps and Continuous Delivery, releasing with both speed and confidence requires immediate feedback on the business risks associated with a software release candidate. Given the rising cost and impact of software failures, you can't afford to unleash a release that could disrupt the existing user experience or introduce new features that expose the organization to new security, reliability, or compliance risks. To prevent this, the organization needs to extend its testing from validating bottom-up requirements to assessing the system requirements associated with overarching business goals.

Instead of "Are we done testing?", software development teams should be asking: "Does the release candidate have an acceptable level of business risk?" If we can answer this question, we can determine whether the application is truly ready to progress through the delivery pipeline at any given time.
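To picture what answering that question might look like in practice, here is a small, hypothetical Python sketch of a risk-based release gate: each failing test carries a weight tied to the business risk it protects against, and the candidate is blocked when the weighted total exceeds an agreed threshold. The test names, weights, and threshold below are invented for illustration.

    # release_gate.py - hedged sketch of a risk-based release decision.
    # Instead of asking "are we done testing?", it asks whether the weighted
    # business risk of the failing tests is within an agreed budget.

    # Hypothetical mapping of tests to the business risk they protect against
    # (weights on a 1-10 scale, chosen purely for illustration).
    TEST_RISK_WEIGHTS = {
        "test_checkout_flow": 10,     # revenue-critical transaction
        "test_login": 8,              # blocks every user journey
        "test_report_formatting": 2,  # cosmetic, low business impact
    }

    ACCEPTABLE_RISK = 5  # illustrative threshold agreed with the business

    def release_is_acceptable(failed_tests):
        """Return True if the summed risk of failing tests is within budget."""
        total_risk = sum(TEST_RISK_WEIGHTS.get(t, 1) for t in failed_tests)
        return total_risk <= ACCEPTABLE_RISK

    if __name__ == "__main__":
        failures = ["test_report_formatting"]
        verdict = "promote" if release_is_acceptable(failures) else "block"
        print(f"Failed tests: {failures} -> {verdict} the release candidate")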

What will the next generation of testing look like?

The overarching goal will be to close the gap between business expectations and dev/test activities. First, we'll have to bring operations (runtime) data associated with the actual user experience into our testing environment. We'll also use simulation to enable testing to begin early and continue to run at the necessary frequency. We won't stop testing requirements or performing bottom-up verification of changes. That still needs to occur, but it needs to be integrated into a continuous regression suite that is validated against the actual user experience. Understanding how new functionality impacts the broader system is key to accurately assessing the total risk of the release candidate.

All tests need to be placed into a regression suite that measures business objectives; "process intelligence" can then be used to better understand the impact of change. This guides us toward two things. First, better exploratory testing, where corner cases and more investigative testing can be applied to more variable outcomes. Second, automated acceptance testing, which is designed to ensure that there is no negative impact on today's user experience.
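One way to picture how "process intelligence" can connect a code change back to business objectives is a simple traceability map from objectives to the components their regression tests cover. The sketch below is purely illustrative; the objectives, components, and mappings are invented.

    # change_impact.py - illustrative sketch of tracing a code change back to
    # the business objectives its regression tests protect.

    # Hypothetical traceability data: objective -> components it depends on.
    OBJECTIVE_COMPONENTS = {
        "complete checkout in under 3 seconds": {"cart", "payment_gateway"},
        "new users can register and log in":    {"auth", "profile"},
        "monthly statements are accurate":      {"billing", "reporting"},
    }

    def impacted_objectives(changed_components):
        """Return the business objectives whose components were touched."""
        changed = set(changed_components)
        return [
            objective
            for objective, components in OBJECTIVE_COMPONENTS.items()
            if components & changed
        ]

    if __name__ == "__main__":
        change = ["payment_gateway"]
        for objective in impacted_objectives(change):
            print(f"Change to {change} impacts: {objective}")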
 

What are key factors in moving from automated testing to Continuous Testing?

Automated testing involves automated, CI-driven execution of whatever set of tests the team has accumulated. However, if one of these tests fails, what does that really mean: does it indicate a critical business risk, or just a violation of some naming standard that nobody is really committed to following anyway? And what happens when it fails? Is there a clear workflow for prioritizing defects against business risks and addressing the most critical ones first? And for each defect that warrants fixing, is there a process for exposing all similar defects that might already have been introduced, as well as preventing the same problem from recurring in the future? This is where the difference between automated and continuous becomes evident.

To evolve from automated to continuous, you need the following:

  1. Clearly defined business expectations, with business risks identified per application, team, and release.
  2. Defects automatically prioritized against business drivers, with a clear process for mitigating those risks before the release candidate goes live (see the sketch after this list).
  3. Continuous testing in complete test environments using simulation; this is critical for protecting the current user experience from the impact of change.
  4. A feedback loop for defect prevention: looking for patterns that emerge and using them as an opportunity to design and implement practices that prevent similar defects from being introduced.
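As a rough illustration of item 2 above, the following sketch sorts open defects by the weight of the business driver each one threatens, rather than treating every failure equally. The drivers, weights, and defect records are hypothetical.

    # defect_priority.py - sketch of prioritizing defects against business drivers.

    # Hypothetical weight of each business driver on a 1-10 scale.
    BUSINESS_DRIVERS = {
        "regulatory compliance": 10,
        "customer retention":     8,
        "internal reporting":     3,
    }

    defects = [
        {"id": "D-101", "summary": "naming-standard violation", "driver": "internal reporting"},
        {"id": "D-102", "summary": "audit log not written",     "driver": "regulatory compliance"},
        {"id": "D-103", "summary": "checkout retries silently", "driver": "customer retention"},
    ]

    def business_priority(defect):
        """Higher score = fix first, because a heavier business driver is at risk."""
        return BUSINESS_DRIVERS.get(defect["driver"], 1)

    if __name__ == "__main__":
        for defect in sorted(defects, key=business_priority, reverse=True):
            print(f"{defect['id']}: {defect['summary']} ({defect['driver']})")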
      

How does Service Virtualization fit into DevOps and Continuous Testing?

After organizations start accelerating their software delivery pipeline, they often reach the point where they need to test but can't exercise the application under test (AUT) because a complete test environment is not yet ready. Many teams use simulation technologies such as service virtualization to get around these roadblocks. For example, say your mainframe is only available for 2 hours on Saturday night. You can record your AUT's interactions with it, then capture this as a virtual asset. You can do the same for a database, a third-party application, SAP, etc.
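Service virtualization products do far more than this, but the core record-then-replay idea can be pictured with a toy Python sketch: while the constrained dependency is reachable, the AUT's requests and responses are captured; afterwards, a stand-in answers from the recording. The request keys, responses, and file name are invented for illustration.

    # virtual_asset.py - toy record/replay sketch of service virtualization.
    # While the real dependency (e.g., the mainframe) is reachable, record its
    # responses; later, serve them back so testing can continue without it.
    import json

    RECORDING_FILE = "mainframe_recording.json"

    def record(call_real_service, requests):
        """Capture request/response pairs while the real dependency is up."""
        recording = {req: call_real_service(req) for req in requests}
        with open(RECORDING_FILE, "w") as f:
            json.dump(recording, f)

    def virtual_service(request):
        """Answer from the recording instead of the real dependency."""
        with open(RECORDING_FILE) as f:
            recording = json.load(f)
        return recording.get(request, "<unrecorded request>")

    if __name__ == "__main__":
        # Pretend this lambda is the real mainframe, available 2 hours a week.
        record(lambda req: f"balance for {req}: 42.00", ["account-123"])
        # Later, in a test environment with no mainframe access at all:
        print(virtual_service("account-123"))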

Simulation technologies are not perfect. But in order to exercise the "big blocks" and get those risks out of the way, we need to begin exploratory testing, and we can't do that without simulation. To truly protect the end-user experience, we need to aggressively test and defend it across key end-to-end transactions. With today's systems, those transactions pass through a large number of different components, so it's very difficult to accommodate them all in a single staged test environment, cloud or not. Simulation helps us get around this. For the most realistic simulated environment, we need to understand how components behave in an operational environment and transfer that knowledge to the simulation.

Watch the "Continuous Testing in the DevOps World" Webinar On Demand

Want to learn more? You can watch the complete 60-minute webinar on demand at your convenience.

 


More Stories By Wayne Ariola

Wayne Ariola is Vice President of Strategy and Corporate Development at Parasoft, a leading provider of integrated software development management, quality lifecycle management, and dev/test environment management solutions. He leverages customer input and fosters partnerships with industry leaders to ensure that Parasoft solutions continuously evolve to support the ever-changing complexities of real-world business processes and systems. Ariola has more than 15 years of strategic consulting experience within the technology and software development industries. He holds a BA from the University of California at Santa Barbara and an MBA from Indiana University.
