

How a Leading Financial Services Company Scaled Test Environment Access for Parallel Agile Development—Saving $2M+

The transition to Agile at a leading financial services company meant that their Development organization was reorganized into many smaller cross-functional (dev/test) groups. Ironically, this effort to speed up the SDLC actually introduced new delays. One example: a test environment that was once dedicated to a single team suddenly needed to be shared by 9 smaller teams. Due to the complex data setup required, the environment could be used by only one team at a time—the others had to wait.

Since the test environment included a third-party application that cost $250K per instance, creating 9 separate instances of this physical test environment would have been prohibitively expensive. Service virtualization enabled them to establish 9 simulated test environments that gave each team instant, flexible access to the behavior of that system—with zero impact on the other teams.


The Challenge: Scaling Access to a Complete Test Environment Including an Expensive Third-Party Application

This company's mission is to provide personal investors direct access to investment and brokerage services. They focus their research and development efforts on making it easy for clients to research and select securities that suit their financial goals, as well as to monitor and optimize portfolio performance. Rather than "reinvent the wheel," they leverage a proven third-party application to handle core industry-standard functionality, such as executing market purchases and sales.

Before the transition to Agile, the team responsible for trading functionality was able to complete their development and testing tasks using a shared test environment. However, once the team was split into 9 different teams—each trying to complete different development and testing tasks in parallel—test environment access quickly emerged as a problem. Since each group needed the test environment set up in a very specific manner, any attempt to use it simultaneously meant the groups were stepping on one another's toes, wasting time constantly configuring and reconfiguring the conditions and data needed for their particular tasks.

Restricting test environment access to one group at a time was not well suited to their goal of accelerating the SDLC with parallel development. However, providing each group its own physical test environment was not feasible: at $250K per instance of the third-party trading application, making it available in eight additional test environments would have cost $2 million. This option was deemed prohibitively expensive.

The Solution: Simulating the Constrained Dependency's Behavior and Data in Multiple Zero-Impact Sandboxes

The company was able to use Parasoft Service Virtualization and Parasoft Environment Manager to simulate the behavior and data of this third-party application and make it available on demand in 9 independent test environments that each team could configure and reconfigure as needed—with zero impact on the other teams.

By exercising the application under test's (AUT's) interactions with the third-party application, they were able to capture the behavior and data associated with their core use cases and make it available in "virtual assets." Parasoft Environment Manager was then used to design a master test environment template that included these virtual assets. From this template, any number of teams could instantly stamp out their own test environment—with the virtual asset configured to the appropriate state, and with the ability to easily add data to increase test coverage or adjust response times for performance testing. This way, each team could instantly access a preconfigured environment, then customize it for its own specialized testing needs—with zero disruption to other teams' dev/test activities.
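Parasoft's virtual assets are recorded and configured through the product's own tooling, but the underlying idea can be illustrated with a short sketch. The minimal Python stub below stands in for the trading service by returning canned responses with a tunable delay; the endpoint paths, payloads, and the RESPONSE_DELAY_SECONDS knob are hypothetical illustrations, not the actual third-party API or Parasoft configuration.

# Minimal sketch of a "virtual asset": a stub that mimics a trading
# application's API with canned responses. All paths and payloads here
# are hypothetical examples, not the real third-party interface.
import json
import time
from http.server import BaseHTTPRequestHandler, HTTPServer

RESPONSE_DELAY_SECONDS = 0.0  # raise to emulate the real system's latency

CANNED_RESPONSES = {
    "/trades/execute": {"status": "FILLED", "orderId": "SIM-0001", "price": 101.25},
    "/quotes/ACME": {"symbol": "ACME", "bid": 185.10, "ask": 185.14},
}

class VirtualAssetHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        self._respond()

    def do_POST(self):
        # Drain the request body; this sketch ignores its contents.
        self.rfile.read(int(self.headers.get("Content-Length", 0)))
        self._respond()

    def _respond(self):
        time.sleep(RESPONSE_DELAY_SECONDS)  # simulated processing time
        body = CANNED_RESPONSES.get(self.path)
        if body is None:
            self.send_error(404, "No recorded behavior for this path")
            return
        payload = json.dumps(body).encode()
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.send_header("Content-Length", str(len(payload)))
        self.end_headers()
        self.wfile.write(payload)

if __name__ == "__main__":
    HTTPServer(("localhost", 9080), VirtualAssetHandler).serve_forever()

Each simulated environment runs its own copy of such a stub, which is why one team can change its canned data or response timing without affecting anyone else.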

Replacing the actual third-party system with virtual assets yielded additional benefits beyond enabling the Agile teams to develop and test in parallel. Previously, when their test environment included a real instance of the trading application, trading-related transactions could be tested only during trading hours—9:30 am to 4 pm Eastern time. Since the development and test teams were based in California, they could test only from 6:30 am to 1 pm Pacific time; given a typical start to the workday, that window covered only about 50% of their working hours. With the virtual assets standing in for the actual system, testing could be performed 24/7, enabling the team to perform exploratory testing at their convenience as well as exercise these transactions as part of their continuous integration process.

Another benefit was that test execution time was significantly shortened. Tests against the actual system took over 20 minutes due to a delayed (asynchronous) response from the trading system. By adjusting the performance of the virtual asset, the team could get almost instantaneous responses, which expedited both automated and exploratory testing.
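To make that concrete under the same hypothetical sketch above, the delay could be read from an environment variable (VA_DELAY_SECONDS is an invented name) so that each team tunes its own environment independently: near zero for fast functional and CI runs, or set to approximate the real system's observed latency for performance testing.

# Hypothetical per-environment tuning for the stub sketched earlier:
# functional-test environments leave VA_DELAY_SECONDS unset (instant
# responses), while performance-test environments set it to approximate
# the real trading system's asynchronous response time.
import os

RESPONSE_DELAY_SECONDS = float(os.environ.get("VA_DELAY_SECONDS", "0"))

For example, a performance-testing team might launch its copy of the stub with VA_DELAY_SECONDS=1200 to mimic the real system's delayed response, while a functional-testing team simply uses the default of 0.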



More Stories By Wayne Ariola

Wayne Ariola is Vice President of Strategy and Corporate Development at Parasoft, a leading provider of integrated software development management, quality lifecycle management, and dev/test environment management solutions. He leverages customer input and fosters partnerships with industry leaders to ensure that Parasoft solutions continuously evolve to support the ever-changing complexities of real-world business processes and systems. Ariola has more than 15 years of strategic consulting experience within the technology and software development industries. He holds a BA from the University of California at Santa Barbara and an MBA from Indiana University.
