Control Costs: Scale Down Test Environments

You’d be surprised how often organizations insist on running a carbon copy of the production network for a rarely used staging environment. This is unnecessary: you don’t need a million-dollar staging environment sitting next to production. Control your test environment costs.

If you are searching for strategies to reduce the cost of your test environments, one approach is to scale down their size. Test environments can be much smaller than production environments, and you can make smart choices about VM sizing to ensure that your test environments don’t break the bank.

Scale Down Test Environments

If you run a large website or a popular app, your production system has hundreds of servers with several pairs of clustered databases running on the most expensive hardware. Maybe you have app servers with 32 cores and 200 GB of RAM and databases with more storage than you thought possible to run on SSD. You’ve decided to spend money on production because it needs to scale. You also have a QA staff telling you that the only way to qualify software for production is to have a staging system that has the same specs as production.

You don’t need the same level of firepower in your test environments that you have in production. You can run smaller clusters of application servers and use less infrastructure, because only a handful of employees use your QA systems. What you are hearing from your QA staff is superstition. The idea that any difference between staging and production is unacceptable is a holdover from an age when production systems were much smaller. If you are running a very large, complex system, it is economically infeasible to “recreate” production.

Despite this fact, there will always be a chorus of developers telling you that your pre-production systems must be equal in every way to the size and scale of production. Don’t listen. QA and Staging support testing processes that focus not on scale, but on quality. You need just enough hardware to support software qualification.

While your production system might need to scale to ten thousand TPS, your QA and Staging systems might need to scale to two or three TPS. While your production system supports a million simultaneous users, your QA and Staging systems support ten, maybe twenty, simultaneous testers. Don’t drop a couple million on database hardware in staging just because it would make your QA team feel better to verify the software on the same infrastructure. You don’t need it.
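To make the gap concrete, here is a minimal sketch in Python of treating per-environment sizing as data, so the cost of each tier is explicit and easy to challenge. All instance counts, specs, and hourly prices below are illustrative assumptions, not real vendor figures:

```python
# Hypothetical per-environment sizing. Counts, specs, and hourly prices
# are illustrative assumptions, not real vendor pricing.
SIZING = {
    "production": {"app_servers": 200, "cores": 32, "ram_gb": 200, "usd_per_hour": 2.40},
    "staging":    {"app_servers": 4,   "cores": 8,  "ram_gb": 32,  "usd_per_hour": 0.30},
    "qa":         {"app_servers": 4,   "cores": 4,  "ram_gb": 16,  "usd_per_hour": 0.15},
}

def monthly_cost(env: str, hours: int = 730) -> float:
    """Rough monthly compute cost for one tier (730 ~ hours per month)."""
    spec = SIZING[env]
    return spec["app_servers"] * spec["usd_per_hour"] * hours

for env in SIZING:
    print(f"{env:>10}: ${monthly_cost(env):,.0f}/month")
```

Keeping sizing in one place like this also makes it obvious when a test tier starts quietly drifting toward production scale.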

Is it an Accurate Representation of Production?

But don’t scale down to a single server; you’ll still need to test some level of redundancy. Your Staging and QA systems should use the same clustering approach as production, and you should aim to test your system with a minimum level of redundancy: four servers in two data centers (a 2×2). If you have a multi-datacenter production network, you should be testing your system with a multi-datacenter cluster. Doing this will allow you to test failover scenarios and other issues encountered in production. There is wisdom in recreating enough redundancy to test clustering, but you can’t afford to run a carbon copy of production.
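To make the 2×2 idea concrete, here is a minimal sketch using a simple in-memory model (the data center and server names are hypothetical; a real failover test would drive your actual load balancer or cluster manager instead):

```python
# Minimal model of a 2x2 staging topology: two data centers, two servers each.
# Names are hypothetical stand-ins for real infrastructure.
from dataclasses import dataclass, field

@dataclass
class Server:
    name: str
    healthy: bool = True

@dataclass
class DataCenter:
    name: str
    servers: list = field(default_factory=list)

    def available(self):
        return [s for s in self.servers if s.healthy]

topology = [
    DataCenter("dc-east", [Server("app-east-1"), Server("app-east-2")]),
    DataCenter("dc-west", [Server("app-west-1"), Server("app-west-2")]),
]

def route_request():
    """Pick the first healthy server, preferring the first data center."""
    for dc in topology:
        if dc.available():
            return dc.available()[0].name
    raise RuntimeError("total outage: no healthy servers in any data center")

# Simulate losing an entire data center and verify traffic fails over.
for server in topology[0].servers:
    server.healthy = False
assert route_request().startswith("app-west")
print("failed over to", route_request())
```

Four nodes are enough to exercise both intra-datacenter failover (one node down) and full datacenter failover (both nodes down), which is the class of behavior the 2×2 is meant to verify.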

This is especially true if you run systems to support highly scalable web applications. If your production cluster is tens of thousands of machines backed by petabytes of data and more systems than you can keep track of, it will be economically infeasible for you to run a “copy” of production for use as a staging environment. If you have a two-thousand-node cluster running an application server, your QA and Staging environments can get by with a four-node cluster. Testing environments are for software quality testing, and for testing assumptions made by developers before code hits production.

What about a Performance Testing Environment?

There are times when your QA or performance QA (PQA) environment may need to scale to the same level of capability as your production systems, but you should explore using dynamic, cloud-based infrastructure to achieve this temporary level of scale. Use a public cloud provider to temporarily grow QA into a PQA environment that you can use to test architectural assumptions, but don’t establish a permanent PQA environment at the scale of production.
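One way to realize this, sketched below assuming AWS with boto3 (the AMI ID, instance type, region, and tag values are placeholders, not recommendations), is to launch performance-test capacity on demand and tear it down as soon as the run completes:

```python
# Sketch: burst QA into a temporary PQA cluster on AWS, then terminate it.
# Assumes boto3 and valid credentials; AMI ID and instance type are placeholders.
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

def launch_pqa_cluster(count: int = 50):
    """Launch short-lived instances tagged so they are easy to find and kill."""
    response = ec2.run_instances(
        ImageId="ami-0123456789abcdef0",   # placeholder AMI
        InstanceType="c5.2xlarge",         # placeholder size
        MinCount=count,
        MaxCount=count,
        TagSpecifications=[{
            "ResourceType": "instance",
            "Tags": [{"Key": "env", "Value": "pqa-temporary"}],
        }],
    )
    return [i["InstanceId"] for i in response["Instances"]]

def teardown_pqa_cluster(instance_ids):
    """Terminate the whole cluster the moment the test run is over."""
    ec2.terminate_instances(InstanceIds=instance_ids)

ids = launch_pqa_cluster()
try:
    pass  # run your performance suite against the temporary cluster here
finally:
    teardown_pqa_cluster(ids)
```

The important property is the `finally` block: the burst capacity exists only for the duration of the test run, so you pay production-scale prices for hours, not months.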

Instead, Create and Test a Performance Model

If you develop applications at scale, you can avoid having to scale QA to production sizes by creating a reliable “performance model” of your system in production.

What is a “performance model”? A performance model allows you to qualify that a system will scale while running a smaller set of servers in Staging and QA. If your performance testing efforts develop a model of system behavior on a few servers, you can then use that model to predict how the system will behave at production scale. It should be the job of a performance testing team to understand how the performance of a system in QA represents the performance of a system in production. If you perform these tests regularly, you can qualify software with far fewer servers and achieve dramatic cost savings on test environments.

An example is a system that uses an application server as well as several databases. To develop a performance model that lets you scale your assumptions from a small cluster of QA servers to production, you’ll need to conduct experiments to understand where your bottleneck is and how the system scales as cluster size increases. This model will help you scale to meet demand, and it will also help control the costs associated with QA and Staging, because you’ll be able to qualify the system on a much smaller cluster.
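As a sketch of what such a model might look like in practice, the code below fits the Universal Scalability Law (one common choice of model, not the only one) to throughput measured on small QA clusters and extrapolates to production-sized clusters. The measurements shown are made-up illustrative numbers:

```python
# Sketch: fit a performance model to small-cluster QA measurements and
# extrapolate to production scale. Data points below are illustrative.
import numpy as np
from scipy.optimize import curve_fit

def usl(n, lam, sigma, kappa):
    """Universal Scalability Law: throughput as a function of node count n."""
    return lam * n / (1 + sigma * (n - 1) + kappa * n * (n - 1))

# Throughput (TPS) measured on 1- to 8-node QA clusters (made-up numbers).
nodes = np.array([1, 2, 4, 6, 8])
tps = np.array([120, 230, 420, 570, 690])

params, _ = curve_fit(usl, nodes, tps, p0=[100, 0.05, 0.001])

for n in (8, 64, 512, 2000):
    print(f"{n:>5} nodes -> predicted {usl(n, *params):,.0f} TPS")
```

Here sigma captures contention and kappa captures coherency cost; if the extrapolated curve flattens out, or even declines, well before production cluster sizes, you’ve learned where your bottleneck is without buying the hardware.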
