Control Costs: Scale Down Test Environments

You’d be surprised how often organizations insist on running a carbon copy of the production network for a rarely used staging environment. It’s unnecessary: you don’t need a million-dollar staging environment sitting next to your production environment. You should control your test environment costs.

If you are searching for strategies to reduce the cost of your test environments, one approach is to scale down their size. Test environments can be much smaller than production environments, and you can make smart choices about VM sizing to ensure that your test environments don’t break the bank.

Scale Down Test Environments

If you run a large website or a popular app, your production system has hundreds of servers with several pairs of clustered databases running on the most expensive hardware. Maybe you have app servers with 32 cores and 200 GB of RAM and databases with more storage than you thought possible to run on SSD. You’ve decided to spend money on production because it needs to scale. You also have a QA staff telling you that the only way to qualify software for production is to have a staging system that has the same specs as production.

You don’t require that same level of firepower in your test environments as you do in production. You can run smaller clusters of application servers and use less infrastructure, as only a handful of employees use your QA systems. What you are hearing from your QA staff is superstition. This idea that any difference between staging and production is unacceptable is a holdover from an age when production systems were much smaller. If you are running a very large, complex system, it is economically infeasible to “recreate” production.

Despite this fact, there will always be a chorus of developers telling you that your pre-production systems must match the size and scale of production in every way. Don’t listen. QA and Staging support testing processes that focus not on scale, but on quality. You need just enough hardware to support software qualification.

While your production system might need to scale to ten thousand TPS, your QA and Staging systems might only need to handle two or three TPS. While your production system supports a million simultaneous users, your QA and Staging systems support ten, maybe twenty, simultaneous testers. Don’t drop a couple million on database hardware in staging just because it would make your QA team feel better if the software were verified on the same infrastructure. You don’t need it.
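To make that gap concrete, here is a back-of-the-envelope sizing sketch in Python. The server counts and hourly rates are invented for illustration only, not real price quotes:

```python
# Illustrative cost comparison between a production-sized fleet and a
# scaled-down staging environment. All numbers here are assumptions.

PROD = {"app_servers": 200, "cores_each": 32, "hourly_rate": 1.50}
STAGING = {"app_servers": 4, "cores_each": 8, "hourly_rate": 0.20}

def monthly_cost(env, hours=730):
    """Approximate monthly compute spend for an environment."""
    return env["app_servers"] * env["hourly_rate"] * hours

prod_cost = monthly_cost(PROD)
staging_cost = monthly_cost(STAGING)
print(f"Production:  ${prod_cost:,.0f}/month")
print(f"Staging:     ${staging_cost:,.0f}/month")
print(f"A carbon-copy staging environment would add "
      f"${prod_cost - staging_cost:,.0f}/month")
```

Even with these made-up rates, the point holds: a right-sized staging fleet costs a small fraction of a production clone.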

Is it an Accurate Representation of Production?

But don’t scale down to a single server. You’ll need to test some level of redundancy. Your Staging and QA systems should use the same clustering approach as production, and you should aim to test your system with a minimum level of redundancy: four servers in two data centers (a 2×2). If you have a multi-datacenter production network, you should be testing your system with a multi-datacenter cluster. Doing this will allow you to test failover scenarios and other issues encountered in production. There is wisdom in recreating some level of redundancy to test clustering, but you can’t afford to run a carbon copy of production.

This is especially true if you run systems to support highly scalable web applications. If your production cluster is tens of thousands of machines backed by petabytes of data and more systems than you can keep track of, it will be economically infeasible for you to run a “copy” of production for use as a staging environment. If you have a two-thousand-node cluster running an application server, your QA and Staging environments can get by with a four-node cluster. Testing environments are for software quality testing, and for testing assumptions made by developers before code hits production.

What about a Performance Testing Environment?

There are times when your QA or performance QA (PQA) environment may need to scale to the same level of capability as your production systems, but you should explore using dynamic, cloud-based infrastructure to achieve this temporary level of scale in QA. Use a public cloud provider to temporarily grow QA into a PQA environment that you can use to test architectural assumptions, but don’t establish a permanent PQA environment at the scale of production.

Instead, Create and Test a Performance Model

If you develop applications at scale, you can avoid having to scale QA to production sizes by creating a reliable “performance model” of your system in production.

What is a “performance model”? A performance model allows you to qualify that a system will scale by running a smaller set of servers in staging and QA. If your performance testing efforts develop a model of system behavior on a few servers, you can then test how this model scales to production. It should be the job of a performance testing team to understand how the performance of a system in QA represents the performance of a system in production. If you perform these tests regularly, you can qualify software with far fewer servers, and achieve dramatic cost savings on test environments.

An example is a system that uses an application server as well as several databases. To develop a performance model that will let you scale your assumptions from a small cluster of QA servers to production, you’ll need to conduct experiments to understand what your bottleneck is, and how the system scales with increased cluster sizes. This model will help you scale to meet demand, and it will also help to control costs associated with QA and Staging because you’ll be able to qualify the system on a much smaller cluster size.
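As a sketch of what such a model might look like in practice, the snippet below fits a Universal Scalability Law curve to throughput measured on a small QA cluster, then extrapolates to a production-sized cluster. The measurements and parameters are invented for illustration; a real performance team would use its own observed data:

```python
# Fit a simple scalability model to small-cluster QA measurements, then
# extrapolate to production cluster sizes. Numbers below are hypothetical.

def usl_throughput(n, lam, alpha, beta):
    """Universal Scalability Law: throughput of an n-node cluster, where
    lam is single-node throughput, alpha models contention, beta crosstalk."""
    return (lam * n) / (1 + alpha * (n - 1) + beta * n * (n - 1))

# (nodes, observed transactions/sec) from a 1-4 node QA cluster.
observations = [(1, 100.0), (2, 185.0), (3, 255.0), (4, 310.0)]

def fit(observations, lam=100.0):
    """Coarse grid search for the USL parameters minimizing squared error."""
    best = None
    for a_step in range(50):            # alpha in [0, 0.49]
        for b_step in range(50):        # beta in [0, 0.0049]
            alpha, beta = a_step / 100, b_step / 10000
            err = sum((usl_throughput(n, lam, alpha, beta) - t) ** 2
                      for n, t in observations)
            if best is None or err < best[0]:
                best = (err, alpha, beta)
    return best[1], best[2]

alpha, beta = fit(observations)
# Predicted throughput at a production scale the QA cluster never reached.
print(round(usl_throughput(100, 100.0, alpha, beta), 1))
```

The value of the model isn’t the exact prediction; it’s that the fit is validated against production regularly, so qualification can happen on four nodes instead of two thousand.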

The post Control Costs: Scale Down Test Environments appeared first on Plutora.


