How to interpret and report your performance test results (so people actually read them)

I recently attended STPCon in San Francisco. It was a good software testing conference, and STP continues to welcome performance testing in a way most other testing conferences do not. I led a tutorial called Interpreting and Reporting Performance Test Results, which I want to share in blog form today. (My slides are embedded at the bottom of this post, if you’d like to check them out, too.)

What is a performance test?

Performance tests try to reduce the risk of downtime or outages on multi-user systems by conducting experiments that use load to reveal limitations and errors in the system. Testing usually assesses the performance and capacity of systems that were expensive and time-consuming to build.

Very few software projects are delivered early (it has never happened in my experience), so there are usually significant time pressures. The findings of a performance test inform tactical and strategic decisions with even more at stake: the wrong decision about going live with a website or application could damage the financial results, brand, or even viability of the company.

The stakes for performance testing are almost always pretty high

In a short period of time, we need to gather information that helps us advise our stakeholders on decisions that will affect businesses and, potentially, careers. As performance testers, we have a responsibility to report reliable information about the systems we test, and to be willing not only to risk our own credibility, but to ask others to stake theirs on our word, too.

All of the steps in performance testing matter to successful projects and making good decisions. These steps include (but aren’t limited to):

  • discovery,
  • modeling,
  • developing scripts, and
  • executing tests.

Each of these steps requires skill and experience to get right; done poorly, we risk asking the wrong questions and gathering information that does not predict how the system will behave under production load.

Even the right test can fail to deliver value if the results are interpreted or reported badly – or worse, it can mislead by being wrong. Interpreting results and reporting them properly is where the value of an experienced performance engineer is proven.

Data needs analysis to become information

This is where my tutorial started. After running a performance test, there will be barrels full of numbers.

So what’s next?

The answer is definitely not to generate and send a canned report from your testing tool. Results interpretation and reporting is where a performance tester earns their stripes.

Visualizing data with graphs is the most commonly used method for analyzing load testing results

Most load testing tools have some graphing capability, but you should not mistake graphs for reports. Graphs are just a tool. The person operating the tool has to interpret the data that graphs help visualize, determine what matters and what doesn’t, and present actionable information in a way that stakeholders can consume.

As an aside, here’s an example of a graph showing how averages lie. Good visualizations help expose how data can be misleading.

[Graph: averages lie]
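
To make the same point without a graph, here is a tiny worked example with hypothetical response times, showing how an average can hide a painful tail:

```python
import statistics

# Hypothetical response times in seconds for ten requests: most are fast,
# but two users wait a very long time.
response_times = [0.4, 0.5, 0.5, 0.6, 0.6, 0.7, 0.7, 0.8, 9.0, 11.0]

mean = statistics.mean(response_times)      # 2.48 s
median = statistics.median(response_times)  # 0.65 s
p90 = statistics.quantiles(response_times, n=10)[-1]  # ~10.8 s with the default method

print(f"mean={mean:.2f}s  median={median:.2f}s  p90={p90:.2f}s  max={max(response_times):.2f}s")
# The average alone suggests "about two and a half seconds"; the percentiles show
# that most users see well under a second while a few wait 9-11 seconds.
```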

The performance tester should form hypotheses, draw tentative conclusions, determine what information is needed to confirm or disprove them, and prepare key visualizations that both give insight on system performance and bottlenecks and support the narrative of the report.

Some of the skills necessary for doing this are foundational technical skills, understanding things like:

  • architecture,
  • hard and soft resources,
  • garbage collection algorithms,
  • database performance,
  • message bus characteristics, and
  • other components of complex systems.

Understanding that a system slows down at a certain load is of some value. Understanding why the system slows down – the limiting resource, the scalability characteristics of the system – is actionable information. This knowledge, and the experience to recognize patterns, can take years to acquire, and the learning is ongoing.

Other skills are socio-political in nature

We need to know what stakeholders want to hear, because that reveals what information they are looking for:

  • Who needs to know these results?
  • What do they need to know?
  • How do they want to be told?
  • How can we form and share the narrative so that everyone on the team can make good decisions that will help us all succeed?

It is our job to be the headlights of a project, revealing what is actually there. We want to tell the truth, but we can also guide our team with actionable feedback that turns findings into a plan, not just a series of complaints.

It might seem daunting to imagine growing all of these skills

The good news is that you don’t have to do this all by yourself. The subject matter experts you are working with – Developers, Operations, DBAs, help desk techs, business stakeholders, and your other teammates – all have information that can help you unlock the full value of a performance test.

This is a complex process, full of tacit knowledge and difficult to teach. In describing how to do this, my former consulting partner and mentor Dan Downing came up with a six-step process called CAVIAR:

  1. Collecting
  2. Aggregating
  3. Visualizing
  4. Interpreting
  5. Assessing
  6. Reporting

1. Collecting is gathering all the results from the test that can help you gain confidence in the validity of the results.

Are there errors? What kind, and when? What are the patterns? Can you get error logs from the application?
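
As a sketch of what collecting error data can look like, here is a minimal example that tallies errors by response code and by minute. It assumes a JMeter-style results CSV with timeStamp, success, and responseCode columns – adjust the file name and column names to whatever your tool actually produces.

```python
import csv
from collections import Counter
from datetime import datetime

errors_by_code = Counter()
errors_by_minute = Counter()

# "results.csv" and its column names are assumptions based on JMeter-style output.
with open("results.csv", newline="") as f:
    for row in csv.DictReader(f):
        if row["success"].lower() == "true":
            continue  # only failed samples are of interest here
        errors_by_code[row["responseCode"]] += 1
        minute = datetime.fromtimestamp(int(row["timeStamp"]) / 1000).strftime("%H:%M")
        errors_by_minute[minute] += 1

print("Errors by response code:", errors_by_code.most_common())
print("Errors per minute:", sorted(errors_by_minute.items()))
```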

One important component of collecting is granularity. Measurements taken every few seconds can help you spot trends and transient conditions. One tutorial attendee shared how he asked for access to monitor servers during a test and was instead sent resource data with five-minute granularity.
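
Here is a small illustration of what coarse granularity hides, using synthetic data (pandas assumed): a 30-second CPU spike that is obvious in 5-second samples nearly vanishes in 5-minute averages.

```python
import pandas as pd

# One hour of CPU samples at 5-second granularity, holding steady around 30%...
idx = pd.date_range("2024-01-01 10:00", periods=720, freq="5s")
cpu = pd.Series(30.0, index=idx)
# ...with a 30-second spike to 98% about 25 minutes in.
cpu.iloc[300:306] = 98.0

print("Max at 5-second granularity:", cpu.max())  # 98.0 -- the spike is unmissable
print(cpu.resample("5min").mean().round(1))       # the affected 5-minute average only rises to ~36.8
```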

2. Aggregating is summarizing measurements at various levels of granularity to provide both tree and forest views, while using consistent granularities to enable accurate correlation.

Another component is meaningful statistics: scatter, min-max range, variance, percentiles, and other ways of examining the distribution of the data. Use multiple metrics to “triangulate” – that is, to confirm (or invalidate) hypotheses.
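
As a sketch (with made-up samples and illustrative column names), here is one way to aggregate raw results into a per-page summary with these kinds of statistics:

```python
import pandas as pd

# Made-up raw samples; in practice these come from your load testing tool.
samples = pd.DataFrame({
    "page": ["login", "login", "search", "search", "search", "checkout"],
    "elapsed_s": [0.42, 0.51, 0.80, 0.95, 3.20, 1.10],
})

# Per-page min / average / max / standard deviation / 90th percentile.
summary = samples.groupby("page")["elapsed_s"].agg(
    count="count",
    min="min",
    avg="mean",
    max="max",
    sd="std",
    p90=lambda s: s.quantile(0.9),
)
print(summary.round(2))
```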

3. Visualizing is about graphing key indicators to help understand what occurred during the test.

Here are some key graphs to start with (a minimal plotting sketch follows the list):

  • Errors over load (“results valid?”)
  • Bandwidth throughput over load (“system bottleneck?”)
  • Response time over load (“how does system scale?”)
    • Business process end-to-end
    • Page level (min-avg-max-SD-90th percentile)
  • System resources (“how’s the infrastructure capacity?”)
    • Server CPU over load
    • JVM heap memory/GC
    • DB lock contention, I/O latency
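
To make the first and third graphs in that list concrete, here is a minimal plotting sketch (matplotlib assumed; the data and column names are illustrative) that overlays error rate and 90th-percentile response time against load:

```python
import matplotlib.pyplot as plt
import pandas as pd

# Illustrative aggregated results; in practice these come from your test tool.
df = pd.DataFrame({
    "virtual_users":  [50, 100, 150, 200, 250, 300],
    "p90_response_s": [0.8, 0.9, 1.0, 1.4, 2.6, 5.1],
    "error_rate_pct": [0.0, 0.0, 0.1, 0.3, 2.5, 9.8],
})

fig, ax1 = plt.subplots()
ax1.plot(df["virtual_users"], df["p90_response_s"], marker="o")
ax1.set_xlabel("Virtual users")
ax1.set_ylabel("90th percentile response time (s)")

ax2 = ax1.twinx()  # second y-axis so errors and response time share the load axis
ax2.plot(df["virtual_users"], df["error_rate_pct"], marker="x", color="red")
ax2.set_ylabel("Error rate (%)")

ax1.set_title("Response time and errors over load")
fig.tight_layout()
plt.show()
```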

4. Interpreting is making sense of what you see – or, to be scientific, drawing conclusions from observations and hypotheses.

Some of the steps here (a small correlation sketch follows the list):

  • Make objective, quantitative observations from graphs / data: “I observe that…”; no evaluation at this point!
  • Correlate / triangulate graphs / data: “Comparing graph A to graph B…” – relate observations to each other
  • Develop hypotheses from correlated observations
  • Test hypotheses and achieve consensus among tech teams: “It appears as though…” – test these with extended team; corroborate with other information (anecdotal observations, manual tests)
  • Turn validated hypotheses into conclusions: “From observations a, b, c, corroborated by d, I conclude that…”
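
As a small illustration of the correlate/triangulate step (with made-up per-minute aggregates), a correlation coefficient is one quick way to check whether two series move together before you take a hypothesis to the team:

```python
import numpy as np

# Made-up per-minute aggregates over the same ten-minute window.
p90_response_s = np.array([0.8, 0.9, 0.9, 1.0, 1.3, 1.8, 2.5, 3.4, 4.8, 6.0])
server_cpu_pct = np.array([35, 38, 40, 45, 55, 68, 80, 90, 96, 99])

r = np.corrcoef(p90_response_s, server_cpu_pct)[0, 1]
print(f"Correlation between response time and CPU: r = {r:.2f}")
# A strong correlation supports (but does not prove) the hypothesis that CPU is the
# limiting resource; test it with the extended team before calling it a conclusion.
```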

5. Assessing is checking whether we met our objectives, and deciding what action we should take as a result.

Determine remediation options at the appropriate level – business, middleware, application, infrastructure, or network. Perform the agreed-to remediation, and retest.

Generate recommendations at this stage. Recommendations should be specific and actionable at a business or technical level. Discuss findings with technical team members: “What does this look like to you?” Your findings should be reviewed (and if possible, supported) by the teams that need to perform the actions. Nobody likes surprises.

Recommendations should quantify the benefit, the cost (where possible), and the risk of not acting. Remember that a tester illuminates and describes the situation. The final outcome is up to the judgment of your stakeholders, not you. If you provide good information and well-supported recommendations, you’ve done your job.

6. Reporting is last.

Note the “-ing”. We’re not talking about dropping a massive report into an email and walking away.

This includes the written report, presentation of results, email summaries, and even oral reports. The narrative, the short 30-second elevator summary, the three-paragraph email – these are the report formats that the most people will consume, so it is worth spending time getting them right, instead of trying to write a great treatise that no one will read. Author the narrative yourself, instead of letting others interpret your work for you.

Good reporting conveys recommendations in stakeholders’ terms. You should identify the audience(s) for the report, and write and talk in their language. What are the three things you need to convey? What information is needed to support these three things?

How to write a test report

A written report is still usually the key deliverable, even if most people won’t read it (and fewer will read the whole report).

One way to construct the written report might be like this:

1. Executive summary (3 pages max, 2 is better)

  • The primary audience is usually executive sponsors and the business; write the summary at the front of the report for them.
  • Pay close attention to language, acronyms, and jargon. In other words, either explain it or leave it out.
  • Be careful about the appropriate level of detail.
  • Try to make a correlation to business objectives.
  • Summarize objectives, approach, target load, acceptance criteria.
  • Cite factual observations.
  • Draw conclusions based on observations.
  • Make actionable recommendations.

2. Supporting detail

  • Rich technical detail here. Include observations and annotated graphs.
  • Include feedback from technical teams. Quote accurately.
  • Test parameters: date/time executed, business processes, load ramp, think times, and the system tested (hardware configuration, software versions/builds).
  • Consider sections for errors, throughput, scalability, and capacity.
  • In each section: annotated graphs, observations, conclusions, recommendations.

3. Associated docs (if appropriate)

Include the full set of graphs, workflow detail, scripts, and other test assets at the end of the report to document what was done.

Note that this is not the same as pressing “Print” on your tool’s default report. Who is the audience? Why would they want to see 50 graphs and 20 tables? What will they actually be able to see?

Remember: Data + Analysis = INFORMATION

The last step: Present the results

Make 5-10 slides, schedule a meeting with all the stakeholders, and deliver the key findings. Explain your recommendations, describe the risks, and suggest the solutions.

Caveats and takeaways

This methodology isn’t appropriate for every context. Your project may be small, or you may have a charter to run a single test and report to only a technical audience. There are other reasons to decide to do things differently in your project, and that’s OK. Keep in mind that your expertise as a performance tester is what turns numbers into actionable information.

Related reading

Download: Understanding Continuous testing in an agile world ebook
