Boundary Surpasses 400% YoY Growth in Processing of Massive IT Operations Performance Analytics in the Cloud

Big Data Startup Now Analyzing up to 2 Trillion Metrics per Day, With Over 500 Billion From AWS

MOUNTAIN VIEW, CA -- (Marketwired) -- 01/28/14 -- Boundary is processing an average of 1.5 trillion application and infrastructure performance metrics per day on behalf of its clients, with occasional daily peaks of more than 2 trillion metrics. The daily average represents a 400% year-over-year increase, driven by customers' growing need to deliver high-quality application performance, find problems faster and avoid unplanned downtime. The fastest-growing segment of data processed by Boundary, now more than 500 billion metrics per day (600% growth), comes from Amazon Web Services (AWS). It reflects clients' expanded use of, and confidence in, cloud infrastructure for transitioning legacy enterprise applications and building new ones, paired with the unparalleled visibility Boundary provides.

Increased use of Boundary is happening because CIOs are under pressure to provide consistently high uptime for internal and public-facing applications. That puts IT Operations professionals in the position of having to know how applications and infrastructure are performing at all times, and what impact changes have on their environments. Modern environments are far more complex than their predecessors, raising the degree of difficulty for IT Operations. So they send more and more data to Boundary, which ingests, correlates and analyzes performance information on a second-by-second basis, giving IT Operations staff as well as the CIO the confidence they need to expand their use of AWS and similar services.
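To make the "second-by-second" processing concrete, here is a minimal Python sketch of how raw metric samples might be bucketed into one-second windows and summarized. The data shape, names and summary statistics are illustrative assumptions, not Boundary's actual pipeline.

    # Minimal sketch of second-by-second metric aggregation. Illustrative
    # only: data shape and names are assumptions, not Boundary's pipeline.
    from collections import defaultdict

    def aggregate_per_second(samples):
        """Bucket raw (timestamp, host, metric, value) samples into one-second
        windows and summarize each (second, host, metric) group."""
        buckets = defaultdict(list)
        for ts, host, metric, value in samples:
            buckets[(int(ts), host, metric)].append(value)
        # Summarize each one-second window: count, sum, min, max.
        return {
            key: {"count": len(vals), "sum": sum(vals),
                  "min": min(vals), "max": max(vals)}
            for key, vals in buckets.items()
        }

    # Example: three samples landing in the same one-second window.
    samples = [
        (1390521600.10, "web-1", "packets_in", 1200.0),
        (1390521600.45, "web-1", "packets_in", 1350.0),
        (1390521600.70, "web-2", "packets_in", 900.0),
    ]
    print(aggregate_per_second(samples))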

Ninety-nine percent of the metrics collected by Boundary's service are used to maintain a current baseline for expected business application and network performance, giving its clients robust statistics on normal business service behavior. This level of granularity enables customers to pinpoint, and in some cases predict, moments of risk that could jeopardize service delivery. Boundary empowers clients with actionable data by delivering the following capabilities (a simplified sketch of the baselining idea follows the list):

  • On-premise and cloud infrastructure network flow metrics that offer insight into communication latency between services, indicating risks to applications. This single source gives organizations the only SaaS solution with per-second resolution and history to help optimize their cloud-based applications.
  • Cloud adoption with confidence by helping organizations move to AWS and similar services with the same visibility they would expect to have for their on-premise applications.
  • Real-time metrics calculated and streamed to customers, including bits/second (in/out), packets/second (in/out), round-trip times, out-of-order packets/second, re-transmits/second, and more.
  • A real-time application topology map showing all the nodes that network traffic routes across in an IT infrastructure, updated to accurately reflect current upstream and downstream impacts.
  • SaaS delivery, which means organizations do not need to spend significant resources on scoping, procuring, deploying and maintaining hardware. As a result, clients gain visibility on the first day with little to no risk.
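As noted above, here is a minimal Python sketch of the baselining idea: keep a rolling history of a per-second metric and flag values that deviate sharply from the recent norm. The window size, warm-up length, sigma threshold and names are illustrative assumptions, not Boundary's actual algorithm.

    # Illustrative rolling-baseline check (assumed parameters; not
    # Boundary's actual algorithm). Flags per-second values that deviate
    # sharply from the recent norm.
    from collections import deque
    from statistics import mean, stdev

    class RollingBaseline:
        def __init__(self, window_seconds=300, threshold_sigmas=3.0):
            self.history = deque(maxlen=window_seconds)  # one value/second
            self.threshold = threshold_sigmas

        def observe(self, value):
            """Return True if the value looks anomalous vs. the baseline."""
            anomalous = False
            if len(self.history) >= 30:  # require some warm-up history
                mu = mean(self.history)
                sigma = stdev(self.history)
                if sigma > 0 and abs(value - mu) > self.threshold * sigma:
                    anomalous = True
            self.history.append(value)
            return anomalous

    # Example: steady traffic, then a sudden spike in retransmits/second.
    baseline = RollingBaseline()
    for v in [10, 11, 9, 10, 12, 10, 11] * 10:  # normal behavior
        baseline.observe(v)
    print(baseline.observe(95))  # True: a sharp deviation from baseline

A standard-deviation test is only the simplest possible baseline; a production system would typically also account for seasonality and trend.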

"Boundary's real-time network layer topology has allowed us to provide instantaneous operational value for our AWS implementations," says Allen Shacklock, lead cloud architect at Scripps Networks Interactive, owner of brands such as the Food Network, HGTV and the Travel Channel. "The benefit of using a SaaS provider to visualize traffic patterns and identify problem areas quickly allows for teams to proactively solve issues without taking on additional management tasks."

"Managing a system running on AWS that supports 5000 TPS from 20+ million users is not a simple task," says Charles Chan, head of engineering at Wattpad. "Boundary makes it easy by giving us insight into what is going on at the AWS network layer in real-time. Whether it is Nginx, Elasticsearch or Redis, Boundary is able to identify any change in traffic pattern going in or come out of any particular node. If there is anything happening to our system, we are able to isolate the node that is causing the issue almost instantaneously, which means we are able to avoid issues before the majority of our users see them."

"Boundary has become our go-to instrument for seeing into the clouds," says Matt Mankins, CTO at Fast Company. "Boundary helps us sleep soundly. In the middle of a winter snowstorm, we got reports from our monitoring service that our site wasn't responding. Was it the weather? Was it Amazon EC2? Boundary helped us disambiguate and create a narrative from its charts to answer, 'What's going on?' (An intermediate network had a connectivity issue. We can sleep!). I've used Boundary graphs to explain our cloud-based app on more than one occasion. It's a great way to illustrate how our technology stack interacts - both now and historically. People of all technical levels can use the Boundary interface to help tell a story about 'what's going on.'"

"We're 100 percent in the cloud, so we had no way of monitoring the physical network," says Adam D'Amico, director of technical operations at Okta. "Previously, we could see very coarse statistics, but now we have an aggregated view and we can see fine-grained aspects of how the network is functioning. With Boundary, we can now identify specific application service traffic volumes, such as MySQL traffic and overall AWS health, so we can proactively size our cloud instances and handle peak demand."

"We've completely disrupted the legacy IT Operations software model," says Gary Read, CEO and president at Boundary. "To start, we've removed the need and cost of scoping, procuring, deploying and managing hardware that would typically consume countless hours and other valuable resources. We're giving free trials to customers and they're typically seeing millions of metrics and optimizing their applications in the first week. Most of the time spent fixing an application or infrastructure problem is focused on finding the source, and that's exactly where Boundary helps customers."

About Boundary
Boundary allows customers to monitor their entire IT environments from a single point of control and is uniquely designed to deal with challenges stemming from modern, highly distributed applications. Boundary's best-in-class enterprise event management service centralizes and correlates events, alerts and notifications from any source, enabling Ops/DevOps professionals to quickly see the total picture. Event management data is enriched by Boundary's ability to provide a real-time, all-the-time snapshot of the logical application topology. Boundary is privately held, based in Mountain View, and backed by Lightspeed Venture Partners and Scale Venture Partners. For more information on Boundary, visit us on the web at www.boundary.com or at www.twitter.com/boundary.

Image Available: http://www2.marketwire.com/mw/frame_mw?attachid=2504769

Boundary contact:
Kevin Wolf
TGPR
(650) 327-1641
Email Contact
