Blog Feed Post

NYC Summit Exhibitor Focus: Turbonomic


In anticipation of our New York City Summit Event on October 19th, we’re highlighting some of the great partners who will be in attendance in this Exhibitor Series. We’re excited to share our latest Q&A with Charles Crouchman, Chief Technology Officer of Turbonomic. 

AppD: What was the impetus behind Turbonomic being founded?

CC: Turbonomic was founded on the premise that software can manage virtualized IT systems better than human beings to assure performance while maximizing efficiency. Turbonomic’s algorithm applies microeconomic theory and the principles of supply and demand to resource management in the data center and cloud stack.

This concept of applying economics to shared compute resources was originally discussed in a series of papers by co-founder Yechiam Yemini in the 1980s. The fundamental concept is that workloads choose the infrastructure on which they run and consume only the resources they need to perform, in the same way that shoppers look for the best overall price for a basket of goods, buying only what they need, when they need it.
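The "workloads as shoppers" idea can be illustrated with a toy market. This is a hedged sketch of the concept only, not Turbonomic's actual algorithm; the host names, capacities, and per-unit prices are all made up for illustration.

```python
# Toy model of the supply-and-demand idea: each workload buys only the
# capacity it needs, from the cheapest host that can supply it.
# All names, capacities, and prices are illustrative.

def place_workloads(workloads, hosts):
    """Greedily place each workload on the cheapest host with spare capacity.

    workloads: {name: units of demand}
    hosts:     {name: (spare_capacity, price_per_unit)}
    Returns    {workload_name: host_name}
    """
    placement = {}
    # Place the largest consumers first so big workloads get first pick.
    for name, demand in sorted(workloads.items(), key=lambda kv: -kv[1]):
        candidates = [h for h, (cap, price) in hosts.items() if cap >= demand]
        if not candidates:
            raise RuntimeError(f"no host can supply {demand} units for {name}")
        best = min(candidates, key=lambda h: hosts[h][1] * demand)
        cap, price = hosts[best]
        hosts[best] = (cap - demand, price)   # consume only what is needed
        placement[name] = best
    return placement

hosts = {"host-a": (16, 0.05), "host-b": (32, 0.03)}   # (capacity, $/unit-hr)
workloads = {"web": 8, "db": 12, "batch": 4}
print(place_workloads(workloads, hosts))
```

Here every workload lands on the cheaper host while it still has capacity, which is the shopping behavior described above in miniature.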

AppD: Why does overprovisioning in the cloud occur, and why is it such an issue?

CC: Overprovisioning in the cloud occurs for two primary reasons. First, virtually every workload residing in a private data center today is overprovisioned because there is no cost-efficiency penalty for doing so.

Second, cloud providers actually encourage enterprises to overprovision their workloads for performance reasons. The result is that migrated workloads land in oversized templates and net-new public cloud workloads are oversized from inception. The problem lies in the fact that overprovisioned resources are now rented, not owned, and the cost overruns can be significant. For example, if a customer over-sizes just 100 workloads by one template in Amazon Web Services, it equates to $1.2M in annual unnecessary spend.
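A quick back-of-envelope calculation shows how the $1.2M figure can arise. The $1.40/hour per-instance price gap below is an assumed, illustrative number (roughly the jump between adjacent sizes in a large instance family), not an actual AWS quote.

```python
# Back-of-envelope check of the "$1.2M for 100 oversized workloads" figure.
# The hourly gap is an assumed, illustrative number, not an AWS price.
hourly_gap = 1.40        # $ saved per hour by dropping one template size
workload_count = 100
hours_per_year = 24 * 365

annual_waste = workload_count * hourly_gap * hours_per_year
print(f"${annual_waste:,.0f} per year")   # on the order of $1.2M
```

The exact figure depends on the instance family, but the arithmetic makes clear that a per-hour gap of only a dollar or so, multiplied across 100 always-on workloads, reaches seven figures annually.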

AppD: How does Turbonomic cut typical cloud provider costs by roughly 30%?

CC: Turbonomic continuously observes workload demand, also known as consumption, and matches that demand to the cheapest available supply. The platform understands the actual resource consumption of each of your public cloud instances. It then matches that consumption to the best available template—the one with just the right amount of compute and storage—resulting in specific actions to resize templates, scale applications, or shift storage tiers. These actions bring supply and demand into alignment.
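The core matching step described above, finding the cheapest template that still covers observed demand, can be sketched in a few lines. The template catalog and prices here are invented for illustration; a real catalog would come from the cloud provider's offerings.

```python
# Minimal right-sizing sketch with a made-up template catalog:
# pick the cheapest template whose vCPU and memory cover observed peak demand.
TEMPLATES = [  # (name, vcpus, mem_gb, $/hour) -- illustrative values
    ("small",  2,  4, 0.10),
    ("medium", 4,  8, 0.20),
    ("large",  8, 16, 0.40),
]

def right_size(peak_vcpus, peak_mem_gb):
    """Return the cheapest template that satisfies observed peak demand."""
    fits = [t for t in TEMPLATES
            if t[1] >= peak_vcpus and t[2] >= peak_mem_gb]
    if not fits:
        return None  # demand exceeds the catalog; scale out instead
    return min(fits, key=lambda t: t[3])

# A workload peaking at 3 vCPUs / 6 GB fits "medium", not "large":
print(right_size(3, 6)[0])
```

If the workload currently runs on "large", the resize action this produces is exactly the supply-and-demand alignment described above.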

Where does the 30% come from, you ask? Well, this is typically the average amount of resources and dollars by which customers overprovision. With Turbonomic at work, that waste can be safely eliminated.

AppD: What makes Turbonomic so different from other cloud-related vendors?

CC: Turbonomic has several key differentiators, but the standout is that it is the only real-time management platform that bridges on-premises and cloud infrastructure for performance, cost, and compliance management. We integrate with multiple providers up and down the stack, are entirely API-driven, and leverage the aforementioned economic abstraction, which enables our agnostic approach.

At the end of the day, however, what matters most is that Turbonomic is the only platform on the market that can assure your workloads run performantly, at the lowest possible cost, and within compliance, regardless of where they reside, all in real time. Nobody else can offer that, or anything close to it, today.

AppD: Tell us about your decision engine that helps determine where a workload should run and when.

CC: The fundamental idea is that the workloads choose the infrastructure on which they run. If you have ever purchased a stock, bought something on eBay, or spent money on anything, the item you purchased and the amount you paid presumably represented your demand for that item at the time. Turbonomic works in much the same way. Our decision engine exposes the realm of possible residences for each workload: on-prem VMware, Hyper-V, Amazon Web Services, Microsoft Azure, across all zones, regions, and clusters.

It then prices access to those entities as a function of their real-time utilization and the real-time utilization of their constituent resources (thread pools, heap, database connections, network throughput, storage latency, etc.). This pricing mechanism is a virtual currency, not a real one, but the net result is that workloads shop for the best deal, based on their resource requirements at the time. When a workload identifies a need for X, Y and Z compute, storage and network resources, Turbonomic brokers the purchase, so to speak, and the action to migrate and/or scale the workload takes place.
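One way to picture this virtual-currency mechanism is congestion-style pricing, where a resource gets steeply more expensive as it approaches full utilization, so workloads naturally shop their way toward idle capacity. The pricing formula and cluster data below are our illustration, not Turbonomic's published model.

```python
# Sketch of congestion-style "virtual currency" pricing: a resource is
# cheap when idle and very expensive near saturation. The formula and
# cluster utilization figures are illustrative assumptions.

def virtual_price(utilization, base=1.0):
    """Price one unit of a resource as a function of its utilization."""
    if not 0 <= utilization < 1:
        raise ValueError("utilization must be in [0, 1)")
    return base / (1 - utilization)

def shop(providers, demand):
    """The workload 'buys' from the provider with the lowest total price."""
    return min(providers, key=lambda p: sum(
        virtual_price(p["util"][r]) * amount for r, amount in demand.items()))

providers = [
    {"name": "cluster-east", "util": {"cpu": 0.90, "storage": 0.50}},
    {"name": "cluster-west", "util": {"cpu": 0.40, "storage": 0.70}},
]
# A workload needing 4 units of CPU and 2 of storage avoids the
# CPU-saturated east cluster even though its storage is cheaper:
choice = shop(providers, {"cpu": 4, "storage": 2})
print(choice["name"])
```

The workload's "purchase" then becomes the concrete migrate or scale action: in this sketch, a move to the less congested cluster.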

AppD: How does Turbonomic work with Cisco and AppDynamics to benefit enterprises?

CC: Customers are recognizing that applications are the lifeblood of today’s business. With Cisco and AppDynamics, Turbonomic is bringing our core value props—performance, efficiency, and compliance—to Cisco’s intent-based data center and enabling IT to focus on applications. With these partnerships, every decision our platform makes is about matching real-time application demand to the underlying compute, storage, and network.

The added benefit for UCS customers is that the Turbonomic integration enables demand-based elasticity in the UCS environment, turning blades on or off based on the resource needs of the applications running on UCS. For AppDynamics customers, Turbonomic can discover the application topology and OS metrics through AppDynamics and map them to the data center stack. Using that information, the decision engine ensures that application components and workloads across the stack get the resources they need when they need them. With Cisco and AppDynamics, we’re stitching application performance directly to self-managing, elastic infrastructure.
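The demand-based blade elasticity idea reduces to a simple capacity calculation: keep just enough blades powered to cover aggregate application demand plus a safety margin. This is a hedged sketch of the concept; the blade capacity and headroom figures are assumptions, not UCS specifics.

```python
import math

# Hedged sketch of demand-based blade elasticity: power on just enough
# blades to cover aggregate demand plus headroom. Numbers are made up.

def blades_needed(total_demand, blade_capacity, headroom=0.25):
    """Number of blades to keep powered for demand plus a safety margin."""
    return math.ceil(total_demand * (1 + headroom) / blade_capacity)

# 180 units of aggregate demand on 64-unit blades, with 25% headroom:
print(blades_needed(total_demand=180, blade_capacity=64))
```

As demand falls overnight, the same calculation lets the remaining blades be powered down, which is the "turning blades on or off" behavior described above.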

AppD: Why should delegates at the NYC Summit visit the Turbonomic booth?

CC: Do you want to see how self-managing infrastructure directly impacts application performance? Come to the booth. We’ll have a demo showing how Turbonomic pulls in application topology and metrics from AppDynamics and then uses that information to drive the right sizing, placement, and capacity decisions that improve performance.

Applications and infrastructure have traditionally been siloed. With this integration, the finger-pointing and guessing games are over; this is a whole new ball game for IT, one that every CEO, CRO, CMO, and, yes, every CIO will care about. Because, again, today’s organizations rely on applications and the business outcomes they drive, and only software can assure their performance in real time, all the time.

Register here to book your free place at the NYC Summit on October 19th and meet the IBM team there.

Charles is Chief Technology Officer of Turbonomic. In this role he contributes to product strategy, evangelizes our technology, supports our strategic sales and business development efforts, and leads product management. Prior to joining Turbonomic, Charles held senior executive positions at several technology startups including Cirba, Mformation Technologies, Opalis Software, and Cybermation.

The post NYC Summit Exhibitor Focus: Turbonomic appeared first on Application Performance Monitoring Blog | AppDynamics.
