
Long-term CIO implications of application experience

I wrote previously that the networking industry was evolving from CapEx to OpEx to AppEx (Application Experience). There is certainly enough market buzz around applications if you listen to the major vendor positions. Cisco includes applications in its overarching moniker (Application-Centric Infrastructure), VMware has blogged about application awareness as it relates to NSX, and even some of the peripheral players like F5 have embraced applications as part of their ongoing dialogue with the community.

If there is a shift to AppEx, what are the implications for the CIO?

The most obvious requirement for moving to Application Experience as an IT driver is to define in explicit terms what application experience means. We need to define terms like performance, scale, and even security, and then break down the various contributions by system component so that we can understand who is responsible for what impact.
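One way to picture what "explicit terms" might look like is a sketch along these lines. Every name here — the `Metric` type, the thresholds, the component labels — is an invented illustration, not a prescribed schema; the point is that each metric carries a measurable target and a named owner.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Metric:
    name: str       # e.g. "p95_latency_ms"
    target: float   # the explicit, agreed-upon threshold
    component: str  # who owns the impact: "network", "storage", "app tier"...

# Hypothetical SLO for an order-entry application.
ORDER_ENTRY_SLO = [
    Metric("p95_latency_ms", 250.0, "network"),
    Metric("error_rate_pct", 0.1, "app tier"),
    Metric("concurrent_users", 5000.0, "app tier"),
]

def owners(slo):
    """Which components are accountable for this application's experience?"""
    return sorted({m.component for m in slo})
```

The accountability question — who is responsible for what impact — then becomes a simple query over the definitions rather than a negotiation after the fact.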

But a movement towards explicitly-defined application experience means a lot more than just instrumenting some infrastructure, collecting statistics, and correlating the data.

What would have to be true for application experience to be a major driving factor behind architectural decisions? At its most basic, there would have to be widespread agreement on what the meaningful applications in the company are. Certainly you cannot create blanket application experience metrics that are applied uniformly to every application. This means that CIOs who want to prepare for a move in this direction could start by cataloguing the applications in play today.

Any such inventory should explicitly document how applications contribute to meaningful corporate objectives. For high-tech companies with a distributed development force, the applications might hinge around code versioning, bug tracking, and compilation tools. For companies with large workforces, the most important applications might be HCM or even ERP. For companies whose job it is to maintain data, the applications could be more related to storage and replication.
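A minimal sketch of such an inventory might look like the following. The application names and objectives are made up for illustration; what matters is that every entry ties an application back to the corporate objective it serves.

```python
# Illustrative application inventory: each entry records the corporate
# objective the application supports and whether it is business-critical.
inventory = [
    {"app": "git-server", "objective": "distributed development", "critical": True},
    {"app": "bug-tracker", "objective": "distributed development", "critical": True},
    {"app": "hcm-suite", "objective": "workforce management", "critical": True},
    {"app": "wiki", "objective": "knowledge sharing", "critical": False},
]

def by_objective(inv):
    """Group applications under the corporate objective they serve."""
    grouped = {}
    for item in inv:
        grouped.setdefault(item["objective"], []).append(item["app"])
    return grouped
```

Even a flat list like this makes the later conversation with lines of business concrete: the grouping shows at a glance which objectives depend on which applications.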

Whatever the applications, the CIO ought to know those that are most critical to the business.

Note that the focus is on what is most important. There is not a real need to understand every application. Optimization is about treating some things differently. If you inventory 4,000 applications and try to treat them all somewhat differently, the deltas between individual applications become irrelevantly small. Instead, application experience will dictate that you manage by exception – identify the really critical or performance-sensitive stuff and do something special there.
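Managing by exception can be sketched as a simple triage: score applications and give special treatment only to the small set above a cutoff, rather than trying to differentiate all 4,000. The scores and the cutoff below are assumptions for the example.

```python
def exceptions(apps, cutoff=0.9):
    """Return only the apps critical enough to warrant special handling,
    most critical first. The 0.9 cutoff is an arbitrary illustration."""
    return sorted(
        (a for a in apps if a["criticality"] >= cutoff),
        key=lambda a: a["criticality"],
        reverse=True,
    )

# Hypothetical portfolio with invented criticality scores.
portfolio = [
    {"name": "erp", "criticality": 0.95},
    {"name": "crm", "criticality": 0.92},
    {"name": "intranet-wiki", "criticality": 0.30},
    {"name": "cafeteria-menu", "criticality": 0.05},
]
```

Everything below the cutoff gets the default treatment; the engineering effort concentrates on the short list that comes back.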

For most enterprises, IT is treated as a service organization. If this is the case, the CIO will be expected to align application experience to the various lines of business. Not only will they have an interest in what the most critical applications are but also in what metrics are being used to define success. After all, it is their employees who are the consumers of that experience. This means that CIOs should include the lines of business in the definition of application experience.

But once you define the objective, you will be expected to report progress against it. It seems likely that these metrics would eventually become performance indicators for specific groups or IT as a whole. The implication here is that the metrics will help set objectives, which means that they will influence things like bonuses and budgets.

Leaders need to understand the likely endgame. The temptation when creating metrics in many organizations is to quickly pull together metrics that are good enough. But if you know ahead of time that those metrics will eventually drive how the organization is compensated, perhaps you ought to spend more time up front getting them right. And before setting targets, you likely want to spend real time benchmarking performance in a repeatable way.

Repeatable is the key here. Anyone who has instrumented ANY system will attest to the fact that metrics are only useful if they are repeatable. If running a report yields different results every time you run it, chances are that the report is not as meaningful as you would like. The ramification is that reports need to be run around well-defined scenarios that can be reproduced on demand.
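One rough way to test repeatability is to replay the same well-defined scenario several times and flag the metric as unreliable when the run-to-run variance is too high. The scenario function, the run count, and the 5% tolerance below are all illustrative assumptions.

```python
import statistics

def is_repeatable(run_scenario, runs=5, max_rel_stdev=0.05):
    """Run the same scenario repeatedly; True if results are stable.

    run_scenario is a zero-argument callable returning one numeric
    measurement. A metric whose relative standard deviation exceeds
    max_rel_stdev is treated as not repeatable.
    """
    results = [run_scenario() for _ in range(runs)]
    mean = statistics.mean(results)
    if mean == 0:
        return False
    return statistics.stdev(results) / mean <= max_rel_stdev
```

A report that fails this kind of check probably is not something you want feeding into bonus targets.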

The upside of all of this preparation is that the right set of metrics can be a powerful change agent. It helps focus entire organizations on a small set of tasks that can have demonstrable impact on the business.

The point here is that there is a lot of work that has to happen on the customer side before something like Application Experience becomes real. While it is incumbent on the vendors to create solutions that do something better for applications, customers will eventually have to participate in any shift in purchasing criteria. Those customers that start early will be in the best position to lead the dialogue with vendors.

And leading will matter because the efficacy of all of this will eventually rest on the existence of a solid analytics foundation. It is possible that clever customers can steer their vendors to work with specific analytics companies. That would give them a tangible deployment advantage, both in terms of acquisition costs (the solution is already on-prem) and operational effort.

For vendors, this means you ought to be looking around now to see who you will partner with. Choose wisely, because if the industry does go through consolidation and your dance partner is gobbled up, you might be left alone. The stakes might not be super high now, but when purchasing decisions hinge on a measurable Application Experience, you might think differently.

[Today's fun fact: In the average lifetime, a person will walk the equivalent of 5 times around the equator. You would think we would all be thinner.]

The post Long-term CIO implications of application experience appeared first on Plexxi.


More Stories By Michael Bushong

The best marketing efforts leverage deep technology understanding with a highly approachable means of communicating. Plexxi's Vice President of Marketing Michael Bushong has acquired these skills having spent 12 years at Juniper Networks where he led product management, product strategy and product marketing organizations for Juniper's flagship operating system, Junos. Michael spent the last several years at Juniper leading their SDN efforts across both service provider and enterprise markets. Prior to Juniper, Michael spent time at database supplier Sybase, and ASIC design tool companies Synopsys and Magma Design Automation. Michael's undergraduate work at the University of California Berkeley in advanced fluid mechanics and heat transfer lends new meaning to the marketing phrase "This isn't rocket science."
