Long-term CIO implications of application experience

I wrote previously that the networking industry was evolving from CapEx to OpEx to AppEx (Application Experience). There is certainly enough market buzz around applications if you listen to the major vendors' positioning. Cisco includes applications in its overarching moniker (Application-Centric Infrastructure), VMware has blogged about application awareness as it relates to NSX, and even peripheral players like F5 have embraced applications as part of their ongoing dialogue with the community.

If there is a shift to AppEx, what are the implications for the CIO?

The most obvious requirement for a move to Application Experience as an IT driver is to define in explicit terms what application experience means. We need to define terms like performance, scale, and even security, and then break down the contribution of each system component so that we can understand who is responsible for what impact.
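As a sketch of what "explicit terms" might look like, here is a minimal, hypothetical breakdown of one application's experience targets by owning component. All names, metrics, and thresholds are illustrative, not taken from any vendor's product:

```python
from dataclasses import dataclass

@dataclass
class ExperienceTarget:
    """One explicitly defined facet of application experience."""
    metric: str   # e.g. "p95_latency_ms" (illustrative metric name)
    owner: str    # which team or component is accountable
    target: float # the agreed threshold

# Hypothetical per-component responsibilities for a single application.
order_entry_targets = [
    ExperienceTarget("p95_latency_ms", owner="network", target=50.0),
    ExperienceTarget("p95_latency_ms", owner="database", target=120.0),
    ExperienceTarget("error_rate_pct", owner="app_team", target=0.1),
]

for t in order_entry_targets:
    print(f"{t.owner} owns {t.metric} <= {t.target}")
```

The point of writing it down this explicitly is that "who is responsible for what impact" stops being a matter of opinion.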

But a movement toward explicitly defined application experience means a lot more than just instrumenting some infrastructure, collecting statistics, and correlating the data.

What would have to be true for application experience to be a major driving factor behind architectural decisions? At its most basic, there would have to be widespread agreement on what the meaningful applications in the company are. Certainly you cannot create blanket application experience metrics that are applied uniformly to every application. This means that CIOs who want to prepare for a move in this direction could start by cataloguing the applications in play today.

Any such inventory should explicitly document how each application contributes to meaningful corporate objectives. For high-tech companies with a distributed development force, the critical applications might center on code versioning, bug tracking, and compilation tools. For companies with large workforces, the most important applications might be HCM or even ERP. For companies whose business is maintaining data, the applications could be more related to storage and replication.

Whatever the applications, the CIO ought to know those that are most critical to the business.

Note that the focus is on what is most important. There is no real need to understand every application. Optimization is about treating some things differently. If you inventory 4,000 applications and try to treat them all somewhat differently, the deltas between individual applications become vanishingly small. Instead, application experience will dictate that you manage by exception – identify the truly critical or performance-sensitive applications and do something special there.
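Manage-by-exception can be sketched in a few lines. The inventory, scores, and cutoff below are all hypothetical; the idea is simply that only applications above an agreed criticality threshold earn bespoke experience metrics, while everything else gets a default tier:

```python
# Hypothetical application inventory: name -> business criticality score.
inventory = {
    "order-entry": 9.5,
    "bug-tracker": 7.0,
    "wiki": 3.0,
    "cafeteria-menu": 1.0,
}

CRITICAL_THRESHOLD = 7.0  # illustrative cutoff, not an industry standard

# Only the apps above the cutoff get special treatment, ranked by score.
critical = sorted(
    (name for name, score in inventory.items() if score >= CRITICAL_THRESHOLD),
    key=lambda n: -inventory[n],
)
print(critical)  # → ['order-entry', 'bug-tracker']
```

With 4,000 real applications the scoring would obviously involve the lines of business, but the exception list stays short by design.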

For most enterprises, IT is treated as a service organization. If so, the CIO will be expected to align application experience to the various lines of business. Those lines of business will have an interest not only in which applications are most critical but also in what metrics are being used to define success. After all, it is their employees who are the consumers of that experience. This means that CIOs should include the lines of business in defining application experience.

But once you define the objective, you will be expected to report progress against it. It seems likely that these metrics would eventually become performance indicators for specific groups or IT as a whole. The implication here is that the metrics will help set objectives, which means that they will influence things like bonuses and budgets.

Leaders need to understand the likely endgame. The temptation when creating metrics in many organizations is to quickly pull together metrics that are good enough. But if you know ahead of time that those metrics will eventually drive how the organization is compensated, perhaps you ought to spend more time up front getting them right. And before setting targets, you likely want to spend real time benchmarking performance in a repeatable way.

Repeatable is the key here. Anyone who has instrumented ANY system will attest to the fact that metrics are only useful if they are repeatable. If running a report yields different results every time you run it, chances are that the report is not as meaningful as you would like. The ramification is that reports need to be run around well-defined scenarios that can be reproduced on demand.
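A toy illustration of why repeatability matters: if the benchmark scenario is pinned down (here, crudely, by seeding the workload generator), re-running the report yields identical numbers. The scenario, metric names, and thresholds are all made up for illustration:

```python
import random
import statistics

def run_scenario(seed: int) -> list[float]:
    """Stand-in for a well-defined, reproducible load scenario.

    Seeding the generator makes the simulated 'workload' identical
    on every run, so the reported numbers are repeatable.
    """
    rng = random.Random(seed)
    return [rng.uniform(10.0, 50.0) for _ in range(1000)]

samples = run_scenario(seed=42)
report = {
    "median_ms": round(statistics.median(samples), 2),
    "p95_ms": round(sorted(samples)[int(0.95 * len(samples))], 2),
}

# Because the scenario is fixed, a second run reproduces the same report.
assert run_scenario(seed=42) == samples
print(report)
```

Real systems cannot be seeded like a random number generator, which is exactly why defining and scripting the scenario up front is the hard, necessary work.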

The upside of all of this preparation is that the right metrics can be powerful change agents. They help focus entire organizations on a small set of tasks that can have demonstrable impact on the business.

The point here is that a lot of work has to happen on the customer side before something like Application Experience becomes real. While it is incumbent on vendors to create solutions that do something better for applications, customers must ultimately be party to any shift in purchasing criteria. Those customers that start early will be in the best position to lead the dialogue with vendors.

And leading will matter because the efficacy of all of this will eventually rest on the existence of a solid analytics foundation. It is possible that clever customers can steer their vendors to work with specific analytics companies. That would give them a tangible deployment advantage, both in terms of acquisition costs (the solution is already on-prem) and operational effort.

For vendors, this means you ought to be looking around now to see who you will partner with. Choose wisely, because if the industry does go through consolidation and your dance partner is gobbled up, you might be left alone. The stakes might not be super high now, but when purchasing decisions hinge on a measurable Application Experience, you might think differently.

[Today's fun fact: In the average lifetime, a person will walk the equivalent of 5 times around the equator. You would think we would all be thinner.]

The post Long-term CIO implications of application experience appeared first on Plexxi.


More Stories By Michael Bushong

The best marketing efforts pair deep technology understanding with a highly approachable means of communicating. Plexxi's Vice President of Marketing Michael Bushong acquired these skills over 12 years at Juniper Networks, where he led product management, product strategy and product marketing organizations for Juniper's flagship operating system, Junos. Michael spent his last several years at Juniper leading its SDN efforts across both service provider and enterprise markets. Prior to Juniper, Michael spent time at database supplier Sybase, and at ASIC design tool companies Synopsys and Magma Design Automation. Michael's undergraduate work at the University of California, Berkeley in advanced fluid mechanics and heat transfer lends new meaning to the marketing phrase "This isn't rocket science."
