
How to Make AppDynamics the Salt & Pepper of the Spice Rack

Large financial services enterprises are known to invest in tools that are not always fully adopted. Many are excellent tools which, for one reason or another, ended up being used at a fraction of their potential or, even worse, were shelved without ever making an appearance in a production environment.

Thinking about these tools often reminds me of my favorite Michael McIntyre sketch, herbs and spices. Sometimes, in my head, I like to replace the unused herbs and spices with tools I have come across in my career which were selected by large enterprises but weren’t properly adopted (I know it’s a bit sad ☺).

Achieving successful tool adoption in large enterprises can be challenging. It requires special attention in a number of areas which I will highlight in this blog.

So let’s set the scene: A large global bank decides to invest in a best of breed application intelligence solution, in this case AppDynamics. The tool is selected to help accelerate the bank’s digital transformation programmes, improve its application performance, and replace older tools which became shelfware. The tool is expected to be used by an estimated 800 applications, utilizing up to 10,000 application agents. So where do we start?

Enablement — Delivering Monitoring Capabilities

Enablement refers to the delivery of monitoring capabilities. This includes the back-end controller (whether on-premises or SaaS) and the agent instrumentation mechanism. The back-end implementation is crucial, but usually straightforward, so I won’t expand on it in this post. The instrumentation delivery is also key and, from my experience, requires special attention.

The 80/20 Rule — Platforms First

When it comes to agent instrumentation in large enterprises, the Pareto principle applies very well. Around 80% of the target applications in our large global bank will be deployed on a strategic platform service. For the platform engineering team, implementing an automated instrumentation mechanism is not a huge effort and should fit into a standard sprint or two.

So “platforms first” should be the rule of thumb. This will ease the adoption process and give a professional experience to users, who’ll be able to start using AppDynamics at the click of a button.
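To make the “platforms first” idea concrete, here is a minimal sketch of what a platform-level instrumentation hook might look like: the deployment pipeline appends the agent attachment arguments to every JVM it starts, so application teams get instrumented without touching their own configuration. The application, tier, and node names, the agent path, and the helper function itself are illustrative placeholders, not the bank’s actual pipeline.

```python
def appd_jvm_args(app, tier, node, agent_home="/opt/appdynamics/javaagent"):
    """Build the JVM arguments a platform deployment pipeline would append
    so every application JVM starts with the AppDynamics Java agent attached.

    The -javaagent flag and -Dappdynamics.* system-property pattern follow
    the Java agent's documented conventions; all values here are placeholders.
    """
    return [
        f"-javaagent:{agent_home}/javaagent.jar",
        f"-Dappdynamics.agent.applicationName={app}",
        f"-Dappdynamics.agent.tierName={tier}",
        f"-Dappdynamics.agent.nodeName={node}",
    ]


if __name__ == "__main__":
    # The platform injects these per deployment; the app team never edits them.
    print(" ".join(appd_jvm_args("payments", "api", "api-01")))
```

Because the platform owns this logic in one place, upgrading the agent version or renaming conventions is a single change, not 800 application changes — which is exactly why the 80% on the strategic platform is so cheap to instrument.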

The 20% applies to applications that don’t follow the strategic deployment process. These applications are hosted on physical or virtual servers outside of the strategic platform. They might run on mainframes or quirky ESBs, use third party products which are not compatible with the strategic platform, or simply be tagged as “legacy”, making them unsuitable for investment or migration into the strategic platform. It’s fairly easy to see why 20% of the applications can consume 80% of the instrumentation efforts, as the instrumentation approach will have to be decided on a case-by-case basis.

Other challenges with the enablement phase deliveries are:

• Shortcuts in platform instrumentation will create technical debt, so try to avoid them.

• Minimal business benefits can be realized directly following a successful enablement phase, so set the right expectations.

Usage Standards — Enhance the Default

Once successful enablement is complete, internal applications can start onboarding and using the tool.

During these early implementation stages, the tool adoption service should focus on driving best practices and consistent usage standards through comprehensive out-of-the-box functionality.

Best of breed application intelligence products like AppDynamics often come with out-of-the-box functionality and benefits. However, it’s important to enhance these internally and address as many repeatable use cases as possible by configuring relevant detection rules, alerting rules, branded dashboards, reports, and more. The more the users get out of the box, the more likely they are to buy in, contribute to the quality of the adoption, and encourage others to use the tool.
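One practical way to keep those enhanced defaults consistent is to hold the baseline rules as code and stamp them onto each application as it onboards. The sketch below shows the idea only — the rule names, thresholds, and field names are illustrative and are not AppDynamics’ actual health-rule schema.

```python
import json

# Illustrative baseline only: not AppDynamics' actual health-rule schema.
# The point is that a single shared definition drives every onboarding app,
# so usage standards stay consistent instead of drifting per team.
BASELINE_RULES = [
    {"name": "Slow response time", "metric": "avg_response_time_ms",
     "warn": 1000, "critical": 3000},
    {"name": "High error rate", "metric": "errors_per_min",
     "warn": 10, "critical": 50},
]


def rules_for(app_name):
    """Stamp the shared baseline rules onto a named application."""
    return [{"application": app_name, **rule} for rule in BASELINE_RULES]


if __name__ == "__main__":
    # Each onboarding application inherits the same centrally owned baseline.
    print(json.dumps(rules_for("payments"), indent=2))
```

Keeping the baseline in one versioned definition also supports the continuous-improvement process described below: a threshold tuned once by the central team reaches every application on the next rollout.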

The initial efforts to create enhanced out-of-the-box functionality are key, but at the same time it’s important to establish processes which will continuously improve these centralized configuration items.

The early adoption stage is also the best time to capture, measure, and realise benefits and value. It’s important to invest in tracking and documenting success stories, especially those of the benefits delivered out of the box.

Knowledge — It’s a No-brainer

Here are three assumptions I like to make about knowledge:

• The more product experts there are in the organization, the more successful the product adoption is

• Knowledge is one of the main drivers to careers in IT

• Large enterprises use formal training to invest in their staff’s development

If you agree with these assumptions, then we agree that staff like to gather knowledge, enterprises like to invest in staff knowledge, and good knowledge helps in achieving successful adoption. So here’s a simple adoption guideline: A good training strategy will improve the quality of the adoption.

From my experience, a good training strategy should combine internal knowledge sharing together with formal certification. The external certification will provide users with generic knowledge of how to master the product, while internal knowledge sharing sessions can focus on more specific areas, using some of the bank’s applications and the most relevant use cases.

When the knowledge spreads, many good things start to happen. For example, self-appointed product evangelists appear and start to challenge and push the product’s adoption forward. In many cases you also see a centre of excellence appear, initiated by SMEs who are keen to collaborate and share knowledge. Such a snowball of knowledge and drive is one of the best indicators of successful adoption, so if you’re looking to deliver successful adoption, it’s worth investing in forming this snowball and giving it a little push…

In summary, to make our large global banking client successful and to gain organisational adoption, my recommendation is to give extra attention to enablement, usage standards, and knowledge. Obviously, there are many other areas that require attention, but in my opinion the secret to a successful adoption lies with these three.



About the Author

Peretz Shamir is a Service Delivery Manager at Mansion House Consulting, leading services for Application Intelligence Tools Adoption across MHC’s financial services clients. Peretz is also part of the AppDynamics Instructors community, delivering training services on behalf of AppDynamics Education to its customers.


This publication has been prepared for general guidance on matters of interest only, and does not constitute professional advice. You should not act upon the information contained in this publication without obtaining specific professional advice. No representation or warranty (express or implied) is given as to the accuracy or completeness of the information contained in this publication, and, to the extent permitted by law, the Mansion House Consulting Limited Group, its members, employees, and agents do not accept or assume any liability, responsibility, or duty of care for any consequences of you or anyone else acting, or refraining to act, in reliance on the information contained in this publication or for any decision based on it.

© 2017 Mansion House Consulting Limited. All rights reserved.

In this document, “MHC” refers to the UK entity, and may sometimes refer to the MHC group network. Each MHC entity is a separate legal entity. Please see www.mansion-house.co.uk for further information.

The post How to Make AppDynamics the Salt & Pepper of the Spice Rack appeared first on Application Performance Monitoring Blog | AppDynamics.
