
Micro-Architectures Need Relational, Application-Driven Monitoring

Is your monitoring strategy evolving along with your application and infrastructure architectures?

As "applications" continue to morph into what we once might have called "mashups" but no longer do because, well, SOA is officially dead, dontcha know, it is increasingly important for a variety of constituents within organizations - from business stakeholders to application owners to devops - to understand the overall "health" of an application.

Traditional monitoring techniques approach the problem from an infrastructure point of view. That is, the technique is really more of a pool and resource monitor than an application monitor. Each individual service that comprises an application is monitored on its own, with no real view of how the "application" itself is performing.

[Figure: traditional, service-by-service monitoring]
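
To make that limitation concrete, here's a minimal sketch of service-by-service monitoring (the service names and health-check URLs are invented for illustration):

```python
# A minimal sketch of traditional, service-by-service monitoring.
# Service names and health-check URLs are hypothetical.
import urllib.request

SERVICES = {
    "auth-service":    "http://auth.internal/health",
    "catalog-service": "http://catalog.internal/health",
    "billing-service": "http://billing.internal/health",
}

def check(url: str, timeout: float = 2.0) -> bool:
    """Return True if the service answers its health endpoint with HTTP 200."""
    try:
        with urllib.request.urlopen(url, timeout=timeout) as resp:
            return resp.status == 200
    except OSError:
        return False

# Each service is checked in isolation; nothing here knows which
# applications depend on which services - and that's the gap.
for name, url in SERVICES.items():
    print(f"{name}: {'UP' if check(url) else 'DOWN'}")
```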

Now the problem with this approach is that different applications may share the same services (especially in an API-driven model) but have very different performance and availability requirements. It may be completely acceptable for an internal application to respond more slowly than a consumer-facing application, for example.
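
To sketch why a single per-service threshold can't capture that, consider evaluating the same measured latency against different per-application SLAs (the application names and numbers below are made up):

```python
# Sketch: one shared service, two applications with different latency SLAs.
# Application names and thresholds are illustrative assumptions.
from dataclasses import dataclass

@dataclass
class AppSLA:
    name: str
    max_latency_ms: float  # acceptable response time for this application

SLAS = [
    AppSLA("internal-reporting", max_latency_ms=2000.0),  # slower is acceptable
    AppSLA("consumer-portal",    max_latency_ms=300.0),   # must be snappy
]

def evaluate(shared_service_latency_ms: float) -> None:
    # The same measurement is fine for one application and an SLA
    # violation for another - a service-only view can't express that.
    for sla in SLAS:
        ok = shared_service_latency_ms <= sla.max_latency_ms
        print(f"{sla.name}: {'OK' if ok else 'SLA VIOLATION'} "
              f"({shared_service_latency_ms:.0f}ms vs {sla.max_latency_ms:.0f}ms allowed)")

evaluate(450.0)  # fine for internal-reporting, a violation for consumer-portal
```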

Thus organizations are left with a view that accurately informs them as to the current health of individual services, but with no real way to use that information to get a picture of how the application is performing.

Application-Driven Monitoring
What we really need is the ability to monitor not only the performance and health of individual services but also the application itself as a concept - even if that application is just a mashup of other applications or services.

[Figure: modern, application-driven monitoring]

Important to remember, too, is that applications aren't limited to a single protocol, like HTTP. Consider an application like Microsoft Exchange, which can be - and frequently is - accessed via multiple protocols. It may be necessary to monitor a variety of services in order to determine the actual health and availability of the application.
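
As a sketch of what multi-protocol monitoring might look like, the snippet below probes several protocol endpoints and treats the application as available only if all of them are (the host is hypothetical; the ports are the standard ones for each protocol):

```python
# Sketch: an application's availability depends on several protocol endpoints.
# The host is hypothetical; the ports are the standard ones for each protocol.
import socket

ENDPOINTS = {
    "HTTPS (web access)": ("mail.example.com", 443),
    "SMTP":               ("mail.example.com", 25),
    "IMAP":               ("mail.example.com", 143),
}

def tcp_alive(host: str, port: int, timeout: float = 2.0) -> bool:
    """A bare TCP connect check - the crudest possible protocol probe."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# The application is only "available" if every access path is available.
results = {name: tcp_alive(h, p) for name, (h, p) in ENDPOINTS.items()}
for name, up in results.items():
    print(f"{name}: {'UP' if up else 'DOWN'}")
print(f"application available: {all(results.values())}")
```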

[Figure: application health score]

The key is to monitor not just individual services (that's important, but it's not the whole enchilada) but also the application as a whole. This provides business and application stakeholders with a better view of how IT is servicing their needs, and it offers IT significant value in understanding the impact of individual services on application and business services.
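
One way to express "the application as a whole" is a weighted health score rolled up from its constituent services. This is only a sketch - the services, weights, and health values are invented:

```python
# Sketch: roll per-service health up into one application health score.
# Services, weights, and the 0.0-1.0 health values are illustrative.
SERVICE_HEALTH = {  # 1.0 = fully healthy, 0.0 = down
    "auth-service":    1.0,
    "catalog-service": 0.6,  # degraded
    "billing-service": 1.0,
}

# Weights reflect how much each service matters to *this* application;
# the same services could carry different weights in another application.
WEIGHTS = {
    "auth-service":    0.5,
    "catalog-service": 0.3,
    "billing-service": 0.2,
}

def app_health_score(health: dict, weights: dict) -> float:
    total = sum(weights.values())
    return sum(health[s] * w for s, w in weights.items()) / total

print(f"application health: {app_health_score(SERVICE_HEALTH, WEIGHTS):.2f}")  # 0.88
```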

For example, if the same service is used by multiple applications and that service starts degrading, it should (logically) impact the health of every one of those applications. Noticing this early enables IT to deal with the situation proactively - up to and including notifying all the application owners that there's an issue with a core service and IT is already on the case, before the call comes in. Being able to monitor and analyze performance across time also enables earlier identification of outliers. By spotting these leading indicators of trouble, it's often possible to head off an outage or performance degradation before it occurs, leaving application and business stakeholders blissfully ignorant of what might have been a disastrous incident.
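
One common way to spot those leading indicators is to compare each new measurement against a rolling baseline. The sketch below flags latency samples that drift well above recent history; the window size and three-sigma threshold are conventional choices, not anything prescribed here:

```python
# Sketch: flag a latency sample as an outlier against a rolling baseline.
# The window size and 3-sigma threshold are conventional, assumed choices.
from collections import deque
from statistics import mean, stdev

class LatencyBaseline:
    def __init__(self, window: int = 100, sigmas: float = 3.0):
        self.samples = deque(maxlen=window)
        self.sigmas = sigmas

    def observe(self, latency_ms: float) -> bool:
        """Record a sample; return True if it looks like a leading indicator."""
        outlier = False
        if len(self.samples) >= 30:  # need enough history for a stable baseline
            mu, sd = mean(self.samples), stdev(self.samples)
            outlier = sd > 0 and (latency_ms - mu) > self.sigmas * sd
        self.samples.append(latency_ms)
        return outlier

baseline = LatencyBaseline()
for ms in [50, 52, 49, 51] * 10 + [140]:  # steady traffic, then a spike
    if baseline.observe(ms):
        print(f"outlier: {ms}ms - investigate before users notice")
```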

It can also be the case that sudden demand for an application negatively impacts the performance or availability of a shared service, which in turn, of course, impacts applications that use that service. By monitoring all the pieces of the application, the source of increased demand can be more easily correlated and a strategy to address it formulated.
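
A rough sketch of that correlation: line up each application's request rate with the shared service's latency over the same window and see which one tracks it. The data and application names are invented, and statistics.correlation requires Python 3.10 or newer:

```python
# Sketch: correlate per-application request rates with a shared service's
# latency to find the likely source of increased demand.
# Data and application names are invented; requires Python 3.10+.
from statistics import correlation

# Per-minute samples over the same window.
shared_service_latency = [110, 118, 125, 160, 210, 260, 240]
requests = {
    "consumer-portal":    [300, 310, 305, 298, 302, 309, 300],  # flat traffic
    "internal-reporting": [ 40,  55,  80, 140, 220, 300, 280],  # ramping up
}

# The application whose traffic tracks the latency curve most closely
# is the best first suspect.
for app, series in requests.items():
    r = correlation(series, shared_service_latency)
    print(f"{app}: r = {r:+.2f}")
```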

Monitoring is a critical (and sadly often overlooked and underappreciated) function in the data center; without it, modern methods of scalability (elasticity) and orchestrated responses to failure would not be possible. Because it is so critical, it's important to ensure that your monitoring capabilities, and your use of them, are supporting modern architectures, networks and services.

Without monitoring, there's really no way to recognize and react to failures, overloads, and outages. So make sure your monitoring strategy is evolving along with your data center infrastructure and applications.

More Stories By Lori MacVittie

Lori MacVittie is responsible for education and evangelism of application services available across F5’s entire product suite. Her role includes authorship of technical materials and participation in a number of community-based forums and industry standards organizations, among other efforts. MacVittie has extensive programming experience as an application architect, as well as network and systems development and administration expertise. Prior to joining F5, MacVittie was an award-winning Senior Technology Editor at Network Computing Magazine, where she conducted product research and evaluation focused on integration with application and network architectures, and authored articles on a variety of topics aimed at IT professionals. Her most recent area of focus included SOA-related products and architectures. She holds a B.S. in Information and Computing Science from the University of Wisconsin at Green Bay, and an M.S. in Computer Science from Nova Southeastern University.
