Microservices Expo: Blog Feed Post

Now Witness the Power of This Fully Operational Feedback Loop

It’s called a feedback loop, not a feedback black hole.

One of the key components of a successful architecture designed to mitigate operational risk is the ability to measure, monitor, and make decisions based on collected "management" data. Whether it's a simple load-balancing decision based on the availability of an application or more complex global application delivery traffic steering that factors in location, performance, availability, and business requirements, neither can succeed unless the components making the decisions have the right information upon which to act.

Monitoring and management are likely among the least sought-after tasks in the data center. They're not all that exciting, and they often involve (please don't be frightened by this) integration: agent-based, agentless, standards-based. Monitoring the health and performance of resources is critical to understanding how well an "application" is performing on a daily basis. It's the foundational data used for capacity planning, for determining whether an application is under attack, and for enabling the dynamism required of a dynamic, intelligent infrastructure supportive of today's operational goals.

YOU CAN'T REACT to WHAT YOU CAN'T SEE

We talk a lot about standards and commoditization and how both can enable utility-style computing as well as the integration necessary at the infrastructure layers to improve the overall responsiveness of IT. But we don't talk a lot about what that means in terms of monitoring and management of resource "health" – performance, capacity and availability.

Any load-balancing service depends on the ability to determine the status of an application. In an operationally mature architecture, that status includes all components related to the delivery of the application: other application services such as middleware and databases, as well as external application services. When IT has control over all components, traditional agent-based approaches work well to provide that information. When IT does not have control over all components, as is increasingly the case, it can neither collect that data nor access it in real time. If the infrastructure components upon which successful application delivery relies cannot "see" how a given resource is performing, let alone whether it's available, there is a failure to communicate that ultimately leads to poor decision making on the part of the infrastructure.
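To make the dependency concrete, here's a minimal sketch (not any particular product's behavior) of a load-balancing decision that acts on health feedback: round-robin selection that skips any backend whose health probe currently fails. The class and probe names are illustrative assumptions.

```python
import itertools

class HealthAwareBalancer:
    """Hypothetical sketch: route requests only to backends whose
    health probe reports success. Names are illustrative, not a real API."""

    def __init__(self, backends):
        # backends: dict mapping backend name -> zero-arg callable
        # returning True when the backend is healthy
        self.backends = backends
        self._cycle = itertools.cycle(sorted(backends))

    def pick(self):
        # Consider each backend at most once per pick; skip unhealthy ones.
        for _ in range(len(self.backends)):
            name = next(self._cycle)
            if self.backends[name]():
                return name
        raise RuntimeError("no healthy backends available")

# Simulated health probes: app2 is currently failing its check.
probes = {"app1": lambda: True, "app2": lambda: False, "app3": lambda: True}
lb = HealthAwareBalancer(probes)
picks = [lb.pick() for _ in range(4)]  # app2 is never selected
```

The point of the sketch is the dependency the paragraph describes: without the probe feedback, `pick()` has no basis for excluding the failing backend.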

We know that in a highly virtualized or cloud-computing model of application deployment it's important to monitor the health of the resource, not the "server," because the "server" has become little more than a container, a platform upon which a resource is deployed and made available. With the possibility of a resource "moving," it is even more imperative that operations monitor resources. Consider IT organizations that want to leverage PaaS (Platform as a Service) to drive application development efforts forward faster: monitoring and management of those resources must occur at the resource layer, because IT has no control over or visibility into the underlying platform – which is kind of the point in the first place.

YOU CAN’T MAKE DECISIONS without FEEDBACK


The feedback from the resource must come from somewhere. Whether that's an agent (which doesn't play well with a PaaS model) or some other mechanism (which is where we're headed in this discussion) is not as important as getting there in the first place. If we're going to architect highly responsive and dynamic data centers, we must share all the relevant information in a way that enables decision-making components (strategic points of control) to make the right decisions. To do that, resources – specifically applications and application-related resources – must provide feedback.

This is a job for devops if ever there was one. Not the ops who apply development principles like Agile to their operational tasks, but developers who integrate operational requirements and needs into the resources they design, develop and ultimately deploy. We already see efforts to standardize APIs designed to promote security awareness and information through efforts like CloudAudit. We see efforts to standardize and commoditize APIs that drive operational concerns like provisioning with OpenStack. But what we don't see is an effort to standardize and commoditize even the simplest of health-monitoring methods: no simple API, no suggestion of what data might be common across all layers of the application architecture that could provide the basic information infrastructure services need in order to act appropriately.
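As a thought experiment, here's a sketch of what such a minimal, common health payload might look like. The field names (`status`, `capacity_pct`, `avg_response_ms`) are assumptions for illustration; no such standard exists, which is exactly the gap being described.

```python
import json

# Hypothetical minimal health-report schema: a few fields any layer of
# the application architecture could report. Field names are assumptions,
# not an actual standard.
HEALTH_FIELDS = {"status", "capacity_pct", "avg_response_ms"}

def make_health_report(status, capacity_pct, avg_response_ms):
    """Build a health report an infrastructure service could poll and parse."""
    assert status in ("up", "degraded", "down")
    return json.dumps({
        "status": status,
        "capacity_pct": capacity_pct,
        "avg_response_ms": avg_response_ms,
    })

def is_routable(raw):
    """Consumer-side check: only route to resources reporting 'up'
    and carrying all the expected fields."""
    report = json.loads(raw)
    return HEALTH_FIELDS <= report.keys() and report["status"] == "up"

report = make_health_report("up", 63, 41.5)
```

Even a schema this small would let a load balancer, a global traffic steering service, and a capacity planner all consume the same feedback without per-application integration work.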

The feedback regarding the operational status of an application resource is critical in ensuring that infrastructure is able to make the right decisions at the right time regarding each and every request. It’s about promoting dynamic equilibrium in the architecture; an equilibrium that leads to efficient resource utilization across the data center while simultaneously providing for the best possible performance and availability of services.

MORE OPS in the DEV

It is critical that developers not only understand but take action regarding the operational needs of the service delivery chain. It is critical because in many situations developers will be the only ones with the means to enable collection of the very data upon which the successful delivery of services relies. While infrastructure, and specifically application delivery services, is capable of collaborating with applications to retrieve health-related data and subsequently parse it into actionable information, the key is that the data be available in the first place. That means querying the application service – whether application or middleware and beyond – directly for the data needed to make the right decisions. This type of data is not standard, it's not out of the box, and it's not built into the platforms upon which developers build and deploy applications. It must be enabled, and that means code.
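"It must be enabled, and that means code" can be sketched in a few lines: a developer-provided health endpoint that an upstream delivery service could query directly. The `/health` path and the payload fields are assumptions for illustration, not an established convention.

```python
import json
import threading
import urllib.request
from http.server import BaseHTTPRequestHandler, HTTPServer

class HealthHandler(BaseHTTPRequestHandler):
    """Hypothetical sketch: application code that exposes its own health
    so infrastructure can query it. Path and fields are illustrative."""

    def do_GET(self):
        if self.path == "/health":
            body = json.dumps({"status": "up", "active_sessions": 12}).encode()
            self.send_response(200)
            self.send_header("Content-Type", "application/json")
            self.end_headers()
            self.wfile.write(body)
        else:
            self.send_response(404)
            self.end_headers()

    def log_message(self, *args):
        pass  # keep the demo quiet

# Bind to an ephemeral port and serve in the background.
server = HTTPServer(("127.0.0.1", 0), HealthHandler)
threading.Thread(target=server.serve_forever, daemon=True).start()

# Play the role of the infrastructure service polling the application.
url = f"http://127.0.0.1:{server.server_port}/health"
with urllib.request.urlopen(url) as resp:
    health = json.loads(resp.read())
server.shutdown()
```

The handful of lines in `do_GET` is exactly the kind of developer-enabled feedback the paragraph argues for: trivial to write, but absent unless someone writes it.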

That means developers must implement the means by which the data is collected; ultimately, one hopes, this results in a standardized health-monitoring collection API jointly specified by ops and dev. Together.


More Stories By Lori MacVittie

Lori MacVittie is responsible for education and evangelism of application services available across F5’s entire product suite. Her role includes authorship of technical materials and participation in a number of community-based forums and industry standards organizations, among other efforts. MacVittie has extensive programming experience as an application architect, as well as network and systems development and administration expertise. Prior to joining F5, MacVittie was an award-winning Senior Technology Editor at Network Computing Magazine, where she conducted product research and evaluation focused on integration with application and network architectures, and authored articles on a variety of topics aimed at IT professionals. Her most recent area of focus included SOA-related products and architectures. She holds a B.S. in Information and Computing Science from the University of Wisconsin at Green Bay, and an M.S. in Computer Science from Nova Southeastern University.
