Now Witness the Power of This Fully Operational Feedback Loop

It’s called a feedback loop, not a feedback black hole.

One of the key components of a successful architecture designed to mitigate operational risk is the ability to measure, monitor and make decisions based on collected “management” data. Whether the decision is simple load balancing based on the availability of an application or more complex global application delivery traffic steering that factors in location, performance, availability and business requirements, it cannot succeed unless the components making it have the right information upon which to act.

Monitoring and management is likely one of the least sought-after tasks in the data center. It’s not all that exciting, and it often involves (please don’t be frightened by this) integration: agent-based, agentless, standards-based. Monitoring the health and performance of resources is critical to understanding how well an “application” is performing on a daily basis. It’s the foundational data used for capacity planning, for determining whether an application is under attack, and for enabling the dynamism required of a dynamic, intelligent infrastructure supportive of today’s operational goals.

YOU CAN’T REACT to WHAT YOU CAN’T SEE

We talk a lot about standards and commoditization and how both can enable utility-style computing as well as the integration necessary at the infrastructure layers to improve the overall responsiveness of IT. But we don’t talk a lot about what that means in terms of monitoring and management of resource “health” – performance, capacity and availability.

The effectiveness of any load-balancing service depends upon its ability to determine the status of an application. In an operationally mature architecture, that status includes all of the components involved in delivering the application, including other application services such as middleware, databases and external application services. When IT has control over all of those components, traditional agent-based approaches work well to provide that information. When IT does not have control over all of them, as is increasingly the case, it can neither collect that data nor access it in real time. If the infrastructure components upon which successful application delivery relies cannot “see” how any given resource is performing – let alone whether it’s available – there is a failure to communicate that ultimately leads to poor decision making on the part of the infrastructure.
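
To make that concrete, consider a minimal sketch (in Python, with entirely hypothetical endpoint names) of what a delivery-chain-aware health check might look like: the application reports itself “available” only when everything it depends on – middleware, database, external services – is also reachable.

import json
import urllib.request

# Hypothetical health endpoints for the components in the delivery chain.
DEPENDENCIES = {
    "middleware": "http://middleware.example.local/health",
    "database": "http://db-proxy.example.local/health",
    "partner-api": "https://api.partner.example.com/ping",
}

def is_up(url, timeout=2.0):
    """Return True only if the dependency answers HTTP 200 within the timeout."""
    try:
        with urllib.request.urlopen(url, timeout=timeout) as resp:
            return resp.status == 200
    except OSError:
        return False

def composite_health():
    """Report the status of the whole delivery chain, not just this process."""
    components = {name: is_up(url) for name, url in DEPENDENCIES.items()}
    return {"available": all(components.values()), "components": components}

if __name__ == "__main__":
    print(json.dumps(composite_health(), indent=2))

The details will vary wildly by environment; the point is only that the status exposed to the load-balancing service reflects the whole chain, not just the local process.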

We know that in a highly virtualized or cloud-computing model of application deployment it’s important to monitor the health of the resource, not the “server”, because the “server” has become little more than a container – a platform upon which a resource is deployed and made available. With the possibility of a resource “moving”, it is even more imperative that operations monitor resources. Consider IT organizations that want to leverage PaaS (Platform as a Service) to drive application development efforts forward faster. Monitoring and management of those resources must occur at the resource layer; IT has no control or visibility into the underlying platforms – which is kind of the point in the first place.
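
One way to picture resource-level monitoring is a probe that targets a logical service name rather than a pinned IP address, so the check follows the resource wherever the platform happens to place it. A minimal sketch, with an assumed service name and port:

import socket
import urllib.request

SERVICE_NAME = "orders.service.internal"  # hypothetical logical name

def probe_resource(name, port=8080, timeout=2.0):
    """Re-resolve the logical name on every probe; the resource may have moved."""
    try:
        addr = socket.gethostbyname(name)  # fresh lookup, never a pinned IP
        url = "http://%s:%d/health" % (addr, port)
        with urllib.request.urlopen(url, timeout=timeout) as resp:
            return resp.status == 200
    except OSError:
        return False

if __name__ == "__main__":
    print("resource healthy:", probe_resource(SERVICE_NAME))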

YOU CAN’T MAKE DECISIONS without FEEDBACK

The feedback from the resource must come from somewhere. Whether that’s an agent (which doesn’t play well with a PaaS model) or some other mechanism (which is where we’re headed in this discussion) is not as important as getting that feedback in the first place. If we’re going to architect highly responsive and dynamic data centers, we must share all the relevant information in a way that enables decision-making components (strategic points of control) to make the right decisions. To do that, resources – specifically applications and application-related resources – must provide feedback.
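
As an illustration of “some other mechanism”, the resource itself can expose a small health endpoint that any strategic point of control – a load balancer, say – can poll. The sketch below is bare-bones; the port, path and payload fields are assumptions for illustration, not a prescription:

import json
import time
from http.server import BaseHTTPRequestHandler, HTTPServer

START = time.monotonic()

class HealthHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        if self.path != "/health":
            self.send_error(404)
            return
        # Illustrative payload; a real resource would wire these to live counters.
        body = json.dumps({
            "status": "up",
            "uptime_seconds": round(time.monotonic() - START, 1),
        }).encode()
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

if __name__ == "__main__":
    HTTPServer(("0.0.0.0", 8080), HealthHandler).serve_forever()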

This is a job for devops if ever there was one. Not the ops who apply development principles like Agile to their operational tasks, but developers who integrate operational requirements and needs into the resources they design, develop and ultimately deploy. We already see efforts to standardize APIs designed to promote security awareness and information through efforts like CloudAudit. We see efforts to standardize and commoditize APIs that drive operational concerns like provisioning with OpenStack. But what we don’t see is an effort to standardize and commoditize even the simplest of health monitoring methods. No simple API, no suggestion of what data might be common across all layers of the application architecture that could provide the basic information necessary for infrastructure services to take actions appropriately.
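
Purely as a speculative strawman – no such standard exists, and every field name here is invented – a common, cross-layer health payload might not need to carry much more than this:

import json
from dataclasses import dataclass, asdict

@dataclass
class HealthReport:
    resource: str        # logical name, e.g. "orders-middleware"
    layer: str           # "web", "app", "middleware" or "data"
    available: bool      # can this resource accept work right now?
    capacity_pct: float  # rough headroom remaining, 0-100
    latency_ms: float    # recent average response time

report = HealthReport("orders-middleware", "middleware", True, 62.5, 41.0)
print(json.dumps(asdict(report), indent=2))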

The feedback regarding the operational status of an application resource is critical in ensuring that infrastructure is able to make the right decisions at the right time regarding each and every request. It’s about promoting dynamic equilibrium in the architecture: an equilibrium that leads to efficient resource utilization across the data center while simultaneously providing for the best possible performance and availability of services.

MORE OPS in the DEV

It is critical that developers not only understand but take action regarding the operational needs of the service delivery chain. It is critical because in many situations developers will be the only ones with the means to enable the collection of the very data upon which the successful delivery of services relies. While infrastructure – and specifically application delivery services – is capable of collaborating with applications to retrieve health-related data and subsequently parse the information into actionable data, the key is that the data be available in the first place. That means querying the application service – whether application or middleware and beyond – directly for the data needed to make the right decisions. This type of data is not standard, it’s not out of the box, and it’s not built into the platforms upon which developers build and deploy applications. It must be enabled, and that means code.
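
To sketch the infrastructure side of that collaboration, here is what a monitor might do with a developer-provided endpoint: query it directly and turn the payload into an actionable pool decision. The URL and thresholds are, again, hypothetical:

import json
import urllib.request

HEALTH_URL = "http://app.example.local:8080/health"  # hypothetical endpoint

def fetch_report(url, timeout=2.0):
    """Fetch the developer-provided health payload; None means unhealthy."""
    try:
        with urllib.request.urlopen(url, timeout=timeout) as resp:
            return json.load(resp)
    except (OSError, ValueError):
        return None  # unreachable or unparseable counts as unhealthy

def keep_in_pool(report):
    """Turn the payload into an actionable decision: stay in the pool or not."""
    if report is None or not report.get("available", False):
        return False
    return report.get("capacity_pct", 0.0) > 10.0  # leave some headroom

if __name__ == "__main__":
    print("keep in pool:", keep_in_pool(fetch_report(HEALTH_URL)))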

That means developers must provide the implementation of the means by which the data is collected; ultimately one hopes this results in a standardized health-monitoring collection API jointly specified by ops and dev. Together.

More Stories By Lori MacVittie

Lori MacVittie is responsible for education and evangelism of application services available across F5’s entire product suite. Her role includes authorship of technical materials and participation in a number of community-based forums and industry standards organizations, among other efforts. MacVittie has extensive programming experience as an application architect, as well as network and systems development and administration expertise. Prior to joining F5, MacVittie was an award-winning Senior Technology Editor at Network Computing Magazine, where she conducted product research and evaluation focused on integration with application and network architectures, and authored articles on a variety of topics aimed at IT professionals. Her most recent area of focus included SOA-related products and architectures. She holds a B.S. in Information and Computing Science from the University of Wisconsin at Green Bay, and an M.S. in Computer Science from Nova Southeastern University.
