Where Networks and Application Architecture Converge Lies DevOps

All key areas of IT have converged on a singular focus: applications

On one side is a variant of SDN: network service virtualization (NSV). On the other side is an emerging application architecture: microservices. Where they meet lies devops.

One of the most fascinating things to watch in the technological shifts occurring today is to see them all converging on a singular point: applications. Whether it's securing or delivering, deploying or accessing, all key areas of IT have converged on a singular focus: applications.

[Image: technological shifts center on applications]

From a developer-turned-network-geek perspective, that's doubly interesting. That's because one impacts the other, and vice-versa. One of the trends in application architecture today is a shift toward microservices. I'll oversimplify for a moment and explain that as SOA without all the baggage. A recent post on High Scalability explains the architecture - and the impact on infrastructure requirements:

Where a monolithic application might have been deployed to a small application server cluster, you now have tens of separate services to build, test, deploy and run, potentially in polyglot languages and environments.

All of these services potentially need clustering for failover and resilience, turning your single monolithic system into, say, 20 services consisting of 40-60 processes after we've added resilience.

 

Throw in load balancers and messaging layers for plumbing between the services and the estate starts to become pretty large when compared to that single monolithic application that delivered the equivalent business functionality!

Microservices - Not A Free Lunch!
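To make the quoted arithmetic concrete, here is a minimal Python sketch of how quickly the "estate" grows once resilience and plumbing are added to a handful of decomposed services. The service names and counts are hypothetical, chosen only to illustrate the math:

```python
# Hypothetical sketch: how a few microservices become a much larger estate
# once replicas for failover, load balancers, and messaging are counted.

services = {                      # service -> replica count for resilience
    "catalog": 2,
    "checkout": 3,
    "inventory": 2,
    "payments": 3,
    "notifications": 2,
}

app_processes = sum(services.values())   # one process per replica
load_balancers = len(services)           # one load balancer per scaled service
messaging_brokers = 3                    # clustered broker for inter-service plumbing

total = app_processes + load_balancers + messaging_brokers
print(f"{len(services)} services -> {total} running processes to operate")
```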

Now let's shift gears and peek at what's going on over in network land. You might recall we recently discussed network service virtualization. If not, here's a quick summary from Nick Lippis:

NSV seeks to virtualize enterprise appliances, such as firewalls, load balancers, application accelerators, application delivery controllers, Intrusion Protection Systems, WAN optimizers, call managers, etc., instantiated for each application. Each instance of each NSV is created for a specific application. That is, if there are 10 applications that require network services, then each application will be configured with its own instantiation of that service. That is, 10 applications, then 10 NSV firewalls.

In short, NSV seeks to virtualize network services by creating an instance of the network service for each application versus virtualizing a network service once for all applications. NSV hopes to present significant capex and opex relief from hardware appliances, as well as an efficient way of applying network services to applications without chaining or tagging packets and rapid automated, on-demand application deployment.

Lippis Report 217: It’s Network Service Virtualization in the Enterprise rather than Network Function Virtualization
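A minimal sketch of the per-application model Lippis describes might look like the following. The application names, the service list, and the provision_instance() helper are purely illustrative, not a real NSV API:

```python
# Illustrative only: every application gets its own virtualized instance of
# each network service it needs; nothing is shared across applications.

applications = ["crm", "billing", "portal", "analytics"]
required_services = ["firewall", "load_balancer", "wan_optimizer"]

def provision_instance(app: str, service: str) -> dict:
    """Stand up a dedicated virtual service instance for one application."""
    return {"app": app, "service": service, "name": f"{service}-{app}"}

estate = [provision_instance(app, svc)
          for app in applications
          for svc in required_services]

# 4 applications x 3 services = 12 dedicated instances, none shared
print(len(estate), "virtual service instances provisioned")
```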

Reading both, one might assume some level of collusion between the two, but that's unlikely to be the case. The divide between application architects and network teams is well established; they really don't play well together. And yet both these trends recognize the need to meet in the middle, in the L4-7 service layer, to provide for scalability and other "plumbing" services.

From a scalability perspective, this is very much a vertical partitioning-based scalability pattern, where load is spread across distinct functional boundaries of a problem space, each handled by a different processing unit. Those functional boundaries in today's architectures are embodied by microservice definitions. One service is responsible for a discrete function, as the point of microservices is, to a large extent, to decompose monolithic applications into individual, domain-specific (functional) services.
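As an illustration of that vertical partitioning, a simplified L7 routing table might map functional boundaries to the services responsible for them. The path prefixes and service names here are hypothetical:

```python
# Sketch of vertical partitioning at the L7 layer: each request is routed to
# the processing unit that owns that functional boundary (path prefix).

routes = {
    "/orders":    "order-service",
    "/inventory": "inventory-service",
    "/customers": "customer-service",
}

def route(path: str) -> str:
    """Pick the service responsible for this functional boundary."""
    for prefix, service in routes.items():
        if path.startswith(prefix):
            return service
    return "default-service"

print(route("/orders/42"))             # -> order-service
print(route("/customers/7/address"))   # -> customer-service
```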

Overall, this means services can be scaled individually on demand, which is far more efficient than scaling a monolithic application. But it does introduce complexity, as there are necessarily more moving parts, and it forces a shift toward more application-centric monitoring.
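The efficiency argument can be sketched in a few lines: only the service under load gets additional replicas, while the rest stay put. The utilization numbers and the threshold below are made up purely for illustration:

```python
# Illustrative per-service scaling decision: scale only the hot service,
# rather than adding capacity to the entire monolith.

current_load = {"order-service": 0.92, "inventory-service": 0.35,
                "customer-service": 0.41}
replicas = {"order-service": 2, "inventory-service": 2, "customer-service": 2}

SCALE_UP_AT = 0.80   # utilization threshold, hypothetical

for service, load in current_load.items():
    if load > SCALE_UP_AT:
        replicas[service] += 1   # grow just this service, nothing else

print(replicas)   # only order-service grew
```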

A Symbiotic Relationship

The application architect recognizes the need for these services and, to some extent, laments the complexity they will introduce. Network service virtualization, on the other side, offers to fulfill that need while recognizing the demand for efficiency and, ultimately, for simplification by providing those services in a "rapid automated, on-demand" fashion.

These concerns - the plumbing and the monitoring - fall squarely into the realm of problems that can be resolved by applying devops to operations. Automated provisioning, treating infrastructure as code, and enabling a more holistic view of "applications" are all capabilities devops aims to deliver.
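As a rough sketch of what "infrastructure as code" means for the plumbing discussed here, the desired per-application L4-7 services could live in version-controlled data, with a reconcile step provisioning whatever is missing. The application names, service names, and the apply() stub are all hypothetical stand-ins for a real orchestration or NSV provisioning API:

```python
# Sketch of declarative, automated provisioning: the desired state is data,
# and the loop below reconciles reality against it.

desired = {
    "storefront": ["load_balancer", "web_app_firewall"],
    "api":        ["load_balancer", "rate_limiter"],
}

existing = {("storefront", "load_balancer")}   # what is already deployed

def apply(app: str, service: str) -> None:
    """Stand-in for a call to a provisioning API; here it just reports."""
    print(f"provisioning {service} for {app}")

for app, services in desired.items():
    for service in services:
        if (app, service) not in existing:
            apply(app, service)
```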

For one of the first times I can remember, the operational burden imposed by technological shifts in application architecture is nearly simultaneously being addressed by the technological shifts in the network. In fact, one could argue that the shifts occurring in the network toward network service virtualization are actually enabling the shift in application architecture. Being able to rapidly provision, manage and monitor the L4-7 services necessary to deliver microservices increases the ability to take advantage of the architecture.

Like the question of the chicken and the egg, it really doesn't matter which came first. What matters is that they're complementary and both driving toward the same goal: accelerated application deployment and delivery of an exceptional end user experience.

More Stories By Lori MacVittie

Lori MacVittie is responsible for education and evangelism of application services available across F5’s entire product suite. Her role includes authorship of technical materials and participation in a number of community-based forums and industry standards organizations, among other efforts. MacVittie has extensive programming experience as an application architect, as well as network and systems development and administration expertise. Prior to joining F5, MacVittie was an award-winning Senior Technology Editor at Network Computing Magazine, where she conducted product research and evaluation focused on integration with application and network architectures, and authored articles on a variety of topics aimed at IT professionals. Her most recent area of focus included SOA-related products and architectures. She holds a B.S. in Information and Computing Science from the University of Wisconsin at Green Bay, and an M.S. in Computer Science from Nova Southeastern University.
