
The Re-Emergence of the Operating System

Each of the major infrastructure silos has to operate with some fixed environment in mind

Compute started its major architectural transition several years ago with the introduction of virtualization. If you pay attention to any of the IT noise today, it should be clear that storage and networking are going through their own architectural evolutions as well. But another shift is underway too: applications themselves are fundamentally changing.

An interesting dynamic in all of this is that it is nearly impossible for each of the four major IT areas to undergo simultaneous, coordinated evolution. Change is hard enough on its own, but changing multiple variables at once makes it difficult to anchor to anything substantial. And when change does occur along multiple fronts at the same time, determining the cause of newfound results is challenging at best.

Understanding that business underlies much of this change, the best the industry can collectively do is to treat some things as fixed and evolve everything else around them. And so we evolve each of the silos somewhat independently, trying hard to keep in mind that the environment into which they plug is also changing. At our best, we manage to intersect the various changes. When we miss, we tend to aim toward slotting into current architectural paradigms, because that gives us the best chance of making a meaningful difference in production environments. Indeed, always playing out ahead of the horizon might be great for making future progress, but it makes building a business around the resulting innovation nigh impossible.

Moving forward

And so IT as a whole continues to trudge forward.

Each of the major infrastructure silos—compute, storage, and networking—has to operate with some fixed environment in mind. The most basic thing to attach to is the application infrastructure. While virtualization has created compute containers that make applications portable (the reason words like “dynamic” and “change” are so prominent in most marketing materials), the applications themselves have remained largely static.

Except, of course, that they haven’t.

Application architectures

Anyone watching the major web-scale players (think: Facebook, Google, Twitter, and the like) will know that the architecture of their applications is significantly different from what most enterprises currently run. Applications tend to be flatter and more distributed, typically running on bare metal to avoid some of the virtualization overhead that exists in a pure hypervisor environment.
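
To make “flatter and more distributed” a little more concrete, here is a minimal scatter-gather sketch in Python: a single front-end request fans out to many peer shards and the partial results are merged. The shard names, counts, and functions are purely hypothetical stand-ins for one common scale-out pattern, not a depiction of how any particular web-scale player builds its services.

```python
# Minimal scatter-gather sketch: one front-end request fans out to many
# peer shards, then the partial results are merged. The shards and data
# here are illustrative stand-ins, not real network calls.
from concurrent.futures import ThreadPoolExecutor

SHARDS = [f"shard-{i}" for i in range(16)]  # hypothetical shard set


def query_shard(shard: str, term: str) -> list[str]:
    """Stand-in for a network call to one peer service."""
    return [f"{shard}: result for '{term}'"]


def scatter_gather(term: str) -> list[str]:
    """Fan a single request out to every shard and merge the replies."""
    with ThreadPoolExecutor(max_workers=len(SHARDS)) as pool:
        partials = list(pool.map(lambda s: query_shard(s, term), SHARDS))
    return [row for partial in partials for row in partial]


if __name__ == "__main__":
    print(len(scatter_gather("operating systems")), "partial results merged")
```

Note that every shard query is east-west traffic inside the datacenter; only the final merged answer leaves it, which is why this style of application shifts so much load onto the fabric.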

This new breed of scale-out applications would appear to mark the beginning of the application evolution. If this is truly a trend that will only increase, how will it impact the evolution of the other silos?

Software-defined everything

The prevailing chatter across the whole of IT is about software-defined everything. The view is that compute, storage, and networking will all work in cahoots to meet application requirements. The rise of controller-based architectures is prominent in both networking and storage roadmaps, and the compute side of the house embraced central control a while back.

But what happens if the underlying assumption that applications emerge largely unscathed turns out not to be true?

It could be that the future of datacenter architectures will hinge not on the supporting infrastructure but on the applications themselves. If so, we could see the re-emergence of something we haven’t really talked about in quite a while: the operating system itself. Sure, there is still talk about Linux and all the server tools that come with it, but the actual operating system hasn’t materially changed in quite some time.

Learning from web-scale applications

If we learn anything from the web-scale companies pushing the boundaries for application performance, it should be that the future is not necessarily about the containers in which applications run. It could be about the underlying OS itself. What if the reason massively scaled companies are embracing bare metal isn’t only about the cost? There is certainly a performance aspect to it as well.

One somewhat uncomfortable conclusion here would be that all the work involved in handling application portability across a containerized infrastructure could prove transient. I don’t mean to suggest that it is not useful; the transition to any new application architecture will be relatively long. And even then, the persistence of mainframes should tell us that no change is absolute or all-encompassing. But a scenario where pockets of new-era applications coexist in datacenters alongside legacy applications seems likely. We are already seeing this with Hadoop, and I would expect to see more applications built on new architectures.

But this does mean that future-proofing datacenter investments requires a bit more nuance than simply buying for scale. Highly distributed application architectures depend even more heavily on east-west traffic. In some current applications, for every byte of traffic going in and out of the datacenter, close to a gigabyte transits the datacenter fabric. That ratio will likely get even more aggressive over time.
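
To put that ratio in back-of-the-envelope terms, the short sketch below simply multiplies a north-south traffic figure by the roughly one-byte-to-one-gigabyte ratio cited above. The constant and the sample numbers are assumptions for illustration, not measurements.

```python
# Back-of-the-envelope estimate of east-west (fabric) traffic implied by a
# given amount of north-south (in/out) traffic, using the ~1 byte : ~1 GB
# ratio quoted in the text. The ratio and sample input are assumptions.

EAST_WEST_BYTES_PER_NORTH_SOUTH_BYTE = 1e9  # ~1 gigabyte per byte in/out


def estimate_fabric_traffic(north_south_bytes: float,
                            ratio: float = EAST_WEST_BYTES_PER_NORTH_SOUTH_BYTE) -> float:
    """Return the estimated number of bytes crossing the datacenter fabric."""
    return north_south_bytes * ratio


if __name__ == "__main__":
    ingress_egress = 10e6  # e.g. 10 MB of user-facing traffic
    fabric = estimate_fabric_traffic(ingress_egress)
    print(f"{ingress_egress:.0f} bytes north-south -> ~{fabric:.2e} bytes east-west")
```

Under that assumption, even a modest amount of user-facing traffic implies petabyte-scale movement across the fabric, which is why fabric capacity, not just server count, becomes the planning variable.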

The bottom line

How all of this plays out is anyone’s guess. We will certainly end up with a hybrid environment supporting all kinds of application architectures on top of various underlying infrastructure architectures. But none of us should be surprised when the industry starts talking a bit more broadly about the role of the operating system going forward. And if you are making plans based on a set of assumptions anchored to current architectures, it might be worth widening the strategic aperture to consider how this shift affects them.

And as a final thought, if the operating system changes, is it still defined by the server, or do we end up with large, distributed operating systems? Put differently, what is the definition of the underlying platform? Are we looking at a new era of platform that includes all of compute, storage, networking, and applications? The implications would be dramatic.

[Today’s fun fact: Potatoes have more chromosomes than humans. I wonder if that means Mr. Potato Head has the combined chromosomes or just a subset of a potato’s.]


More Stories By Michael Bushong

The best marketing efforts pair deep technology understanding with a highly approachable way of communicating it. Plexxi's Vice President of Marketing Michael Bushong has acquired these skills having spent 12 years at Juniper Networks, where he led the product management, product strategy, and product marketing organizations for Juniper's flagship operating system, Junos. Michael spent his last several years at Juniper leading its SDN efforts across both service provider and enterprise markets. Prior to Juniper, Michael spent time at database supplier Sybase and at ASIC design tool companies Synopsys and Magma Design Automation. Michael's undergraduate work at the University of California, Berkeley in advanced fluid mechanics and heat transfer lends new meaning to the marketing phrase "This isn't rocket science."
