
What are containers, how they relate to Kubernetes, and why this matters to OpenStack

Containers and Kubernetes are hotter than hot because they let developers focus on their applications, without worrying about the underlying infrastructure that delivers them. And while OpenStack didn’t replace AWS, it clearly is a success story in the open infrastructure space. Here’s what you need to know about them, and why they matter to each other.

What’s up with containers?

If you’ve been in IT for a long time, you may have been hearing about containers since the early 2000s. However, the concept really began gaining traction around 2014 with the release of Docker 1.0 – a buzz that has since become a roar.

In a nutshell, containers are a technology that lets developers quickly create ready-to-run, self-contained applications, broken down into components that can be deployed, tested and updated independently of each other. Containers also let developers create a fully functional development environment that is isolated from other application or system components.

To better understand the essence of this “new” technology I’ve found it helpful to compare it to Virtual Machines, so bear with me.

While a hypervisor virtualizes an entire machine, so that every VM carries its own full guest operating system, containers share the host operating system’s kernel and isolate only the application and its dependencies. Because of this, containers consume far fewer resources and are much more cost effective – one of the major differences between containers and VMs.

You may have heard that creating and running containers was possible well before Docker appeared – and it’s true, but it required tons of manual work and was nothing short of a nightmare. The beauty of Docker is that it made containerization easy, so it can all happen with a few commands – hence the big roar around containers.
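To make the “few commands” point concrete, here is a minimal sketch (the `my-app` image name is purely illustrative, and a running Docker daemon plus a Dockerfile in the current directory are assumed):

```shell
# Pull a small public image and start an isolated shell inside it.
docker pull alpine:3.19
docker run --rm -it alpine:3.19 sh

# Package your own application as an image, then run it as a container.
docker build -t my-app:1.0 .
docker run --rm -d -p 8080:8080 my-app:1.0
```

That really is the whole workflow for a simple service – no hypervisor, no guest OS install.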

Benefits of using containers

  1. Ease-of-use: Containers let developers, systems admins, architects and practically anyone package applications on their laptop and run them unmodified on any public cloud, private cloud, or even bare metal. This accelerates the DevOps lifecycle, enables the super-fast deployment of new services anywhere, and ultimately makes life easier for all involved.
  2. Speed and efficiency: Since containers are isolated processes sharing the host kernel, they are very lightweight and take up fewer resources. A developer can create and run a new Docker container in seconds, while creating and running a VM takes longer because it must boot a full guest operating system every time.
  3. Modularity and scalability: Last, but not least, containers make it easy to break down an application’s functionality into individual components. For example, a developer might want the MongoDB database running in one container, the RabbitMQ server in another, and the Ruby app in a third. Docker connects these containers over a shared network to form the application, making it easy to scale or update components independently in the future.
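A setup like the one above can be sketched with Docker’s user-defined networks (the modern successor to legacy container links); all container names and image tags here are illustrative:

```shell
# One private network lets containers discover each other by name.
docker network create myapp-net

# Each component runs in its own container and can be scaled or
# updated independently of the others.
docker run -d --name db  --network myapp-net mongo:7
docker run -d --name mq  --network myapp-net rabbitmq:3
docker run -d --name web --network myapp-net -p 3000:3000 my-ruby-app:1.0
```

Inside the network, the Ruby app simply reaches its database at the hostname `db` and its message broker at `mq`.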

It’s no wonder that everyone is rushing to adopt Docker as fast as possible. But however useful containers may be, without a proper management system their benefits will not be entirely realized.

Welcome to Kubernetes.

What’s up with Kubernetes?

Originally created by Google, Kubernetes 1.0 was released in 2015. Shortly thereafter Google partnered with the Linux Foundation to create the Cloud Native Computing Foundation (CNCF), and donated Kubernetes as a seed technology to the organization. The CNCF’s primary purpose is to advance cloud-native computing, with container technology at its core.

Kubernetes, aka K8s, is an open-source cluster manager software for deploying, running and managing Docker containers at scale. It lets developers focus on their applications, and not worry about the underlying infrastructure that delivers them. And the beauty of it: Kubernetes can run on a multitude of cloud providers, such as AWS, GCE and Azure, on top of the Apache Mesos framework and even locally on Vagrant (VirtualBox).

But what’s the point of Kubernetes?

To better understand the essence of a cluster manager software, imagine you have an important business application running on multiple nodes with hundreds of containers. In a world without Kubernetes, you’d need to manually update hundreds of containers every time your team releases a new application feature. Doing it manually takes a lot of time, is error prone, and errors are bad for your business.

Kubernetes is designed to automate deploying, scaling, and operating application containers. It groups an application’s closely related containers into functional units called “pods” for easy management and discovery. On top of the pod infrastructure, Kubernetes provides another layer that handles scheduling and service management for containers.

How does a container know which computer to run on? Kubernetes checks with the scheduler. What if a container crashes? Kubernetes creates a new one. Whenever you need to roll out a new version of your app, Kubernetes has you covered. It automates and simplifies your daily business with containers.
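As a rough sketch of what that looks like in practice – a hypothetical Deployment named `web`, with illustrative image tags – you declare the desired state once and Kubernetes keeps it true:

```shell
# A Deployment describes the desired state; the Kubernetes scheduler
# picks the nodes, and the controller replaces any pod that crashes.
kubectl apply -f - <<EOF
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web
spec:
  replicas: 3
  selector:
    matchLabels: {app: web}
  template:
    metadata:
      labels: {app: web}
    spec:
      containers:
      - name: web
        image: nginx:1.25
        ports:
        - containerPort: 80
EOF

# Rolling out a new version is a single command, applied pod by pod.
kubectl set image deployment/web web=nginx:1.26
kubectl rollout status deployment/web
```

Kill one of the three pods and Kubernetes starts a replacement; no human updates containers by hand.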

Benefits of using Kubernetes

  1. It’s portable: The philosophy of cloud-native application development can be summarized in one word: “portability”. As the flagship project of the CNCF, Kubernetes makes portability its central concept: it eliminates infrastructure lock-in and gives developers complete flexibility to run Kubernetes on any infrastructure, in any cloud they want.
  2. It’s extensible: Because it’s designed to be extensible, Kubernetes offers freedom of choice in operating systems, container runtimes, storage engines, processor architectures, and cloud platforms. It also lets developers integrate their own applications with the Kubernetes API and build new features on top of the Kubernetes tooling.
  3. It’s self-healing: Kubernetes continuously performs repairs, guarding your containerized application against any failures that might affect reliability. Thus, it reduces the burden on operators and improves the overall reliability of the system. It also improves developer velocity because the time and energy a developer might otherwise have spent on troubleshooting can instead be spent on developing new features.

Hello, my name is OpenStack, and I am not easy to work with.

If you are a regular follower of this blog, you might have already read about what OpenStack is, the most common OpenStack monitoring tools, or how we approach OpenStack monitoring beyond the Elastic (ELK) Stack.

If not, here’s a short recap: OpenStack is an open-source cloud operating system used to develop private- and public-cloud environments. It consists of multiple interdependent microservices, and provides a production-ready IaaS layer for your applications and virtual machines.

Still getting dinged for its complexity, OpenStack currently has around 60 components, also referred to as “services”, six of which are core components controlling the most important aspects of the cloud: compute, networking, storage, identity, and access management. With these, the OpenStack project aims to provide an open alternative to giant cloud providers like AWS, Google Cloud, Microsoft Azure or DigitalOcean.

The reasons behind the explosive growth in OpenStack’s popularity are quite straightforward. Because it offers open-source software for companies looking to deploy their own private cloud infrastructure, it’s strong where most public cloud platforms are weak. Perhaps the biggest advantage of using OpenStack is the vendor-neutral API it offers. Its open API removes the concern of a proprietary, single vendor lock-in for companies and creates maximum flexibility in the cloud.

Since they solve similar problems, but on different layers of the stack, OpenStack and Kubernetes can be a great combination. By using them together, DevOps teams can have more freedom to create cloud-native applications than ever before.


Containers + Kubernetes + OpenStack: the platform of the future?

What we see among our customers is that, however important security and control are to them, they don’t necessarily want to use OpenStack alone. They want much more:

  • ease of deployment (expected from public cloud providers)
  • control (expected from private clouds)
  • cost efficiency (expected everywhere)
  • flexibility to choose the best place to run any given application
  • scalability
  • reliability
  • security

More often than not, companies bigger than a startup want to enjoy “hybrid” possibilities: they want to control their on-premises infrastructure, but also scale out to the public cloud when necessary. In our experience, though, it’s not always easy to fully enjoy the benefits of hybrid scenarios. Unfortunately, moving workloads between infrastructures is still a rather difficult task.

This is where Kubernetes can come in very handy. Because it powers both private and public clouds, Kubernetes users can unlock the real power of a hybrid infrastructure.

Back where we started: containers

Remember containers from the beginning of the article? Their beauty is that they let you:

  • run containerized applications on OpenStack, or
  • containerize your own OpenStack services by using Docker.

Either way, you can benefit from Kubernetes.

Benefits of running OpenStack on Kubernetes

Due to its great support for cloud-native applications, Kubernetes can make OpenStack cool again: it can enable rolling updates, versioning, and deployments of new OpenStack components and features, thus improving the overall OpenStack lifecycle management. Also, OpenStack users can benefit from self-healing infrastructure, making OpenStack more resilient to the failure of core services and individual compute nodes. Last, but not least, by running OpenStack on Kubernetes, users can also benefit from the resource efficiencies that come with a container-based infrastructure. OpenStack’s Kolla project can be of great help here: it provides production-ready containers and deployment tools for operating OpenStack clouds that are scalable, fast, and reliable.
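As a hedged sketch of the Kolla tooling, loosely following the Kolla-Ansible quickstart (installation paths, inventory files, and flags vary by release, so treat the details below as assumptions):

```shell
# Install the deployment tooling and copy the sample configuration.
pip install kolla-ansible
cp -r /usr/local/share/kolla-ansible/etc_examples/kolla /etc/kolla
cp /usr/local/share/kolla-ansible/ansible/inventory/all-in-one .

# Prepare the hosts, sanity-check the environment, then deploy
# every OpenStack service as a container.
kolla-ansible -i ./all-in-one bootstrap-servers
kolla-ansible -i ./all-in-one prechecks
kolla-ansible -i ./all-in-one deploy
```

Each OpenStack service ends up in its own container, which is exactly what makes rolling updates and per-service restarts practical.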

Benefits of running Kubernetes on OpenStack

On the other hand, by deploying K8s on top of OpenStack, Kubernetes users get access to a robust framework for deploying and managing applications. As more and more enterprises embrace the cloud-native model, they are faced with the challenge of managing hybrid architectures containing public- and private clouds, containers and virtual machines. OpenStack has never been famous for its interoperability – which might be good news for some, but bad news for most. By bringing in containers and Kubernetes, users have the freedom to choose the best cloud environment to run any given application, or part of an application, while still enjoying scalability, control, and security. Kubernetes can be deployed by using Magnum, an OpenStack API service making container orchestration engines available as first-class resources in OpenStack. This gives Kubernetes pods all the benefits of shared infrastructure.
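With Magnum in place, standing up a cluster is a two-step affair. A hedged sketch, where the template name, keypair, image, flavor, and network are all placeholders that depend on your cloud:

```shell
# Register a template describing the kind of cluster Magnum should build.
openstack coe cluster template create k8s-template \
  --image fedora-coreos-latest \
  --keypair mykey \
  --external-network public \
  --flavor m1.medium \
  --coe kubernetes

# Create a three-node Kubernetes cluster from that template.
openstack coe cluster create my-cluster \
  --cluster-template k8s-template \
  --node-count 3
```

Magnum then provisions the VMs, networks, and Kubernetes control plane as ordinary OpenStack resources.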

Wrapping it up

Enterprises today want many things, but “using a single cloud infrastructure and being locked into it eventually” is not very high on their list. Instead, they want to reap the benefits of public clouds (e.g. ease of deployment), those of private clouds (e.g. security) and, today more than ever, they want faster time-to-market. Therefore, they increasingly move towards cloud-native technologies and practices.

But however wonderful in theory, setting up and effectively using hybrid architectures is still a difficult task. Most cloud infrastructures – including OpenStack – were not designed to let workloads move easily between them.

But there is Docker, who came and made containerization easy for everyone.

And there is Kubernetes, who came and automated working with containers.

And there is OpenStack, who came and offered a vendor-neutral, secure, production-ready IaaS layer for containerized applications.

By combining the three, enterprises have the chance to fully realize the benefits of hybrid architectures, be more agile, and deliver innovation faster.

But wait, there’s more!

Check out this cool infographic to understand the most important layers of a cloud native stack, as well as the different tools and technologies to build, run and manage cloud native applications.

We also illustrate three typical cloud environments that we see our customers running on OpenShift, Azure, and Cloud Foundry.

Are you already using Docker and Kubernetes, maybe even combined with OpenStack? How do you find this combination? Share your thoughts in the comments section below, as I learn just as much from you as you do from me.

The post What are containers, how they relate to Kubernetes, and why this matters to OpenStack appeared first on Dynatrace blog – monitoring redefined.
