By Shirish Nadkarni
Cloud containers remain a hot topic in the IT world, especially where cloud security is concerned. The world's top technology companies, including Microsoft, Google and Facebook, all use them. Google, for example, has said that everything it runs is in containers, and that it launches several billion containers each week.
Containers have seen increased use in production environments over the past decade. They continue the modular approach of DevOps (development operations), enabling developers to adjust individual features without affecting the entire application. Containers promise a streamlined, easy-to-deploy and secure method of implementing specific infrastructure requirements, and they offer a lightweight alternative to VMs (virtual machines).
Containers rely on virtual isolation to deploy and run applications that access a shared OS (operating system) kernel without the need for VMs. Containers hold all the necessary components, such as files, libraries and environment variables, to run desired software without worrying about platform compatibility. The host OS constrains the container's access to physical resources so a single container cannot consume all of a host's physical resources.
The key thing to recognise with cloud containers is that they are designed to virtualise a single application. Containers create an isolation boundary at the application level rather than at the server level. This isolation means that, if anything goes wrong in that single container — for example, excessive consumption of resources by a process — it only affects that individual container and not the whole VM or whole server.
Major cloud vendors offer containers-as-a-service products, including Amazon Elastic Container Service, AWS Fargate, Google Kubernetes Engine, Microsoft Azure Container Instances, Azure Kubernetes Service, IBM Cloud Kubernetes Service and, last but not least, Alibaba Cloud's container services. Containers can also be deployed on public or private cloud infrastructure without the use of dedicated products from a cloud vendor.
Kubernetes — a portable, extensible, open-source platform
That brings us to the question: What exactly is Kubernetes?
The name Kubernetes originates from Greek, meaning helmsman or pilot. K8s as an abbreviation results from counting the eight letters between the "K" and the "s". Google open-sourced the Kubernetes project in 2014. Kubernetes combines over 15 years of Google’s experience running production workloads at scale with best-of-breed ideas and practices from the community.
One of the strongest proponents of Kubernetes is Red Hat, which was acquired by IBM in July 2019 for almost US$34 billion. By incorporating Red Hat's open-source software into its enterprise IT software and hardware, IBM quickly became a more prominent cloud service provider, offering an easier and safer way to work in the cloud.
Put succinctly, Kubernetes is a portable, extensible, open-source platform for managing containerised workloads and services, one that facilitates both declarative configuration and automation. Kubernetes services, support and tools are widely available. It has a large, rapidly growing ecosystem, and has become critical for broadcasters and anyone else who wishes to work in the cloud.
In the early days, organisations ran applications on physical servers. There was no way to define resource boundaries for applications on a physical server, and this caused resource allocation issues. For example, if multiple applications run on one physical server, one application can take up most of the resources, causing the others to underperform.
One solution would be to run each application on a different physical server. But this did not scale: resources were underutilised, and it was expensive for organisations to maintain many physical servers. Virtualisation was therefore introduced, allowing multiple VMs to run on a single physical server's CPU.
Virtualisation allows applications to be isolated between VMs and provides a level of security as the information of one application cannot be freely accessed by another application.
Virtualisation allows better utilisation of resources than a physical server and improves scalability, since an application can be added or updated easily; it also reduces hardware costs, among other benefits. With virtualisation you can present a set of physical resources as a cluster of disposable virtual machines.
Containers are similar to VMs, but they have relaxed isolation properties to share the OS among the applications. Therefore, containers are considered lightweight. Similar to a VM, a container has its own file system, share of CPU, memory, process space, and more. As they are decoupled from the underlying infrastructure, they are portable across clouds and OS distributions.
Why you need Kubernetes and what it can do
Containers are a good way to bundle and run your applications. In a production environment, you need to manage the containers that run the applications and ensure that there is no downtime. For example, if a container goes down, another container needs to be started. Wouldn't it be easier if this behaviour was handled by a system?
That's how Kubernetes comes to the rescue! Kubernetes provides you with a framework to run distributed systems resiliently. It takes care of scaling and failover for your application, provides deployment patterns, and more. For example, Kubernetes can easily manage a canary deployment for your system.
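To give a flavour of how this works, here is a minimal sketch of a Kubernetes Deployment manifest; the name, labels and container image are illustrative assumptions, not taken from the article. It declares a desired state of three replicas, and Kubernetes keeps three containers running, restarting or rescheduling them as needed:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web              # illustrative name
spec:
  replicas: 3            # desired state: keep three Pods running
  selector:
    matchLabels:
      app: web
  template:
    metadata:
      labels:
        app: web
    spec:
      containers:
      - name: web
        image: nginx:1.25   # illustrative container image
        ports:
        - containerPort: 80
```

Applied with `kubectl apply -f`, this manifest tells the cluster what should exist; if a Pod crashes or a node fails, Kubernetes starts a replacement automatically. A canary deployment is typically done by running a second, smaller Deployment with a newer image alongside this one.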
It must be emphasised that Kubernetes is not a traditional, all-inclusive PaaS (Platform as a Service) system. Since Kubernetes operates at the container level rather than at the hardware level, it provides some generally applicable features common to PaaS offerings, such as deployment, scaling, load balancing, and lets users integrate their logging, monitoring, and alerting solutions.
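Load balancing, one of the PaaS-like features mentioned above, is expressed in the same declarative way. As a sketch, a minimal Service manifest (the name and label are illustrative assumptions) that spreads traffic across all Pods labelled `app: web` might look like this:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: web-svc        # illustrative name
spec:
  selector:
    app: web           # route traffic to any Pod carrying this label
  ports:
  - port: 80           # port the Service exposes inside the cluster
    targetPort: 80     # port the containers listen on
```

The Service gives the matching Pods a single, stable virtual address and balances requests across them, regardless of which nodes they happen to be running on.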
However, Kubernetes is not monolithic, and these default solutions are optional and pluggable. Kubernetes provides the building blocks for building developer platforms, but preserves user choice and flexibility where it is important.
Kubernetes does not limit the types of applications supported. It does not deploy source code and does not build your application, and it does not provide application-level services, such as middleware, data-processing frameworks, databases, caches or cluster storage systems, as built-in services.
Nor does it dictate logging, monitoring, or alerting solutions. Kubernetes does not provide nor mandate a configuration language/system; it provides a declarative API that may be targeted by arbitrary forms of declarative specifications. It also does not provide or adopt any comprehensive machine configuration, maintenance, management or self-healing systems.
Additionally, Kubernetes is not a mere orchestration system. In fact, it almost eliminates the need for orchestration. Kubernetes comprises a set of independent, composable control processes that continuously drive the current state towards the provided desired state. Centralised control is also not required. This results in a system that is easier to use and more powerful, robust, resilient and extensible.
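The control-loop idea can be illustrated with a short sketch. This is not Kubernetes code, just a toy replica controller in Python with invented names: each pass compares the observed state with the desired state and starts or stops containers until the two converge.

```python
def reconcile(desired_replicas: int, running: list) -> list:
    """One pass of a toy replica controller (illustration only).

    `running` is the list of container names currently observed.
    The loop starts or stops containers until the observed count
    matches the desired state, then returns the new observed state.
    """
    running = list(running)  # work on a copy of the observed state
    while len(running) < desired_replicas:
        # A real controller would schedule a new Pod here.
        running.append(f"web-{len(running)}")
    while len(running) > desired_replicas:
        # Scale down: remove surplus containers.
        running.pop()
    return running


if __name__ == "__main__":
    state = ["web-0"]            # only one container survived a failure
    state = reconcile(3, state)  # desired state: three replicas
    print(state)                 # → ['web-0', 'web-1', 'web-2']
```

In Kubernetes, many such independent loops (for replicas, nodes, endpoints and so on) run continuously, which is why no central orchestrator or predefined workflow is needed.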
Go native — training & certification
Kubernetes is at the core of the cloud native movement. Training and certifications from the Linux Foundation and its training partners let you learn Kubernetes and help your cloud native projects run more successfully.
Kubernetes is built to be used anywhere, allowing you to run your applications across on-site deployments and public clouds, as well as hybrid deployments, so you can run your applications where you need them. Google App Engine, for instance, provides many benefits as a managed service, but its cost is high compared with Google Kubernetes Engine.
Alibaba Cloud, a certified Kubernetes platform, is a latecomer on the scene, but it is aggressively making up for lost time. On top of promoting its affordable, easy-to-deploy ECS (Elastic Compute Service), it has established 2,800 CDN nodes around the globe. In Asia-Pacific, it has set up data centres outside China, from Japan, India, Malaysia, Singapore and Indonesia to Australia.
And with 5G around the corner and online shopping attracting hordes of avid Asian shoppers day and night, elastic, automated scaling of cloud computing is becoming an important factor when picking a cloud provider. That places Alibaba Cloud, whose parent is the world's biggest online retail operator, managing millions of transactions every second during peak sales periods, in a very sweet spot among telcos offering e-commerce services to enterprises in their own countries.
Jonas Falkena, Senior Expert, Application Implementation Architectures, points out that, at Ericsson, they have created a set of design principles made especially for telecom applications, based on microservices, containers and state-optimised design.
“With our design principles as foundation we've launched the new Ericsson Cloud Native Infrastructure solution, an infrastructure fully optimised for cloud native application, which is an essential part of 5G,” says Falkena.
“The new revenue generating services in the 5G, based on network slices, impose a paradigm shift as the core network now needs to support cloud native network functions and container-based software. This is done by introducing a CNCF (Cloud Native Computing Foundation) certified CaaS (Container as a Service) platform based on Kubernetes.”
Among the problems of migrating to a cloud native network is that it is easy to get in but surprisingly difficult to get out. That problem is solved, to a large extent, by Alibaba Cloud, whose expanding range of high-performance cloud products, including its Elastic Container Instance, facilitates entering and departing the cloud.
Then there is the issue of pod and container debugging. Several leading companies like Verint, Nokia, Taboola, WhiteSource, Sisense and Tufin have been employing Lightrun in production to reduce their Kubernetes debugging time.
Leonid Blouvshtein, Co-founder & Chief Technology Officer of Lightrun, says, “We are revolutionising R&D workflows, and are leading the Continuous Debugging and Observability movement to transform the future of developing for everyone.
“We are able to easily resolve issues for monolith, microservices, Kubernetes, Docker Swarm, ECS, Big Data workers, serverless, and more.
“Our platform and unique mechanism enable developers to define which data is being collected while their app is running. Developers and R&D departments can debug in any environment, including production and staging, in a stable and secure manner.”
Bugs or no bugs, working in the cloud is no longer an option; it's a matter of survival. Just make sure your content is well protected in K8s containers when migrating to or working in the cloud.
Question: What’s your #1 concern when choosing a cloud service provider? Are the provider’s containers immunised against contagion, or properly locked down to prevent any leakage infecting your workflow and on-premises infrastructure?
If you would like to share your views or experiences using open source Kubernetes, please send them to maven@editecintl.com