Containerization and container-orchestration systems are the most sought-after technologies in cloud computing today; so much so that the two have become the buzzwords of cloud computing. Yet despite the idea of containerization being fairly old, the first marketable products did not appear until the start of the 21st century.
Technically, the history of virtualization goes as far back as 1979. That was when the chroot system call was introduced in Unix Version 7. It could be used to change the root directory of a process, and hence was the first step towards process isolation. Arguably the first practical implementation of containerization, however, was FreeBSD Jails, released in 2000.
Jails let users partition their system into smaller subsystems called ‘jails’. After that, a number of virtualization tools rose to the surface, including Linux VServer, Solaris Containers, OpenVZ (Open Virtuozzo), and Process Containers. But arguably the most complete and scalable implementation was Linux Containers (LXC).
Quoting Wikipedia, “Docker is a set of platform as a service (PaaS) products that use OS-level virtualization”. What that means for you is that it is something like a virtual machine that puts far less burden on your machine. Everybody has one of those days when everything seems to run fine on your end, but things come to a screeching halt when the program is run on the client’s machine, or worse, it crashes altogether. No matter how optimistic you are, such situations are hard to view in a positive light.
What Docker does to save you from such embarrassing circumstances is capture a snapshot of the state of your system, called an image. That image can later be used to recreate the exact same environment on another machine, possibly the client’s machine, as sketched below.
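As a minimal sketch of that workflow (the Dockerfile contents, the app.py script, and the myapp image name are all hypothetical, purely for illustration):

```sh
# Describe the environment in a minimal, hypothetical Dockerfile
cat > Dockerfile <<'EOF'
FROM python:3.11-slim
COPY app.py /app/app.py
CMD ["python", "/app/app.py"]
EOF

# Build the snapshot (an image) and run it; the same image behaves
# identically on any machine that has Docker installed
docker build -t myapp:1.0 .
docker run --rm myapp:1.0
```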
Docker was first introduced by Solomon Hykes and Sebastien Pahl around 2010, inside the Y Combinator-backed startup dotCloud, whose PaaS business was later acquired by the Germany-based company cloudControl. Originally, dotCloud used Docker internally in its platform-as-a-service business. Docker was unveiled to the public at PyCon 2013 in Santa Clara and was soon open-sourced, with the code made publicly available.
Earlier versions of Docker used LXC at their core, but about a year after release, starting from version 0.9, Docker switched to its own library written in Go, libcontainer. Moby, a framework for assembling specialized container systems, was created by Docker in 2017 to support open research and collaboration.
Docker is used for an array of different tasks, including portable app deployment, server consolidation, and code-pipeline management. Some of the reasons behind its popularity include automatic image creation, a built-in version-tracking system for images, and consistency across different environments.
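That version tracking and portability largely boil down to tagging and shipping images. A rough sketch, assuming the myapp:1.0 image from above and a hypothetical private registry:

```sh
# Give the image a registry-qualified, versioned tag and push it
docker tag myapp:1.0 registry.example.com/myapp:1.0
docker push registry.example.com/myapp:1.0

# On the client's machine, pull and run the exact same environment
docker pull registry.example.com/myapp:1.0
docker run --rm registry.example.com/myapp:1.0
```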
Docker needs a fair amount of computing resources to function properly and effectively: the recommendation is 4 GB of RAM for worker nodes and 8 GB of RAM for manager nodes.
Although it is known to survive even in environments with as little as 512 MB of memory, reliability takes a big hit. It is also recommended to have at least 3 GB of disk space and 4 GB of swap. On the software side, it luckily supports almost all of the latest operating systems.
Website
Docker is well documented and actively maintained. Head over to Docker’s official website to learn more.
Kubernetes (often stylized as K8s) is an open-source container-orchestration system for automating application deployment and management. It provides a fault-tolerant environment for containers that can be used to host and run services such as NGINX and Jenkins.
It is heavily used in production environments to add fault tolerance on top of containerization tools like Docker. Being an open-source project, Kubernetes is supported by most major cloud platforms, including Amazon AWS, Microsoft Azure, and Google Cloud.
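To give a feel for what orchestration looks like in practice, here is a minimal sketch, assuming you already have access to a cluster and kubectl configured; the deployment name is arbitrary:

```sh
# Ask Kubernetes to run NGINX; the cluster decides which node hosts the pod
kubectl create deployment nginx --image=nginx

# Expose it inside the cluster on port 80 and confirm the pod is running
kubectl expose deployment nginx --port=80
kubectl get pods
```

If the node running the pod goes down, Kubernetes reschedules it elsewhere, which is exactly the fault tolerance described above.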
The popularity of Kubernetes went through the roof over time. With more than four major releases in 2017, it quite unsurprisingly became the most discussed and the second most reviewed project on GitHub.
Founded by Joe Beda, Brendan Burns, and Craig McLuckie, the Kubernetes project was announced in mid-2014. They were soon joined by fellow Google engineers Brian Grant and Tim Hockin. The project was heavily influenced by Google’s earlier Borg system, and Kubernetes was later donated to the Cloud Native Computing Foundation (CNCF).
Borg is the cluster management system Google had built internally. Before the rise of tools like Docker and Kubernetes, Borg was the software Google relied on to run its workloads at scale.
Quick trivia: inside Google, the Kubernetes project was given the codename ‘Project 7’, a tribute to the ex-Borg character Seven of Nine from Star Trek. If you take a closer look at the beautifully simple K8s logo, you will notice that there are exactly seven spokes on the wheel. As Grand Master Oogway said in Kung Fu Panda, “There are no accidents”.
Borg was originally written in C++, whereas Kubernetes is written in Go. Version 1.0 was released in July 2015. Alongside the v1.0 release, Google partnered with the Linux Foundation to form the Cloud Native Computing Foundation (CNCF). The CNCF was big news back then, and today it is backed by more than 400 members. Founding members other than Google included Red Hat, VMware, Intel, IBM, Twitter, Huawei, and Docker. Kubernetes was later donated to the CNCF as a seed technology.
Kubernetes has gained tremendous popularity over the years. It can be used to run stateless apps and is easy to integrate into a CI/CD workflow.
Kubernetes has a fairly moderate requirements sheet. It supports almost all the popular operating systems, including Ubuntu, Debian, CentOS, Fedora, RHEL, HypriotOS, and Container Linux. A master node requires at least 2 GB of RAM and a worker node at least 1 GB. To work effectively, a master node and a worker node need a minimum of 4 and 1 vCPUs respectively.
More on how Kubernetes works and how to use it can be found on its official website and in its GitHub repository.
| | Docker | Kubernetes |
| --- | --- | --- |
| Utility | Standalone containerization software that can be installed on any computer to run containerized applications | Open-source container-orchestration system |
| Maintaining authority | Docker, Inc. | Earlier Google, now CNCF |
| Project type | Open-source | Open-source |
| Installation | Relatively harder and takes more time | Relatively easier |
| Fault tolerance | Low | High |
| User base | Evernote, 9GAG, Intuit, Buffer | Twitter, Spotify, Pinterest, eBay |
Strictly speaking, the question “Docker OR Kubernetes?” is misleading. The better answer is Docker AND Kubernetes. They are meant for different use cases: Docker is intended to run containers on a single node, while Kubernetes is built to run them across a cluster of nodes, and it is far more extensive than Docker’s own simple orchestrator, Docker Swarm.
Integrating the two technologies achieves what neither can alone. Together they make the infrastructure more robust: the application keeps running even when some of the nodes stop working. This also helps in scaling out your application; if you need to handle more load, simply add more containers to keep providing a good customer experience, as sketched below.
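As a sketch, scaling the hypothetical nginx Deployment from the earlier example is a single command; Kubernetes spreads the replicas across the available nodes and replaces any that fail:

```sh
# Run five copies of the container instead of one to absorb more load
kubectl scale deployment nginx --replicas=5

# Verify that the replicas are spread across the cluster's worker nodes
kubectl get pods -o wide
```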