Containers are what makes the cloud go round, no doubt about it. Using them, and working in the cloud, allows businesses to evolve at an ever-faster pace.
It essentially defines a new way to deliver goods through the extensive use of software. Many companies have already reaped a plethora of benefits by transforming their business to join the software economy, aka the digital big bang. I use this expression because the knock-on effects are huge. It is no exaggeration to say that software is literally everywhere. Can anybody contain this explosion (pun intended)? It remains to be seen, but it’s looking good so far.
Of course, this new market landscape is shaped around the major players: software companies we interact with every day, like Google, Facebook, Uber, Netflix and many more.
Why containers matter
What is it that all those players in the software-based economy are currently working with? Containers, in one flavor or another: from the old and battle-tested Linux containers, aka LXC, to the more recent developments Docker has brought to the industry.
For a formal definition of containers, I refer the reader to external resources; the official Docker website, for instance, is an informative and certainly authoritative starting point. For the sake of this post, I suggest we set the ground and agree that containers are a virtualization technology. As such, they act as another abstraction layer, which makes them really handy when designing and operating complex systems.
So, what is virtualized with containers like Docker?
What containers instantiate are software application components, together with everything connected to the setup and runtime environment of each instance. With this approach, we expect to have application code, execution environment and data all contained in there. Data, in this case, includes both system-side data, like configuration parameters, and user data managed through the software application.
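As a rough sketch of this packaging idea, a Dockerfile bundles application code, its execution environment and its configuration into a single recipe. The base image, file names and port below are illustrative assumptions, not details from any particular application:

```dockerfile
# Illustrative only: base image, file names and port are assumptions.
FROM python:3.12-slim            # execution environment: OS libraries + language runtime
WORKDIR /app
COPY requirements.txt .          # system-side data: dependency configuration
RUN pip install -r requirements.txt
COPY . .                         # application code
ENV APP_ENV=production           # configuration parameter baked into the image
EXPOSE 8000
CMD ["python", "app.py"]         # how every instance created from this image starts
```

Running `docker build -t myapp .` against a recipe like this produces an image that carries the code, its runtime and its configuration together, which is exactly the "all contained in there" property described above.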
Setup – install software universally
As a virtualization solution, container technology isolates the provisioned instances from one another, even when they come from the same initial source. Here we meet the concept of the Docker image: the entity used as the initial template from which instances are created. Once a Docker image is created for a software application, it provides a complete installation package for it.
The benefit here is that we avoid the significant overhead posed by the operating system files that server virtualization carries, while at the same time standardizing on specific libraries and configuration choices with great efficiency.
As a result, developers gain access to more and more software components “out of the box”, and they can easily share their own software applications with others, with all the required details and configuration options pre-cooked.
Software distribution as a process becomes much faster and more flexible. Not enough? Portability is another major benefit of containers, and one that is hard to beat.
Runtime – execute where needed
We’ve seen other attempts to make software packages portable in the past, but this time we get two for one, because the execution environment becomes portable itself. If you sit back and think about it for a few minutes, this is unique.
With Docker images kept reasonably small, developers and operators now have a unique opportunity: they can deploy many runtimes on the same operating system, with minimal interference between them over system resources (limited, in practice, to operationally relevant variables such as network ports) and with even less operational overhead.
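One way to picture several runtimes sharing a host, as a sketch: a Compose file (the image and service names here are assumptions for illustration) runs two instances of the same image side by side, with the published host port as the only point of divergence:

```yaml
# Illustrative docker-compose.yml: image name and ports are assumptions.
services:
  web-a:
    image: myapp:latest     # same image as web-b...
    ports:
      - "8080:8000"         # ...only the published host port differs
  web-b:
    image: myapp:latest
    ports:
      - "8081:8000"
```

Both instances share the host operating system kernel, yet each runs in its own isolated filesystem and process space, which is where the minimal-interference claim comes from.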
But how can I use containers for my apps?
Creating Docker images and putting them to work is a relatively easy task; most engineers on development teams have already experimented with it. Of course, at some point, operating a large number of container instances calls for some form of management for the instances themselves.
This is the problem tackled by the Docker community with their Swarm orchestration engine, and by Google with the Kubernetes (k8s) project. Such solutions are meant to manage clusters of Docker hosts from a single entry point.
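As a hedged sketch of what such orchestration looks like in practice, a minimal Kubernetes Deployment (the names, image and replica count are assumptions for illustration) declares a desired number of instances and lets the cluster place and maintain them across its hosts:

```yaml
# Illustrative Kubernetes manifest: names, image and replica count are assumptions.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: myapp
spec:
  replicas: 3                  # the cluster keeps three instances running at all times
  selector:
    matchLabels:
      app: myapp
  template:
    metadata:
      labels:
        app: myapp
    spec:
      containers:
        - name: myapp
          image: myapp:latest
          ports:
            - containerPort: 8000
```

Applying a manifest like this with `kubectl apply -f deployment.yaml` goes through a single entry point (the cluster API), which then schedules the instances across the available hosts, exactly the management model described above.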
A new de-facto standard
Based on the above, we have to recognize that containers establish their application virtualization implementation as a standard way for people working across the software production cycle to distribute their offerings.
As this standard spans both the development–release side and the runtime–operations side, software design and lifecycle management processes change to adopt continuous delivery models or microservice architectures.
Working in the cloud makes it easier for new technologies to be adopted by larger communities, which in turn drives the evolution of whole market segments. Of course, this comes at a price: that of adopting new processes that sometimes require redesigning software for cloud-native models and automation, with DevOps practices applied to all or part of those processes.