Three decades ago, when the HTTP protocol was being developed, no one believed it would bring such a massive revolution to the world of computer science and technology. With the development of the HTTP protocol and web frameworks, the entire world entered an era of true globalization.
However, there has not been much focus on the deployment and infrastructure side of the web ecosystem, and it has largely remained the same over the years. Developers and IT professionals have had to spend sleepless nights figuring out software installation, OS versions, patches, framework dependencies and security issues.
Journey to virtualization
Everything in application development revolves around the infrastructure. Securing state-of-the-art, enterprise-grade hardware is extremely expensive and requires a lot of planning and approvals.
Challenges industry faced with bare metal hosting
Though bare-metal deployment provides blazingly fast performance, as business applications mature they drive a significant customer base, generate a lot of data, and demand more and more resources such as computing power, memory and disk space. On the other hand, there are applications which carry out routine back-office tasks, and for those the provisioned hardware mostly remains underutilized.
In summary, below are the key challenges the industry faced:
- Highly expensive (CapEx)
- Upgrade limitations (at a certain point, hardware upscaling becomes stagnant)
- Underutilized capacity
- Isolating applications is expensive
- Unfortunate events such as fire or hardware failure result in huge losses to the business
- Provisioning and setting up hardware for the business is time consuming, often taking several days
- Needs dedicated power supply, cooling and security
Virtualization became the savior of the IT industry, not just saving businesses millions of dollars but also helping them maintain the business applications that drive growth. With hardware virtualization, businesses became more robust and predictable. Virtualization offers significant benefits over bare-metal hosting:
- VMs (virtual machines) created using virtualization software (e.g. Oracle VirtualBox) literally saved companies millions of dollars in infrastructure spending by allowing several virtual machines to run on one server.
- Application isolation becomes very easy and cost effective.
- With a multi-tenant model you can deploy several applications and thus utilize the hardware resources effectively.
- With VM snapshots, even if hardware fails, you can quickly deploy the VMs on a different server.
- Provisioning and booting up new VMs is a matter of minutes to hours rather than days.
A brief history of application deployment
Until about a decade and a half ago, when Amazon Web Services introduced cloud computing to the world, the journey of a web application from the developer's machine to the production server followed a fairly common pattern. It was mostly clunky, full of emotions, eventful, chaotic, time consuming and, most importantly, sometimes unpredictable.
When a developer sets up an application, a lot of infrastructural and external actors play an equal role in running it: CPU, memory, hard disks, networking, OS, runtimes, framework libraries, package dependencies and so on. Knowingly or unknowingly, these key dependencies are most of the time taken for granted during the development phase.
An era of cloud native applications
Modern applications today are being built with a cloud-native mindset. Cloud native is all about changing the way you think about constructing critical business systems. Cloud-native systems are designed to embrace rapid change, large scale and resilience. To remain competitive amid ever-changing customer demands and service offerings, businesses need both speed and agility to respond swiftly to complex business needs without impacting the performance and overall health of the application.
The main challenge with legacy monolith applications is that they have become extremely complex and bulky over the years. Making even a small change or adding a feature to these applications has to be done with a proper plan, assessing the impact on other parts of the application and scheduling a deployment window to promote it to the live environment.
Cloud computing brings a lot of new paradigms, patterns and practices, along with various productivity tools, to assess applications and their dependencies and make them cloud portable. It is up to each individual business to plan and start working on these aspects before they become a pressing issue.
What is Containerization and why it matters
Let’s say your business is looking forward to modernising its current legacy monolith application into a scalable, highly available distributed system by refactoring it into microservices. What this means is you’re going to break the application out into smaller modules.
Let’s assume you have broken your monolith application into 5 smaller modules. So now you have 5 new apps to develop, test and deploy. It doesn’t stop there: you will need to provision more environments to deploy and test these applications. Even with a very conservative approach, let’s do the capacity planning for this modern application.
- Integration environment (QA servers): 1x * 5 apps = 5x VMs
- UAT environment: 2x * 5 apps = 10x VMs
- PROD environment with high availability and disaster recovery (HADR): 4x * 5 apps = 20x VMs
So roughly 35 general-purpose VMs will be required for this application.
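The total above can be sanity-checked with shell arithmetic, using the per-environment multipliers straight from the list:

```shell
# 1 VM per app in QA, 2 in UAT, 4 in PROD (HADR), for 5 apps
apps=5
total=$(( (1 + 2 + 4) * apps ))
echo "$total VMs"   # prints "35 VMs"
```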
Issues with virtual machines to isolate workloads
- A virtual machine contains a full-blown OS, which takes up significant disk space.
- With a full-blown OS, it takes more time to boot and get the application up and running.
- Inequitable or poor distribution of CPU resources.
- There are a lot of companies in the market offering VM virtualization. Unfortunately, the snapshot format is not standardized; even though there are tools available to convert between formats, the process is tedious and time consuming.
- With an enterprise OS, licensing costs are significant.
- Most of the time the application doesn’t need full-blown OS resources, meaning we’re paying for the CPU allocation even when there is no need.
Just as the shipping industry uses physical containers to isolate different cargo (for example, to transport it on ships and trains), software development technologies increasingly use an approach called containerisation.
A standard package of software, known as a container, bundles an application’s code together with the related configuration files and libraries, and with the dependencies required for the app to run.
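A container image definition lists exactly these ingredients: runtime, dependencies, code and configuration. The Dockerfile below is a hypothetical sketch for a small Node.js app (the base image and file names are assumptions for illustration, not from a specific project):

```dockerfile
# Hypothetical sketch: a container image bundles the runtime,
# the library dependencies, and the app's own code and configuration.
FROM node:18-alpine
WORKDIR /app
# Install the declared library dependencies
COPY package.json ./
RUN npm install
# Add the application's code and configuration files
COPY . .
CMD ["node", "server.js"]
```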
dotCloud (the company behind Docker) was in the business of hosting apps. They were looking for an efficient way to package and deploy workloads. As a hosting business, you can offer a more competitive product if you can pack more apps onto a single server than the competition.
Consider the example below: a file-io app that reads and writes files on the file system. The trailing & means each instance runs in the background, so all three instances of the app run concurrently. Without proper isolation, the apps may overwrite each other’s files.
$ ./file-io-app &
$ ./file-io-app &
$ ./file-io-app &
So, is there any way we could let processes pretend that they are dealing with a dedicated file system? This is where dotCloud shone. The same applies to networking: even if you have a single network interface, each process could use a part of it and still pretend it has the full network interface.
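Before reaching for kernel features, the collision can be avoided crudely by convention: give each instance its own working directory so their files cannot clash. A mount namespace generalizes this trick to the entire file system. A minimal sketch (a simple echo stands in for the hypothetical file-io-app):

```shell
# Crude isolation by convention: each instance writes only inside its own
# directory, so the three concurrent copies cannot overwrite each other.
for i in 1 2 3; do
  mkdir -p "instance-$i"
  ( cd "instance-$i" && echo "data from instance $i" > output.txt ) &
done
wait
ls instance-*/output.txt
```

Namespaces achieve the same effect transparently: the process believes it owns `/`, while the kernel maps its view onto a private slice.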
Containers existed in the Linux world well before Docker came into existence. However, Docker offered a better abstraction over the low-level kernel primitives, namespaces and control groups (cgroups), making it convenient to package processes into bundles called container images.
dotCloud developed a mechanism to define how many resources a process can use and how to partition them.
So, in summary:
- Running containers doesn’t require as many resources as virtual machines, since there is no hardware to virtualise.
- Containers are processes with a restricted view of the operating system, so they have low overheads.
- All containers share the same kernel, unlike VMs where each VM has its own kernel.
- Startup time is in milliseconds.