Containerization using Docker: AI End-to-End Series (Part — 6)
- Container deployment is a popular technology that gives developers the ability to construct application environments quickly and at scale.
- Containerization is OS-based virtualization that creates multiple virtual units in the userspace, known as Containers.
- Containers share the same host kernel but are isolated from each other through private namespaces and resource control mechanisms at the OS level.
Benefits of Container Deployment
- Run more individual applications on the same amount of server hardware.
- Deliver ready-to-run applications in containers that hold all the code, libraries, and dependencies an application needs.
- Reduced cost of infrastructure operations — There are usually many containers running on a single VM (Virtual Machine).
- Solution scalability on the microservice/function level — No need to scale instances/VMs.
- Better security — Full application isolation makes it possible to run each application’s main process in a separate container.
- Instant replication of microservices via replicas and deployment sets.
- Flexible routing between services that are natively supported by containerization platforms.
- Full portability between clouds and on-premises locations.
- OS independent — Containers don’t need a full guest OS to run; only the container engine is deployed on a host OS.
- Fast deployment — New containers can be spun up and old containers terminated within the same environments.
- Lightweight — Without a full guest OS, containers are lightweight and less demanding on server resources than virtual machines.
Why Container Deployment?
- Container deployments can replace many of the tasks previously handled by IT operations.
- When a tool like Docker deploys multiple containers, it places applications in virtual containers that run on the same operating system. This provides a benefit not offered by virtual machines.
- Using a virtual machine requires running an entire guest operating system to deploy a single application.
- This is costly and slow if deploying many applications.
- If you deploy a Docker container, each container has everything needed to run the app and can be easily spun up or down for testing.
- This is how container deployment saves resources like storage, memory, and processing power and speeds up the CI/CD pipeline.
- Virtual machines (VMs) are an abstraction of physical hardware, turning one server into many. A hypervisor allows multiple VMs to run on a single machine.
- Each VM includes a full copy of an operating system, the application, necessary binaries, and libraries — taking up tens of GBs. VMs can also be slow to boot.
- Docker is an open-source platform for building, deploying, and managing containerized applications.
- It enables developers to package applications into containers — standardized executable components combining application source code with the operating system (OS) libraries and dependencies required to run that code in any environment.
Components of Docker
- Every Docker container starts with a simple text file containing instructions for how to build the Docker container image.
- A Dockerfile automates the process of Docker image creation. It’s essentially a list of command-line interface (CLI) instructions that Docker Engine runs in order to assemble the image.
- Docker images contain executable application source code as well as all the tools, libraries, and dependencies that the application code needs to run as a container.
- When you run the Docker image, it becomes one instance (or multiple instances) of the container.
- Multiple Docker images can be created from a single base image, and they’ll share the commonalities of their stack.
- Docker containers are the live, running instances of Docker images. While Docker images are read-only files, containers are live, ephemeral, executable content.
- Users can interact with them, and administrators can adjust their settings and conditions using docker commands.
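To make the image/container relationship concrete, here is a minimal illustrative Dockerfile. The base image, file name, and command are hypothetical stand-ins, not the files from this project:

```dockerfile
# Base image; every image built FROM it shares these layers.
FROM python:3.9-slim
# Working directory inside the image.
WORKDIR /app
# Copy the application code into the image.
COPY app.py .
# Default process started when a container is created from this image.
CMD ["python", "app.py"]
```

Building this file with docker build produces a read-only image; each docker run of that image produces a live, isolated container instance.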
Benefits of Docker
- Faster time to market
- Developer productivity
- Deployment velocity
- IT infrastructure reduction
- IT operational efficiency
- Faster issue resolution
Installing Docker
- For Windows, Docker Desktop can be installed using a hypervisor or with WSL 2.
- For the latter, you need to have WSL 2 installed on your system and you can refer to one of our articles for installation instructions.
- You can refer to these guidelines for docker installation in Windows and these guidelines for MacOS.
Creating Necessary Files
- To make our project ready for containerization we need to create 2 extra files named Dockerfile and docker-compose.yml.
- Docker can build images automatically by reading the instructions from a Dockerfile.
- A Dockerfile is a text document that contains all the commands a user could call on the command line to assemble an image.
- Using docker build users can create an automated build that executes several command-line instructions in succession.
- Compose is a tool for defining and running multi-container Docker applications.
- With Compose, you use a YAML file to configure your application’s services.
- Then, with a single command, you create and start all the services from your configuration.
- Using Compose is basically a three-step process:
- Define your app’s environment with a Dockerfile so it can be reproduced anywhere.
- Define the services that make up your app in docker-compose.yml so they can be run together in an isolated environment.
- Run docker-compose up, and Compose starts and runs your entire app.
- Dockerfile and Docker Compose file will look like this in our project:
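The original post shows these two files as screenshots. As a rough sketch only — assuming the app is served with uvicorn on port 8000, and with hypothetical file and service names — they might look something like this:

```dockerfile
FROM python:3.9-slim
WORKDIR /app
# Install dependencies first so this layer is cached between builds.
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt
# Copy the rest of the application code.
COPY . .
EXPOSE 8000
CMD ["uvicorn", "main:app", "--host", "0.0.0.0", "--port", "8000"]
```

```yaml
version: "3.8"
services:
  face_mask:
    build: .          # build the image from the Dockerfile in this directory
    ports:
      - "8000:8000"   # map host port 8000 to container port 8000
```

The port mapping in the Compose file mirrors the -p 8000:8000 flag used with docker run later in this article.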
Containerization using Docker
- The docker build command builds Docker images from a Dockerfile and a “context”. A build’s context is the set of files located in the specified PATH or URL.
- The build process can refer to any of the files in the context. For example, your build can use a COPY instruction to reference a file in the context.
- To create/build the Docker image, type the following:
docker build -t face_mask .
- Docker runs processes in isolated containers. A container is a process that runs on a host. The host may be local or remote.
- When an operator executes docker run, the container process that runs is isolated: it has its own file system, its own networking, and its own isolated process tree separate from the host.
- To run the created Docker image (mapping host port 8000 to container port 8000), type the following:
docker run -p 8000:8000 face_mask
Testing App on Local Host
- Now we are ready to test our app on localhost:
- Let’s see whether the prediction is working:
- Containerized applications have become a popular choice among DevOps teams and other organizations that have moved away from traditional approaches to software development.
- Container deployments also work well with continuous integration (CI) and continuous delivery (CD) processes and tools.
- In the next article of this series, we will see how to scale our containerized application using Kubernetes so that it is ready for continuous integration and continuous delivery.
Follow us for more upcoming future articles related to Data Science, Machine Learning, and Artificial Intelligence.
Also, do give us a Clap👏 if you find this article useful, as your encouragement catalyzes inspiration and helps us create more cool stuff like this.