Fundamentals of Docker
Introduction
Docker is a software platform that enables developers to build, package, and run applications in lightweight, portable containers. It provides a standardized environment for developing, deploying, and managing applications across different computing environments.
What is Docker?
Before we dive into the installation and setup, let's briefly explore what Docker is and why it has become a cornerstone of modern software development and deployment.
Docker is an open-source platform designed to automate the deployment, scaling, and management of applications within lightweight, portable, and self-sufficient containers. These containers encapsulate everything an application needs to run, including code, libraries, dependencies, and runtime environments. Unlike traditional virtual machines, Docker containers share the host OS kernel, which makes them extremely efficient and fast to start.
Docker provides an environment that enables developers to build, test, deploy, and run applications using containerization technology.
The key benefits of Docker include:
- Isolation: Containers run in isolated environments, ensuring that applications don't interfere with each other.
- Consistency: Docker ensures that an application behaves the same way in development, testing, and production environments.
- Portability: Containers can run on any system that supports Docker, making it easy to move applications between different environments.
- Scalability: Docker makes it easy to scale applications by adding or removing container instances as demand changes.
Docker images and containers
Docker images are lightweight, stand-alone, executable software packages that include everything needed to run a piece of software: the code, runtime, libraries, environment variables, and configuration files. When we create a Docker container, we're adding a writable layer on top of the Docker image. We can run many Docker containers from the same Docker image, so a Docker container can be seen as a runtime instance of a Docker image.
- Docker Image: A read-only template with instructions for creating a Docker container. Images can be shared via Docker Hub or other registries.
- Docker Container: A runnable instance of a Docker image. Containers are isolated environments that run applications with their dependencies. Containers can be started, stopped, and removed without affecting the image.
Dockerfile
A Dockerfile is a text file that contains a series of instructions used to build a Docker image. Each instruction in the Dockerfile defines a specific step that Docker follows to construct the image, layer by layer. When the Dockerfile is processed, Docker creates an image based on the commands written in it.
Structure of a Dockerfile: A typical Dockerfile includes:
- Base Image: The starting point for your image.
- Instructions: Commands that copy files, install dependencies, and set up the environment.
- Final Command: The default command or entry point for the container when it's run.
Key Instructions in a Dockerfile: Let's consider a simple Python Flask application and try to understand how Docker images are created.
# Step 1: Use an official Python runtime as a base image
FROM python:3.9-slim
# Step 2: Set the working directory inside the container
WORKDIR /app
# Step 3: Copy the requirements file from the host to the container
COPY requirements.txt .
# Step 4: Install dependencies inside the container
RUN pip install --no-cache-dir -r requirements.txt
# Step 5: Copy the current directory contents to /app in the container
COPY . .
# Step 6: Expose port 5000 (used by Flask)
EXPOSE 5000
# Step 7: Define the command to run the Flask app
CMD ["python", "app.py"]
Here’s a breakdown of common instructions in a Dockerfile:
- FROM: Specifies the base image that your image will be built upon. The base image can be anything from a bare OS (like ubuntu) to a specific language runtime (like python:3.9). In the present example, FROM python:3.9-slim uses a minimal Python 3.9 environment as the base image.
- WORKDIR: Sets the working directory inside the container. All subsequent instructions and paths in the Dockerfile are relative to this directory. WORKDIR /app creates and sets the /app directory as the working directory inside the container.
- COPY: Copies files from your host machine into the container. The source file or directory on the host is specified first, followed by the destination path inside the container. In our case, COPY requirements.txt . copies requirements.txt from the local machine into the current working directory /app in the container, and the second COPY . . copies all the files from the current directory on the host into /app.
- RUN: Executes a command in the shell during the build process; it is commonly used to install dependencies or set up the environment. RUN pip install --no-cache-dir -r requirements.txt installs the Python packages listed in requirements.txt using pip.
- EXPOSE: Informs Docker that the container listens on a specific network port at runtime. It doesn't actually publish the port; it serves as documentation for which ports should be exposed. EXPOSE 5000 declares that the container will listen on port 5000 (used by Flask in this example).
- CMD: Specifies the command to run when the container starts. Unlike RUN, which is executed during the build, CMD is executed when the container is started. If multiple CMD instructions are provided, only the last one is used. CMD ["python", "app.py"] runs the python app.py command when the container starts, which launches the Flask application.
- ENTRYPOINT (optional): Similar to CMD, it defines the main command to be executed when the container starts. Unlike CMD, arguments passed to docker run are appended to it rather than replacing it (it can only be changed explicitly with the --entrypoint flag). It is often combined with CMD, which then supplies default arguments.
docker build -t my-python-app .
This command reads the instructions in the Dockerfile and creates an image named my-python-app. So, a Dockerfile is essentially a recipe that tells Docker how to assemble an image. It's written in a simple format that lets you define the base environment, install dependencies, copy files, and define what happens when the container is run.
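Once the image is built, you can run it and even override its default CMD at start time, which illustrates the CMD behaviour described above. A minimal sketch using standard Docker CLI commands (the container name here is just an illustrative choice):
$ docker run -d -p 5000:5000 --name my-flask-container my-python-app   # start a container in the background, publishing port 5000
$ docker ps                                                            # list running containers
$ docker run --rm my-python-app python --version                       # the trailing command overrides the default CMD for a one-off run
$ docker stop my-flask-container                                       # stop the running container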
docker-compose.yml: The docker-compose.yml file is used with Docker Compose, a tool that helps define and manage multi-container Docker applications. Instead of managing each container separately, docker-compose.yml allows us to describe how our entire application, comprising multiple containers, is built, configured, and connected in a single file.
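To illustrate the single-command workflow, these are the typical Docker Compose commands (a minimal sketch, assuming a docker-compose.yml file exists in the current directory; a complete example file appears later in this article):
$ docker-compose up -d     # create and start all services defined in docker-compose.yml, in the background
$ docker-compose ps        # show the status of the services
$ docker-compose down      # stop and remove the containers and the default network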
Virtual Machines Overview
Virtual machines (VMs) are software-based emulations of physical computers. They allow the simultaneous operation of multiple operating systems on a single physical machine, optimizing hardware resources.
Key Points:
- VMs are managed by a hypervisor, facilitating the creation and control of virtualized environments.
- Isolation is ensured, as each VM operates independently with its own virtualized hardware resources.
- VMs support various guest operating systems, enabling diverse software environments on the same hardware.
- Resource allocation by the hypervisor optimizes hardware use, allowing multiple VMs on a single server.
Virtual machines find applications in data centers for server consolidation, testing, development, and more. They offer flexibility, allowing easy migration between physical servers and efficient resource management.
Popular virtualization platforms include VMware, Microsoft Hyper-V, and open-source solutions like KVM.
Containerization:
- Concept: Containerization is a lightweight form of virtualization that allows you to package and run applications and their dependencies in isolated units called containers.
- Key Points:
- Containers encapsulate an application and its dependencies, ensuring consistency across different environments
- They provide a standardized and efficient way to deploy, distribute, and manage applications.
In summary, the hypervisor is the layer that enables the creation and management of virtual machines on a physical host. VMs are the instances of virtualized operating systems and applications that run on top of the hypervisor. The hypervisor provides the abstraction and control necessary to efficiently share and allocate resources among multiple VMs.
Difference between Virtual Machines and Docker
Aspect | Docker Containers | Virtual Machines |
---|---|---|
Resource Utilization | Containers have faster startup times and consume fewer resources. | VMs require more resources due to the overhead of running multiple OS instances. |
Isolation | Containers share the OS kernel but are isolated from each other. | VMs provide stronger isolation as they run separate OS instances. |
OS Dependency | Containers share the host OS kernel, making them more lightweight than VMs. | VMs include a full OS for each instance, leading to higher resource overhead. |
Portability | Containers ensure consistent behavior across different environments, promoting portability. | VMs may face compatibility issues when moved between different hypervisors. |
Common Use Cases of Docker:
- Microservices Architectures: Docker is widely used for implementing microservices architectures, where applications are broken down into smaller, independent services. It is commonly used to deploy microservices in cloud environments such as AWS and Azure.
- Continuous Integration and Continuous Delivery (CI/CD): Docker is integrated into CI/CD pipelines to automate application builds, deployments, and testing.
- Web Application Development: Docker is a popular choice for developing and deploying web applications, providing a consistent and portable environment.
- Legacy Application Modernization: Docker can be used to modernize legacy applications by packaging them into containers and running them in a modern environment.
- DevOps Practices: Docker plays a crucial role in DevOps practices, enabling rapid application development, testing, and deployment cycles.
Learn Docker: A Step-by-Step Guide
- Start by visiting the Docker Desktop download page to acquire the latest version compatible with your operating system, whether it's Windows, Linux, or macOS.
- Download the installer provided on the Docker Desktop download page and execute it. Follow the on-screen instructions to complete the installation process. It's a straightforward process, and the installer will guide you through the necessary steps.
- After the installation is complete, you can find Docker Desktop by searching for "docker" in your system's search bar. Launch Docker Desktop, and it will run on your local machine (a quick way to verify the installation is shown just after this list).
- To fully utilize Docker's capabilities, consider creating an account on Docker Hub. This step is optional but recommended, as it allows you to manage and view the repositories associated with your Docker images on Docker Hub.
- Finally, create a project, build an image from it, and push the image to a Docker repository, as shown in the walkthrough below.
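Before building your own images, it is worth confirming that the installation works. A minimal sanity check, assuming Docker Desktop (or the Docker Engine) is running, might look like this:
$ docker --version          # print the installed Docker version
$ docker run hello-world    # pull and run Docker's official test image
If the hello-world container prints its welcome message, the Docker CLI, the daemon, and access to Docker Hub are all working.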
Simple Flask application
You can find the Docker Hub repository at: https://hub.docker.com/r/arunp77/welcome-app.
- Step-1: Create a simple Flask application. For example, create a file named app.py with the following content:
## flask app for hello world
from flask import Flask
import os

app = Flask(__name__)

@app.route('/', methods=['GET'])
def home():
    return "Hello world! This is arun."

if __name__ == "__main__":
    app.run(debug=True, host="0.0.0.0", port="5000")
Here the decorator @app.route is used to decorate the home() function, associating it with the '/' route and specifying that it responds to HTTP GET requests. The if __name__ == "__main__" check ensures that the app is only run when the script is executed directly, not when it is imported as a module. The call app.run(debug=True, host="0.0.0.0", port="5000") runs the Flask app: debug=True enables debugging mode, host="0.0.0.0" makes the app accessible from external devices, and port="5000" sets the port to 5000.
- Step-2: Create a file named requirements.txt with the following content:
Flask == 2.0.1
This file specifies the dependencies for your Flask app.
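Before containerizing the app, you can optionally run it directly on your machine to confirm it works. A minimal check, assuming Python and pip are installed locally:
$ pip install -r requirements.txt   # install Flask locally
$ python app.py                     # start the development server on http://localhost:5000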
- Step-3: Create a file named Dockerfile with the following content:
FROM python:3.8-alpine
COPY . /app
WORKDIR /app
RUN pip install -r requirements.txt
CMD python app.py
This Dockerfile specifies how to build your Docker image.
- Step-4: Open a terminal, navigate to the directory containing your Dockerfile, app.py, and requirements.txt, and run the following command:
$ docker build -t welcome-app .
After the build completes, the new welcome-app image will appear alongside all the other images you have created; a quick way to confirm this is shown below.
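A minimal check of the newly built image (optional; these are standard Docker CLI commands):
$ docker images                        # list all local images; welcome-app should be among them
$ docker image inspect welcome-app     # optionally inspect the image's configuration and layers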
- Step-5: After building the Docker image, you can run a container using the following command:
$ docker run -p 5000:5000 welcome-app
This command runs the Docker container and maps port 5000 from the container to port 5000 on your host machine.
- Step-6: Open a web browser and go to http://localhost:5000. You should see the message "Hello world! This is arun."
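If you prefer the command line, the same check can be done with curl (assuming curl is installed on the host):
$ curl http://localhost:5000   # should return the greeting defined in app.py: Hello world! This is arun.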
- Once you have created the Docker image, you should push it to Docker Hub:
$ docker push dockerusername/welcome-app:latest
Replace dockerusername with your actual Docker Hub username. The pushed repository will then be visible in your Docker Hub account.
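Note that the push only succeeds if you are logged in and the image is tagged with your Docker Hub username. A sketch of the preparatory commands (dockerusername is a placeholder for your own account name):
$ docker login                                              # authenticate against Docker Hub
$ docker tag welcome-app dockerusername/welcome-app:latest  # retag the local image under your Docker Hub namespace
$ docker push dockerusername/welcome-app:latest             # push the tagged image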
- To stop the running container, first find its container ID with docker ps (which lists all running containers), then run:
$ docker stop container_id
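When you are done experimenting, you can also clean up the stopped container and the local image (container_id is the value reported by docker ps):
$ docker rm container_id   # remove the stopped container
$ docker rmi welcome-app   # remove the local image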
Multi-container Docker applications
- A multi-container Docker application is an application made up of multiple software components or services, each running in its own container. This allows you to package and distribute applications together with their dependencies and supporting services, ensuring consistency and ease of deployment.
- In a multi-container application, the individual containers work together to provide a complete application or service. Each container has its own environment and runtime, but they can communicate with each other through defined channels like network connections or shared volumes.
- This approach is particularly useful for applications that require multiple interconnected services to function. Instead of deploying and wiring up each service by hand, the whole application can be managed and deployed as a single unit.
- Docker Compose is often used with multi-container applications to define and manage the configuration of the individual services. It simplifies the orchestration of these services, ensuring they start, stop, and interact with each other seamlessly.
Docker Compose:
Docker Compose is a tool for defining and running multi-container Docker applications. It allows you to define a multi-container application in a single file, then spin up and manage all the containers defined in that file with a single command. This is particularly useful for complex applications where multiple services need to work together.
Flask-Redis Hello World App
Project Overview:
The Flask-Redis Hello World App is a simple web application built using Flask, a micro web framework for Python, and Redis, an in-memory data structure store. The project demonstrates how to create a basic web application that displays a "Hello World!" message and keeps track of the number of times the page has been accessed using a Redis database. More details on the project can be found in the Docker Hub repository.
Components and files:
- app.py
import time
import redis
from flask import Flask

app = Flask(__name__)
cache = redis.Redis(host='redis', port=6379)

def get_hit_count():
    retries = 5
    while True:
        try:
            # Increment the 'hits' counter in Redis and return its new value
            return cache.incr('hits')
        except redis.exceptions.ConnectionError as exc:
            # Retry a few times in case the Redis service is not yet reachable
            if retries == 0:
                raise exc
            retries -= 1
            time.sleep(0.5)

@app.route('/')
def hello():
    count = get_hit_count()
    return 'Hello World! I have been seen {} times.\n'.format(count)

if __name__ == "__main__":
    app.run(debug=True, host="0.0.0.0", port=5000)
- This is the main file of the project, containing the Flask application setup and routes.
- It uses the redis library to interact with a Redis database.
- The get_hit_count function increments a counter in the Redis database each time the page is accessed, retrying briefly if the Redis service is not yet available.
- The '/' route displays a message along with the count of page visits.
- requirements.txt:
Flask
redis==3.5.3
- Lists the Python dependencies for the project.
- Includes Flask for web development and the redis client library, pinned to version 3.5.3, for Redis interactions.
- Dockerfile:
FROM python:3.7-alpine
WORKDIR /code
ENV FLASK_APP=app.py
ENV FLASK_RUN_HOST=0.0.0.0
COPY . .
RUN pip install -r requirements.txt
EXPOSE 5000
CMD ["flask", "run"]
- Configures the Docker image for the project.
- Uses the lightweight Python 3.7 Alpine image as the base.
- Sets the working directory to /code.
- Defines environment variables for the Flask application configuration.
- Copies the project files into the image and installs dependencies using pip.
- Exposes port 5000 and sets the command to run the Flask application.
- docker-compose.yml:
version: "3.0"
services:
  web:
    build: .
    image: web-app
    ports:
      - "5000:5000"
  redis:
    image: "redis:alpine"
- Defines the services for the project using Docker Compose.
- The web service builds the Docker image from the current directory, names it "web-app", and maps port 5000 in the container to port 5000 on the host.
- The redis service uses the official Redis image from Docker Hub.
How to run the project:
- Build the Docker images:
docker-compose build
- Start the Docker Containers:
docker-compose up
Access the web application by navigating to http://localhost:5000 in a web browser.
- Stop the Docker Containers:
docker-compose down
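In day-to-day use you may prefer to run the stack in the background and inspect individual services. A short sketch using standard Docker Compose commands (the service names match the docker-compose.yml above):
$ docker-compose up -d          # start both the web and redis services in the background
$ docker-compose logs -f web    # follow the logs of the web service
$ docker-compose exec web sh    # open a shell inside the running web container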
Conclusion
The Flask-Redis Hello World App serves as a foundational project for those looking to understand the integration of Flask with Redis and containerization using Docker. It can be extended and modified based on specific requirements and serves as a starting point for more complex web applications.