Docker in the Kubernetes cluster: How it works and best practices
Docker and Kubernetes are two of the most powerful and popular technologies in containerization and container orchestration. Docker is used to containerize applications, while Kubernetes is a container orchestration platform that manages the deployment, scaling, and operation of containerized applications.
In this article, we’ll explore how Docker fits into the Kubernetes cluster, its role in containerized applications, and best practices for using Docker in a Kubernetes environment.
Understanding Docker and Kubernetes
- Docker: Docker is a platform for developing, publishing, and running applications in containers. Containers package an application and all of its dependencies, making it portable, consistent, and easy to deploy across different environments. Docker provides a way to build, deploy, and manage these containers.
- Kubernetes: Kubernetes (often abbreviated as K8s) is an open source container orchestration platform that automates the deployment, scaling, and management of containerized applications. Kubernetes manages clusters of containers across multiple machines and ensures applications run as expected in a highly available and scalable manner.
While Docker is responsible for containerizing applications, Kubernetes is responsible for efficiently running, managing, and scaling these containers across the cluster.
How Docker and Kubernetes work together
Kubernetes can be used with different container runtimes (the software that runs containers), and Docker is one of the most popular of these. Kubernetes uses the container runtime to run containers within Pods, which are the smallest deployable units in Kubernetes.
Docker and Kubernetes interact as follows:
- Docker containers in Kubernetes Pods: In Kubernetes, containers are encapsulated inside Pods, which are the smallest deployable units. A Pod can contain one or more containers that share the same network namespace, storage volumes, and resources.
Kubernetes schedules and manages Pods across nodes in the cluster. Each container within a Pod runs from its own image, which can be built using Docker.
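The relationship above can be sketched as a minimal Pod manifest; the image names and the sidecar container are hypothetical:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: my-app-pod
spec:
  containers:
  # Main application container, built from a Docker image
  - name: my-app
    image: my-app:v1
    ports:
    - containerPort: 8080
  # Optional sidecar container sharing the Pod's network namespace and volumes
  - name: log-agent
    image: busybox:1.36
    command: ["sh", "-c", "tail -f /dev/null"]
```

In practice you rarely create bare Pods; they are usually managed through a Deployment, as shown later in this article.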
- Docker images: A Docker image is a read-only blueprint for creating Docker containers. The image contains the application code and all dependencies required to run it. In Kubernetes, images are pulled from a container registry and used to start the containers in Pods.
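As a sketch of what such an image is built from, here is a minimal Dockerfile for a hypothetical Node.js service (the file names and port are illustrative):

```dockerfile
# Base image providing the runtime
FROM node:14-slim
WORKDIR /app
# Copy dependency manifests first to take advantage of layer caching
COPY package*.json ./
RUN npm install --production
# Copy the application code
COPY . .
EXPOSE 8080
CMD ["node", "server.js"]
```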
- Kubernetes scheduler: The Kubernetes scheduler selects the nodes in the cluster on which Pods (containing Docker containers) will run, based on resource requirements, availability, and other constraints. It ensures that containers are deployed and run on appropriate nodes in the cluster.
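Scheduling constraints can be expressed directly in the Pod spec; a minimal sketch using a nodeSelector (the disktype label is hypothetical and must already exist on a node for the Pod to be scheduled):

```yaml
spec:
  containers:
  - name: my-app
    image: my-app:v1
  # Only schedule onto nodes carrying the (hypothetical) label disktype=ssd
  nodeSelector:
    disktype: ssd
```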
- Container runtime: Kubernetes supports multiple container runtimes. Docker was one of the runtimes available for Kubernetes, although Kubernetes has moved to containerd (an industry-standard core container runtime) for running containers. Images built with Docker continue to work in Kubernetes even though the underlying runtime has changed.
Docker workflow in Kubernetes cluster
- Build Docker image: Docker images are typically built on the developer’s local machine or in a CI/CD pipeline. Once the image is ready, it is pushed to a container registry such as Docker Hub, Google Container Registry (GCR), Amazon Elastic Container Registry (ECR), or a private registry.
Example command to build a Docker image:
docker build -t my-app:v1 .
Example commands to push the image to Docker Hub (images on Docker Hub live under your account namespace, so tag the image first; <username> is a placeholder):
docker tag my-app:v1 <username>/my-app:v1
docker push <username>/my-app:v1
- Deploy to Kubernetes using Docker images: Once the Docker image is available in the registry, you can reference it in a Kubernetes Deployment configuration. A Deployment resource defines the desired state of the Pods, including the Docker image to use.
A Deployment ensures that a specified number of identical Pods are always running and manages the Pod lifecycle (including scaling, updates, and rollbacks).
Kubernetes deployment YAML example:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app-deployment
spec:
  replicas: 3
  selector:
    matchLabels:
      app: my-app
  template:
    metadata:
      labels:
        app: my-app
    spec:
      containers:
      - name: my-app
        image: my-app:v1   # Docker image reference
        ports:
        - containerPort: 8080
To deploy the application, use kubectl:
kubectl apply -f deployment.yaml
This command tells Kubernetes to create a Deployment using the Docker image my-app:v1 and to ensure that three replicas of the container are running.
- Scale and manage containers: Kubernetes makes it easy to scale applications. It can automatically adjust the number of Pods based on resource usage, or you can scale manually with kubectl scale:
kubectl scale deployment my-app-deployment --replicas=5
Kubernetes will then ensure that 5 instances of the Docker container are running in the cluster.
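Automatic scaling based on resource usage is configured with a HorizontalPodAutoscaler. A minimal sketch targeting the Deployment above (the replica bounds and the 50% CPU target are illustrative values; the autoscaler also requires the metrics server to be installed in the cluster):

```yaml
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: my-app-hpa
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: my-app-deployment
  minReplicas: 3
  maxReplicas: 10
  metrics:
  - type: Resource
    resource:
      name: cpu
      target:
        type: Utilization
        averageUtilization: 50   # scale out when average CPU use exceeds 50%
```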
- Service discovery and load balancing: Kubernetes Services expose Docker containers to the outside world. A Service provides a stable IP address and DNS name for accessing your containers, and it can load-balance traffic across Pods.
Kubernetes service YAML example:
apiVersion: v1
kind: Service
metadata:
  name: my-app-service
spec:
  selector:
    app: my-app
  ports:
  - protocol: TCP
    port: 80
    targetPort: 8080
  type: LoadBalancer
Apply the Service with:
kubectl apply -f service.yaml
This configuration exposes your application and routes traffic to the Pods running the Docker containers.
- Monitoring and logging: Docker containers running in a Kubernetes cluster can be monitored with tools such as Prometheus and Grafana. Logs generated by the containers can be collected and analyzed with Fluentd or the ELK stack (Elasticsearch, Logstash, and Kibana).
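Many Prometheus setups discover scrape targets through Pod annotations; a minimal sketch of the Pod template metadata for the Deployment above (the prometheus.io annotation keys follow a common convention, but whether they are honored depends on your Prometheus scrape configuration):

```yaml
metadata:
  labels:
    app: my-app
  annotations:
    prometheus.io/scrape: "true"    # opt this Pod in to scraping
    prometheus.io/port: "8080"      # port where metrics are exposed
    prometheus.io/path: "/metrics"  # metrics endpoint path
```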
Best practices for using Docker with Kubernetes
- Use multi-stage Docker builds: Multi-stage builds let you optimize the size of your Docker image by excluding build-time dependencies from the final image. This is especially important in Kubernetes because smaller images pull and deploy faster and use fewer resources.
Example of a multi-stage Dockerfile:
# First stage: build the app
FROM node:14 AS builder
WORKDIR /app
COPY . .
RUN npm install
RUN npm run build

# Second stage: create a lightweight runtime image
FROM node:14-slim
WORKDIR /app
COPY --from=builder /app/build .
CMD ["node", "server.js"]
- Leverage Kubernetes health checks: Kubernetes lets you define liveness and readiness probes to check the health of Docker containers. Liveness probes restart a container when it fails, and readiness probes keep traffic away from a container until it is ready to serve requests.
Example in deployment YAML:
spec:
  containers:
  - name: my-app
    image: my-app:v1
    livenessProbe:
      httpGet:
        path: /health
        port: 8080
      initialDelaySeconds: 5
      periodSeconds: 5
    readinessProbe:
      httpGet:
        path: /readiness
        port: 8080
      initialDelaySeconds: 5
      periodSeconds: 5
- Limit resource usage: Set resource requests and limits for your Docker containers to avoid excessive consumption of CPU and memory in the Kubernetes cluster. This ensures your containers run well alongside other workloads.
Example in deployment YAML:
resources:
  requests:
    memory: "64Mi"
    cpu: "250m"
  limits:
    memory: "128Mi"
    cpu: "500m"
- Store sensitive data securely: Use Kubernetes Secrets to store sensitive information such as API keys or database credentials. Avoid hardcoding sensitive data in Docker images or Kubernetes manifests.
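A minimal sketch of a Secret and how a container can consume it as an environment variable (the Secret name, key, and value are hypothetical; in practice, inject the value at deploy time rather than committing it to source control):

```yaml
apiVersion: v1
kind: Secret
metadata:
  name: my-app-secrets
type: Opaque
stringData:
  DB_PASSWORD: change-me   # hypothetical credential
---
# In the Deployment's container spec, reference the Secret like this:
# env:
# - name: DB_PASSWORD
#   valueFrom:
#     secretKeyRef:
#       name: my-app-secrets
#       key: DB_PASSWORD
```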
- Use image scanning and security tools: Regularly scan Docker images for vulnerabilities with tools such as Clair, Trivy, or Anchore. This ensures that the Docker images deployed to Kubernetes clusters are secure.
Conclusion
Docker provides the foundation for containerized applications, and Kubernetes helps manage these containers at scale in clustered environments. Docker and Kubernetes work seamlessly together to enable developers to efficiently build, deploy and scale applications in a cloud-native environment. By using Docker to package applications and Kubernetes to orchestrate containers, teams can take advantage of containerized microservices architecture to ensure scalability, elasticity, and ease of management.
As Kubernetes continues to grow in popularity, understanding the relationship between Docker and Kubernetes becomes increasingly important for modern application deployment and management.