Docker Basics - Comprehensive Guide

Docker has revolutionized the way developers build, ship, and run applications. It allows packaging an application with all its dependencies into a standardized unit called a container. This guide will cover Docker basics, installation, key concepts, and practical hands-on commands suitable for beginners and professionals.

Introduction to Docker

Docker is an open-source platform designed to automate the deployment, scaling, and management of applications. Key advantages of Docker include:

  • Lightweight containers share the host OS kernel.
  • Portable and consistent environments across development, testing, and production.
  • Rapid application deployment.
  • Integration with CI/CD pipelines.

Docker Terminology

  • Image: Read-only template used to create containers.
  • Container: Running instance of a Docker image.
  • Dockerfile: Script with instructions to build a Docker image.
  • Docker Hub: Cloud-based registry to store and share Docker images.
  • Volume: Persistent data storage for Docker containers.

Docker Installation

Docker runs on Windows, Linux, and macOS. On Linux, it can be installed through the distribution’s package manager. Here’s an example for Ubuntu:


# Update packages
sudo apt-get update

# Install required packages
sudo apt-get install \
    apt-transport-https \
    ca-certificates \
    curl \
    gnupg \
    lsb-release

# Add Docker’s official GPG key
curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo gpg --dearmor -o /usr/share/keyrings/docker-archive-keyring.gpg

# Set up the repository
echo \
  "deb [arch=$(dpkg --print-architecture) signed-by=/usr/share/keyrings/docker-archive-keyring.gpg] https://download.docker.com/linux/ubuntu \
  $(lsb_release -cs) stable" | sudo tee /etc/apt/sources.list.d/docker.list > /dev/null

# Install Docker Engine
sudo apt-get update
sudo apt-get install docker-ce docker-ce-cli containerd.io

# Verify installation
docker --version

Docker Architecture

Docker architecture consists of the following key components:

1. Docker Daemon (dockerd)

The background service running on the host machine that manages building, running, and monitoring containers.

2. Docker Client

The command-line interface (CLI) that lets users interact with the Docker daemon through commands such as docker run and docker build.

3. Docker Images

Read-only templates used to create Docker containers. They can be stored in registries such as Docker Hub.

4. Docker Containers

Isolated and lightweight environments running applications based on Docker images.

Docker Images

Docker images are the building blocks of Docker. They contain the application code, runtime, libraries, and dependencies required to run an application.

Working with Docker Images


# Pull an image from Docker Hub
docker pull nginx

# List available images
docker images

# Remove an image
docker rmi nginx

Creating a Custom Docker Image

Custom images can be created using a Dockerfile.


# Example Dockerfile
FROM ubuntu:20.04
RUN apt-get update && apt-get install -y python3 python3-pip
COPY app.py /app/app.py
WORKDIR /app
CMD ["python3", "app.py"]
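The Dockerfile above copies an app.py into the image, but the file itself is not shown. As a hypothetical stand-in, a minimal script (the greeting text is illustrative) could be:

```python
# app.py -- a minimal stand-in for the application the Dockerfile copies in.
# It uses only the Python standard library, so it runs even before any pip packages are installed.
import platform

def greeting() -> str:
    """Build the message the container prints on start-up."""
    return f"Hello from Python {platform.python_version()} inside a container!"

if __name__ == "__main__":
    print(greeting())
```

With this file next to the Dockerfile, docker build -t my-python-app . followed by docker run --rm my-python-app prints the greeting and exits.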

Docker Containers

Containers are running instances of Docker images. They provide isolation, consistency, and portability.

Basic Docker Container Commands


# Run a container
docker run -d --name my-nginx -p 8080:80 nginx

# List running containers
docker ps

# Stop a container
docker stop my-nginx

# Remove a container
docker rm my-nginx

# View container logs
docker logs my-nginx

Dockerfile and Custom Images

A Dockerfile is a text file that contains instructions to build a Docker image. Key instructions include:

  • FROM: Base image.
  • RUN: Execute commands during build.
  • COPY: Copy files into the image.
  • WORKDIR: Set working directory.
  • CMD: Default command to run when the container starts.

Build and Run Custom Image


# Build an image from Dockerfile
docker build -t my-app:1.0 .

# Run a container from the image
docker run -d --name my-app-container my-app:1.0

Docker Compose

Docker Compose is a tool for defining and running multi-container Docker applications from a docker-compose.yml file. (Recent Docker releases ship Compose as a CLI plugin, invoked as docker compose rather than the standalone docker-compose.)

Example docker-compose.yml


version: '3'
services:
  web:
    image: nginx
    ports:
      - "8080:80"
  app:
    build: ./app
    ports:
      - "5000:5000"

Compose Commands


# Start services
docker-compose up -d

# Stop services
docker-compose down

# View logs
docker-compose logs

Docker Networking

Docker provides various networking options to connect containers:

  • Bridge Network: Default network for containers on the same host.
  • Host Network: Removes network isolation; the container uses the host's network stack directly.
  • Overlay Network: Connects containers across multiple Docker hosts.

# List networks
docker network ls

# Create a custom network
docker network create my-network

# Run a container on custom network
docker run -d --name web1 --network my-network nginx

Docker Volumes and Data Management

Volumes are the preferred mechanism for persisting data generated by Docker containers.


# Create a volume
docker volume create my-volume

# Run a container with volume
docker run -d -v my-volume:/data --name my-container ubuntu

# List volumes
docker volume ls

# Remove a volume
docker volume rm my-volume
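Named volumes can also be declared in a Compose file instead of on the command line. A sketch under assumed names (the db service and db-data volume are illustrative):

```yaml
# docker-compose.yml fragment: a named volume mounted into a service.
version: '3'
services:
  db:
    image: postgres:15
    volumes:
      - db-data:/var/lib/postgresql/data   # data persists across container restarts
volumes:
  db-data:   # named volume created and managed by Docker
```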

Docker Best Practices

  • Use small, lightweight base images to reduce image size.
  • Minimize the number of layers in Dockerfile.
  • Always specify image versions to avoid breaking changes.
  • Use Docker Compose for managing multi-container setups.
  • Use volumes for persistent data instead of storing data inside containers.
  • Regularly scan images for vulnerabilities using tools like docker scan or Trivy.
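Several of these practices can be combined in a multi-stage build: a full-featured image installs dependencies, and only the results are copied into a slim runtime image. A sketch (the file names and python:3.11 tags are illustrative):

```dockerfile
# Stage 1: install dependencies in a full-featured builder image.
FROM python:3.11 AS builder
WORKDIR /app
COPY requirements.txt .
# --no-cache-dir keeps the layer small; one RUN keeps it a single layer.
RUN pip install --no-cache-dir --prefix=/install -r requirements.txt

# Stage 2: copy only the installed packages into a slim runtime image.
FROM python:3.11-slim
WORKDIR /app
COPY --from=builder /install /usr/local
COPY app.py .
CMD ["python3", "app.py"]
```

The final image contains the slim base plus the installed packages and application code; the builder stage and its caches are discarded.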

Docker is an essential tool for modern application development and DevOps practices. Understanding the basics of Docker images, containers, Dockerfile, Compose, networking, and volumes will help you streamline your development workflows and deploy applications efficiently. With consistent practice, Docker can become a key part of your skillset for building scalable, portable, and reliable applications.



Copyrights © 2024 letsupdateskills All rights reserved