intermediate devops Docker 26 · Updated April 2026

Docker Essentials Cheatsheet

A comprehensive guide to Docker for developers, covering essential commands, core concepts, and best practices for Docker 26.


Quick Overview

Docker is an open-source platform that enables developers to build, ship, and run applications in isolated environments called containers. It packages an application and all its dependencies into a standardized unit, ensuring it runs consistently across different computing environments. Docker is crucial for modern development, facilitating faster deployment, simplified scaling, and consistent environments from development to production. This cheatsheet covers Docker version 26.

Install Docker Desktop:

# macOS, Windows, Linux Desktop environments:
# Download from official Docker website: https://docs.docker.com/desktop/install/

Getting Started

To get started with Docker 26, you’ll need to install Docker Desktop for your operating system. Docker Desktop bundles Docker Engine, the Docker CLI, Docker Buildx, Docker Compose, a Kubernetes distribution, and Docker Content Trust (Notary).

Installation

macOS:

  1. Download the Docker Desktop for Mac installer (e.g., Docker.dmg) from the official Docker website.
  2. Double-click Docker.dmg and drag the Docker icon to your Applications folder.
  3. Open Docker.app from your Applications folder. You may need to accept the Docker Subscription Service Agreement.

Windows:

  1. Ensure WSL 2 (Windows Subsystem for Linux 2) is enabled. If not, open PowerShell as administrator and run wsl --install then wsl --update.
  2. Download the Docker Desktop for Windows installer (Docker Desktop Installer.exe).
  3. Double-click the installer and follow the wizard. Ensure “Use WSL 2 instead of Hyper-V” is selected during configuration.
  4. Launch Docker Desktop after installation.

Linux (Ubuntu Example): For Ubuntu 22.04, 24.04, or the latest non-LTS, Docker Desktop is recommended.

  1. Set up Docker’s apt repository:
    # Update package lists
    sudo apt-get update
    # Install necessary packages for apt to use HTTPS repositories
    sudo apt-get install ca-certificates curl gnupg
    # Add Docker's GPG key
    sudo install -m 0755 -d /etc/apt/keyrings
    curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo gpg --dearmor -o /etc/apt/keyrings/docker.gpg
    # Add Docker repository to Apt sources
    sudo chmod a+r /etc/apt/keyrings/docker.gpg
    echo "deb [arch=$(dpkg --print-architecture) signed-by=/etc/apt/keyrings/docker.gpg] https://download.docker.com/linux/ubuntu $(. /etc/os-release && echo "$VERSION_CODENAME") stable" | sudo tee /etc/apt/sources.list.d/docker.list > /dev/null
    # Update package lists again after adding the repository
    sudo apt-get update
  2. Install Docker Desktop:
    # Download the Docker Desktop .deb package from the Docker website first, then:
    sudo apt-get install ./docker-desktop-amd64.deb
    # (Replace with the correct architecture if not amd64)
    Alternatively, install Docker Engine directly:
    sudo apt-get install docker-ce docker-ce-cli containerd.io docker-buildx-plugin docker-compose-plugin
  3. Add your user to the docker group to run Docker commands without sudo (requires a log out/in):
    sudo usermod -aG docker ${USER}
  4. Launch Docker Desktop from your applications menu.
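Regardless of platform, you can confirm that the daemon and the bundled tooling are reachable before moving on:

```shell
# Show client and server versions; an error here usually means the daemon isn't running
docker version

# Confirm the Compose plugin is available
docker compose version

# Print a summary of the daemon configuration (storage driver, cgroup driver, etc.)
docker info
```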

Hello World

Verify your installation by running the hello-world container.

# Pulls the "hello-world" image and runs it as a container
docker run hello-world

If successful, you’ll see a message confirming Docker is working correctly.

Core Concepts

Understanding these fundamental concepts is key to mastering Docker:

  • Image: A lightweight, standalone, executable package that includes everything needed to run a piece of software, including the code, a runtime, libraries, environment variables, and config files. Images are built from Dockerfiles.
  • Container: A runnable instance of an image. You can create, start, stop, move, or delete a container. It’s an isolated process running on the host machine, sharing the host OS kernel but isolated from other containers and the host OS.
  • Dockerfile: A text file that contains all the commands a user could call on the command line to assemble an image. It’s a “recipe” for creating Docker images.
  • Volume: The recommended mechanism for persisting data generated by and used by Docker containers. Volumes are entirely managed by Docker and are stored in a part of the host filesystem (e.g., /var/lib/docker/volumes/ on Linux).
  • Bind Mount: A way to persist data by mounting a file or directory from the host machine directly into a container. Docker doesn’t manage their lifecycle, and they can be stored anywhere on the host filesystem.
  • Network: Enables communication between containers, the host, and external networks. Docker provides several network drivers (bridge, host, overlay, none); bridge is the default for isolated containers on the same host.
  • Registry: A repository for Docker images. Docker Hub is the default public registry, but private registries can also be used. Images are pushed to and pulled from registries.
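As a quick tour of how these concepts fit together, the following commands pull an image from a registry, create a volume and a network, and run a container that uses both (names like demo-vol and demo-net are illustrative):

```shell
docker pull nginx:1.25          # Image: fetched from a registry (Docker Hub)
docker volume create demo-vol   # Volume: Docker-managed persistent storage
docker network create demo-net  # Network: a user-defined bridge

# Container: a running, isolated instance of the image
docker run -d --name demo \
  --network demo-net \
  -v demo-vol:/usr/share/nginx/html \
  nginx:1.25

# Clean up the tour
docker rm -f demo && docker volume rm demo-vol && docker network rm demo-net
```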

Essential Commands / API / Syntax

The 80/20 reference for common Docker tasks.

Image Management

Images are the blueprints for containers.

  • Pull an image from a registry (e.g., Docker Hub):

    # Pulls the latest 'ubuntu' image
    docker pull ubuntu
    # Pulls a specific version of 'nginx'
    docker pull nginx:1.25
  • List all local images:

    docker images
    # Or 'docker image ls'
  • Build an image from a Dockerfile:

    # Build image named 'my-app' with tag '1.0' from Dockerfile in current directory
    docker build -t my-app:1.0 .
    # Build with no-cache (useful for troubleshooting build issues)
    docker build --no-cache -t my-app:1.0 .
  • Remove an image:

    # Remove image by ID or name:tag
    docker rmi my-app:1.0
    # Force remove an image (even if used by a container)
    docker rmi -f <IMAGE_ID>
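Pushing an image to a registry requires tagging it with the repository name first. A sketch, assuming a Docker Hub account named youruser (replace with your own namespace or private registry host):

```shell
# Tag the local image for the target repository
docker tag my-app:1.0 youruser/my-app:1.0

# Authenticate against the registry, then upload
docker login
docker push youruser/my-app:1.0
```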

Container Management

Containers are running instances of images.

  • Run a container:

    # Run a container from 'nginx:1.25' image, mapping host port 8080 to container port 80
    docker run -p 8080:80 nginx:1.25
    # Run in detached mode (-d), name it 'webserver', and publish port
    docker run -d --name webserver -p 8080:80 nginx:1.25
    # Run an interactive container with a pseudo-TTY (-it)
    docker run -it ubuntu bash
  • List running containers:

    docker ps
    # List all containers (running and stopped)
    docker ps -a
  • Start/Stop/Restart a container:

    # Start a stopped container by name or ID
    docker start webserver
    # Stop a running container by name or ID
    docker stop webserver
    # Restart a container
    docker restart webserver
  • Remove a container:

    # Remove a stopped container by name or ID
    docker rm webserver
    # Force remove a running container
    docker rm -f webserver
    # Remove all stopped containers
    docker container prune
  • Execute a command in a running container:

    # Run an interactive bash shell inside the 'webserver' container
    docker exec -it webserver bash
    # Run a command (e.g., list files) in the 'webserver' container
    docker exec webserver ls -l /etc/nginx
  • View container logs:

    # View logs for 'webserver' container
    docker logs webserver
    # Follow logs in real-time
    docker logs -f webserver
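Beyond start/stop and logs, a few inspection commands are worth knowing; a sketch against the 'webserver' container from the examples above:

```shell
# Dump full container metadata as JSON (mounts, network settings, env, ...)
docker inspect webserver

# Extract a single field with a Go template, e.g. the IP on the default bridge
docker inspect -f '{{.NetworkSettings.IPAddress}}' webserver

# Copy a file out of (or into) a container
docker cp webserver:/etc/nginx/nginx.conf ./nginx.conf

# One-shot CPU/memory snapshot of running containers
docker stats --no-stream
```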

Volume Management

Manage persistent data for containers.

  • Create a named volume:

    docker volume create my-data
  • List volumes:

    docker volume ls
  • Inspect a volume:

    docker volume inspect my-data
  • Remove a volume:

    # Remove a specific volume (must not be in use)
    docker volume rm my-data
    # Remove all unused local volumes
    docker volume prune
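A volume is attached to a container at run time. The --mount flag (see Gotchas & Tips) spells out the same mount more explicitly than -v; these two commands are equivalent (run one or the other, since both create a container named 'app'):

```shell
# Short form: mount the 'my-data' volume at /app/data
docker run -d --name app -v my-data:/app/data nginx:1.25

# Explicit form with --mount
docker run -d --name app --mount type=volume,source=my-data,target=/app/data nginx:1.25
```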

Network Management

Manage how containers communicate.

  • List networks:

    docker network ls
  • Create a custom bridge network:

    docker network create my-app-network
  • Inspect a network:

    docker network inspect my-app-network
  • Connect a running container to a network:

    docker network connect my-app-network webserver
  • Remove a network:

    # Remove a specific network (must not have connected containers)
    docker network rm my-app-network
    # Remove all unused networks
    docker network prune
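On a user-defined bridge network, containers can reach each other by container name through Docker's built-in DNS. A minimal sketch:

```shell
docker network create my-app-network
docker run -d --name web --network my-app-network nginx:1.25

# From a second container on the same network, resolve 'web' by name
docker run --rm --network my-app-network curlimages/curl -s http://web/
```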

Docker Compose Commands (as of Docker 26, docker compose is the standard)

Docker Compose is for defining and running multi-container Docker applications.

  • Start services defined in compose.yaml (or docker-compose.yaml):

    # Build images (if needed) and start all services in detached mode
    docker compose up -d
  • Stop and remove the containers and networks defined in compose.yaml (pass -v to also remove named volumes):

    docker compose down
  • List services and their status:

    docker compose ps
  • Execute a command in a service container:

    # Run bash in the 'web' service container
    docker compose exec web bash
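A few other day-to-day Compose subcommands follow the same pattern:

```shell
# Follow logs from all services (or a single one, e.g. 'docker compose logs -f web')
docker compose logs -f

# Rebuild service images after a Dockerfile change, then recreate containers
docker compose up -d --build

# Validate and print the fully resolved configuration
docker compose config
```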

Common Patterns

1. Building a Multi-stage Dockerfile

Multi-stage builds are crucial for creating small, secure, and efficient images by separating build-time dependencies from runtime dependencies.

Example: Node.js application

# Stage 1: Build the application
FROM node:20-alpine AS builder
WORKDIR /app
COPY package.json package-lock.json ./
# Install exact dependency versions from the lockfile
RUN npm ci
COPY . .
RUN npm run build

# Stage 2: Create the final lightweight runtime image
FROM node:20-alpine
WORKDIR /app
# Copy only the necessary build artifacts from the 'builder' stage
COPY --from=builder /app/build ./build
COPY --from=builder /app/node_modules ./node_modules
COPY --from=builder /app/package.json ./package.json

# Expose the port your app runs on
EXPOSE 3000

# Command to run the application
CMD ["npm", "start"]

Explanation: The builder stage includes Node.js, npm, and all development dependencies to build the application. The final stage then copies only the essential build output and node_modules into a fresh, minimal Node.js runtime image, significantly reducing the final image size and attack surface.
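One way to see the effect of a multi-stage build is to inspect what actually ends up in the final image (the image name my-node-app is illustrative):

```shell
# Build the multi-stage image
docker build -t my-node-app:latest .

# Show the size of the final image
docker images my-node-app:latest

# List the layers that made it into the final image; the builder
# stage's layers (dev dependencies, source copy) do not appear
docker history my-node-app:latest
```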

2. Running a Multi-container Application with Docker Compose

Define an entire application stack (e.g., a web service and a database) in a single compose.yaml file.

compose.yaml example:

# Note: the top-level 'version' key is obsolete in the Compose Specification and is omitted here

services:
  web:
    # Build image from Dockerfile in current directory
    build: .
    # Map host port 80 to container port 80
    ports:
      - "80:80"
    # Mount named volume 'web-data' to /app/data in container
    volumes:
      - web-data:/app/data
    # Ensure the 'db' service starts before 'web' (start order only; does not wait for readiness)
    depends_on:
      - db
    # Environment variables
    environment:
      DATABASE_URL: postgres://user:password@db:5432/mydatabase

  db:
    image: postgres:16-alpine
    # Mount named volume 'db-data' for persistent database storage
    volumes:
      - db-data:/var/lib/postgresql/data
    environment:
      POSTGRES_DB: mydatabase
      POSTGRES_USER: user
      POSTGRES_PASSWORD: password
    # Expose port only within the Docker network
    expose:
      - "5432"

volumes:
  web-data:
  db-data:

To run this application:

# Start the application stack
docker compose up -d

This command builds the web service image, pulls the postgres image, creates the necessary networks and volumes, and starts both services.

3. Persistent Data with Volumes for Databases

Using named volumes is the recommended way to persist database data, ensuring data survives container restarts and removals.

# Create a named volume for PostgreSQL data
docker volume create my-postgres-data

# Run a PostgreSQL container, mounting the named volume
docker run -d --name my-postgres \
  -p 5432:5432 \
  -v my-postgres-data:/var/lib/postgresql/data \
  -e POSTGRES_DB=mydatabase \
  -e POSTGRES_USER=myuser \
  -e POSTGRES_PASSWORD=mypassword \
  postgres:16-alpine

The data directory /var/lib/postgresql/data inside the container is now backed by the my-postgres-data volume on the host.
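To convince yourself the data really lives in the volume rather than the container, destroy the container and start a fresh one against the same volume (continuing the example above):

```shell
# Remove the container; the named volume is untouched
docker rm -f my-postgres

# A brand-new container sees the same database files
docker run -d --name my-postgres \
  -p 5432:5432 \
  -v my-postgres-data:/var/lib/postgresql/data \
  -e POSTGRES_PASSWORD=mypassword \
  postgres:16-alpine
```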

Gotchas & Tips

  • .dockerignore file: Similar to .gitignore, a .dockerignore file prevents unnecessary files (e.g., node_modules, .git, local development artifacts) from being sent to the Docker daemon during the build process. This speeds up builds and reduces image size.

    # Example .dockerignore
    **/node_modules
    .git
    .env
    temp/
  • Image Size Optimization:

    • Multi-stage builds: As shown above, this is the most effective way to reduce image size.
    • Use small base images: alpine variants (e.g., node:20-alpine) are significantly smaller than full OS images. distroless images are even smaller, containing only your app and its runtime dependencies.
    • Combine RUN commands: Each RUN instruction creates a new layer. Combine related commands (e.g., apt-get update && apt-get install -y ...) to minimize layers.
    • Clean up after installation: Remove package caches and temporary files (e.g., rm -rf /var/lib/apt/lists/* for Debian-based images) in the same RUN command.
  • docker run vs docker exec:

    • docker run creates and starts a new container from an image.
    • docker exec runs a command inside an already running container.
    • Tip: If you want to interact with a shell in an existing container for debugging, use docker exec -it <container_name> bash.
  • Volumes vs. Bind Mounts:

    • Volumes are managed by Docker, platform-agnostic, safer for data persistence (especially for databases), and easier to back up/migrate. Recommended for application data.
    • Bind Mounts directly link a host path to a container path. Useful for development (e.g., mounting source code for hot-reloading) or when the host filesystem structure is important. Not managed by Docker.
    • Syntax preference: The --mount flag is generally preferred over -v for explicitness, especially in docker run commands.
  • Running as a non-root user: For security, avoid running container processes as the root user. Define a USER in your Dockerfile and ensure necessary file permissions.

    # Create a non-root user and group (Debian/Ubuntu syntax; on Alpine use: addgroup -S appgroup && adduser -S -G appgroup appuser)
    RUN addgroup --system appgroup && adduser --system --ingroup appgroup appuser
    # Set the user for subsequent commands
    USER appuser
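The layer-combining and cache-cleanup tips under Image Size Optimization come together as the body of a single Dockerfile RUN instruction; for a Debian-based image it might look like this (package names are illustrative):

```shell
# One RUN layer: update, install, and clean the apt cache together,
# so the package lists never persist in an intermediate layer
apt-get update \
  && apt-get install -y --no-install-recommends curl ca-certificates \
  && rm -rf /var/lib/apt/lists/*
```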

Source: z2h.fyi/cheatsheets/docker-cheatsheet — Zero to Hero cheatsheets for developers.