
Docker

Imagine you're building an AI Agent skyscraper: without a solid foundation of cloud infrastructure, the structure will collapse under the first stream of real users. Here we'll master Docker containers as universal "trucks" for delivering models, Kubernetes orchestration as a "traffic light" system for request traffic, and learn to deploy services in AWS/GCP clouds as space stations for your AI Agents. These skills are an oxygen mask for any production project: without them, your brilliant models will remain local scripts on a laptop.

Ask AI Instructions

Since these topics change little over time, they are best studied with a personal tutor: ChatGPT.

The learning process should be as follows:

  • Create a system prompt for ChatGPT (templates) describing your background, preferences, desired level of detail in explanations, and so on.
  • Copy a topic from the list (triple-click) and ask ChatGPT to explain it to you.
  • If you want to go deeper, ask clarifying questions.

At the moment, this is the most convenient way to learn the basics. Beyond the core concepts, you can study the additional materials in the Gold, Silver, and Extra sections.

  1. Gold - be sure to study before talking to ChatGPT
  2. Ask AI - ask questions about every unfamiliar topic
  3. Silver - secondary materials
  4. Extra - in-depth topics

Gold

Docker

Docker: 20 must-know topics for a GenAI engineer

  1. Containerization architecture: comparing virtual machines and Docker
  2. Docker vs Conda/venv: key differences and application scenarios
  3. Step-by-step Docker installation on different operating systems
  4. Basic Docker CLI commands: managing containers and images
  5. Creating a Dockerfile: syntax and practical templates
  6. Working with Docker images: building, tagging, and publishing
  7. Docker Hub: basic operations (Overview)
  8. Working with volumes: practical use of volumes and bind mounts
  9. Docker networking: the basic network drivers (Briefly)
  10. Microservice architecture: basic principles (Briefly)
  11. Setting up environments with Docker Compose: creating and debugging configs
  12. Optimizing Docker images: reducing size and speeding up builds
  13. Image layers: caching and dependency mechanism (Overview)
  14. Packaging Python applications: dependencies and environment in a container
  15. Configuring GPU in Docker for machine learning: a complete guide
  16. Data storage: local and cloud solutions (Brief overview)
  17. Working with cloud registries: practical examples of ECR/Artifact Registry
  18. Choosing a base image: Alpine vs Ubuntu (Comparison and recommendations)
  19. Configuring container autostart via systemd (Workshop)
  20. Performance optimization: cold vs hot start in production environments
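
A few of these topics in command form. This is a minimal sketch: the image names, registry URL, and paths (ml-app, myregistry.example.com, ./data) are illustrative, and the GPU example assumes the NVIDIA Container Toolkit is installed.

# Topic 4: basic container and image management
docker ps -a                        # list all containers, including stopped ones
docker images                       # list local images
docker logs -f my-container         # follow a container's logs
docker exec -it my-container sh     # open a shell inside a running container

# Topic 6: build, tag, and publish an image
docker build -t ml-app:0.1 .
docker tag ml-app:0.1 myregistry.example.com/ml-app:0.1
docker push myregistry.example.com/ml-app:0.1

# Topic 8: named volume vs bind mount
docker volume create model-cache
docker run -v model-cache:/models ml-app:0.1        # named volume (managed by Docker)
docker run -v "$(pwd)/data:/app/data" ml-app:0.1    # bind mount (host directory)

# Topic 15: GPU access from a container
docker run --rm --gpus all nvidia/cuda:12.2.0-base-ubuntu22.04 nvidia-smi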

Silver

Practical examples

Multi-stage image build

# Build stage: install dependencies in a temporary image
FROM python:3.9-slim AS builder
COPY requirements.txt .
# --user installs into /root/.local, which is easy to copy out of this stage
RUN pip install --user -r requirements.txt

# Final stage: same slim base as the builder, so compiled wheels stay
# compatible (glibc packages built on slim would break on Alpine/musl)
FROM python:3.9-slim
# Copy only the installed packages from the builder stage
COPY --from=builder /root/.local /root/.local
# Add application source code
COPY . /app
# Make user-installed packages visible on PATH
ENV PATH=/root/.local/bin:$PATH
# Application entry point
CMD ["python", "/app/main.py"]

Systemd service for autostart

# /etc/systemd/system/ml-service.service
# Note: systemd does not allow inline comments; they must be on their own lines.
[Unit]
Description=ML Service
# Start after the network and the Docker daemon are up
After=network.target docker.service
Requires=docker.service

[Service]
Type=simple
# Working directory containing docker-compose.yml
WorkingDirectory=/opt/ml
# Main startup command (runs in the foreground)
ExecStart=/usr/bin/docker-compose up
ExecStop=/usr/bin/docker-compose down
# Automatic restart on crash
Restart=always

[Install]
# Start on normal (multi-user) system boot
WantedBy=multi-user.target

# Service activation:
# sudo systemctl daemon-reload
# sudo systemctl enable ml-service
# sudo systemctl start ml-service
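
To verify that autostart works, inspect the unit with standard systemd tooling:

sudo systemctl status ml-service     # current state and recent log lines
journalctl -u ml-service -f          # follow the service log in real time
sudo systemctl restart ml-service    # restart manually to test Restart=always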

Startup time of different images (cold start)

Image           Size    Startup time    Usage
python:alpine   58MB    1.2s            Production API
ubuntu:latest   77MB    2.1s            Development/testing
nvidia/cuda     4.7GB   8.5s            ML training
  • Cold start - time from docker run to application readiness
  • Hot start (after stop/restart) is usually 30-40% faster
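
A rough way to reproduce such measurements locally (the image, port, and health-check URL are illustrative; a real API should be timed until its health endpoint responds):

# Time a trivial cold start
time docker run --rm python:3.9-alpine python -c "print('ready')"

# For an API, time until the health endpoint answers
docker run -d --name ml-api -p 8000:8000 ml-app:latest
time sh -c 'until curl -sf http://localhost:8000/health; do sleep 0.1; done'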

Extra