Docker Compose for Homelab: Running 15+ Services Like a Pro


Docker Compose is a game-changer for managing multiple containers in a homelab environment. If you’re running 15+ services—like Ollama, FastAPI APIs, WordPress sites, n8n workflows, or Vaultwarden—you’ll appreciate how Docker Compose simplifies orchestration. In this post, I’ll walk through practical tips for networking, volumes, restart policies, and resource limits to keep your setup running smoothly.


Networking in a Homelab: Exposing Services Without Chaos

One of the biggest challenges when running multiple containers is managing network connectivity between them. With Docker Compose, you can define custom networks that services can join, ensuring they can communicate without exposing unnecessary ports to the outside world.

For example, let’s say you’re running Ollama and FastAPI together. You want Ollama to be reachable internally by FastAPI but not exposed to the internet. Here’s how you’d set it up in docker-compose.yml (the top-level version key is obsolete in Compose v2, so it’s omitted):

services:
  ollama:
    image: ollama/ollama:latest
    networks:
      - app_network

  fastapi:
    image: darrenbetney/fastapi-template:latest
    depends_on:
      - ollama
    networks:
      - app_network

networks:
  app_network:
    driver: bridge

This setup creates a dedicated network (app_network) that both services join. FastAPI can reach Ollama by its service name (http://ollama:11434, Ollama’s default port), and because the ollama service publishes no ports, it stays unreachable from outside the Docker host.
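A common way to wire the two together is to hand the API container the internal URL through an environment variable. The variable name OLLAMA_BASE_URL below is just an illustration; use whatever name your application actually reads:

```yaml
services:
  fastapi:
    image: darrenbetney/fastapi-template:latest
    environment:
      # Service names resolve via Docker's built-in DNS on app_network,
      # so "ollama" works as a hostname from inside this container.
      - OLLAMA_BASE_URL=http://ollama:11434
    networks:
      - app_network
```

This keeps the address out of your application code, so the same image works unchanged if you later move Ollama to another host.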


Volumes: Persistence Without the Hassle

Volumes are essential for ensuring data persistence across container restarts. Whether you’re running WordPress sites or n8n workflows, you’ll want your data to survive reboots or updates.

Here’s how to define volumes for a WordPress installation:

services:
  wordpress:
    image: wordpress:latest
    ports:
      - "8000:80"
    environment:
      - WORDPRESS_DB_HOST=mysql
      - WORDPRESS_DB_NAME=wordpress
      - WORDPRESS_DB_USER=wordpress
      - WORDPRESS_DB_PASSWORD=changeme
    volumes:
      - ./wordpress_data:/var/www/html
    networks:
      - app_network

  mysql:
    image: mysql:8.0
    environment:
      - MYSQL_DATABASE=wordpress
      - MYSQL_USER=wordpress
      - MYSQL_PASSWORD=changeme
      - MYSQL_ROOT_PASSWORD=changeme
    volumes:
      - ./mysql_data:/var/lib/mysql
    networks:
      - app_network

networks:
  app_network:
    driver: bridge

In this example, ./wordpress_data and ./mysql_data are bind-mounted directories in your project that persist data across container restarts. The environment variables are required: the mysql image refuses to start without a root password, and WordPress needs the connection details to reach the database. This is especially useful for homelab environments where you might experiment with different setups without losing work.
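Bind mounts like the ones above are easy to browse and back up with ordinary file tools. The alternative is named volumes, which Docker manages itself and which survive a docker compose down (as long as you don’t pass -v). A minimal sketch of the same MySQL service using a named volume:

```yaml
services:
  mysql:
    image: mysql:8.0
    volumes:
      # "mysql_data" here is a named volume, not a host path
      - mysql_data:/var/lib/mysql

# Declaring the volume at the top level tells Compose
# to create and manage it for you.
volumes:
  mysql_data:
```

Named volumes are a good fit when you don’t need to touch the files from the host; bind mounts win when you do.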


Restart Policies: Keeping Your Services Up and Running

Unexpected downtime can be frustrating when managing multiple services. Docker Compose allows you to define restart policies that automatically bring services back online if they crash or fail.

For critical services like Vaultwarden, a restart policy ensures high availability:

services:
  vaultwarden:
    image: vaultwarden/server:latest
    ports:
      - "8080:80"
    volumes:
      - ./vaultwarden_data:/data
    networks:
      - app_network
    restart: unless-stopped

networks:
  app_network:
    driver: bridge

The restart: unless-stopped policy means the container restarts automatically after a crash or a host reboot, unless you explicitly stopped it. (Note that Vaultwarden keeps its data in /data inside the container, so that is the path to mount.) For less critical services, you might use on-failure instead.
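For those less critical services, on-failure optionally takes a retry cap, so a crash-looping container eventually gives up rather than restarting forever. A sketch with a made-up one-shot service:

```yaml
services:
  nightly_backup:
    image: busybox
    command: ["sh", "-c", "echo backing up && exit 1"]
    # Restart only on a non-zero exit code, at most 5 times
    restart: on-failure:5
```

The service name and command here are illustrative; the point is the on-failure:5 policy, which the Compose spec supports as on-failure[:max-retries].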


Resource Limits: Keeping Your System Responsive

Running 15+ containers can strain your hardware resources. Docker Compose lets you set resource limits to ensure fair usage and prevent one service from hogging all the CPU or memory.

For example, if you’re running a GPU-accelerated model like Ollama with CUDA support:

services:
  ollama_cuda:
    image: ollama/ollama:latest
    ports:
      - "11434:11434"
    deploy:
      resources:
        limits:
          cpus: '2'
          memory: 4G
        reservations:
          devices:
            - driver: nvidia
              count: all
              capabilities: [gpu]

This caps the service at 2 CPUs and 4GB of memory, and the devices reservation passes all available NVIDIA GPUs into the container. Note that limits live under deploy.resources in the Compose spec, and GPU access requires the NVIDIA Container Toolkit on the host. You can adjust these values based on your hardware capabilities.
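If you’d rather not nest everything under deploy, the Compose spec also accepts service-level shorthand keys for the same CPU and memory caps. A minimal sketch (using the official ollama/ollama image):

```yaml
services:
  ollama_cuda:
    image: ollama/ollama:latest
    cpus: 2        # hard cap at two CPUs' worth of time
    mem_limit: 4g  # container is OOM-killed if it exceeds 4GB
```

Either form works with docker compose up; the deploy form has the advantage of also carrying GPU reservations.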


Automating with Compose Files: Streamlining Your Workflow

Docker Compose files aren’t just for defining services: combined with environment variables, override files, and profiles, they let you tailor configurations for different environments.

For instance, you could create separate docker-compose files for development, testing, and production:

# docker-compose.dev.yml
services:
  app:
    image: darrenbetney/fastapi-template:latest
    ports:
      - "8000:80"
    environment:
      - DEBUG=true

# docker-compose.prod.yml
services:
  app:
    image: darrenbetney/fastapi-template:production
    ports:
      - "8000:80"
    environment:
      - DEBUG=false

You can then switch environments with the -f/--file flag (shown here with the docker compose plugin, which has replaced the standalone docker-compose binary):

docker compose -f docker-compose.dev.yml up
# or
docker compose -f docker-compose.prod.yml up
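A related trick: when you pass several -f flags, Compose merges the files in order, so you can keep shared configuration in a base file and layer only the environment-specific differences on top:

```shell
# docker-compose.yml holds the shared config;
# the dev file only overrides what differs (e.g. DEBUG=true).
# Later files win when keys conflict.
docker compose -f docker-compose.yml -f docker-compose.dev.yml up
```

This keeps the dev and prod files tiny, since they no longer repeat the ports and image boilerplate.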

Wrapping Up

Docker Compose is an indispensable tool for managing a homelab with multiple services. By leveraging networking, volumes, restart policies, and resource limits, you can create a robust, scalable, and maintainable setup. Whether you’re running AI models, APIs, or WordPress sites, Docker Compose provides the flexibility to adapt your environment as your needs grow.

If you’re looking for real-world examples or want to streamline your workflow further, check out Quartalis—where Darren Betney shares insights and tools for building production-grade AI systems. Happy containerizing!
