Building a Complete Self-hosted AI Development Environment

Introduction

In today’s AI landscape, having a secure, efficient, and self-contained development environment is crucial. This guide presents a comprehensive solution that combines best-in-class open-source tools for AI development, all running locally on your infrastructure.

Key Components

  • Ollama: Run state-of-the-art language models locally
  • n8n: Create automated AI workflows
  • Qdrant: Vector database for semantic search
  • Unstructured: Advanced document processing
  • Argilla: Data labeling and validation
  • Opik: Model evaluation and monitoring
  • JupyterLab: Interactive development environment

Benefits

  • Complete data privacy and control
  • No cloud dependencies
  • Cost-effective solution
  • Customizable infrastructure
  • Seamless tool integration

Prerequisites

Hardware Requirements

  • CPU: 4+ cores recommended
  • RAM: 16GB minimum, 32GB recommended
  • Storage: 50GB+ free space
  • GPU: NVIDIA GPU with 8GB+ VRAM (optional)
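
On Linux, you can verify these numbers before proceeding (the last command applies only if an NVIDIA GPU is present):

nproc          # CPU cores
free -h        # Total and available RAM
df -h .        # Free disk space on the current filesystem
nvidia-smi     # GPU model and VRAM (NVIDIA only)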

Software Requirements

# Check Docker version
docker --version
# Should be 20.10.0 or higher

# Check Docker Compose version
docker compose version
# Should be 2.0.0 or higher

# Check Git version
git --version
# Should be 2.0.0 or higher
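
If Docker or the Compose plugin is missing, Docker's convenience script installs both on most Linux distributions (review the script before running it):

# Install Docker Engine and the Compose plugin
curl -fsSL https://get.docker.com -o get-docker.sh
sh get-docker.sh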

System Preparation

# Create project directory
mkdir -p ai-development-environment
cd ai-development-environment

# Create required subdirectories
mkdir -p notebooks
mkdir -p shared
mkdir -p n8n/backup/credentials n8n/backup/workflows
mkdir -p data/documents
mkdir -p data/processed
mkdir -p data/vectors

Directory Structure

ai-development-environment/
├── docker-compose.yml
├── .env
├── notebooks/
│   ├── examples/
│   └── templates/
├── shared/
│   ├── documents/
│   └── processed/
├── n8n/
│   └── backup/
│       ├── credentials/
│       └── workflows/
└── data/
    ├── documents/
    ├── processed/
    └── vectors/

Configuration Files

Environment Variables (.env)

Create .env in the project root:

# Database Configuration
POSTGRES_USER=n8n
POSTGRES_PASSWORD=n8n
POSTGRES_DB=n8n

# n8n Security (replace these sample values with long random strings; see below)
N8N_ENCRYPTION_KEY=1234567890
N8N_USER_MANAGEMENT_JWT_SECRET=1234567890

# Service Configuration
JUPYTER_TOKEN=masterclass
ARGILLA_PASSWORD=masterclass
# Optional: only needed if notebooks call OpenAI-hosted models
OPENAI_API_KEY=

# Resource Limits
POSTGRES_MAX_CONNECTIONS=100
ELASTICSEARCH_HEAP_SIZE=1g
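
The two n8n secrets should be long random strings rather than the sample values above; one way to generate them:

# Generate strong values for N8N_ENCRYPTION_KEY and N8N_USER_MANAGEMENT_JWT_SECRET
openssl rand -hex 32
openssl rand -hex 32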

Docker Compose Configuration

Create docker-compose.yml:

version: '3.8'
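
# Shared service definitions, referenced below via YAML merge keys (<<: *...).
# These follow the upstream n8n self-hosted AI starter kit; treat them as a
# sketch and pin image versions to taste.
x-n8n: &service-n8n
  image: n8nio/n8n:latest
  networks: ['demo']
  environment:
    - DB_TYPE=postgresdb
    - DB_POSTGRESDB_HOST=postgres
    - DB_POSTGRESDB_USER=${POSTGRES_USER}
    - DB_POSTGRESDB_PASSWORD=${POSTGRES_PASSWORD}
    - N8N_DIAGNOSTICS_ENABLED=false
    - N8N_PERSONALIZATION_ENABLED=false
    - N8N_ENCRYPTION_KEY=${N8N_ENCRYPTION_KEY}
    - N8N_USER_MANAGEMENT_JWT_SECRET=${N8N_USER_MANAGEMENT_JWT_SECRET}

x-ollama: &service-ollama
  image: ollama/ollama:latest
  container_name: ollama
  networks: ['demo']
  restart: unless-stopped
  ports:
    - "11434:11434"
  volumes:
    - ollama_storage:/root/.ollama

x-init-ollama: &init-ollama
  image: ollama/ollama:latest
  networks: ['demo']
  volumes:
    - ollama_storage:/root/.ollama
  entrypoint: /bin/sh
  command:
    - "-c"
    - "sleep 3; OLLAMA_HOST=ollama:11434 ollama pull llama3.1"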

volumes:
  n8n_storage:
    driver: local
  postgres_storage:
    driver: local
  ollama_storage:
    driver: local
  qdrant_storage:
    driver: local
  open-webui:
    driver: local
  jupyter_data:
    driver: local
  opik_data:
    driver: local
  elasticsearch_data:
    driver: local

networks:
  demo:
    driver: bridge
    ipam:
      config:
        - subnet: 172.28.0.0/16

services:
  jupyter:
    image: jupyter/datascience-notebook:lab-4.0.6
    networks: ['demo']
    ports:
      - "8888:8888"
    volumes:
      - jupyter_data:/home/jovyan
      - ./notebooks:/home/jovyan/work
      - ./shared:/home/jovyan/shared
    environment:
      - JUPYTER_ENABLE_LAB=yes
      - JUPYTER_TOKEN=${JUPYTER_TOKEN}
      - OPENAI_API_KEY=${OPENAI_API_KEY}
    # --NotebookApp.token is deprecated in Jupyter Server 2; IdentityProvider.token replaces it
    command: start-notebook.py --IdentityProvider.token='${JUPYTER_TOKEN}'
    restart: unless-stopped
    healthcheck:
      test: ["CMD", "curl", "-f", "http://localhost:8888/api"]
      interval: 30s
      timeout: 10s
      retries: 3

  unstructured:
    image: quay.io/unstructured-io/unstructured-api:latest
    networks: ['demo']
    ports:
      - "8000:8000"
    volumes:
      - ./shared:/home/unstructured/shared
    command: --port 8000 --host 0.0.0.0
    restart: unless-stopped
    healthcheck:
      test: ["CMD", "curl", "-f", "http://localhost:8000/health"]
      interval: 30s
      timeout: 10s
      retries: 3

  opik:
    image: comet/opik:latest
    networks: ['demo']
    ports:
      - "5173:5173"
    volumes:
      - opik_data:/root/opik
      - ./shared:/root/shared
    environment:
      - OPIK_BASE_URL=http://localhost:5173/api
    restart: unless-stopped

  argilla:
    image: argilla/argilla-server:latest
    networks: ['demo']
    ports:
      - "6900:6900"
    environment:
      - ARGILLA_ELASTICSEARCH=http://elasticsearch:9200
      - DEFAULT_USER_PASSWORD=${ARGILLA_PASSWORD}
    depends_on:
      elasticsearch:
        condition: service_healthy
    restart: unless-stopped

  elasticsearch:
    image: elasticsearch:8.11.0
    networks: ['demo']
    environment:
      - discovery.type=single-node
      - xpack.security.enabled=false
      # Elasticsearch recommends identical minimum and maximum heap sizes
      - ES_JAVA_OPTS=-Xms${ELASTICSEARCH_HEAP_SIZE} -Xmx${ELASTICSEARCH_HEAP_SIZE}
    volumes:
      - elasticsearch_data:/usr/share/elasticsearch/data
    healthcheck:
      test: ["CMD-SHELL", "curl -s http://localhost:9200/_cluster/health | grep -vq '\"status\":\"red\"'"]
      interval: 20s
      timeout: 10s
      retries: 5
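
  # Database Backend (from the upstream starter kit; a minimal sketch)
  postgres:
    image: postgres:16-alpine
    networks: ['demo']
    restart: unless-stopped
    environment:
      - POSTGRES_USER=${POSTGRES_USER}
      - POSTGRES_PASSWORD=${POSTGRES_PASSWORD}
      - POSTGRES_DB=${POSTGRES_DB}
    volumes:
      - postgres_storage:/var/lib/postgresql/data
    healthcheck:
      test: ['CMD-SHELL', 'pg_isready -h localhost -U ${POSTGRES_USER} -d ${POSTGRES_DB}']
      interval: 5s
      timeout: 5s
      retries: 10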
  # Workflow Automation
  n8n:
    <<: *service-n8n
    container_name: n8n
    restart: unless-stopped
    ports:
      - 5678:5678
    volumes:
      - n8n_storage:/home/node/.n8n
      - ./n8n/backup:/backup
      - ./shared:/data/shared
    depends_on:
      postgres:
        condition: service_healthy
      n8n-import:
        condition: service_completed_successfully

  n8n-import:
    <<: *service-n8n
    container_name: n8n-import
    entrypoint: /bin/sh
    command:
      - "-c"
      - "n8n import:credentials --separate --input=/backup/credentials && n8n import:workflow --separate --input=/backup/workflows"
    volumes:
      - ./n8n/backup:/backup
    depends_on:
      postgres:
        condition: service_healthy

  # Chat Interface
  open-webui:
    image: ghcr.io/open-webui/open-webui:main
    networks: ['demo']
    restart: unless-stopped
    container_name: open-webui
    ports:
      - "3000:8080"
    extra_hosts:
      - "host.docker.internal:host-gateway"
    volumes:
      - open-webui:/app/backend/data
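
  # Vector Database (referenced by qdrant_storage and the Python examples)
  qdrant:
    image: qdrant/qdrant:latest
    container_name: qdrant
    networks: ['demo']
    restart: unless-stopped
    ports:
      - "6333:6333"
    volumes:
      - qdrant_storage:/qdrant/storage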

  # Language Models
  ollama-cpu:
    profiles: ["cpu"]
    <<: *service-ollama

  ollama-gpu:
    profiles: ["gpu-nvidia"]
    <<: *service-ollama
    deploy:
      resources:
        reservations:
          devices:
            - driver: nvidia
              count: 1
              capabilities: [gpu]

  ollama-pull-llama-cpu:
    profiles: ["cpu"]
    <<: *init-ollama
    depends_on:
      - ollama-cpu

  ollama-pull-llama-gpu:
    profiles: ["gpu-nvidia"]
    <<: *init-ollama
    depends_on:
      - ollama-gpu

This completes the docker-compose.yml configuration, combining the services from the original n8n self-hosted AI starter kit with our additional AI development tools. The result is a complete environment for AI development, document processing, and workflow automation.
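
Once the containers are up, a quick smoke test confirms the main endpoints respond. The paths below are common health/status endpoints for these services; adjust them if your image versions differ:

# Smoke-test the exposed services from the host
curl -s http://localhost:5678/healthz         # n8n
curl -s http://localhost:8888/api             # JupyterLab
curl -s http://localhost:8000/healthcheck     # Unstructured API
curl -s http://localhost:6333/collections     # Qdrant
curl -s http://localhost:11434/api/tags       # Ollama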

Service Integration Examples

Python Code Examples

Create a new notebook in JupyterLab with these integration examples:

# Document Processing Pipeline
import requests
from pathlib import Path

# Unstructured API Integration
def process_document(file_path):
    with open(file_path, 'rb') as f:
        response = requests.post(
            'http://unstructured:8000/general/v0/general',
            files={'files': f}
        )
    return response.json()

# Ollama Integration
def query_llm(prompt):
    # stream=False returns a single JSON object instead of NDJSON chunks
    response = requests.post(
        'http://ollama:11434/api/generate',
        json={'model': 'llama3.1', 'prompt': prompt, 'stream': False}
    )
    return response.json()

# Qdrant Integration
from qdrant_client import QdrantClient
from qdrant_client.models import PointStruct

def store_embeddings(vectors, metadata):
    # upsert() takes a list of points; each point carries its own payload
    client = QdrantClient(host='qdrant', port=6333)
    points = [
        PointStruct(id=i, vector=vector, payload=payload)
        for i, (vector, payload) in enumerate(zip(vectors, metadata))
    ]
    client.upsert(collection_name="documents", points=points)

AI Templates and Workflows

Document Processing Workflow

  1. Upload documents to shared directory
  2. Process with Unstructured API
  3. Generate embeddings with Ollama
  4. Store in Qdrant
  5. Query through n8n workflows (see the end-to-end sketch below)
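
A minimal end-to-end sketch of this workflow, reusing the functions defined above. The /api/embeddings call is Ollama's embeddings endpoint; the 4096-dimension collection size assumes the llama3.1 model, and example.pdf is a placeholder document in the shared directory:

import requests
from qdrant_client import QdrantClient
from qdrant_client.models import Distance, VectorParams, PointStruct

def embed(text):
    # Request an embedding vector from Ollama
    response = requests.post(
        'http://ollama:11434/api/embeddings',
        json={'model': 'llama3.1', 'prompt': text}
    )
    return response.json()['embedding']

client = QdrantClient(host='qdrant', port=6333)

# Steps 1-2: parse an uploaded document into text elements
elements = process_document('/home/jovyan/shared/example.pdf')

# Create the collection sized to the embedding model (assumption: 4096 dims)
client.recreate_collection(
    collection_name="documents",
    vectors_config=VectorParams(size=4096, distance=Distance.COSINE),
)

# Steps 3-4: embed each element and store it with its metadata payload
points = [
    PointStruct(id=i, vector=embed(el['text']), payload=el)
    for i, el in enumerate(elements) if el.get('text')
]
client.upsert(collection_name="documents", points=points)

# Step 5: semantic query (an n8n workflow can call the same HTTP APIs)
hits = client.search(
    collection_name="documents",
    query_vector=embed("What is this document about?"),
    limit=3,
)
for hit in hits:
    print(hit.score, hit.payload.get('text', '')[:80])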

Docker Compose Profiles

The project uses different Docker Compose profiles to accommodate various hardware configurations:

For NVIDIA GPU Users

docker compose --profile gpu-nvidia pull
docker compose create && docker compose --profile gpu-nvidia up

This profile enables GPU acceleration for Ollama, providing faster inference times for language models[1].

For Apple Silicon (M1/M2)

docker compose pull
docker compose create && docker compose up

Since GPU access isn’t available to Docker on Apple Silicon, this starts the stack without an Ollama container; either add --profile cpu to run Ollama on the CPU inside Docker, or install Ollama natively on macOS for better performance[1].

For CPU-only Systems

docker compose --profile cpu pull
docker compose create && docker compose --profile cpu up

This profile configures services to run on CPU only, suitable for systems without dedicated GPUs[1].
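
Whichever profile you start, you can confirm that the model finished downloading; this assumes the container_name ollama from the shared service definition:

# List models available to the Ollama service
docker exec ollama ollama list

# Or query its HTTP API from the host
curl -s http://localhost:11434/api/tags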

Service Configurations

Core Services

  • n8n: Workflow automation platform with AI capabilities
  • Ollama: Local LLM service with configurable GPU/CPU profiles
  • Qdrant: Vector database for embeddings
  • PostgreSQL: Database backend for n8n
  • Open WebUI: Chat interface for model interaction

Additional Services

  • Unstructured: Document processing service
  • Argilla: Data labeling platform
  • Opik: Model evaluation tools
  • JupyterLab: Development environment

Volume Management

Each service has dedicated persistent storage:

  • n8n_storage
  • postgres_storage
  • ollama_storage
  • qdrant_storage
  • elasticsearch_data
  • jupyter_data
  • opik_data
  • open-webui

Networking

All services communicate over the shared ‘demo’ bridge network, which provides internal DNS, so containers can discover and reach each other by service or container name.
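
For example, any container can query Qdrant by name (wget ships with n8n's Alpine-based image):

docker exec n8n wget -qO- http://qdrant:6333/collections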