In the era of AI-first business, relying on restrictive SaaS automation limits is a strategic vulnerability. When you build on rented land, you face per-execution pricing, API rate limits, and third-party access to your data.
You aren’t building an ecosystem. You are building a dependency.
At Thinkpeak.ai, we believe in transforming static operations. We want to build dynamic, self-driving ecosystems. That starts with owning the infrastructure that powers your logic.
Self-hosting n8n via Docker is the “hello world” of this transformation. It offers significant advantages:
- Zero Execution Limits: Run 10 or 10 million workflows. The cost remains the same.
- Total Data Privacy: Your customer PII never leaves your Virtual Private Server (VPS).
- AI Sovereignty: Run local Large Language Models (LLMs) directly alongside your automation engine. This avoids OpenAI’s token costs.
This guide is not a quick tutorial. This is an enterprise-grade blueprint. We will deploy a secure, scalable, and AI-ready n8n stack using Docker Compose.
Prerequisites: The Hardware for Your Digital Workforce
Before we touch the config files, you need the right infrastructure. It must handle Bespoke Internal Tools and heavy data lifting.
1. VPS Specifications
A standard $5/month VPS is insufficient for 2026. You need more power to run n8n alongside a database and a local AI agent. We recommend the following specs:
- CPU: 2 vCPUs minimum. Use 4+ if running local LLMs.
- RAM: 4GB minimum. 8GB+ is recommended for AI workloads.
- Storage: 40GB+ NVMe SSD. Logs and Docker overlays grow fast.
2. The Software Environment
Ensure your server runs a fresh install of Ubuntu 24.04 LTS. You will need:
- Docker Engine: The runtime that powers the containers.
- Docker Compose (V2): For orchestrating the multi-container stack.
- A Domain Name: You cannot get valid SSL certificates for a bare IP address. Point a subdomain to your VPS IP.
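If Docker is not installed yet, Docker’s convenience script is the fastest route on a fresh Ubuntu box. This is a sketch, not the only way; for tightly controlled environments, prefer the official apt repository instructions.

```shell
# Install Docker Engine and the Compose V2 plugin via Docker's convenience script
curl -fsSL https://get.docker.com -o get-docker.sh
sudo sh get-docker.sh

# Verify both are available before continuing
docker --version
docker compose version
```

This installs the `docker compose` subcommand (Compose V2), which is what the rest of this guide assumes.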
Step 1: The Enterprise Architecture (Postgres over SQLite)
Most tutorials suggest using the default SQLite database. Do not do this. SQLite locks under concurrent writes, and a “Self-Driving Ecosystem” generates exactly that kind of load. We will use PostgreSQL from day one.
Create your project directory:
```shell
mkdir thinkpeak-stack
cd thinkpeak-stack
touch docker-compose.yml .env
```
The Base docker-compose.yml
Here is the production-ready configuration. We separate the database, the application, and the reverse proxy (Traefik), which handles automatic SSL certificate issuance via Let’s Encrypt.
```yaml
services:
  traefik:
    image: traefik:v3.0
    restart: always
    command:
      - "--providers.docker=true"
      - "--providers.docker.exposedbydefault=false"
      - "--entrypoints.web.address=:80"
      - "--entrypoints.websecure.address=:443"
      # Redirect all plain HTTP traffic to HTTPS
      - "--entrypoints.web.http.redirections.entrypoint.to=websecure"
      - "--entrypoints.web.http.redirections.entrypoint.scheme=https"
      - "--certificatesresolvers.myresolver.acme.tlschallenge=true"
      - "--certificatesresolvers.myresolver.acme.email=${SSL_EMAIL}"
      - "--certificatesresolvers.myresolver.acme.storage=/letsencrypt/acme.json"
    ports:
      - "80:80"
      - "443:443"
    volumes:
      - "/var/run/docker.sock:/var/run/docker.sock:ro"
      - "./letsencrypt:/letsencrypt"
    networks:
      - n8n-network

  postgres:
    image: postgres:16-alpine
    restart: always
    environment:
      - POSTGRES_USER=${DB_USER}
      - POSTGRES_PASSWORD=${DB_PASSWORD}
      - POSTGRES_DB=${DB_NAME}
    volumes:
      - postgres_data:/var/lib/postgresql/data
    networks:
      - n8n-network
    healthcheck:
      test: ["CMD-SHELL", "pg_isready -U ${DB_USER} -d ${DB_NAME}"]
      interval: 5s
      timeout: 5s
      retries: 5

  n8n:
    image: n8nio/n8n:latest
    restart: always
    environment:
      - DB_TYPE=postgresdb
      - DB_POSTGRESDB_HOST=postgres
      - DB_POSTGRESDB_PORT=5432
      - DB_POSTGRESDB_DATABASE=${DB_NAME}
      - DB_POSTGRESDB_USER=${DB_USER}
      - DB_POSTGRESDB_PASSWORD=${DB_PASSWORD}
      - N8N_HOST=${SUBDOMAIN}
      - N8N_PORT=5678
      - N8N_PROTOCOL=https
      - WEBHOOK_URL=https://${SUBDOMAIN}
      - N8N_ENCRYPTION_KEY=${ENCRYPTION_KEY}
    ports:
      # Bound to localhost only; public traffic goes through Traefik
      - "127.0.0.1:5678:5678"
    volumes:
      - n8n_data:/home/node/.n8n
    networks:
      - n8n-network
    depends_on:
      postgres:
        condition: service_healthy
    labels:
      - "traefik.enable=true"
      - "traefik.http.routers.n8n.rule=Host(`${SUBDOMAIN}`)"
      - "traefik.http.routers.n8n.entrypoints=websecure"
      - "traefik.http.routers.n8n.tls.certresolver=myresolver"
      - "traefik.http.services.n8n.loadbalancer.server.port=5678"

volumes:
  n8n_data:
  postgres_data:

networks:
  n8n-network:
    driver: bridge
```
Step 2: Security & Environment Variables
The .env file is your control center. It keeps your credentials out of your repository and lets you rotate keys easily. Note that modern n8n (v1+) handles application login through its built-in user management; you will create the owner account in the browser on first load.
Your .env File:
```env
# Domain Configuration
SUBDOMAIN=n8n.yourcompany.com
SSL_EMAIL=admin@yourcompany.com

# Database Secrets
DB_USER=n8n_user
DB_PASSWORD=REPLACE_WITH_SECURE_PASSWORD
DB_NAME=n8n_db

# n8n Auth (legacy basic-auth values; n8n v1+ uses built-in user management)
N8N_USER=admin
N8N_PASSWORD=REPLACE_WITH_SECURE_PASSWORD
ENCRYPTION_KEY=REPLACE_WITH_RANDOM_STRING_32_CHARS
```
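Do not invent these secrets by hand. Assuming openssl is available on your VPS (it ships with Ubuntu), one way to generate them:

```shell
# Generate a strong password for DB_PASSWORD / N8N_PASSWORD (32 characters)
openssl rand -base64 24

# Generate a 32-character hex string for ENCRYPTION_KEY
openssl rand -hex 16
```

Paste the output of each command into the matching .env entry.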
Critical Security Warning: The N8N_ENCRYPTION_KEY encrypts your credentials in the database. This includes API keys for Slack or Google Sheets. If you lose this key, you lose access to all your connected accounts. Back this up immediately.
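With docker-compose.yml and .env both in place, you can launch the stack. A minimal sketch:

```shell
# Check that the compose file parses cleanly, then start everything detached
docker compose config --quiet
docker compose up -d

# Follow the n8n logs while it boots (Ctrl+C stops following, not the container)
docker compose logs -f n8n
```

Traefik obtains the Let’s Encrypt certificate on the first request, so give it a moment before browsing to your subdomain.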
Does configuring reverse proxies feel like a distraction? The Thinkpeak.ai Automation Marketplace offers pre-architected workflow templates that skip the DevOps setup.
Step 3: Integrating Local AI (Ollama)
This is where our AI-first philosophy comes alive. By adding Ollama to your stack, you can run “Digital Employees.” These are autonomous agents capable of reasoning without sending data to OpenAI.
Add this service to your docker-compose.yml under services:
```yaml
  ollama:
    image: ollama/ollama:latest
    container_name: ollama
    volumes:
      - ollama_data:/root/.ollama
    networks:
      - n8n-network
    # GPU passthrough (requires the NVIDIA Container Toolkit on the host)
    deploy:
      resources:
        reservations:
          devices:
            - driver: nvidia
              count: 1
              capabilities: [gpu]
```

Also add ollama_data to the top-level volumes: block (alongside n8n_data and postgres_data); without that declaration, Docker Compose will refuse to start the service.
Note: Remove the deploy block if your VPS does not have an NVIDIA GPU. Ollama will fall back to the CPU; it works, but inference is noticeably slower.
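Ollama starts with no models installed, so pull one before wiring up n8n. We use llama3 here because it is the model referenced in the payload below; any model from the Ollama library works.

```shell
# Download the model into the ollama container (llama3 is roughly 4.7 GB)
docker compose exec ollama ollama pull llama3

# Confirm it is installed
docker compose exec ollama ollama list
```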
Connecting n8n to Ollama
- In n8n, add an HTTP Request node.
- Method: POST
- URL: http://ollama:11434/api/generate (we use the service name ollama because the containers share the Docker network).
- JSON Body:

```json
{
  "model": "llama3",
  "prompt": "Summarize this email for the sales team: {{ $json.body }}",
  "stream": false
}
```
You now have a private, free-to-use intelligence layer inside your automation stack.
Step 4: Scaling for Enterprise (Queue Mode)
Your “self-driving ecosystem” will grow. A single n8n instance might choke on heavy loads. This happens when processing thousands of leads from a Cold Outreach Hyper-Personalizer campaign.
To scale, we switch to Queue Mode. This involves adding Redis to manage job queues. We also deploy separate Worker containers.
Add Redis to docker-compose.yml:
```yaml
  redis:
    image: redis:alpine
    restart: always
    networks:
      - n8n-network
```
Add the Worker Service:
```yaml
  n8n-worker:
    image: n8nio/n8n:latest
    command: worker
    restart: always
    environment:
      - DB_TYPE=postgresdb
      - DB_POSTGRESDB_HOST=postgres
      - DB_POSTGRESDB_PORT=5432
      - DB_POSTGRESDB_DATABASE=${DB_NAME}
      - DB_POSTGRESDB_USER=${DB_USER}
      - DB_POSTGRESDB_PASSWORD=${DB_PASSWORD}
      - EXECUTIONS_MODE=queue
      - QUEUE_BULL_REDIS_HOST=redis
      - QUEUE_BULL_REDIS_PORT=6379
      - N8N_ENCRYPTION_KEY=${ENCRYPTION_KEY}
    networks:
      - n8n-network
    depends_on:
      - postgres
      - redis
```
Update the Main n8n Service:
Add these environment variables to your main n8n block:
```yaml
      - EXECUTIONS_MODE=queue
      - QUEUE_BULL_REDIS_HOST=redis
      - QUEUE_BULL_REDIS_PORT=6379
```
This architecture allows you to scale horizontally. Need more power? Simply spin up more worker containers.
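Because the worker service has no fixed container_name, Compose can replicate it freely. For example:

```shell
# Run four worker containers alongside the main n8n instance
docker compose up -d --scale n8n-worker=4

# Verify the replicas are running
docker compose ps
```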
Need a system that scales to millions of rows? Our Bespoke Internal Tools & Custom App Development service builds infrastructure like this for you. We architect the backend and the scaling logic.
Consult with Thinkpeak Engineering
Step 5: The “Self-Healing” Backup Strategy
A robust system protects itself. Instead of relying on manual backups, we will use n8n to back up n8n. This recursive automation is a hallmark of the Thinkpeak.ai approach.
The Strategy:
- Create an n8n workflow that triggers every night at 3 AM.
- Use the n8n API to export all workflows. (Credential secrets live encrypted in Postgres, so back up the database and your encryption key as well.)
- Use the AWS S3 node to upload the JSON export. You can also use Google Drive or Dropbox.
- Push a notification to Slack/Teams confirming the backup.
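The nightly workflow can also be bootstrapped, or replaced, by a cron job that calls n8n’s built-in CLI export commands. A minimal sketch, assuming it runs from the thinkpeak-stack directory:

```shell
#!/bin/sh
# Export all workflows and (still-encrypted) credentials from the n8n container.
set -e
STAMP=$(date +%F)
mkdir -p ./backups

docker compose exec -T n8n n8n export:workflow --all --output="/tmp/workflows-$STAMP.json"
docker compose exec -T n8n n8n export:credentials --all --output="/tmp/credentials-$STAMP.json"

# Copy the exports out of the container onto the host
docker compose cp "n8n:/tmp/workflows-$STAMP.json" ./backups/
docker compose cp "n8n:/tmp/credentials-$STAMP.json" ./backups/
```

From there, any of the upload steps above (S3, Google Drive, Dropbox) can ship the ./backups directory off the server.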
This ensures your intellectual property is safe. Even if your server fails, your business logic remains secure in the cloud.
Conclusion
You have now deployed a stack that rivals the internal tooling of Fortune 500 companies. You have:
- Postgres for bulletproof data integrity.
- Traefik for automated SSL and a hardened network edge.
- Ollama for private AI reasoning.
- Queue Mode for horizontal scalability.
This is more than just “hosting software.” You have built the engine for your company’s proprietary software stack.
However, maintaining this stack requires vigilance. Updates can break things. Databases need maintenance. Security patches are critical.
Ready to automate but not ready to be a SysAdmin?
Thinkpeak.ai bridges the gap. Need speed? Browse our Automation Marketplace for plug-and-play workflows. Need scale? Hire our Bespoke Engineering team.
We build Digital Employees and custom low-code apps. We ensure every piece of software you own talks to each other intelligently.
Start Building Your Self-Driving Ecosystem Today
Frequently Asked Questions (FAQ)
Can I run n8n locally on my laptop instead of a VPS?
Yes, this Docker setup works on Windows, Mac, and Linux. However, your automations stop when you close your laptop. For business-critical operations like the Inbound Lead Qualifier, a 24/7 VPS is mandatory.
How do I update n8n without losing data?
We used Docker volumes (n8n_data), so your data persists across container recreation. To update, run docker compose pull to download the latest images, then docker compose up -d to recreate the containers on the new version.
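For extra safety, snapshot the data volume before updating. A sketch, assuming the default Compose project name thinkpeak-stack (verify the exact volume name with docker volume ls):

```shell
# Archive the n8n data volume into the current directory before updating
docker run --rm \
  -v thinkpeak-stack_n8n_data:/data:ro \
  -v "$(pwd)":/backup \
  alpine tar czf /backup/n8n_data_backup.tgz -C /data .
```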
Is it safe to expose n8n to the internet?
Yes, provided you enforce authentication (n8n’s built-in user management on modern versions) and serve it over SSL via Traefik. For extra security, you can configure Traefik to allow access only from your office IP address or VPN.
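As a sketch of that IP restriction (the CIDR below is a placeholder, and the middleware name office-only is arbitrary), you would attach a Traefik IP-allowlist middleware to the n8n router via labels on the n8n service:

```yaml
    labels:
      # Restrict the router to your office network; replace the placeholder CIDR
      - "traefik.http.middlewares.office-only.ipallowlist.sourcerange=203.0.113.0/24"
      - "traefik.http.routers.n8n.middlewares=office-only"
```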