{"id":16649,"date":"2025-12-14T10:38:19","date_gmt":"2025-12-14T10:38:19","guid":{"rendered":"https:\/\/thinkpeak.ai\/n8n-docker-setup-guide\/"},"modified":"2025-12-14T10:38:19","modified_gmt":"2025-12-14T10:38:19","slug":"n8n-docker-kurulum-kilavuzu","status":"publish","type":"post","link":"https:\/\/thinkpeak.ai\/tr\/n8n-docker-kurulum-kilavuzu\/","title":{"rendered":"n8n Docker Kurulum K\u0131lavuzu: Kendi Kendine Bar\u0131nd\u0131rma ve \u00d6l\u00e7eklendirme"},"content":{"rendered":"<p>In the era of <b id=\"ai-first-business\">AI-first business<\/b>, relying on restrictive SaaS automation limits is a strategic vulnerability. When you build on rented land, you face per-execution pricing. You deal with API rate limits. You also risk third-party data snooping.<\/p>\n<p>You aren&#8217;t building an ecosystem. You are building a dependency.<\/p>\n<p>At <b id=\"thinkpeak-ai\">Thinkpeak.ai<\/b>, we believe in transforming static operations. We want to build dynamic, <b id=\"self-driving-ecosystems\">self-driving ecosystems<\/b>. That starts with owning the infrastructure that powers your logic.<\/p>\n<p><b id=\"self-hosting-n8n\">Self-hosting n8n<\/b> via Docker is the &#8220;hello world&#8221; of this transformation. It offers significant advantages:<\/p>\n<ul>\n<li><b id=\"zero-execution-limits\">Zero Execution Limits<\/b>: Run 10 or 10 million workflows. The cost remains the same.<\/li>\n<li><b id=\"total-data-privacy\">Total Data Privacy<\/b>: Your customer PII never leaves your Virtual Private Server (VPS).<\/li>\n<li><b id=\"ai-sovereignty\">AI Sovereignty<\/b>: Run local Large Language Models (LLMs) directly alongside your automation engine. This avoids OpenAI\u2019s token costs.<\/li>\n<\/ul>\n<p>This guide is not a quick tutorial. This is an enterprise-grade blueprint. 
We will deploy a secure, scalable, and AI-ready n8n stack using <b id=\"docker-compose\">Docker Compose<\/b>.<\/p>\n<h2>Prerequisites: The Hardware for Your Digital Workforce<\/h2>\n<p>Before we touch the config files, you need the right infrastructure. It must handle <b id=\"bespoke-internal-tools\">Bespoke Internal Tools<\/b> and heavy data lifting.<\/p>\n<h3>1. VPS Specifications<\/h3>\n<p>A standard $5\/month VPS is insufficient for 2026. You need more power to run n8n alongside a database and a local AI agent. We recommend the following specs:<\/p>\n<ul>\n<li><b id=\"cpu-requirements\">CPU<\/b>: 2 vCPUs minimum. Use 4+ if running local LLMs.<\/li>\n<li><b id=\"ram-requirements\">RAM<\/b>: 4GB minimum. 8GB+ is recommended for AI workloads.<\/li>\n<li><b id=\"storage-requirements\">Storage<\/b>: 40GB+ NVMe SSD. Logs and Docker overlays grow fast.<\/li>\n<\/ul>\n<h3>2. The Software Environment<\/h3>\n<p>Ensure your server runs a fresh install of <b id=\"ubuntu-lts\">Ubuntu 24.04 LTS<\/b>. You will need:<\/p>\n<ul>\n<li><b id=\"docker-engine\">Docker Engine<\/b>: The runtime that powers the containers.<\/li>\n<li><b id=\"docker-compose-v2\">Docker Compose (V2)<\/b>: For orchestrating the multi-container stack.<\/li>\n<li><b id=\"domain-name\">A Domain Name<\/b>: You cannot get valid SSL certificates for a bare IP address. Point a subdomain to your VPS IP.<\/li>\n<\/ul>\n<h2>Step 1: The Enterprise Architecture (Postgres over SQLite)<\/h2>\n<p>Most tutorials suggest using the default SQLite database. <b id=\"do-not-use-sqlite\">Do not do this<\/b>. SQLite locks under heavy concurrent loads.<\/p>\n<p>A &#8220;Self-Driving Ecosystem&#8221; generates heavy loads. 
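<\/p>\n<p>Before building, it is worth a quick check that the server actually meets the hardware specs above. A minimal sketch using standard Linux tools:<\/p>\n<pre><code class=\"language-bash\"># Core count and total RAM, to compare against the 2+ vCPU \/ 4GB+ recommendation\nnproc\ngrep MemTotal \/proc\/meminfo\n\n# Free disk space for Docker images, volumes, and logs\ndf -h \/<\/code><\/pre>\n<p>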
We will use <b id=\"postgresql\">PostgreSQL<\/b> from day one.<\/p>\n<p>Create your project directory:<\/p>\n<pre><code class=\"language-bash\">mkdir thinkpeak-stack\ncd thinkpeak-stack\ntouch docker-compose.yml .env<\/code><\/pre>\n<h3>The Base docker-compose.yml<\/h3>\n<p>Here is the <b id=\"production-ready-configuration\">production-ready configuration<\/b>. We separate the database, the application, and the reverse proxy (Traefik). This ensures automatic SSL management.<\/p>\n<pre><code class=\"language-yaml\">services:\n  traefik:\n    image: traefik:v3.0\n    restart: always\n    command:\n      - \"--providers.docker=true\"\n      - \"--providers.docker.exposedbydefault=false\"\n      - \"--entrypoints.web.address=:80\"\n      - \"--entrypoints.websecure.address=:443\"\n      - \"--certificatesresolvers.myresolver.acme.tlschallenge=true\"\n      - \"--certificatesresolvers.myresolver.acme.email=${SSL_EMAIL}\"\n      - \"--certificatesresolvers.myresolver.acme.storage=\/letsencrypt\/acme.json\"\n    ports:\n      - \"80:80\"\n      - \"443:443\"\n    volumes:\n      - \"\/var\/run\/docker.sock:\/var\/run\/docker.sock:ro\"\n      - \".\/letsencrypt:\/letsencrypt\"\n    networks:\n      - n8n-network\n\n  postgres:\n    image: postgres:16-alpine\n    restart: always\n    environment:\n      - POSTGRES_USER=${DB_USER}\n      - POSTGRES_PASSWORD=${DB_PASSWORD}\n      - POSTGRES_DB=${DB_NAME}\n    volumes:\n      - postgres_data:\/var\/lib\/postgresql\/data\n    networks:\n      - n8n-network\n    healthcheck:\n      test: [\"CMD-SHELL\", \"pg_isready -U ${DB_USER} -d ${DB_NAME}\"]\n      interval: 5s\n      timeout: 5s\n      retries: 5\n\n  n8n:\n    image: n8nio\/n8n:latest\n    restart: always\n    environment:\n      - DB_TYPE=postgresdb\n      - DB_POSTGRESDB_HOST=postgres\n      - DB_POSTGRESDB_PORT=5432\n      - DB_POSTGRESDB_DATABASE=${DB_NAME}\n      - DB_POSTGRESDB_USER=${DB_USER}\n      - DB_POSTGRESDB_PASSWORD=${DB_PASSWORD}\n      - N8N_HOST=${SUBDOMAIN}\n      - N8N_PORT=5678\n      - N8N_PROTOCOL=https\n      - WEBHOOK_URL=https:\/\/${SUBDOMAIN}\/\n      - N8N_ENCRYPTION_KEY=${ENCRYPTION_KEY}\n    ports:\n      - \"127.0.0.1:5678:5678\"\n    volumes:\n      - n8n_data:\/home\/node\/.n8n\n    networks:\n      - n8n-network\n    depends_on:\n      postgres:\n        condition: service_healthy\n    labels:\n      - \"traefik.enable=true\"\n      - \"traefik.http.routers.n8n.rule=Host(`${SUBDOMAIN}`)\"\n      - \"traefik.http.routers.n8n.entrypoints=websecure\"\n      - \"traefik.http.routers.n8n.tls.certresolver=myresolver\"\n\nvolumes:\n  n8n_data:\n  postgres_data:\n\nnetworks:\n  n8n-network:\n    driver: bridge<\/code><\/pre>\n<p><i>Note: Older guides set <code>N8N_BASIC_AUTH_*<\/code> environment variables here. Those were removed in n8n 1.0; current versions prompt you to create an owner account in the UI on first launch. The obsolete top-level <code>version:<\/code> key is also unnecessary with Compose V2.<\/i><\/p>\n<h2>Step 2: Security &#038; Environment Variables<\/h2>\n<p>The <code>.env<\/code> file is your control center. It keeps your credentials out of your repository. It also allows you to <b id=\"rotate-keys\">rotate keys<\/b> easily.<\/p>\n<p><b>Your .env File:<\/b><\/p>\n<pre><code class=\"language-bash\"># Domain Configuration\nSUBDOMAIN=n8n.yourcompany.com\nSSL_EMAIL=admin@yourcompany.com\n\n# Database Secrets\nDB_USER=n8n_user\nDB_PASSWORD=REPLACE_WITH_SECURE_PASSWORD\nDB_NAME=n8n_db\n\n# n8n Security\nENCRYPTION_KEY=REPLACE_WITH_RANDOM_STRING_32_CHARS<\/code><\/pre>\n<p><b>Critical Security Warning:<\/b> The <b id=\"n8n-encryption-key\">N8N_ENCRYPTION_KEY<\/b> encrypts your credentials in the database. This includes API keys for Slack or Google Sheets. If you lose this key, you lose access to all your connected accounts. <b id=\"back-up-immediately\">Back this up immediately.<\/b><\/p>\n<p>Does configuring reverse proxies feel like a distraction? 
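<\/p>\n<p>A note on the <code>REPLACE_WITH<\/code> placeholders: do not invent these values by hand. A quick sketch using <code>openssl<\/code> (any cryptographically secure generator works just as well):<\/p>\n<pre><code class=\"language-bash\"># Strong database password\nopenssl rand -base64 24\n\n# 32-character hex value for N8N_ENCRYPTION_KEY\nopenssl rand -hex 16<\/code><\/pre>\n<p>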
<b id=\"thinkpeak-ai-marketplace\">Thinkpeak.ai: The Automation Marketplace<\/b> offers pre-architected workflows. These templates skip the DevOps setup.<\/p>\n<p><a href=\"https:\/\/thinkpeak.ai\">Explore the Marketplace<\/a><\/p>\n<h2>Step 3: Integrating Local AI (Ollama)<\/h2>\n<p>This is where our <b id=\"ai-first-philosophy\">AI-first philosophy<\/b> comes alive. By adding <b id=\"ollama\">Ollama<\/b> to your stack, you can run &#8220;Digital Employees&#8221;: autonomous agents capable of reasoning without sending data to OpenAI.<\/p>\n<p>Add this service to your <code>docker-compose.yml<\/code> under <code>services<\/code>:<\/p>\n<pre><code class=\"language-yaml\">  ollama:\n    image: ollama\/ollama:latest\n    container_name: ollama\n    volumes:\n      - ollama_data:\/root\/.ollama\n    networks:\n      - n8n-network\n    deploy:\n      resources:\n        reservations:\n          devices:\n            - driver: nvidia\n              count: 1\n              capabilities: [gpu]<\/code><\/pre>\n<p>Also declare the new volume under the top-level <code>volumes:<\/code> key, next to <code>n8n_data<\/code> and <code>postgres_data<\/code>; Compose rejects named volumes that are not declared there.<\/p>\n<p><i>Note: Remove the deploy block if your VPS does not have a GPU. It will run on CPU, but slower.<\/i><\/p>\n<h3>Connecting n8n to Ollama<\/h3>\n<ol>\n<li>Pull a model into the container first: <code>docker exec ollama ollama pull llama3<\/code>. Without this step, the API returns a model-not-found error.<\/li>\n<li>In n8n, use the <b id=\"http-request-node\">HTTP Request Node<\/b> with the method set to POST.<\/li>\n<li><b>URL:<\/b> <code>http:\/\/ollama:11434\/api\/generate<\/code>. We use the service name <code>ollama<\/code> because they share the Docker network.<\/li>\n<li><b>JSON Payload:<\/b><\/li>\n<\/ol>\n<pre><code class=\"language-json\">{\n  \"model\": \"llama3\",\n  \"prompt\": \"Summarize this email for the sales team: {{ $json.body }}\",\n  \"stream\": false\n}<\/code><\/pre>\n<p>You now have a private, free-to-use intelligence layer inside your automation stack.<\/p>\n<h2>Step 4: Scaling for Enterprise (Queue Mode)<\/h2>\n<p>Your &#8220;self-driving ecosystem&#8221; will grow. A single n8n instance might choke on heavy loads. 
This happens when processing thousands of leads from a <b id=\"cold-outreach-hyper-personalizer\">Cold Outreach Hyper-Personalizer<\/b> campaign.<\/p>\n<p>To scale, we switch to <b id=\"queue-mode\">Queue Mode<\/b>. This involves adding <b id=\"redis\">Redis<\/b> to manage job queues. We also deploy separate <b id=\"worker-containers\">Worker containers<\/b>.<\/p>\n<p><b>Add Redis to docker-compose.yml:<\/b><\/p>\n<pre><code class=\"language-yaml\">  redis:\n    image: redis:alpine\n    restart: always\n    networks:\n      - n8n-network<\/code><\/pre>\n<p><b>Add the Worker Service:<\/b><\/p>\n<pre><code class=\"language-yaml\">  n8n-worker:\n    image: n8nio\/n8n:latest\n    command: worker\n    restart: always\n    environment:\n      - DB_TYPE=postgresdb\n      - DB_POSTGRESDB_HOST=postgres\n      - DB_POSTGRESDB_PORT=5432\n      - DB_POSTGRESDB_DATABASE=${DB_NAME}\n      - DB_POSTGRESDB_USER=${DB_USER}\n      - DB_POSTGRESDB_PASSWORD=${DB_PASSWORD}\n      - EXECUTIONS_MODE=queue\n      - QUEUE_BULL_REDIS_HOST=redis\n      - QUEUE_BULL_REDIS_PORT=6379\n      - N8N_ENCRYPTION_KEY=${ENCRYPTION_KEY}\n    networks:\n      - n8n-network\n    depends_on:\n      - postgres\n      - redis<\/code><\/pre>\n<p><b>Update the Main n8n Service:<\/b><\/p>\n<p>Add these environment variables to your main <code>n8n<\/code> block:<\/p>\n<pre><code class=\"language-yaml\">      - EXECUTIONS_MODE=queue\n      - QUEUE_BULL_REDIS_HOST=redis\n      - QUEUE_BULL_REDIS_PORT=6379<\/code><\/pre>\n<p>This architecture allows you to scale horizontally. Need more power? Simply spin up more worker containers.<\/p>\n<p>Need a system that scales to millions of rows? Our <b id=\"bespoke-internal-tools-service\">Bespoke Internal Tools &#038; Custom App Development<\/b> service builds infrastructure like this for you. 
We architect the backend and the scaling logic.<\/p>\n<p><a href=\"https:\/\/thinkpeak.ai\">Consult with Thinkpeak Engineering<\/a><\/p>\n<h2>Step 5: The &#8220;Self-Healing&#8221; Backup Strategy<\/h2>\n<p>A robust system protects itself. Instead of relying on manual backups, we will use n8n to back up n8n. This <b id=\"recursive-automation\">recursive automation<\/b> is a hallmark of the Thinkpeak.ai approach.<\/p>\n<p><b>The Strategy:<\/b><\/p>\n<ol>\n<li>Create an n8n workflow that triggers every night at 3 AM.<\/li>\n<li>Use the <b id=\"n8n-public-api\">n8n Public API<\/b> node to export all workflows as JSON. (The public API does not expose credentials; back those up separately with the CLI command <code>n8n export:credentials --all<\/code>, which keeps them encrypted.)<\/li>\n<li>Use the <b id=\"aws-s3-node\">AWS S3<\/b> node to upload the JSON export. You can also use Google Drive or Dropbox.<\/li>\n<li>Push a notification to Slack\/Teams confirming the backup.<\/li>\n<\/ol>\n<p>This ensures your intellectual property is safe. Even if your server fails, your business logic remains secure in the cloud.<\/p>\n<h2>Conclusion<\/h2>\n<p>You have now deployed a stack that rivals the internal tooling of Fortune 500 companies. You have:<\/p>\n<ul>\n<li><b id=\"postgres-data-integrity\">Postgres<\/b> for bulletproof data integrity.<\/li>\n<li><b id=\"traefik-security\">Traefik<\/b> for automatic TLS and a locked-down network edge.<\/li>\n<li><b id=\"ollama-ai-reasoning\">Ollama<\/b> for private AI reasoning.<\/li>\n<li><b id=\"queue-mode-scalability\">Queue Mode<\/b> for horizontal scalability.<\/li>\n<\/ul>\n<p>This is more than just &#8220;hosting software.&#8221; You have built the engine for your company&#8217;s proprietary software stack.<\/p>\n<p>However, maintaining this stack requires vigilance. Updates can break things. Databases need maintenance. Security patches are critical.<\/p>\n<p><b>Ready to automate but not ready to be a SysAdmin?<\/b><\/p>\n<p><b id=\"thinkpeak-ai-services\">Thinkpeak.ai<\/b> bridges the gap. Need speed? Browse our Automation Marketplace for plug-and-play workflows. Need scale? 
Hire our Bespoke Engineering team.<\/p>\n<p>We build <b id=\"digital-employees\">Digital Employees<\/b> and custom low-code apps. We ensure every piece of software you own talks to the others intelligently.<\/p>\n<p><a href=\"https:\/\/thinkpeak.ai\">Start Building Your Self-Driving Ecosystem Today<\/a><\/p>\n<h2>Frequently Asked Questions (FAQ)<\/h2>\n<h3>Can I run n8n locally on my laptop instead of a VPS?<\/h3>\n<p>Yes, the Docker setup works on Windows, Mac, and Linux. However, your automations stop when you close your laptop. For business-critical operations like the <b id=\"inbound-lead-qualifier\">Inbound Lead Qualifier<\/b>, a 24\/7 VPS is mandatory.<\/p>\n<h3>How do I update n8n without losing data?<\/h3>\n<p>We used <b id=\"docker-volumes\">Docker Volumes<\/b> (<code>n8n_data<\/code>), so your data is persistent. To update, run <code>docker compose pull<\/code> to download the latest images. Then run <code>docker compose up -d<\/code> to recreate the containers on the new version.<\/p>\n<h3>Is it safe to expose n8n to the internet?<\/h3>\n<p>Yes, provided you keep <b id=\"basic-auth\">authentication<\/b> in front of it (n8n&#8217;s built-in user accounts) and terminate SSL via Traefik. 
For extra security, you can configure Traefik to only allow access from your office IP address or VPN.<\/p>\n<h2>Resources<\/h2>\n<ul>\n<li><a href=\"https:\/\/docs.n8n.io\/hosting\/installation\/server-setups\/docker-compose\/\" rel=\"nofollow noopener\" target=\"_blank\">n8n Docker Compose Setup Guide<\/a><\/li>\n<li><a href=\"https:\/\/mikeholownych.com\/blog\/n8n-docker-setup-guide-production\/\" rel=\"nofollow noopener\" target=\"_blank\">n8n on Docker: Complete Setup Guide with Docker Compose<\/a><\/li>\n<li><a href=\"https:\/\/www.codecademy.com\/article\/run-n8n-with-docker\" rel=\"nofollow noopener\" target=\"_blank\">How to Run n8n with Docker (Beginner&#8217;s Guide)<\/a><\/li>\n<li><a href=\"https:\/\/www.hostinger.com\/tutorials\/how-to-self-host-n8n-with-docker\" rel=\"nofollow noopener\" target=\"_blank\">How to Host n8n with Docker<\/a><\/li>\n<li><a href=\"https:\/\/phoenixnap.com\/kb\/install-n8n-docker\" rel=\"nofollow noopener\" target=\"_blank\">How to Install n8n on Docker<\/a><\/li>\n<\/ul>\n","protected":false},"excerpt":{"rendered":"<p>Postgres, Traefik, Redis ve yerel yapay zeka ile g\u00fcvenli, \u00f6l\u00e7eklenebilir kendi kendine bar\u0131nd\u0131rma i\u00e7in ad\u0131m ad\u0131m n8n Docker kurulumu - kurumsal kullan\u0131ma 
haz\u0131r.<\/p>","protected":false},"author":2,"featured_media":16648,"comment_status":"open","ping_status":"open","sticky":false,"template":"","format":"standard","meta":{"_monsterinsights_skip_tracking":false,"_monsterinsights_sitenote_active":false,"_monsterinsights_sitenote_note":"","_monsterinsights_sitenote_category":0,"footnotes":""},"categories":[103],"tags":[],"class_list":["post-16649","post","type-post","status-publish","format-standard","has-post-thumbnail","hentry","category-business-process-automation"],"_links":{"self":[{"href":"https:\/\/thinkpeak.ai\/tr\/wp-json\/wp\/v2\/posts\/16649","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/thinkpeak.ai\/tr\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/thinkpeak.ai\/tr\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/thinkpeak.ai\/tr\/wp-json\/wp\/v2\/users\/2"}],"replies":[{"embeddable":true,"href":"https:\/\/thinkpeak.ai\/tr\/wp-json\/wp\/v2\/comments?post=16649"}],"version-history":[{"count":0,"href":"https:\/\/thinkpeak.ai\/tr\/wp-json\/wp\/v2\/posts\/16649\/revisions"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/thinkpeak.ai\/tr\/wp-json\/wp\/v2\/media\/16648"}],"wp:attachment":[{"href":"https:\/\/thinkpeak.ai\/tr\/wp-json\/wp\/v2\/media?parent=16649"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/thinkpeak.ai\/tr\/wp-json\/wp\/v2\/categories?post=16649"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/thinkpeak.ai\/tr\/wp-json\/wp\/v2\/tags?post=16649"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}