
Quick Answer

Queue mode splits n8n into a main process (accepts webhooks) and one or more worker processes (execute them), connected by Redis. You need it when a single n8n process cannot keep up with incoming webhooks or when long-running workflows block fast ones. It adds operational complexity — Redis, multiple containers, worker health checks — so for webhook-only workloads, a hosted tool that handles scaling for you is often a simpler answer.

n8n Queue Mode Explained

Queue mode is n8n's answer to scaling past a single Node.js process. This guide covers what it is, when you actually need it, how to set it up, and when switching tools is the cheaper path.

Last updated: April 2026 · 11 min read

What Queue Mode Does

By default, n8n runs as one process. That process accepts webhooks, runs the workflow, and writes executions to the database. It works fine at low volume.

Queue mode splits this into two roles:

  • Main (webhook receiver): accepts the incoming webhook, persists a job in Redis, and returns 200.
  • Worker (executor): pulls jobs from Redis, runs the workflow, writes executions to Postgres.

You can run multiple workers in parallel. Redis coordinates which worker picks up which job.
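The split is the same n8n binary started in two modes, selected by an environment variable and a subcommand. A minimal sketch, assuming a Redis instance on localhost:

```shell
# Main role: receives webhooks, enqueues jobs in Redis, returns 200.
EXECUTIONS_MODE=queue QUEUE_BULL_REDIS_HOST=localhost n8n start

# Worker role: pulls jobs from Redis and executes the workflows.
EXECUTIONS_MODE=queue QUEUE_BULL_REDIS_HOST=localhost n8n worker
```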

Signs You Need Queue Mode

  • Webhook timeouts — sender sees 504 because a long workflow is blocking the main process
  • CPU pinned on the n8n container during busy periods
  • One slow workflow (big DB query, AI call) is preventing other fast workflows from running
  • You receive bursts of hundreds of webhooks per second
  • You want zero-downtime deploys of new n8n versions

Signs You Do Not Need Queue Mode

  • You are running fewer than 100 executions per hour
  • Your workflows complete in under 2 seconds each
  • The n8n container CPU sits below 20%
  • You have no bursty traffic pattern

For these workloads, regular mode plus a bigger VPS is cheaper than the ops cost of Redis + workers.

Setting Up Queue Mode (High Level)

The typical docker-compose setup looks like this:

services:
  redis:
    image: redis:7
    volumes: [redis_data:/data]

  postgres:
    image: postgres:16
    environment:
      POSTGRES_DB: n8n
      POSTGRES_USER: n8n
      POSTGRES_PASSWORD: <secret>

  n8n-main:
    image: n8nio/n8n
    depends_on: [redis, postgres]
    environment:
      EXECUTIONS_MODE: queue
      QUEUE_BULL_REDIS_HOST: redis
      DB_TYPE: postgresdb
      DB_POSTGRESDB_HOST: postgres
      DB_POSTGRESDB_USER: n8n
      DB_POSTGRESDB_PASSWORD: <secret>
      N8N_ENCRYPTION_KEY: <encryption-key>  # must be identical on main and workers
      WEBHOOK_URL: https://n8n.example.com
    ports: ['5678:5678']

  n8n-worker:
    image: n8nio/n8n
    command: worker
    depends_on: [redis, postgres]
    environment:
      EXECUTIONS_MODE: queue
      QUEUE_BULL_REDIS_HOST: redis
      DB_TYPE: postgresdb
      DB_POSTGRESDB_HOST: postgres
      DB_POSTGRESDB_USER: n8n
      DB_POSTGRESDB_PASSWORD: <secret>
      N8N_ENCRYPTION_KEY: <encryption-key>  # must be identical on main and workers

volumes:
  redis_data:
To scale, add more n8n-worker replicas. In a real deployment you would also add health checks, resource limits, log aggregation, a TLS-terminating reverse proxy in front of n8n-main, and authentication on Redis.
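With a compose file like the one above (no fixed `container_name` on the worker service), adding replicas is a single flag:

```shell
# Scale the worker service to 4 replicas:
docker compose up -d --scale n8n-worker=4

# Confirm all replicas are up:
docker compose ps n8n-worker
```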

Gotcha: if Redis goes down, webhooks silently pile up in the main process's memory until it crashes. Monitor Redis carefully.
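A basic way to watch the queue is to inspect the Bull lists in Redis directly. This sketch assumes n8n's default queue name (`jobs`) and key prefix (`bull`); verify the keys on your instance with `redis-cli keys 'bull:*'` first:

```shell
# Jobs waiting for a worker to pick them up:
redis-cli llen bull:jobs:wait

# Jobs currently being executed:
redis-cli llen bull:jobs:active
```

If the `wait` count grows steadily while `active` stays flat, your workers are the bottleneck.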

The Hidden Cost of Queue Mode

  • More infra: Redis + Postgres + multiple n8n containers = 3–5 services instead of 1
  • Version alignment: main and workers must run the same n8n version; rolling updates are tricky
  • Worker health: workers can die silently. You need liveness probes or the queue backs up.
  • Redis sizing: if you hold executions in Redis, memory spikes can OOM the box
  • Observability: tracking down "which worker failed this job" requires structured logging
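For the worker-health problem, recent n8n versions can expose a health endpoint on each worker, which a container liveness probe can poll. A sketch, assuming the worker runs with `QUEUE_HEALTH_CHECK_ACTIVE=true` in its environment and the default port:

```shell
# Fails (non-zero exit) if the worker is unhealthy or unreachable —
# suitable as a Docker HEALTHCHECK or Kubernetes liveness probe command:
curl -fsS http://localhost:5678/healthz
```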

When to Switch Instead of Scaling n8n

Queue mode is a solid answer for teams with dedicated platform engineering. For solo devs and small teams where the extra Redis instance is the last thing you wanted to debug at midnight, a hosted webhook automation tool handles scaling for you:

  • No Redis to manage
  • No worker fleet to monitor
  • No version-alignment dance during updates
  • Incoming webhooks are persisted before execution (no memory loss on crash)

Requex.me takes this approach: every incoming webhook is persisted to Postgres before the workflow runs, and execution is horizontally scaled under the hood. You do not configure any of it.

Diagnostic Checklist Before Enabling Queue Mode

  1. Is CPU actually pinned? Check docker stats.
  2. Are the slow workflows fixable? (Add caching, batch DB calls, move synchronous AI calls out of the hot path.)
  3. Can you vertically scale the box instead? Going from 4 GB to 16 GB of RAM can buy as much as 10× throughput without touching the architecture.
  4. Can you move slow workflows to an async pattern (the webhook replies fast and queues the work)?
  5. Is the bottleneck actually n8n, or Postgres? Check Postgres CPU and locks first.

Most teams who enable queue mode could have stayed on regular mode with one of the above fixes. Start there.
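The first and last checks take a minute each on a docker-compose deployment. A sketch, assuming the services are named as in the compose example earlier:

```shell
# Check 1: is the n8n container actually CPU-bound?
docker stats --no-stream

# Check 5: is Postgres the real bottleneck? List non-idle sessions,
# their state, and what they are waiting on:
docker compose exec postgres psql -U n8n -c \
  "SELECT pid, state, wait_event_type, query
   FROM pg_stat_activity WHERE state <> 'idle';"
```

Rows stuck in `active` with a `Lock` wait event point at Postgres contention, not n8n.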

FAQ

How many workers do I need?

Start with 2. Watch the queue length — if it grows, add more. Each worker runs a limited number of concurrent executions (10 by default in recent versions, adjustable via the worker's --concurrency flag or N8N_CONCURRENCY_PRODUCTION_LIMIT).
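If a single worker has CPU headroom, raising its concurrency is often cheaper than adding replicas. A sketch, assuming a recent n8n version that supports the flag:

```shell
# Run up to 10 executions concurrently in this worker process;
# in docker-compose this corresponds to: command: worker --concurrency=10
n8n worker --concurrency=10
```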

Can I use queue mode without Redis?

No. Redis is the queue backend. If you do not want to manage Redis, stay on regular mode or pick a managed service.

Does queue mode affect the webhook response latency?

Yes, positively. The main process returns 200 as soon as the job is in Redis, even if the workflow takes 30 seconds to run. Senders see consistent fast responses.

Can I use the Respond to Webhook node in queue mode?

Yes, but it is the one operation that loses the latency benefit — the main process waits for the worker to finish before responding.
