Taming Node.js Background Jobs with BullMQ
By Amit Verma
Last winter we woke up to a stuck invoice queue because a cron job and a manual retry script fought over the same Redis connection. That morning pushed me to formalize how we run background jobs in Node.js.
BullMQ became the backbone because it pairs nicely with TypeScript and gives us visibility into every job lifecycle stage.
• Each queue gets its own scheduler and worker instances to avoid lock contention.
• Retries back off exponentially and emit metrics so dashboards tell us when payloads misbehave.
```ts
import { Queue, QueueScheduler, Worker } from "bullmq";

const connection = { host: process.env.REDIS_HOST, port: 6379 };

const invoiceQueue = new Queue("invoice", { connection });
// A QueueScheduler is required for delayed jobs and retries on BullMQ v1;
// newer major versions handle this inside the Worker and no longer need it.
new QueueScheduler("invoice", { connection });

export const enqueueInvoice = async (payload) => {
  await invoiceQueue.add("generate", payload, {
    attempts: 3,
    backoff: { type: "exponential", delay: 5000 },
  });
};

new Worker(
  "invoice",
  async (job) => {
    const { customerId } = job.data;
    // generateInvoice is our application-level invoice builder.
    return generateInvoice(customerId);
  },
  { connection }
);
```
We keep job payloads tiny and pass identifiers rather than blobs so failed jobs can be replayed without bloating Redis.
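The identifier-only payload pattern can be sketched as below. `Invoice`, `toJobPayload`, and `loadInvoice` are hypothetical application-side names for illustration, not part of BullMQ; the idea is simply that only an id ever lands in Redis, and the worker re-fetches the full record when the job runs (or is replayed).

```typescript
// Hypothetical application record; only its id ever enters the queue.
interface Invoice {
  id: string;
  customerId: string;
  lineItems: unknown[];
}

// Shrink whatever we have in hand down to an identifier before enqueueing,
// so a replayed job always re-reads current data instead of a stale blob.
export const toJobPayload = (invoice: Invoice) => ({ invoiceId: invoice.id });

// Worker-side processor: resolve the id back to a full record via an
// injected loader (in practice, a database lookup).
export const processInvoiceJob = async (
  data: { invoiceId: string },
  loadInvoice: (id: string) => Promise<Invoice>
): Promise<string> => {
  const invoice = await loadInvoice(data.invoiceId);
  return invoice.customerId;
};
```

Because the payload carries no state of its own, retrying a failed job days later behaves the same as the first attempt.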
Health checks ping the queue depth every minute; if the backlog spikes we alert and temporarily route traffic to a read-only mode.
That small amount of ceremony has saved us from the late-night panic that used to follow every seasonal traffic spike.