How to implement Job Queues in Nuxt / Nitro

While Nuxt is primarily known for its powerful full-stack capabilities, many developers overlook its potential for handling background jobs. Whether you need to process emails, generate reports or handle long-running tasks asynchronously, job queues can be a crucial part of your application architecture.

So, can you use job queues in Nuxt / Nitro? – Yes, you can!

In this guide, I'll walk you through setting up job queues in a Nuxt/Nitro application using BullMQ, one of the most popular and well-maintained queue libraries for Node.js.

That said, BullMQ is not the only option. Other great queue libraries include bee-queue, kue, resque, and more. In principle, implementing a job queue in Nuxt/Nitro works similarly across these libraries, so feel free to pick the one that best fits your needs.

Now, let's dive straight into the details!

Update: This article was updated on 29 Jul, 2026 with improved TypeScript support for useQueue and useWorker in the example code.

Setting up

This article assumes you know how to install dependencies (i.e. npm install) and how to run Docker images.

First, let's install BullMQ and ioredis into your Nuxt or Nitro project.

npm install bullmq ioredis

You will need to have Redis running somewhere. There are a lot of great options (which this article won't cover).

For development purposes, you can run the following docker run command to spin up a Valkey (Redis-compatible) instance.

docker run -it -p 6379:6379 valkey/valkey:7

You now have an instance of Valkey running at

redis://default@127.0.0.1:6379

You can now add it to your .env file like so:

REDIS_URL=redis://default@127.0.0.1:6379

Create a Queue composable

In your server/utils, create a new utils file called queue.ts:

server/utils/queue.ts
import { Queue, type QueueOptions } from "bullmq";
import { Redis } from "ioredis";

// we use a map to store the queue instances, so that we can reuse them
const queueMap = new Map<string, Queue>();

export const useQueue = <
  DataType = any,
  ResultType = any,
  NameType extends string = string,
>(
  name: string,
  opts?: Partial<Omit<QueueOptions, "connection">>,
) => {
  if (queueMap.has(name)) {
    return queueMap.get(name)! as Queue<DataType, ResultType, NameType>;
  }

  const { REDIS_URL } = process.env;

  if (!REDIS_URL) {
    throw new Error("env REDIS_URL is not defined");
  }

  const connection = new Redis(REDIS_URL);

  queueMap.set(name, new Queue<DataType, ResultType, NameType>(name, { ...opts, connection }));

  return queueMap.get(name) as Queue<DataType, ResultType, NameType>;
};

Create a Worker composable

In your server/utils, create a new utils file called worker.ts:

server/utils/worker.ts
import { type Processor, type WorkerOptions, Worker } from "bullmq";
import { Redis } from "ioredis";

// we use a map to store the worker instances, so that we can reuse them
const workerMap = new Map<string, Worker>();

export const useWorker = <
  DataType = any,
  ResultType = any,
  NameType extends string = string,
>(
  name: string,
  fn: string | URL | Processor<DataType, ResultType, NameType>,
  opts?: Partial<Omit<WorkerOptions, "connection">>,
) => {
  if (workerMap.has(name)) {
    return workerMap.get(name) as Worker<DataType, ResultType, NameType>;
  }

  const { REDIS_URL } = process.env;

  if (!REDIS_URL) {
    throw new Error("env REDIS_URL is not defined");
  }

  const connection = new Redis(REDIS_URL, {
  /**
     * It is better to set this to `null`
     * @see https://docs.bullmq.io/guide/connections#maxretriesperrequest
     */
    maxRetriesPerRequest: null,
  });

  workerMap.set(name, new Worker<DataType, ResultType, NameType>(name, fn, { ...opts, connection }));

  return workerMap.get(name) as Worker<DataType, ResultType, NameType>;
};

Usage

To create or add to a queue, you can now use the useQueue() composable that you just created.

Here's an example:

Example
const queue = useQueue("generate-report"); // "generate-report" is the queue's name

// create data to pass to worker
const data = { from: "2025-01", to: "2025-02" };

// add job name and data to queue
const job = await queue.add("sales", data);

Now, in your server/plugins, create a new file generate-report-worker.ts:

server/plugins/generate-report-worker.ts
export default defineNitroPlugin(() => {
  // for Nuxt, skip initialising worker on pre-render
  if (import.meta.prerender) return;

  useWorker(
    "generate-report", // use the same name as your queue
    async (job) => {
      const data = job.data; // { from: "2025-01", to: "2025-02" }
      const name = job.name; // "sales"

      // perform work code here
    },
  );
});

That's it! You now have a working Job Queue for your project.

Bonus tips

Set worker's concurrency

If you want to set the default concurrency for your worker instance:

Example
const worker = useWorker(
  "send-email",
  async (job) => {
    // process email here
  },
  { concurrency: 5 },
);

Learn more about concurrency here.

Delay a queue

You can delay some tasks to be executed later in time.

Example
const queue = useQueue("delayed-task");

await queue.add("do-this-later", someData, { delay: 1000 * 60 }); // 60 seconds
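Since delay is just a number of milliseconds, you can also schedule a job for a specific point in time by computing the delay from a Date. Here is a minimal sketch (delayUntil is a hypothetical helper, not part of BullMQ):

```typescript
// Hypothetical helper: milliseconds from now until `target`,
// clamped to 0 so past dates run immediately instead of producing
// a negative delay.
const delayUntil = (target: Date): number =>
  Math.max(0, target.getTime() - Date.now());

// e.g. run the job at 09:00 tomorrow:
// await queue.add("do-this-later", someData, {
//   delay: delayUntil(tomorrowAtNine),
// });
```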

Custom Job ID

You can also set your own custom job ID. By default, BullMQ generates job IDs automatically as an increasing counter. You can override this behaviour like so:

Example
const queue = useQueue("special-task-with-id");

await queue.add("special-order", someData, { jobId: someCustomId });

In your worker instance, you can access the jobId like so:

Example
useWorker("special-task-with-id", async (job) => {
  const jobId = job.id;

  // do something with `jobId`
});

Retrying failed jobs

As your queues process jobs, it is inevitable that some of them will fail over time. You can set up a backoff strategy easily like so:

Example
const queue = useQueue("critical-task");

await queue.add("retry-able", someData, {
  attempts: 5, // maximum attempts before it gives up
  backoff: {
    type: "exponential", // "exponential" or "fixed"
    delay: 1000, // delay in milliseconds
  },
});

There are two built-in backoff strategies:

  • exponential will retry after 2 ^ (attempts - 1) * delay milliseconds.
  • fixed will retry after delay milliseconds.
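To make the exponential formula concrete, here is a small sketch that reproduces it (exponentialBackoff is an illustrative helper, not part of BullMQ's API):

```typescript
// Delay before the next retry, per the formula 2 ^ (attempts - 1) * delay.
// `attemptsMade` is the number of attempts already made (1 for the first retry).
const exponentialBackoff = (attemptsMade: number, delay: number): number =>
  2 ** (attemptsMade - 1) * delay;

// With delay = 1000ms, retries happen after 1s, 2s, 4s, 8s, ...
```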

Learn more about backoff strategy here.

Use callbacks when a job is completed or failed

You can attach callbacks that run whenever a job is completed or has failed. For example, you might want to trigger other tasks elsewhere when a job completes, or send a push notification when a job fails.

Example
const worker = useWorker("some-job", async (job) => {
  // do job
});

worker.on("completed", async (job) => {
  // do something when completed
});

worker.on("failed", async (job, err) => {
  // do something when failed
});

Learn about Events here.

Dedicated worker instance

When you find that worker tasks are becoming compute-heavy (e.g. generating large reports), you can offload them to a dedicated Nitro app. Simply copy the useWorker() util over and set it up there!

Since queues persist in Redis, you can reuse these composables anywhere, as long as they are connected to the same Redis instance.

24 Feb 2025 nuxt, nitro, programming