Using Redis with your JavaScript application on Sevalla enhances performance by enabling fast in-memory data access, supports real-time features such as notifications or chat, and facilitates reliable background job processing, helping your app scale efficiently and handle high traffic. This guide covers connecting with ioredis, caching, real-time features, and background job processing with BullMQ.

Installation

Install the required dependencies:
npm install ioredis dotenv
npm install -D @types/ioredis
  • ioredis: High-performance Redis client for Node.js with full TypeScript support.
  • dotenv: Load environment variables from .env file.
  • @types/ioredis: TypeScript definitions for ioredis (usually not needed, since recent ioredis versions ship their own types).

Environment variables with dotenv

Add Redis configuration to your .env file:
# Redis Configuration
REDIS_HOST=localhost
REDIS_PORT=6379
REDIS_PASSWORD=yourpassword
REDIS_DB=0

# Or use Redis URL (alternative)
REDIS_URL=redis://:password@localhost:6379/0
Load environment variables at the top of your entry file:
// src/index.ts
import "dotenv/config";

// ... rest of your code

Redis connection setup

Create a Redis client configuration:
// src/config/redis.ts
import Redis from "ioredis";

// Option 1: Using individual parameters
const redis = new Redis({
  host: process.env.REDIS_HOST || "localhost",
  port: Number(process.env.REDIS_PORT) || 6379,
  password: process.env.REDIS_PASSWORD,
  db: Number(process.env.REDIS_DB) || 0,
  retryStrategy: (times) => {
    const delay = Math.min(times * 50, 2000);
    return delay;
  },
  maxRetriesPerRequest: 3,
});

// Option 2: Using Redis URL
const redisFromUrl = new Redis(
  process.env.REDIS_URL || "redis://localhost:6379"
);

// Handle connection events
redis.on("connect", () => {
  console.log("Redis client connected");
});

redis.on("error", (err) => {
  console.error("Redis connection error:", err);
});

redis.on("ready", () => {
  console.log("Redis client ready");
});

redis.on("close", () => {
  console.log("Redis connection closed");
});

export default redis;

Basic usage examples

Simple key-value operations

import redis from "@src/config/redis";

// Set a value, optionally with a TTL (in seconds)
export const setValue = async (key: string, value: unknown, ttl?: number) => {
  if (ttl) {
    await redis.setex(key, ttl, JSON.stringify(value));
  } else {
    await redis.set(key, JSON.stringify(value));
  }
};

// Get a value
export const getValue = async (key: string) => {
  const value = await redis.get(key);
  return value ? JSON.parse(value) : null;
};

// Delete a value
export const deleteValue = async (key: string) => {
  await redis.del(key);
};

Caching database queries

import redis from "@src/config/redis";
import pool from "@src/config/database";

// Read-through cache: check Redis first, fall back to the database
export const getUser = async (id: string) => {
  const cacheKey = `user:${id}`;

  // Try to get from cache first
  const cached = await redis.get(cacheKey);
  if (cached) {
    console.log("Cache hit");
    return { source: "cache", user: JSON.parse(cached) };
  }

  // If not in cache, query the database
  console.log("Cache miss");
  const result = await pool.query("SELECT * FROM users WHERE id = $1", [id]);
  const user = result.rows[0];

  // Cache the result for 5 minutes
  await redis.setex(cacheKey, 300, JSON.stringify(user));

  return { source: "database", user };
};

// Invalidate the cache when the user is updated
export const updateUser = async (id: string, name: string, email: string) => {
  const result = await pool.query(
    "UPDATE users SET name = $1, email = $2 WHERE id = $3 RETURNING *",
    [name, email, id]
  );

  // Invalidate the stale cache entry
  await redis.del(`user:${id}`);

  return result.rows[0];
};

Pub/Sub pattern

// src/services/pubsub.ts
import Redis from "ioredis";

// Create separate Redis clients for pub/sub
const publisher = new Redis({
  host: process.env.REDIS_HOST || "localhost",
  port: Number(process.env.REDIS_PORT) || 6379,
  password: process.env.REDIS_PASSWORD,
});

const subscriber = new Redis({
  host: process.env.REDIS_HOST || "localhost",
  port: Number(process.env.REDIS_PORT) || 6379,
  password: process.env.REDIS_PASSWORD,
});

// Subscribe to a channel
subscriber.subscribe("notifications", (err, count) => {
  if (err) {
    console.error("Failed to subscribe:", err);
  } else {
    console.log(`Subscribed to ${count} channel(s)`);
  }
});

// Handle incoming messages
subscriber.on("message", (channel, message) => {
  console.log(`Received message from ${channel}:`, message);
  // Process the message
});

// Publish a message
export const publishNotification = async (message: string) => {
  await publisher.publish("notifications", message);
};

export { publisher, subscriber };

Rate limiting

// src/plugins/rateLimit.ts
import { FastifyInstance } from "fastify";
import redis from "@src/config/redis";

interface RateLimitOptions {
  max: number;
  window: number; // seconds
}

const rateLimitPlugin = async (
  server: FastifyInstance,
  opts: RateLimitOptions
) => {
  server.addHook("onRequest", async (request, reply) => {
    const ip = request.ip;
    const key = `rate_limit:${ip}`;

    // Count this request; the first hit creates the key and starts the window
    const current = await redis.incr(key);

    if (current === 1) {
      await redis.expire(key, opts.window);
    }

    if (current > opts.max) {
      const ttl = await redis.ttl(key);
      return reply.status(429).send({
        error: "Too many requests",
        retryAfter: ttl,
      });
    }
  });
};

export default rateLimitPlugin;
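
To use the limiter, register the plugin on your server instance. A minimal usage sketch, assuming a Fastify server (the hook and reply APIs above follow Fastify's conventions) and illustrative limits:
// src/index.ts (excerpt)
import Fastify from "fastify";
import rateLimitPlugin from "@src/plugins/rateLimit";

const server = Fastify();

// Example limits: at most 100 requests per IP in any 60-second window
await server.register(rateLimitPlugin, { max: 100, window: 60 });

await server.listen({ port: 3000 });
Note that, as written, the hook only applies inside the plugin's own encapsulation context; wrap the plugin with fastify-plugin if you want it to cover routes registered elsewhere.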

Advanced Redis operations

Hash operations

// Set multiple fields in a hash
await redis.hset("user:1000", "name", "John Doe", "email", "john@example.com");

// Get a single field
const name = await redis.hget("user:1000", "name");

// Get all fields
const user = await redis.hgetall("user:1000");

// Increment a field
await redis.hincrby("user:1000", "visits", 1);

List operations

// Push to list
await redis.lpush("jobs", JSON.stringify({ id: 1, task: "send-email" }));

// Pop from list (blocking)
const job = await redis.brpop("jobs", 0); // Wait indefinitely

// Get list length
const length = await redis.llen("jobs");

Set operations

// Add members to set
await redis.sadd("online_users", "user1", "user2", "user3");

// Check if member exists
const isMember = await redis.sismember("online_users", "user1");

// Get all members
const members = await redis.smembers("online_users");

// Remove member
await redis.srem("online_users", "user1");

Sorted set operations

// Add members with scores
await redis.zadd("leaderboard", 100, "player1", 200, "player2", 150, "player3");

// Get top players (highest scores)
const topPlayers = await redis.zrevrange("leaderboard", 0, 9, "WITHSCORES");

// Get player rank
const rank = await redis.zrevrank("leaderboard", "player1");

// Increment score
await redis.zincrby("leaderboard", 10, "player1");

Job queues with BullMQ

BullMQ is a robust, Redis-based job queue for Node.js. It is ideal for offloading heavy tasks (like sending emails, video processing, or generating reports) to background processes.

Installation

npm install bullmq

Shared connection configuration

BullMQ manages its own connections (one for the queue, one for the worker, and one for blocking commands). It is best to share the connection options rather than a single client instance.
// src/config/queue-config.ts
import { ConnectionOptions } from 'bullmq';
import dotenv from 'dotenv';

dotenv.config();

export const connectionOptions: ConnectionOptions = {
  host: process.env.REDIS_HOST || 'localhost',
  port: Number(process.env.REDIS_PORT) || 6379,
  password: process.env.REDIS_PASSWORD,
  db: Number(process.env.REDIS_DB) || 0,
};

1. Defining a queue (Producer)

This code typically runs in your web server API (e.g., when a user signs up).
// src/queues/emailQueue.ts
import { Queue } from 'bullmq';
import { connectionOptions } from '../config/queue-config';

// Create a new queue instance
export const emailQueue = new Queue('email-sending', {
  connection: connectionOptions
});

// Helper to add jobs
export const addEmailJob = async (email: string, subject: string) => {
  await emailQueue.add('send-welcome', {
    email,
    subject,
    timestamp: new Date()
  }, {
    attempts: 3,           // Retry 3 times on failure
    backoff: {
      type: 'exponential', // Wait longer between retries
      delay: 1000,
    },
    removeOnComplete: true // Auto-remove successful jobs
  });
};
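
For example, a signup handler can enqueue the job after creating the user; a minimal sketch (the handler shown is hypothetical):
// Hypothetical signup handler that enqueues a welcome email
import { addEmailJob } from "../queues/emailQueue";

export const handleSignup = async (email: string) => {
  // ... create the user record ...

  // Enqueue the welcome email; the worker sends it in the background
  await addEmailJob(email, "Welcome!");
};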

2. Processing jobs (Worker)

The worker processes jobs from the queue. In production, this often runs as a separate service or process.
// src/workers/emailWorker.ts
import { Worker, Job } from 'bullmq';
import { connectionOptions } from '../config/queue-config';

// Define the processor
const worker = new Worker('email-sending', async (job: Job) => {
  console.log(`Processing job ${job.id} for ${job.data.email}`);

  // Simulate heavy task (e.g., API call to SendGrid/SES)
  await new Promise(resolve => setTimeout(resolve, 2000));
  
  if (Math.random() < 0.1) throw new Error("Random mail server failure!");

  return { sent: true, messageId: '12345' };
}, {
  connection: connectionOptions,
  concurrency: 5 // Process 5 jobs at the same time
});

// Event listeners for logging
worker.on('completed', (job) => {
  console.log(`Job ${job.id} completed!`);
});

worker.on('failed', (job, err) => {
  console.error(`Job ${job.id} failed: ${err.message}`);
});

export default worker;
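
To run the worker as its own process, a small entry point that imports it is enough; a minimal sketch (the file path is an assumption):
// src/workers/index.ts (hypothetical entry point for the worker process)
import "dotenv/config";
import worker from "./emailWorker";

console.log("Email worker started");

// Let in-flight jobs finish before the process exits
process.on("SIGTERM", async () => {
  await worker.close();
  process.exit(0);
});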

3. Delayed jobs

You can schedule jobs to run in the future using the delay option. BullMQ keeps delayed jobs in a dedicated sorted set and moves them to the wait list once the delay has elapsed.
// Schedule a follow-up email 24 hours from now
await emailQueue.add('send-followup', {
  userId: '123'
}, {
  delay: 24 * 60 * 60 * 1000 // 24 hours in milliseconds
});

Sevalla Redis service

To use Redis on Sevalla:
  1. Create a Redis service in your Sevalla dashboard
  2. Copy the connection credentials from the service details
  3. Add Redis environment variables to your application:
    REDIS_HOST=your-redis-host
    REDIS_PORT=6379
    REDIS_PASSWORD=your-password
    
    Or use a single URL:
    REDIS_URL=redis://:password@host:6379/0
    
  4. Deploy your application - it will automatically connect to Redis

Best practices

  1. Set appropriate TTLs - Don’t let the cache grow indefinitely; use SETEX or EXPIRE
  2. Handle errors gracefully - Always wrap Redis calls in try-catch blocks
  3. Close connections on shutdown - Call redis.quit() in a SIGTERM handler (see the sketch after this list)
  4. Use pipeline for multiple commands - Reduce network round-trips:
    const pipeline = redis.pipeline();
    pipeline.set("key1", "value1");
    pipeline.set("key2", "value2");
    pipeline.get("key1");
    const results = await pipeline.exec();
    
  5. Monitor memory usage - Use redis.info('memory') to track memory
  6. Use appropriate data structures - Choose the right Redis data type for your use case
  7. Implement cache invalidation - Delete stale cache when data changes
  8. Use Redis for session storage - Instead of memory-based sessions for horizontal scaling
  9. Set connection timeout - Configure connectTimeout and retryStrategy
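
A minimal shutdown sketch for practice 3, assuming the shared client from src/config/redis.ts:
import redis from "@src/config/redis";

// Close the Redis connection cleanly when the platform stops the app
process.on("SIGTERM", async () => {
  console.log("SIGTERM received, closing Redis connection...");
  await redis.quit(); // Waits for pending replies, then closes the socket
  process.exit(0);
});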

Common issues

Connection refused

If you get “connection refused” errors:
  • Check that Redis is running: redis-cli ping
  • Verify the host and port in your configuration
  • Ensure the firewall allows connections on port 6379
  • On Sevalla, verify the Redis service is running

Authentication errors

If you get authentication errors:
  • Verify the password is correct
  • Check if Redis requires authentication: redis-cli CONFIG GET requirepass
  • Ensure the password is set in the environment variables

Memory issues

If Redis runs out of memory:
  • Set maxmemory and maxmemory-policy in Redis config
  • Use appropriate TTLs for cached data
  • Monitor memory usage: redis-cli INFO memory (or from application code, as sketched below)
  • Consider using Redis eviction policies
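
The same statistics are available from application code; a minimal sketch using the shared client:
import redis from "@src/config/redis";

// INFO memory returns a text block; pick out the human-readable usage line
const info = await redis.info("memory");
const usedMemory = info
  .split("\r\n")
  .find((line) => line.startsWith("used_memory_human"));

console.log(usedMemory); // e.g. "used_memory_human:1.23M"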

Connection timeouts

For production, configure the retry strategy:
const redis = new Redis({
  host: process.env.REDIS_HOST,
  port: Number(process.env.REDIS_PORT),
  password: process.env.REDIS_PASSWORD,
  retryStrategy: (times) => {
    const delay = Math.min(times * 50, 2000);
    return delay;
  },
  connectTimeout: 10000,
  maxRetriesPerRequest: 3,
});

Performance tips

  1. Use pipelining for bulk operations:
    const pipeline = redis.pipeline();
    for (let i = 0; i < 1000; i++) {
      pipeline.set(`key:${i}`, `value:${i}`);
    }
    await pipeline.exec();
    
  2. Use SCAN instead of KEYS:
    // ❌ Bad - blocks Redis
    const keys = await redis.keys("user:*");
    
    // ✅ Good - non-blocking
    const stream = redis.scanStream({ match: "user:*" });
    stream.on("data", (keys) => {
      // Process keys
    });
    
  3. Use Redis transactions when needed:
    await redis.multi().incr("counter").expire("counter", 3600).exec();
    
  4. Enable automatic pipelining:
    const redis = new Redis({
      host: process.env.REDIS_HOST,
      enableAutoPipelining: true,
    });