Sevalla’s object storage, powered by Cloudflare R2, gives your JavaScript or TypeScript application secure, scalable, and persistent storage for files such as images, videos, documents, and application-generated assets. Its S3-compatible API integrates with popular JS/TS libraries, making programmatic uploads, downloads, and file management straightforward. That makes it a good fit for handling user uploads, storing backend outputs, caching media, or serving assets to your frontend, while keeping your data private and reliably accessible across deployments and application instances. We recommend the AWS S3 SDK for object storage, file uploads, and asset management in JavaScript/TypeScript applications.

Installation

Install the required dependencies:
npm install @aws-sdk/client-s3
  • @aws-sdk/client-s3: AWS SDK v3 for S3 operations (modular and tree-shakeable).
Optional, for advanced features:
npm install @aws-sdk/s3-request-presigner  # For pre-signed URLs
npm install @aws-sdk/lib-storage           # For multipart uploads of large files

Environment variables

Add the following environment variables to your application from the Object storage service details:
S3_REGION=your_region_here
S3_ACCESS_KEY_ID=your_access_key_here
S3_SECRET_ACCESS_KEY=your_secret_key_here
S3_BUCKET_NAME=your_bucket_name
S3_ENDPOINT=https://s3.sevalla.storage # Optional
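
If you run the application locally, one way to load these values is the dotenv package (an assumption, not part of Sevalla’s setup); a minimal sketch that also fails fast on missing variables:

// src/config/env.ts
import "dotenv/config"; // Loads a local .env file into process.env

// Fail fast if a required variable is missing
const required = ["S3_REGION", "S3_ACCESS_KEY_ID", "S3_SECRET_ACCESS_KEY", "S3_BUCKET_NAME"];
for (const name of required) {
  if (!process.env[name]) {
    throw new Error(`Missing required environment variable: ${name}`);
  }
}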

Setting up the client

// src/config/s3.ts
import { S3Client } from "@aws-sdk/client-s3";

const s3Client = new S3Client({
  region: process.env.S3_REGION || "us-east-1",
  credentials: {
    accessKeyId: process.env.S3_ACCESS_KEY_ID || "",
    secretAccessKey: process.env.S3_SECRET_ACCESS_KEY || "",
  },
  endpoint: process.env.S3_ENDPOINT, // Optional custom endpoint
  forcePathStyle: Boolean(process.env.S3_ENDPOINT), // Use path-style URLs when a custom endpoint is set
});

export default s3Client;
Note: AWS SDK v3 uses modular imports, which means you only import the commands you need, resulting in smaller bundle sizes.

Basic usage examples

Upload a file

import { PutObjectCommand } from "@aws-sdk/client-s3";
import s3Client from "@src/config/s3";

// `buffer` and `data` come from your upload handler
// (e.g. multer, fastify-multipart, or a raw request body)
const command = new PutObjectCommand({
  Bucket: process.env.S3_BUCKET_NAME!,
  Key: 'my-file-key.txt',
  Body: buffer,
  ContentType: data.mimetype,
  // Optional
  Metadata: {
    uploadedBy: "user123",
    originalName: data.filename,
  },
});

await s3Client.send(command);
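
For example, to upload a file from disk (the path and key are illustrative):

import { readFile } from "fs/promises";
import { PutObjectCommand } from "@aws-sdk/client-s3";
import s3Client from "@src/config/s3";

// Read the file into memory and upload it (fine for small files;
// use the multipart upload below for large ones)
const buffer = await readFile("./avatar.png");

await s3Client.send(
  new PutObjectCommand({
    Bucket: process.env.S3_BUCKET_NAME!,
    Key: "uploads/avatar.png",
    Body: buffer,
    ContentType: "image/png",
  })
);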

Download a file

import { GetObjectCommand } from "@aws-sdk/client-s3";
import s3Client from "@src/config/s3";

const command = new GetObjectCommand({
  Bucket: process.env.S3_BUCKET_NAME!,
  Key: 'my-file-key.txt',
});

const response = await s3Client.send(command);

// Body is a stream; transformToString() reads it fully as UTF-8 text
const str = await response.Body?.transformToString();
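
For binary files, read the body as bytes, or stream it to disk to avoid buffering the whole object; a minimal sketch (the key and path are illustrative):

import { createWriteStream } from "fs";
import { Readable } from "stream";
import { pipeline } from "stream/promises";
import { GetObjectCommand } from "@aws-sdk/client-s3";
import s3Client from "@src/config/s3";

const response = await s3Client.send(
  new GetObjectCommand({
    Bucket: process.env.S3_BUCKET_NAME!,
    Key: "uploads/avatar.png",
  })
);

// Option 1: read the whole object into memory
// const bytes = await response.Body?.transformToByteArray();

// Option 2: stream to disk without buffering the whole file
await pipeline(response.Body as Readable, createWriteStream("./avatar.png"));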

Delete a file

import { DeleteObjectCommand } from "@aws-sdk/client-s3";
import s3Client from "@src/config/s3";

const command = new DeleteObjectCommand({
  Bucket: process.env.S3_BUCKET_NAME!,
  Key: 'my-file-key.txt',
});

await s3Client.send(command);

List files in bucket

import { ListObjectsV2Command } from "@aws-sdk/client-s3";
import s3Client from "@src/config/s3";

const command = new ListObjectsV2Command({
  Bucket: process.env.S3_BUCKET_NAME!,
  Prefix: '',
  MaxKeys: 100,
});

const response = await s3Client.send(command);

const files =
  response.Contents?.map((item) => ({
    key: item.Key,
    size: item.Size,
    lastModified: item.LastModified,
    etag: item.ETag,
  })) || [];
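
To emulate folders, combine a Prefix with a Delimiter; matching "subfolders" come back separately as CommonPrefixes (the prefix names are illustrative):

import { ListObjectsV2Command } from "@aws-sdk/client-s3";
import s3Client from "@src/config/s3";

const response = await s3Client.send(
  new ListObjectsV2Command({
    Bucket: process.env.S3_BUCKET_NAME!,
    Prefix: "uploads/", // the "folder" to list
    Delimiter: "/",     // group keys by the next "/"
  })
);

// Immediate "subfolders" under uploads/
const folders = response.CommonPrefixes?.map((p) => p.Prefix) || [];
// Files directly under uploads/
const files = response.Contents?.map((item) => item.Key) || [];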

Generate pre-signed URL (Temporary access)

import { GetObjectCommand } from "@aws-sdk/client-s3";
import { getSignedUrl } from "@aws-sdk/s3-request-presigner";
import s3Client from "@src/config/s3";

const key = 'my-file-key.txt';
const expiresIn = 3600; // 1 hour, in seconds

const command = new GetObjectCommand({
  Bucket: process.env.S3_BUCKET_NAME!,
  Key: key,
});

// Generate a pre-signed URL valid for specified time
const url = await getSignedUrl(s3Client, command, { expiresIn });
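
Pre-signed URLs also work for uploads: sign a PutObjectCommand and let the client PUT the file directly, keeping the payload off your server. A minimal sketch (key and content type are illustrative):

import { PutObjectCommand } from "@aws-sdk/client-s3";
import { getSignedUrl } from "@aws-sdk/s3-request-presigner";
import s3Client from "@src/config/s3";

const uploadUrl = await getSignedUrl(
  s3Client,
  new PutObjectCommand({
    Bucket: process.env.S3_BUCKET_NAME!,
    Key: "uploads/avatar.png",
    ContentType: "image/png", // the client must send the same Content-Type
  }),
  { expiresIn: 900 } // 15 minutes
);

// Client side: fetch(uploadUrl, { method: "PUT", body: file, headers: { "Content-Type": "image/png" } })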

Advanced S3 operations

Multi-part upload (Large files)

import { Readable } from "stream";
import { Upload } from "@aws-sdk/lib-storage";
import s3Client from "@src/config/s3";

async function uploadLargeFile(file: Buffer | Readable, key: string, mimetype: string) {
  const parallelUploads3 = new Upload({
    client: s3Client,
    params: {
      Bucket: process.env.S3_BUCKET_NAME!,
      Key: key,
      Body: file,
      ContentType: mimetype,
    },
    // Optional: concurrency configuration
    queueSize: 4, // number of parts uploaded in parallel
    partSize: 1024 * 1024 * 5, // 5 MB per part (the S3 minimum part size)
  });

  parallelUploads3.on("httpUploadProgress", (progress) => {
    console.log(progress);
  });

  await parallelUploads3.done();
}
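
Streaming from disk avoids holding the whole file in memory (the path is illustrative):

import { createReadStream } from "fs";

// Stream a large local file straight into the multipart upload
await uploadLargeFile(createReadStream("./video.mp4"), "uploads/video.mp4", "video/mp4");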

Copy objects

import { CopyObjectCommand } from "@aws-sdk/client-s3";
import s3Client from "@src/config/s3";

async function copyFile(sourceKey: string, destinationKey: string) {
  const command = new CopyObjectCommand({
    Bucket: process.env.S3_BUCKET_NAME!,
    CopySource: `${process.env.S3_BUCKET_NAME}/${sourceKey}`,
    Key: destinationKey,
  });

  await s3Client.send(command);
}
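
S3 has no native rename; the usual pattern is copy-then-delete. A minimal sketch building on copyFile above:

import { DeleteObjectCommand } from "@aws-sdk/client-s3";
import s3Client from "@src/config/s3";

// "Move" an object by copying it to the new key, then deleting the original
async function moveFile(sourceKey: string, destinationKey: string) {
  await copyFile(sourceKey, destinationKey);
  await s3Client.send(
    new DeleteObjectCommand({
      Bucket: process.env.S3_BUCKET_NAME!,
      Key: sourceKey,
    })
  );
}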

Get object metadata

import { HeadObjectCommand } from "@aws-sdk/client-s3";
import s3Client from "@src/config/s3";

async function getMetadata(key: string) {
  const command = new HeadObjectCommand({
    Bucket: process.env.S3_BUCKET_NAME!,
    Key: key,
  });

  const response = await s3Client.send(command);
  return {
    contentType: response.ContentType,
    contentLength: response.ContentLength,
    lastModified: response.LastModified,
    metadata: response.Metadata,
    etag: response.ETag,
  };
}
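
HeadObjectCommand is also the cheapest way to check whether a key exists; a minimal sketch reusing the imports above (it relies on the NotFound error name the SDK raises for missing keys):

async function fileExists(key: string): Promise<boolean> {
  try {
    await s3Client.send(
      new HeadObjectCommand({
        Bucket: process.env.S3_BUCKET_NAME!,
        Key: key,
      })
    );
    return true;
  } catch (error: any) {
    if (error?.name === "NotFound") return false; // 404: the key does not exist
    throw error; // rethrow anything else (e.g. access denied)
  }
}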

Set object ACL (Access control)

import { PutObjectAclCommand } from "@aws-sdk/client-s3";
import s3Client from "@src/config/s3";

async function makePublic(key: string) {
  const command = new PutObjectAclCommand({
    Bucket: process.env.S3_BUCKET_NAME!,
    Key: key,
    ACL: "public-read",
  });

  await s3Client.send(command);
}
Note: Not every S3-compatible provider implements per-object ACLs (Cloudflare R2, for example, manages public access at the bucket level), so verify support before relying on this call.

Best Practices

  1. Use streaming for large files - Don’t load entire files into memory
  2. Implement retry logic - S3 operations can fail; use exponential backoff (see the sketch after this list)
  3. Use pre-signed URLs - For direct client uploads/downloads, reduce server load
  4. Set appropriate CORS - Configure bucket CORS if accessing from browser
  5. Use multipart upload - For files > 100MB
  6. Implement file validation - Check file types and sizes before upload
  7. Use proper naming conventions - Organize files with prefixes (folders)
  8. Enable versioning - For important files, enable S3 versioning
  9. Monitor storage costs - Use lifecycle policies to move/delete old files
  10. Implement error handling - Always wrap S3 operations in try-catch blocks
  11. Use content types - Always set the correct ContentType when uploading
  12. Secure credentials - Never commit S3 credentials, use environment variables
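
A minimal retry sketch with exponential backoff (the delays and attempt count are arbitrary; the SDK also has built-in retries tunable via the maxAttempts client option):

// Retry an S3 operation with exponential backoff
async function withRetry<T>(operation: () => Promise<T>, maxRetries = 3): Promise<T> {
  let lastError: unknown;
  for (let attempt = 0; attempt <= maxRetries; attempt++) {
    try {
      return await operation();
    } catch (error) {
      lastError = error;
      if (attempt === maxRetries) break;
      // Wait 500ms, 1s, 2s, ... between attempts
      await new Promise((resolve) => setTimeout(resolve, 500 * 2 ** attempt));
    }
  }
  throw lastError;
}

// Usage
await withRetry(() => s3Client.send(command));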

Common Issues

Access Denied Errors

If you get “Access Denied” errors:
  • Verify access key and secret key are correct
  • Verify the bucket policy allows the operations
  • Ensure credentials match the service details

Connection Timeout

If uploads or downloads time out:
  • Verify S3_ENDPOINT is correct
  • Increase the timeouts in the S3 client config:
    import { NodeHttpHandler } from "@smithy/node-http-handler";

    const s3Client = new S3Client({
      region: process.env.S3_REGION,
      credentials: {
        /*...*/
      },
      requestHandler: new NodeHttpHandler({
        connectionTimeout: 30000, // 30 seconds to establish a connection
        requestTimeout: 300000, // 5 minutes for the full request
      }),
    });
    

Large File Upload Issues

For large files:
  • Use multipart upload for files > 100MB
  • Implement progress tracking
  • Handle network interruptions with retry logic

CORS Errors (Browser Uploads)

Fine-tune your bucket CORS configuration on the Sevalla dashboard.

Performance Tips

  1. Parallelize uploads:
    const uploadPromises = files.map((file) => uploadFile(file));
    await Promise.all(uploadPromises);
    
  2. Cache frequently accessed files:
    // Use Redis (or any cache with get/set) for small, hot files
    const cached = await redis.get(`s3:${key}`);
    if (cached) {
      return Buffer.from(cached, "base64");
    }
    // Cache miss: fetch from S3, then cache for an hour (downloadFile is
    // your own helper; the ioredis-style "EX" expiry is an assumption)
    const body = await downloadFile(key);
    await redis.set(`s3:${key}`, body.toString("base64"), "EX", 3600);
    return body;
    
  3. Implement pagination for listing:
    async function listAllObjects(prefix: string) {
      const allObjects = [];
      let continuationToken: string | undefined;
    
      do {
        const command = new ListObjectsV2Command({
          Bucket: process.env.S3_BUCKET_NAME!,
          Prefix: prefix,
          ContinuationToken: continuationToken,
        });
    
        const response = await s3Client.send(command);
        allObjects.push(...(response.Contents || []));
        continuationToken = response.NextContinuationToken;
      } while (continuationToken);
    
      return allObjects;
    }
    

Security Best Practices

  1. Enable encryption at rest - Use server-side encryption:
    const command = new PutObjectCommand({
      Bucket: process.env.S3_BUCKET_NAME!,
      Key: key,
      Body: buffer,
      ServerSideEncryption: "AES256",
    });
    
  2. Scan files for viruses before storing
  3. Implement rate limiting for upload endpoints
  4. Validate file types based on content, not just extension (see the sketch after this list)
  5. Use signed URLs for sensitive content
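
A minimal content-sniffing sketch for point 4, using the file-type package (the package choice is an assumption, not part of Sevalla’s setup):

import { fileTypeFromBuffer } from "file-type";

const allowedMimeTypes = ["image/png", "image/jpeg", "application/pdf"];

async function validateFile(buffer: Buffer): Promise<boolean> {
  // Detect the real type from the file's magic bytes, not the file name
  const type = await fileTypeFromBuffer(buffer);
  return type !== undefined && allowedMimeTypes.includes(type.mime);
}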