Performance Optimization

Build fast, scalable integrations with efficient pagination, smart caching, and optimized API usage. Learn how to handle large catalogs and avoid rate limits.

Key Concepts

  • Pagination — Fetch data in efficient chunks
  • Caching — Reduce API calls with smart caching
  • Rate Limits — Stay within limits, avoid throttling

Pagination

All list endpoints support pagination:

Parameter | Default | Max | Description
limit     | 50      | 200 | Number of items per page
offset    | 0       | —   | Number of items to skip

Basic Pagination

# First page (items 1-50)
GET /stores/{storeId}/products?limit=50&offset=0
 
# Second page (items 51-100)
GET /stores/{storeId}/products?limit=50&offset=50

Response Structure

{
  "items": [...],    // Array of resources
  "total": 347       // Total count across all pages
}
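The `total` field makes it easy to derive how many requests a full fetch will take and whether another page exists. A small sketch (helper names are illustrative, not part of the API):

```javascript
// Number of requests needed to fetch `total` items at a given page size.
function totalPages(total, limit) {
  return Math.ceil(total / limit);
}

// Whether another page exists after the current offset.
function hasMore(total, offset, limit) {
  return offset + limit < total;
}
```

For example, with the response above (`total: 347`) and the default `limit=50`, a full fetch takes 7 requests.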

Fetch All Items

async function fetchAllProducts(storeId, apiKey) {
  const allProducts = [];
  let offset = 0;
  const limit = 200; // Use the max page size for fewer requests
 
  while (true) {
    const { items, total } = await fetch(
      `/stores/${storeId}/products?limit=${limit}&offset=${offset}`,
      { headers: { Authorization: `Bearer ${apiKey}` } }
    ).then(r => r.json());
 
    allProducts.push(...items);
    // Stop when everything is collected, or if an empty page comes back
    if (items.length === 0 || allProducts.length >= total) break;
    offset += limit;
  }
 
  return allProducts;
}

💡 Use limit=200 (max) when fetching large datasets to minimize API calls.

Caching Strategies

Next.js Server-Side Caching

async function getProducts() {
  const res = await fetch(url, {
    headers: { Authorization: `Bearer ${apiKey}` },
    next: {
      revalidate: 60, // Cache for 60 seconds
      tags: ['products'],
    },
  });
  return res.json();
}
 
// Revalidate via webhook
import { revalidateTag } from 'next/cache';
revalidateTag('products');

In-Memory Cache

const cache = new Map();
 
async function cachedFetch(key, fetcher, ttlSeconds = 300) {
  const cached = cache.get(key);
  
  if (cached && Date.now() < cached.expiresAt) {
    return cached.data;
  }
 
  const data = await fetcher();
  cache.set(key, {
    data,
    expiresAt: Date.now() + (ttlSeconds * 1000),
  });
 
  return data;
}
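A Map-backed cache pairs naturally with webhook-driven invalidation: when a resource changes, delete its entry (or every entry sharing a key prefix) so the next `cachedFetch` call refetches. A minimal sketch, with illustrative helper names:

```javascript
const cache = new Map();

// Drop a single cached entry, e.g. after a product.updated webhook.
function invalidateKey(key) {
  cache.delete(key);
}

// Drop every entry whose key starts with a prefix, e.g. 'products:'.
function invalidatePrefix(prefix) {
  for (const key of [...cache.keys()]) {
    if (key.startsWith(prefix)) cache.delete(key);
  }
}
```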

Cache TTL Recommendations

Resource        | TTL      | Reason
Products (list) | 1-5 min  | Stock updates via webhook
Categories      | 5-15 min | Rarely changes
Single Product  | 1-2 min  | May need current stock
Orders          | No cache | Always need current status
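Encoding the table as a lookup keeps TTLs consistent across every `cachedFetch` call site. A sketch (the resource keys are illustrative):

```javascript
// TTLs in seconds, one value per resource type; 0 means "do not cache".
const CACHE_TTLS = {
  productList: 60,  // 1 min — stock updates also arrive via webhook
  category: 600,    // 10 min — rarely changes
  product: 60,      // 1 min — may need current stock
  order: 0,         // never cache — always need current status
};

// Unknown resource types default to no caching, the safe choice.
function ttlFor(resource) {
  return CACHE_TTLS[resource] ?? 0;
}
```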

Rate Limits

Plan       | Rate Limit  | Burst
Free       | 60 req/min  | 10
Pro        | 300 req/min | 50
Enterprise | Custom      | Custom
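Rather than reacting to 429s, a client-side limiter can keep you under the quota in the first place. A minimal token-bucket sketch sized for the Free plan (60 req/min sustained, burst of 10); this is an illustration, not an official client:

```javascript
class TokenBucket {
  constructor(ratePerSec, burst) {
    this.ratePerSec = ratePerSec; // sustained refill rate, tokens/second
    this.burst = burst;           // max tokens held at once
    this.tokens = burst;
    this.last = Date.now();
  }

  // Returns true (and spends a token) if a request may be sent now.
  tryAcquire() {
    const now = Date.now();
    const refill = ((now - this.last) / 1000) * this.ratePerSec;
    this.tokens = Math.min(this.burst, this.tokens + refill);
    this.last = now;
    if (this.tokens >= 1) {
      this.tokens -= 1;
      return true;
    }
    return false;
  }
}

// Free plan: 60 req/min = 1 token/sec, burst of 10
const freePlanBucket = new TokenBucket(1, 10);
```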

Handling 429 Errors

const sleep = (ms) => new Promise((resolve) => setTimeout(resolve, ms));
 
async function rateLimitedFetch(url, options) {
  const response = await fetch(url, options);
 
  if (response.status === 429) {
    // Honor the server's Retry-After header; fall back to 60s if absent
    const retryAfter = response.headers.get('Retry-After');
    const delayMs = retryAfter ? parseInt(retryAfter, 10) * 1000 : 60000;
 
    await sleep(delayMs);
    return fetch(url, options); // single retry after waiting
  }
 
  return response;
}
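A single retry covers occasional 429s; under sustained pressure, a capped exponential backoff loop is the usual extension. A sketch (`fetchWithBackoff` and `backoffDelay` are illustrative names, not part of the API):

```javascript
const sleep = (ms) => new Promise((resolve) => setTimeout(resolve, ms));

// Delay before retry `attempt`: honor Retry-After when the server sends it,
// otherwise back off exponentially (1s, 2s, 4s, ...) capped at 60s.
function backoffDelay(attempt, retryAfterHeader) {
  if (retryAfterHeader) return parseInt(retryAfterHeader, 10) * 1000;
  return Math.min(1000 * 2 ** attempt, 60000);
}

async function fetchWithBackoff(url, options, maxRetries = 5) {
  for (let attempt = 0; attempt <= maxRetries; attempt++) {
    const response = await fetch(url, options);
    if (response.status !== 429) return response;
    await sleep(backoffDelay(attempt, response.headers.get('Retry-After')));
  }
  throw new Error(`Still rate limited after ${maxRetries} retries: ${url}`);
}
```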

Best Practices

  • Use webhooks instead of polling
  • Cache responses to reduce calls
  • Batch operations where possible
  • Use max pagination limit (200)
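When no bulk endpoint is available, fetching in bounded-concurrency chunks is a middle ground between one-by-one requests and an unbounded `Promise.all` that can trip the burst limit. A sketch (assuming a per-item `fetchOne` function you supply):

```javascript
// Fetch many items with at most `chunkSize` requests in flight at once.
async function fetchInChunks(ids, fetchOne, chunkSize = 10) {
  const results = [];
  for (let i = 0; i < ids.length; i += chunkSize) {
    const chunk = ids.slice(i, i + chunkSize);
    results.push(...(await Promise.all(chunk.map(fetchOne))));
  }
  return results;
}
```

Pick a `chunkSize` at or below your plan's burst allowance (10 on Free).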

Optimization Tips

1. Parallel Requests

// ❌ Slow - sequential
const products = await fetchProducts();
const categories = await fetchCategories();
 
// ✅ Fast - parallel
const [products, categories] = await Promise.all([
  fetchProducts(),
  fetchCategories(),
]);

2. Use Webhooks

// ❌ Polling (hits rate limits)
setInterval(() => fetchProducts(), 60000);
 
// ✅ Webhooks (instant, no rate limits)
app.post('/webhooks', (req, res) => {
  invalidateCache(req.body.data.id);
  res.status(200).send('OK');
});

3. Selective Fetching

# ❌ Fetch all, filter client-side
GET /products
 
# ✅ Filter server-side
GET /products?status=published&categoryId=cat_abc&limit=20
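Building the filtered URL with `URLSearchParams` avoids manual string concatenation and handles encoding for you. A sketch using the same parameters as the example above:

```javascript
// Server-side filtering: only published products in one category, first 20.
const params = new URLSearchParams({
  status: 'published',
  categoryId: 'cat_abc',
  limit: '20',
});
const url = `/products?${params}`;
// url: '/products?status=published&categoryId=cat_abc&limit=20'
```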

4. Preload Critical Data

async function initializeCache() {
  const [categories, featured] = await Promise.all([
    fetchAllCategories(),
    fetchProducts({ featured: true }),
  ]);
 
  cache.set('categories', categories);
  cache.set('featured', featured);
}
 
initializeCache().then(() => app.listen(3000));

Performance Checklist

  • Using maximum pagination limit (200)
  • Caching responses with appropriate TTLs
  • Using webhooks instead of polling
  • Making parallel requests where possible
  • Filtering server-side, not client-side
  • Handling rate limits with backoff
  • Preloading critical data on startup

Related Guides