
Async Module #

Asynchronous data loading with lazy loading and placeholders.

Overview #

The async module handles all data-related operations in vlist and is built for large datasets (1M+ items):

  • Adapter Pattern: Fetch data from any API with a single read function
  • Sparse Storage: Memory-efficient chunked storage with LRU eviction
  • Placeholder System: Smart loading state indicators while data loads
  • Velocity-Based Loading: Skip loads during fast scrolling, preload in scroll direction

Module Structure #

src/features/async/
├── index.ts       # Module exports
├── feature.ts     # withAsync() feature
├── manager.ts     # Data manager (main coordinator)
├── sparse.ts      # Sparse storage implementation
└── placeholder.ts # Placeholder generation

withAsync Configuration #

The withAsync feature is the main API for adding async data loading to vlist:

import { vlist, withAsync } from 'vlist'

const list = vlist({
  container: '#app',
  item: { height: 48, template: renderItem }
})
.use(withAsync({
  adapter: {
    read: async ({ offset, limit }) => {
      const res = await fetch(`/api/items?offset=${offset}&limit=${limit}`)
      const data = await res.json()
      return { items: data.items, total: data.total }
    }
  },
  autoLoad: true,  // Load data immediately (default: true)
  storage: {
    chunkSize: 100,      // Items per chunk (default: 100)
    maxCachedItems: 10000 // Max items in memory (default: 5000)
  },
  loading: {
    cancelThreshold: 5,   // Cancel load above this velocity (default: 5)
    preloadThreshold: 2,  // Preload above this velocity (default: 2)
    preloadAhead: 50      // Items to preload (default: 50)
  }
}))
.build()

Configuration Options #

adapter (required) #

Async data source with a read function:

  • Input: { offset: number, limit: number, cursor: string | undefined, signal: AbortSignal }
    • offset — zero-based start index for this chunk
    • limit — number of items requested
    • cursor — the cursor returned by the previous sequential response, or undefined. vlist forwards it automatically on the next in-sequence request; any non-sequential request (jump, random access) always receives undefined
    • signal — AbortSignal tied to this chunk's lifecycle; pass to fetch() to cancel stale in-flight requests automatically
  • Output: Promise<{ items: T[], total: number, hasMore?: boolean, cursor?: string }>
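For cursor-based APIs, return the token in cursor and read the forwarded value on the next sequential call. A minimal sketch — the endpoint shape and the nextCursor field are assumptions, not part of vlist:

```typescript
// Hypothetical cursor-forwarding adapter. `/api/items` and `nextCursor`
// are illustrative; vlist only cares about the returned object's shape.
const adapter = {
  read: async ({ offset, limit, cursor, signal }: {
    offset: number; limit: number; cursor?: string; signal: AbortSignal;
  }) => {
    // Prefer the cursor when vlist forwards one (sequential read);
    // fall back to plain offset pagination otherwise.
    const qs = cursor
      ? `cursor=${encodeURIComponent(cursor)}&limit=${limit}`
      : `offset=${offset}&limit=${limit}`;
    const res = await fetch(`/api/items?${qs}`, { signal });
    const data = await res.json();
    return { items: data.items, total: data.total, cursor: data.nextCursor };
  },
};
```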

autoLoad (optional) #

Whether to automatically load initial data. Default: true

  • Set to false if you need to initialize state before loading
  • Call list.reload() when ready to load data

total (optional) #

Initial total count when autoLoad: false. Prevents vlist from thinking the list is empty.

storage (optional) #

Configure sparse storage behavior:

  • chunkSize: Number of items per chunk (default: 100)
    • Important: Should match your API pagination size for optimal performance
    • Example: If your API returns 25 items per page, use chunkSize: 25
  • maxCachedItems: Maximum items in memory before eviction (default: 5000)

loading (optional) #

Velocity-based loading optimization:

  • cancelThreshold: Velocity (px/ms) above which loading is skipped (default: 5)
  • preloadThreshold: Velocity (px/ms) for preloading ahead (default: 2)
  • preloadAhead: Number of items to preload in scroll direction (default: 50)

Complete Integration #

import { vlist, withAsync, isPlaceholderItem } from 'vlist';

const list = vlist({
  container: '#app',
  item: {
    height: 48,
    template: (item, index) => {
      if (isPlaceholderItem(item)) {
        return `<div class="item loading">${item.name}</div>`;
      }
      return `<div class="item">${item.name}</div>`;
    },
  },
})
.use(withAsync({
  adapter: {
    // Always pass signal to fetch() — vlist cancels stale chunks automatically
    // when the user scrolls away, freeing browser connections immediately.
    read: async ({ offset, limit, signal }) => {
      const response = await fetch(
        `/api/items?offset=${offset}&limit=${limit}`,
        { signal }
      );
      const data = await response.json();
      return {
        items: data.items,
        total: data.total,
        hasMore: data.hasMore,
      };
    },
  },
}))
.build();

// Events
list.on('load:start', ({ offset, limit }) => {
  showLoadingIndicator();
});

list.on('load:end', ({ items, total }) => {
  hideLoadingIndicator();
  updateTotalCount(total);
});

list.on('error', ({ error, context }) => {
  if (context === 'loadMore') {
    showRetryButton();
  }
});

Usage Examples #

Deferred Loading #

const list = vlist({ /* ... */ })
.use(withAsync({
  adapter: myAdapter,
  autoLoad: false,  // Don't load immediately
  total: 0          // Start with empty list
}))
.build()

// Later, after setting filters/state:
list.reload()  // Now load data

// Or reload with a saved snapshot (skips initial load automatically):
const saved = sessionStorage.getItem('my-list');
const snapshot = saved ? JSON.parse(saved) : undefined;
await list.reload({ snapshot })

Custom Chunk Size #

const list = vlist({ /* ... */ })
.use(withAsync({
  adapter: myAdapter,
  storage: {
    chunkSize: 25  // Match API page size
  }
}))
.build()

Deferred Loading with Snapshot Restore #

A common SPA pattern: save scroll position when leaving a view, restore it when returning.

// Save snapshot before navigating away
const snapshot = list.getScrollSnapshot();
sessionStorage.setItem('category-A', JSON.stringify(snapshot));

// Later, switch to category B — restore its snapshot if available
const saved = sessionStorage.getItem('category-B');
const categoryBSnapshot = saved ? JSON.parse(saved) : undefined;
await list.reload(categoryBSnapshot ? { snapshot: categoryBSnapshot } : undefined);

When the snapshot has meaningful data (total > 0 and index > 0), reload() automatically skips the initial page-1 load and calls restoreScroll(snapshot) to jump directly to the saved position. If the snapshot is empty or scrolled to the top, reload() falls back to a normal reload.

Snapshot-Aware Reload #

The reload() method accepts a ReloadOptions object:

interface ReloadOptions {
  skipInitialLoad?: boolean;  // Skip the page-1 load after reset
  snapshot?: ScrollSnapshot;  // Restore scroll position after reset
}

Passing a snapshot replaces what used to require manual coordination:

Before (manual):

const hasRestorable = snapshot && snapshot.total > 0 && snapshot.index > 0;
await list.reload(hasRestorable ? { skipInitialLoad: true } : undefined);
if (hasRestorable && list.restoreScroll) {
  list.restoreScroll(snapshot);
}

After (vlist handles it):

await list.reload({ snapshot });
// vlist automatically: skips loadInitial, calls restoreScroll

When a snapshot with meaningful data is provided, reload():

  1. Resets state (clears data, invalidates DOM)
  2. Skips loadInitial() — no wasted page-1 request
  3. Calls restoreScroll(snapshot) — bootstraps total, sets scroll position, loads the target page

Placeholder Detection in Templates #

import { isPlaceholderItem } from 'vlist';

item: {
  height: 48,
  template: (item, index, state) => {
    if (isPlaceholderItem(item)) {
      return `<div class="item loading">${item.name}</div>`;
    }
    return `<div class="item">${item.name}</div>`;
  },
}

Key Concepts #

Sparse Storage #

Instead of loading all items into memory, vlist uses sparse storage:

Total: 1,000,000 items
Loaded: Only items 0-99, 5000-5099 (200 items in memory)
Memory saved: 99.98%

Items are stored in chunks (default: 100 items per chunk):

Chunk 0:  items 0-99
Chunk 1:  items 100-199
Chunk 50: items 5000-5099
...
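A given item's chunk follows from floor division — a small illustration of the layout above, assuming the documented default chunk size (not vlist's internal code):

```typescript
// Illustration of the chunk layout (default: 100 items per chunk).
const chunkSize = 100;
const chunkIndexOf = (itemIndex: number): number =>
  Math.floor(itemIndex / chunkSize);

chunkIndexOf(0);     // → 0   (chunk 0: items 0-99)
chunkIndexOf(199);   // → 1   (chunk 1: items 100-199)
chunkIndexOf(5050);  // → 50  (chunk 50: items 5000-5099)
```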

Placeholder Generation #

When items aren't loaded, placeholders are shown:

// Real item
{ id: 'user-1', name: 'John Doe', email: 'john@example.com' }

// Placeholder (analyzed structure)
{ id: '__placeholder_0', _isPlaceholder: true, name: '████████', email: '███████████████' }
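The masking idea can be sketched as a pure function — an illustration of structure-based generation, not vlist's implementation (which also samples multiple items and can apply random length variance):

```typescript
// Hypothetical sketch: derive a placeholder by masking each string field
// of a sample item with '█' characters of matching length.
function maskItem(sample: Record<string, unknown>, index: number) {
  const out: Record<string, unknown> = {
    id: `__placeholder_${index}`,
    _isPlaceholder: true,
  };
  for (const [key, value] of Object.entries(sample)) {
    if (key === 'id') continue;
    out[key] = typeof value === 'string' ? '█'.repeat(value.length) : value;
  }
  return out;
}

maskItem({ id: 'user-1', name: 'John Doe' }, 0);
// → { id: '__placeholder_0', _isPlaceholder: true, name: '████████' }
```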

Data Flow #

User scrolls to new range
    ↓
DataManager.ensureRange(start, end)
    ↓
Check SparseStorage for missing items
    ↓
Missing? → Call Adapter.read()
    ↓
Store items in SparseStorage
    ↓
Emit 'load:end' event
    ↓
Renderer re-renders with real data

Chunk-Based Loading #

Items are loaded in chunk-aligned boundaries for efficiency:

// Request: load items 50-150
// Chunk size: 100

// Aligned to chunks:
// Chunk 0: items 0-99 (includes 50-99)
// Chunk 1: items 100-199 (includes 100-150)

// Actually loaded: items 0-199

This reduces redundant loads when scrolling back and forth.
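The alignment can be expressed as a pure function over inclusive index ranges — an illustration of the behavior, not vlist's internal code:

```typescript
// Align an inclusive [start, end] item range to chunk boundaries
// (assumes the default chunk size of 100).
const chunkSize = 100;

function alignToChunks(start: number, end: number): [number, number] {
  const alignedStart = Math.floor(start / chunkSize) * chunkSize;
  const alignedEnd = Math.ceil((end + 1) / chunkSize) * chunkSize - 1;
  return [alignedStart, alignedEnd];
}

alignToChunks(50, 150);  // → [0, 199]
```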

Deduplication #

The data manager prevents duplicate loading:

// Scroll handler calls ensureRange rapidly
await dataManager.ensureRange(100, 200);  // Starts loading
await dataManager.ensureRange(100, 200);  // Returns existing promise
await dataManager.ensureRange(100, 200);  // Returns existing promise

// Only ONE API call is made

Memory Management #

Eviction Strategy #

When memory limits are reached, chunks far from the visible area are evicted:

Visible: items 5000-5050
Buffer: 200 items
Keep zone: 4800-5250

Evict chunks outside keep zone using LRU
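The keep zone is just the visible range padded by the buffer and clamped at zero — a sketch of the computation, not vlist's internals:

```typescript
// Compute the keep zone from the visible range and eviction buffer
// (assumes the documented default buffer of 200 items).
const buffer = 200;

function keepZone(visibleStart: number, visibleEnd: number) {
  return {
    start: Math.max(0, visibleStart - buffer),
    end: visibleEnd + buffer,
  };
}

keepZone(5000, 5050);  // → { start: 4800, end: 5250 }
```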

Configuration Guidelines #

List Size    Chunk Size  Max Cached  Buffer
< 10K        100         5,000       200
10K - 100K   100         10,000      500
100K - 1M    100         10,000      500
> 1M         100         10,000      500

Stats Monitoring #

const stats = dataManager.getStorage().getStats();

console.log({
  totalItems: stats.totalItems,          // 1,000,000
  cachedItems: stats.cachedItems,        // 5,000
  cachedChunks: stats.cachedChunks,      // 50
  memoryEfficiency: stats.memoryEfficiency  // 0.995 (99.5%)
});

Performance Optimizations #

Stale Request Cancellation #

vlist tracks every in-flight chunk request with an AbortController. When the user scrolls to a new position, any request whose chunk is more than 2 chunk-widths from the new viewport is canceled immediately:

User jumps from offset 0 → offset 35,500,000
  → All in-flight requests at offset ~0 are aborted
  → Only 2-3 new requests fire for the target position
  → Browser's 6-connection HTTP/1.1 pool never saturates

This keeps concurrent requests to at most 3 (current chunk + 1 on each side), well under the browser's 6-connection HTTP/1.1 limit. Without this, a fast scroll through a large list queues dozens of requests — the connections fill up and each completion appears slow even when the server is fast.

Always pass signal to fetch() so the browser actually cancels the TCP request when the AbortController fires:

read: async ({ offset, limit, signal }) => {
  const res = await fetch(`/api/items?offset=${offset}&limit=${limit}`, { signal });
  // ...
}

If signal is not passed, vlist still removes the request from its tracking maps (so it won't block new requests), but the underlying HTTP connection stays open until the server responds.

Client-Computed Cursor for High-Performance Pagination #

vlist natively forwards server-returned cursors for sequential reads (AdapterParams.cursor). For APIs that use opaque page tokens — Elasticsearch scroll, Firestore startAfter, etc. — return the token in AdapterResponse.cursor and vlist will pass it back on the next in-sequence request automatically.

For SQL-style keyset pagination (e.g. WHERE col > ? AND id > ?), a client-computed cursor is more powerful: it works for any offset, not just sequential reads, because the adapter derives the anchor value from previously loaded data rather than relying on the server to issue a token. Implement it with a local cache in the adapter:

// Cache the last item of each chunk: offset → { val, id }
// Keyed by offset within the current sort/filter state.
const chunkCursorCache = new Map();

const adapter = {
  read: async ({ offset, limit, signal }) => {
    // sort, direction, and sortKey are assumed to come from surrounding state
    const params = new URLSearchParams({
      offset: String(offset),
      limit: String(limit),
      sort,
      direction
    });

    // Attach keyset cursor from the previous chunk if available
    const prevCursor = chunkCursorCache.get(offset - limit);
    if (prevCursor) {
      params.set('cursorVal', prevCursor.val);
      params.set('cursorId', prevCursor.id);
    }

    const res = await fetch(`/api/items?${params}`, { signal });
    const data = await res.json();

    // Cache cursor for the next sequential chunk
    if (data.items.length > 0) {
      const last = data.items.at(-1);
      chunkCursorCache.set(offset, { val: last[sortKey], id: last.id });
    }

    return { items: data.items, total: data.total, hasMore: data.hasMore };
  }
};

// Clear cursor cache whenever sort/filter state changes — cursors
// are only valid for the same query state as when they were collected.
function onSortOrFilterChange() {
  chunkCursorCache.clear();
  list.reload();
}

On a 40M-row table, this reduces sequential scroll chunk latency from O(N) OFFSET scans to O(log N) index seeks — from seconds to milliseconds at any depth.
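On the server (outside vlist's scope), the cursor enables an index seek in place of a deep OFFSET scan. A hypothetical sketch of the query selection — the table name, columns, and row-constructor comparison are assumptions, and exact syntax varies by database:

```typescript
// Hypothetical server-side helper: pick a keyset query when the client
// sent a cursor, otherwise fall back to OFFSET pagination.
// Assumes a composite index on (created_at, id).
function buildChunkQuery(params: {
  limit: number;
  offset: number;
  cursorVal?: string;
  cursorId?: string;
}): { sql: string; args: (string | number)[] } {
  if (params.cursorVal !== undefined && params.cursorId !== undefined) {
    // Keyset: O(log N) index seek regardless of scroll depth.
    return {
      sql: 'SELECT * FROM items WHERE (created_at, id) > (?, ?) ' +
           'ORDER BY created_at, id LIMIT ?',
      args: [params.cursorVal, params.cursorId, params.limit],
    };
  }
  // Fallback: O(N) OFFSET scan — fine near the top of the list.
  return {
    sql: 'SELECT * FROM items ORDER BY created_at, id LIMIT ? OFFSET ?',
    args: [params.limit, params.offset],
  };
}
```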

Batched LRU Timestamps #

Sparse storage uses LRU (Least Recently Used) eviction to manage memory. Each chunk tracks when it was last accessed. Rather than calling Date.now() on every storage.get() call during rendering, vlist batches timestamp updates via touchChunksForRange(start, end):

  • Before: ~20-50 Date.now() calls per frame (one per visible item)
  • After: 1 Date.now() call per frame (batched for the entire render range)

This is called automatically by the renderer before accessing items for a range.
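The batching idea can be sketched as follows — an illustration of the technique, not vlist's actual internals:

```typescript
// One Date.now() per render range instead of one per item access.
const chunkSize = 100;
const lastAccess = new Map<number, number>();  // chunk index → timestamp

function touchChunksForRange(start: number, end: number): void {
  const now = Date.now();  // single timestamp for the whole range
  const first = Math.floor(start / chunkSize);
  const last = Math.floor(end / chunkSize);
  for (let chunk = first; chunk <= last; chunk++) {
    lastAccess.set(chunk, now);
  }
}

touchChunksForRange(50, 250);  // touches chunks 0, 1, 2 with one timestamp
```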

Direct Getters vs getState() #

The data manager exposes both getState() (returns a full DataState object) and individual getters (getTotal(), getCached(), getIsLoading(), getHasMore()). The direct getters are used on hot paths to avoid object allocation:

// ✅ Hot path — zero allocation
const total = dataManager.getTotal();
const cached = dataManager.getCached();
const isLoading = dataManager.getIsLoading();

// ❌ Avoid on hot paths — allocates DataState object
const { total, cached, isLoading } = dataManager.getState();

getState() is still useful for diagnostics, logging, or infrequent reads where the allocation cost is negligible.

Internals #

The following low-level APIs power the async system. Most users never call these directly — they are used internally by withAsync and exposed for advanced use cases like custom feature authoring.

Data Manager #

createDataManager #

Creates a data manager instance.

function createDataManager<T extends VListItem>(
  config: DataManagerConfig<T>
): DataManager<T>;

interface DataManagerConfig<T extends VListItem> {
  /** Async data adapter */
  adapter?: VListAdapter<T>;
  
  /** Initial items (optional) */
  initialItems?: T[];
  
  /** Initial total count (if known) */
  initialTotal?: number;
  
  /** Sparse storage configuration */
  storage?: SparseStorageConfig;
  
  /** Placeholder configuration */
  placeholder?: PlaceholderConfig;
  
  /** Items per load request (default: 50) */
  pageSize?: number;
  
  /** Callback when state changes */
  onStateChange?: (state: DataState<T>) => void;
  
  /** Callback when items are loaded */
  onItemsLoaded?: (items: T[], offset: number, total: number) => void;
  
  /** Callback when items are evicted */
  onItemsEvicted?: (count: number) => void;
}

DataManager Interface #

interface DataManager<T extends VListItem> {
  // State
  getState: () => DataState<T>;
  
  // Direct getters (hot-path optimized, zero object allocation)
  getTotal: () => number;
  getCached: () => number;
  getIsLoading: () => boolean;
  getHasMore: () => boolean;
  
  // Storage access
  getStorage: () => SparseStorage<T>;
  getPlaceholders: () => PlaceholderManager<T>;
  
  // Item access
  getItem: (index: number) => T | undefined;
  getItemById: (id: string | number) => T | undefined;
  getIndexById: (id: string | number) => number;
  isItemLoaded: (index: number) => boolean;
  getItemsInRange: (start: number, end: number) => T[];
  
  // Data operations
  setTotal: (total: number) => void;
  setItems: (items: T[], offset?: number, total?: number) => void;
  updateItem: (index: number, updates: Partial<T>) => boolean;
  removeItem: (id: string | number) => boolean;
  
  // Loading
  loadRange: (start: number, end: number) => Promise<void>;
  ensureRange: (start: number, end: number) => Promise<void>;
  loadInitial: () => Promise<void>;
  loadMore: () => Promise<boolean>;
  reload: () => Promise<void>;
  
  // Memory management
  evictDistant: (visibleStart: number, visibleEnd: number) => void;
  
  // Lifecycle
  clear: () => void;
  reset: () => void;
}

DataState Interface #

interface DataState<T extends VListItem> {
  /** Total items (declared, may be larger than loaded) */
  total: number;
  
  /** Number of items in memory */
  cached: number;
  
  /** Whether data is loading */
  isLoading: boolean;
  
  /** Pending load ranges */
  pendingRanges: Range[];
  
  /** Error from last operation */
  error: Error | undefined;
  
  /** Whether more items exist */
  hasMore: boolean;
  
  /**
   * Last cursor returned by the adapter response.
   * Forwarded back to AdapterParams.cursor on the next sequential request.
   */
  cursor: string | undefined;
}

Sparse Storage Internals #

createSparseStorage #

Creates sparse storage for efficient large list handling.

function createSparseStorage<T extends VListItem>(
  config?: SparseStorageConfig
): SparseStorage<T>;

interface SparseStorageConfig {
  /** Number of items per chunk (default: 100) */
  chunkSize?: number;
  
  /** Maximum items to keep in memory (default: 5000) */
  maxCachedItems?: number;
  
  /** Extra items to keep around visible range (default: 200) */
  evictionBuffer?: number;
  
  /** Callback when items are evicted */
  onEvict?: (evictedCount: number, evictedRanges: number[]) => void;
}

SparseStorage Interface #

interface SparseStorage<T extends VListItem> {
  // Configuration
  readonly chunkSize: number;
  readonly maxCachedItems: number;
  
  // Total management
  getTotal: () => number;
  setTotal: (total: number) => void;
  
  // Item access
  get: (index: number) => T | undefined;
  has: (index: number) => boolean;
  set: (index: number, item: T) => void;
  setRange: (offset: number, items: T[]) => void;
  delete: (index: number) => boolean;
  
  // Range operations
  getRange: (start: number, end: number) => (T | undefined)[];
  isRangeLoaded: (start: number, end: number) => boolean;
  getLoadedRanges: () => Range[];
  findUnloadedRanges: (start: number, end: number) => Range[];
  
  // Chunk operations
  getChunkIndex: (itemIndex: number) => number;
  isChunkLoaded: (chunkIndex: number) => boolean;
  touchChunk: (chunkIndex: number) => void;
  
  // LRU timestamp management
  touchChunksForRange: (start: number, end: number) => void;
  
  // Eviction
  evictDistant: (visibleStart: number, visibleEnd: number) => number;
  evictToLimit: () => number;
  
  // Statistics
  getStats: () => SparseStorageStats;
  getCachedCount: () => number;
  
  // Lifecycle
  clear: () => void;
  reset: () => void;
}

interface SparseStorageStats {
  totalItems: number;
  cachedItems: number;
  cachedChunks: number;
  chunkSize: number;
  maxCachedItems: number;
  memoryEfficiency: number;  // 1 - (cachedItems / totalItems)
}

Placeholder Manager #

createPlaceholderManager #

Creates a placeholder manager for loading states.

function createPlaceholderManager<T extends VListItem>(
  config?: PlaceholderConfig
): PlaceholderManager<T>;

interface PlaceholderConfig {
  /** Enable placeholder generation (default: true) */
  enabled?: boolean;
  
  /** Character used for masking text (default: '█') */
  maskCharacter?: string;
  
  /** Add random variance to text lengths (default: true) */
  randomVariance?: boolean;
  
  /** Maximum items to sample for structure analysis (default: 20) */
  maxSampleSize?: number;
  
  /** Custom placeholder generator */
  customGenerator?: (index: number) => VListItem;
}

PlaceholderManager Interface #

interface PlaceholderManager<T extends VListItem> {
  /** Analyze data structure from sample items */
  analyzeStructure: (items: T[]) => void;
  
  /** Check if structure has been analyzed */
  hasAnalyzedStructure: () => boolean;
  
  /** Generate a single placeholder item */
  generate: (index: number) => T;
  
  /** Generate multiple placeholder items */
  generateRange: (start: number, end: number) => T[];
  
  /** Check if an item is a placeholder */
  isPlaceholder: (item: unknown) => boolean;
  
  /** Get the placeholder flag key */
  getPlaceholderKey: () => string;
  
  /** Clear analyzed structure */
  clear: () => void;
}

Utility Functions #

// Check if item is a placeholder
function isPlaceholderItem(item: unknown): boolean;

// Filter out placeholders from array
function filterPlaceholders<T extends VListItem>(items: T[]): T[];

// Count non-placeholder items
function countRealItems<T extends VListItem>(items: (T | undefined)[]): number;

// Replace placeholders with real items
function replacePlaceholders<T extends VListItem>(
  target: (T | undefined)[],
  items: T[],
  offset: number
): number;

// Merge adjacent/overlapping ranges
function mergeRanges(ranges: Range[]): Range[];

// Calculate ranges that need to be loaded
function calculateMissingRanges(
  needed: Range,
  loaded: Range[],
  chunkSize: number
): Range[];
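As an illustration of what mergeRanges computes — assuming Range is an inclusive { start, end } pair, which is a guess at the actual type:

```typescript
type Range = { start: number; end: number };

// Sketch of merge semantics: sort by start, then fold adjacent or
// overlapping ranges together (inclusive bounds, so 0-99 and 100-199 merge).
function mergeRangesSketch(ranges: Range[]): Range[] {
  const sorted = [...ranges].sort((a, b) => a.start - b.start);
  const merged: Range[] = [];
  for (const r of sorted) {
    const last = merged[merged.length - 1];
    if (last && r.start <= last.end + 1) {
      last.end = Math.max(last.end, r.end);
    } else {
      merged.push({ ...r });
    }
  }
  return merged;
}

mergeRangesSketch([
  { start: 0, end: 99 },
  { start: 100, end: 199 },
  { start: 500, end: 599 },
]);
// → [{ start: 0, end: 199 }, { start: 500, end: 599 }]
```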

See Also #

Examples #

  • Velocity Loading — Smart loading that skips during fast scrolling, preloads in scroll direction
  • Social Feed — Variable-height posts with auto-measurement and async loading
  • Window Scroll — Document-level infinite scroll with withPage + withAsync