Architecture
Local-First
Engineering

Building a Fast Local File System in the Browser

How we used IndexedDB, simple flat structures, and smart memory caching to build a file explorer that feels native.

Vinod & Inoka
2025-12-02
6 min read

Occam's razor suggests that the simplest explanation is usually the best one. In software engineering, the simplest architecture is often the most performant one.

When designing the file system for Luma, we had a choice: build a complex, graph-based structure to represent the nested folders, or keep it dead simple. We chose the latter, and it resulted in a system that is incredibly fast, efficient, and robust.

Here is a deep dive into the local-first architecture that powers Luma's bookmarks and folders.

The Core Stack

Our local data layer rests on three pillars:

  1. IndexedDB (IDB): The persistent storage layer.
  2. Zustand: The in-memory state manager.
  3. Flat Data Structure: The architectural pattern.

1. The Flat-File System Architecture

It is tempting to model a file system as a nested tree of objects, where each folder contains an array of children. While intuitive, this structure becomes a nightmare for performance. Deeply nested updates require traversing the tree, and syncing this structure with a backend (like Google Drive) is error-prone.

Instead, we adopted a Flat-File System. Every folder and page is a standalone record in a flat database table. The hierarchy is maintained purely through a reference to the parent folder.

Conceptually, it looks like this:

  • Each Folder or Page is a single row in the database.
  • It contains a parentId field which points to the folder it belongs to.
  • It contains a userId field to partition data between users.

This means there is no "nesting" in the database layer. A folder at depth 10 is stored exactly the same way as a folder at depth 1.
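In TypeScript terms, a flat record might look like the sketch below. The field names are our illustrative guesses; the post does not show Luma's actual schema.

```typescript
// Illustrative shape of a flat file-system record. Field names are
// assumptions for this sketch, not Luma's actual schema.
interface FsRecord {
  id: string;
  userId: string;          // partitions data between users
  parentId: string | null; // null means "at the root"
  kind: "folder" | "page";
  name: string;
}

// A page at depth 2 is stored exactly like a folder at depth 0:
const records: FsRecord[] = [
  { id: "a", userId: "u1", parentId: null, kind: "folder", name: "Work" },
  { id: "b", userId: "u1", parentId: "a",  kind: "folder", name: "Projects" },
  { id: "c", userId: "u1", parentId: "b",  kind: "page",   name: "Roadmap" },
];
```

Note that hierarchy lives entirely in `parentId`: no record embeds its children.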

2. IndexedDB & Composite Indexes

We use IndexedDB as our primary local database. To make data fetching instantaneous, we leverage Composite Indexes.

We define a specific index on the combination of Parent ID and User ID.

Why is this fast?

When you open a folder, we don't need to traverse a tree or recursively search through a JSON blob. We simply ask the database: "Give me all items where parentId is X and userId is Y."

The index lookup is O(log n) to find the first match, plus the cost of reading the matching rows (effectively instant for the user), regardless of how deep the folder structure is or how many files you have.
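In IndexedDB, a composite index is declared with an array `keyPath` (e.g. `store.createIndex("byParent", ["userId", "parentId"])`) and queried with `index.getAll(IDBKeyRange.only([userId, parentId]))`. Since IndexedDB only exists in the browser, the sketch below uses a plain `Map` as a stand-in for that index so the lookup logic is runnable anywhere; all names are illustrative.

```typescript
// A Map standing in for the IndexedDB composite index on
// ["userId", "parentId"]. Each bucket holds one folder's direct children.
interface Item { id: string; userId: string; parentId: string; name: string }

const byParent = new Map<string, Item[]>();

function indexKey(userId: string, parentId: string): string {
  return `${userId}\u0000${parentId}`; // \u0000 avoids ambiguous concatenation
}

function put(item: Item): void {
  const k = indexKey(item.userId, item.parentId);
  const bucket = byParent.get(k) ?? [];
  bucket.push(item);
  byParent.set(k, bucket);
}

// Opening a folder is a single keyed lookup: no tree walk, no recursion.
function children(userId: string, parentId: string): Item[] {
  return byParent.get(indexKey(userId, parentId)) ?? [];
}

put({ id: "1", userId: "u1", parentId: "root", name: "Docs" });
put({ id: "2", userId: "u1", parentId: "root", name: "Photos" });
put({ id: "3", userId: "u2", parentId: "root", name: "Other user" });

console.log(children("u1", "root").map(i => i.name)); // ["Docs", "Photos"]
```

The key detail is that the user partition and the parent reference are queried together, so one user's lookup can never scan another user's rows.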

3. Smart Memory Caching with Zustand

Fetching from IndexedDB is fast, but fetching from RAM is faster. We use Zustand to manage our in-memory state.

Our design philosophy is Lazy Loading. We do not load the entire file system into memory on startup. That would be wasteful. Instead, we load data only when you need it.

The Fetch Strategy

When a user navigates to a specific folder, our custom data fetching hook performs a smart check:

  1. Check Memory: Does the in-memory store already have the contents for this folder?
  2. If Yes: Render immediately. Zero latency.
  3. If No: Fetch the specific items from IndexedDB, populate the in-memory cache, and then render.

This approach ensures that Luma's memory footprint remains low, growing only as you explore more folders. It creates a "warm" cache for your most-used directories while keeping the initial load time nearly instantaneous.

4. Simplicity Enables Sync

This flat architecture has a massive side benefit: Synchronization.

Because every item is independent, syncing with external providers like Google Drive becomes much simpler. We don't need to diff complex trees. We just sync individual items based on their IDs and timestamps. If a folder moves, we just update its parentId reference, and the entire subtree "moves" instantly without needing to update every single child item.
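Here is a tiny sketch of that move operation, assuming the flat record shape described earlier (names are illustrative): updating one `parentId` relocates the whole subtree, because children reference their parent by id rather than being embedded inside it.

```typescript
// Moving a folder in a flat structure: one write, regardless of subtree size.
interface Node { id: string; parentId: string | null }

const nodes = new Map<string, Node>([
  ["work",     { id: "work",     parentId: null }],
  ["projects", { id: "projects", parentId: "work" }],
  ["roadmap",  { id: "roadmap",  parentId: "projects" }],
]);

function move(id: string, newParentId: string | null): void {
  const node = nodes.get(id);
  if (!node) throw new Error(`unknown node: ${id}`);
  node.parentId = newParentId; // the only record that changes
}

move("projects", null); // "projects" (and, transitively, "roadmap") moves to root
console.log(nodes.get("roadmap")!.parentId); // still "projects": untouched
```

This is also why sync stays cheap: only the moved record's timestamp changes, so only one item needs to be pushed to the remote provider.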

Conclusion

Luma's speed doesn't come from complex algorithms or heavy frameworks. It comes from choosing the simplest data structure that works.

By combining the raw performance of IndexedDB with a flat architecture and smart memory caching, we've built a file system that feels instant, works offline, and scales effortlessly.

Sometimes, the best engineering decision is just to keep it simple!