There is a specific kind of dread that hits when you open the Firebase Console and see the Firestore "Usage" bar glowing red.

For Atmos Football — a casual weekly 5-a-side stats and team generator with maybe 20–30 active users — I recently discovered we were hitting 60,000+ document reads per day. The Spark (free) plan has a hard limit of 50,000 document reads per day. We weren't just using the service; we were blowing past the quota daily and risking sudden "resource exhausted" errors for a handful of weekend warriors checking their player stats and heatmaps.

The most surprising part? It wasn't caused by a surge in popularity. It was a series of early architectural "conveniences" that had quietly turned into a costly time bomb. Each decision made sense in isolation; together they compounded into something I never saw coming.

The Diagnosis: How We Were Burning Reads

When I ran a proper cost audit, the problems became obvious. We had been treating Firestore like a local variable instead of a billed database. The patterns were optimised for development speed. Development speed is not the same thing as production cost. The gap between those two things is where the bill accumulates.

The three main culprits were:

  1. The onSnapshot Trap
    Almost every screen (Player Stats, Team Generator, Game History) used real-time listeners. Every app open, every tab switch, and every time someone updated a score triggered re-fetches across multiple listeners.

  2. Zero Caching
    Switching tabs or closing/reopening the app always triggered fresh fetches. There was no session cache and no way to know if the data had actually changed.

  3. Inefficient Group Loading
    The app loaded every group's configuration and games independently at startup. With N groups of roughly M game documents each, that is N × M reads just to render the home screen; a user in three groups paid for all three before tapping anything.

These patterns felt reasonable when I built them, but they compounded fast. A single user opening the app on two devices could easily generate hundreds of unnecessary reads per session.

The Fixes: Engineering for the Free Tier

The goal was simple: get safely under 50k reads per day without hurting the user experience.

Fix 1: Replace onSnapshot with getDocs + session cache
Real-time listeners are powerful for chat apps, but Atmos Football doesn't need sub-second updates while players are still on the pitch.

We switched to on-demand getDocs calls wrapped in a simple in-memory session cache.

Before (expensive):

useEffect(() => {
  // Re-fetches on every mount and every remote change, on every screen
  const unsub = onSnapshot(collection(db, 'groups', groupId, 'games'), snap =>
    setGames(snap.docs.map(d => ({ id: d.id, ...d.data() })))
  );
  return unsub;
}, [groupId]);

After (much cheaper):

const cache = useRef(new Map());

async function getGames(groupId) {
  // Serve from the in-memory session cache when we can: zero reads
  if (cache.current.has(groupId)) return cache.current.get(groupId);

  // Otherwise pay for one getDocs call and remember the result
  const snap = await getDocs(collection(db, 'groups', groupId, 'games'));
  const data = snap.docs.map(d => ({ id: d.id, ...d.data() }));
  cache.current.set(groupId, data);
  return data;
}

Fix 2: The dataVersion Silver Bullet
This was the biggest win. We added a single tiny document (metadata/global) containing just an incrementing integer dataVersion.

On launch, the app performs one cheap read to check the version. If it matches the value stored in localStorage, the app skips the full game fetch and uses the cached data instead.

Only when a new game is added (or an admin increments the version) does the app trigger a full collection read. One document read replaced what used to be dozens or hundreds of game-document reads for 99% of app opens.
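The check above can be sketched in plain JavaScript. Everything Firestore-specific is injected, so the names here are illustrative rather than the app's actual API: fetchVersion stands in for the one-document read of metadata/global, fetchAll for the full getDocs of the games collection, and storage for localStorage.

```javascript
// Sketch of the dataVersion gate. One cheap read decides whether the
// expensive full fetch happens at all.
async function loadGames({ fetchVersion, fetchAll, storage }) {
  const remote = await fetchVersion();                  // 1 document read, always
  const local = Number(storage.getItem('dataVersion')); // Number(null) === 0
  const cached = storage.getItem('games');

  if (cached !== null && local === remote) {
    // Versions match: skip the collection read entirely
    return { games: JSON.parse(cached), reads: 1 };
  }

  // Version bumped (or first run): pay for the full collection read once
  const games = await fetchAll();
  storage.setItem('dataVersion', String(remote));
  storage.setItem('games', JSON.stringify(games));
  return { games, reads: 1 + games.length };
}
```

On the write side, whatever mutation adds a game just has to bump the counter in the same transaction, so every client's next launch notices the change with a single read.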

Fix 3: Consolidated Loading + Field Allowlists
We stopped loading every group at startup — only the active group is fetched by default. Other groups lazy-load when switched to. We also tightened write rules with field allowlists so minor updates don't unnecessarily invalidate everyone's cache.
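The lazy-loading half of this fix is small enough to sketch. This is a hypothetical shape, not the app's real code: loadGroup stands in for whatever fetches one group's configuration and games, and the store memoizes it so each group costs at most one fetch per session.

```javascript
// Sketch: only the active group is fetched up front; other groups load
// on first switch and are reused for the rest of the session.
function createGroupStore(loadGroup) {
  const loaded = new Map(); // groupId -> Promise of that group's data

  return {
    // Memoized: each group is fetched at most once per session
    get(groupId) {
      if (!loaded.has(groupId)) loaded.set(groupId, loadGroup(groupId));
      return loaded.get(groupId);
    },
  };
}
```

At startup the app only calls store.get(activeGroupId); the other groups never cost a read unless the user actually switches to them.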

The Results

After deploying the changes, daily reads dropped from 60,000+ to consistently under 1,000, a reduction of roughly 98%.

Metric        Before             After            Improvement
Daily Reads   62,400             ~850             98.6% reduction
Load Time     2.4s (network)     0.2s (cache)     91% faster
Cost          Looming overages   $0 (free tier)   Sustained free tier

The app not only stays comfortably inside the Spark plan limits — it also feels noticeably snappier. Tab switching is now instant because the data is already in local cache.

When Real-Time Listeners Are Still Worth It

onSnapshot is the right tool when users genuinely need live updates — chat apps, collaborative editing, or a live match clock.

For Atmos Football, the cost of slightly stale data (a few minutes old) is far lower than the cost of constant listeners. Rule of thumb: use real-time listeners only when the cost of staleness exceeds the cost of the extra reads.

If you do keep them, always unsubscribe on unmount and avoid subscribing on every render or tab focus.
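One way to make both habits hard to get wrong is a tiny guard around the listener. This is a sketch, not library code: subscribe stands in for onSnapshot (anything that returns an unsubscribe function), and the guard ignores duplicate start calls from repeated focus events.

```javascript
// Sketch: at most one live listener per screen, with a clean teardown.
// subscribe stands in for onSnapshot and must return an unsubscribe fn.
function createSingleSubscription(subscribe) {
  let unsub = null;
  return {
    start(onData) {
      if (unsub) return;        // already listening: don't stack a duplicate
      unsub = subscribe(onData);
    },
    stop() {                    // call this on unmount
      if (unsub) { unsub(); unsub = null; }
    },
  };
}
```

In a React component, start goes in the effect body and stop in its cleanup, so a re-render or tab focus can never leak a second listener.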

The Broader Lesson

Firestore's pricing is per-operation, not per-byte or per-user. Early decisions that feel harmless in development can create massive ongoing costs once real usage begins — not because the decisions were wrong at the time, but because the environment they were made for no longer exists once real users arrive.

This is the part that is easy to miss: the architecture that works in development is not the architecture that survives production. The assumptions that felt reasonable when you were the only user stop being reasonable the moment you are not. Audit your reads before you have a problem, because by the time the bar turns red, the architecture has already been wrong for months.

Real-time listeners are seductive because they are easy to write. Easy to write is not the same as right to use. For an app where the cost of slightly stale data is low, a getDocs call with a session cache and a single dataVersion check costs almost nothing and buys back the headroom to grow. Save real-time listeners for the features where staleness genuinely matters — when the cost of not knowing outweighs the cost of always asking.

— James
(Organiser & developer of Atmos Football)