Case File / Souvik Sen
CASE 002

NotifyFlux

Documented systems build
Filed under Real-time Systems

Multi-tenant notification stack: Express and Socket.IO, MongoDB change streams, Redis-backed socket fan-out, JWT-scoped tenancy, and a React admin console for delivery visibility.

Lab register
Investigation depth

4 logged considerations

Architecture notes

3 structural threads

Evidence items

3 collected signals


Case briefing

Section I

NotifyFlux was built as a full-stack exercise in SaaS-style notifications: multiple tenants, live delivery feedback, and an admin surface that stays usable when throughput grows. Constraints included keeping isolation provable (no accidental cross-tenant reads on the socket layer), making local and Docker-based runs reproducible, and exposing health and metrics for operations. The trigger was the gap between “we stored a notification” and “subscribers saw it quickly, in the right tenant context.”

Investigation lab

Initial signal

Section II
Symptom

What broke first

The failure mode was not only latency—it was inconsistent visibility: admins polling REST lists could believe the system was quiet while events were still mid-flight, and scaling sockets without a shared adapter risked users missing updates or joining the wrong logical room. Without change-driven updates, the backend either over-fetched or fell out of sync with the database.


Evidence collected

Section III
Evidence 01

Stream-driven updates

Change streams anchor pushes to database commits instead of ad hoc polling loops.
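As a minimal sketch of the idea, the filter that narrows a change stream to one tenant's committed inserts can be built as a plain aggregation pipeline. The field names here (`tenantId` on the notification document, a `notifications` collection) are illustrative assumptions, not confirmed from the repository:

```javascript
// Build an aggregation pipeline for collection.watch() that only surfaces
// committed inserts belonging to a single tenant. Field names are assumed.
function buildTenantPipeline(tenantId) {
  return [
    {
      $match: {
        operationType: "insert",
        "fullDocument.tenantId": tenantId,
      },
    },
  ];
}

// Hypothetical wiring (requires a MongoDB replica set and the official driver):
//   const stream = db.collection("notifications")
//     .watch(buildTenantPipeline("acme"), { fullDocument: "updateLookup" });
//   stream.on("change", (change) => pushToSubscribers(change.fullDocument));
```

Because the `$match` runs server-side, events for other tenants never reach the API process at all, which keeps the isolation argument simple.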

Evidence 02

Redis-backed Socket.IO

The adapter path supports multiple API instances without splitting room state per process.

Evidence 03

Operator-facing web console

The React admin is part of the architecture, not an afterthought, so delivery behaviour is visible during development and review.


Investigation path

Section IV
Failure surfaces that shaped the delivery model—not hypotheticals.
Failure mode

Invisible mid-flight work

REST polling can make the system look quiet while events are still moving through sockets—operators need a UI that reflects commits and delivery state, not only the result of the last fetch.

Failure mode

Room coherence across processes

Without a shared adapter, horizontal scaling splits socket state per node and risks missed or mis-scoped broadcasts; tenant-scoped rooms must stay aligned with JWT claims.

Message path from commit to tenant-scoped fan-out.

Persisted write

Notifications land in MongoDB with tenant scope; the database remains the source of truth before anything is broadcast.

Investigation notes pulled from the build log.
  1. Log 01

    Mapped the lifecycle: write to Mongo → propagate to interested subscribers → confirm delivery state. Polling the collection on an interval was ruled out for hot paths because it duplicates work and still races with concurrent writers.

  2. Log 02

    Adopted MongoDB change streams so the API reacts to data the database already committed, then evaluated Socket.IO with a Redis adapter so multiple API instances share room state—tradeoff: operational dependency on Redis versus single-node socket limits.

  3. Log 03

    Separated tenant and user concerns in the socket layer (room naming, JWT claims carrying `tenantId`) so connection setup failures show up as auth problems, not silent drops.

  4. Log 04

    Treated the React/Vite admin app as part of the system: if operators cannot see stream health and recent events, the backend observability work does not reach its audience.
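The commit-to-fan-out step from the logs above can be sketched as a pure dispatcher that maps a committed change-stream event to a tenant-scoped broadcast; keeping it pure makes the scoping rule testable without a running cluster. Event names and document fields here are illustrative assumptions:

```javascript
// Map a committed change-stream event to a tenant-scoped broadcast, or null
// when the event should not fan out at all.
function routeChange(change) {
  if (change.operationType !== "insert") return null; // only pushes for new notifications
  const doc = change.fullDocument;
  if (!doc || !doc.tenantId) return null; // never broadcast unscoped documents
  return {
    room: `tenant:${doc.tenantId}`,
    event: "notification:new",
    payload: { id: String(doc._id), title: doc.title, createdAt: doc.createdAt },
  };
}

// Hypothetical wiring with Socket.IO:
//   stream.on("change", (c) => {
//     const msg = routeChange(c);
//     if (msg) io.to(msg.room).emit(msg.event, msg.payload);
//   });
```

Because the database commit is the trigger, anything the admin console shows as "delivered" is anchored to a write that actually happened.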

Reliability tradeoffs
Tradeoff 01

Operational dependency on Redis

The Redis adapter trades an extra moving part for horizontal socket scale. Single-node Socket.IO is simpler to run locally but caps fan-out when the API replicas grow.

Tradeoff 02

Replica-set Mongo for streams

Change streams expect replica-set semantics. Local and compose setups document that constraint so stream wiring fails clearly in dev instead of silently in prod.
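For local runs, the constraint amounts to starting Mongo as a single-node replica set and initiating it once; with a plain standalone `mongod`, `.watch()` calls error out immediately, which is the "fail clearly in dev" behaviour described above. Container and replica-set names here are illustrative:

```shell
# Single-node replica set for local development; change streams require
# replica-set semantics even with one member.
docker run -d --name mongo-rs -p 27017:27017 mongo:7 --replSet rs0

# One-time initiation; until this runs, .watch() fails loudly instead of
# silently doing nothing.
docker exec mongo-rs mongosh --quiet --eval 'rs.initiate()'
```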


Resolution

Section V

Ingestion and persistence live in MongoDB; change streams feed the real-time layer so clients receive pushes tied to committed writes. Socket.IO uses a Redis adapter so room membership and broadcasts stay coherent across processes.


Outcome

Section VI
Long-form result

The stack runs end-to-end from `docker compose up` with seeded demo data, and the documentation ties the moving parts together (streams, Redis adapter, SPA). The case is evidence of backend and real-time ownership plus a deliberate admin experience—not a headless demo without an operational front end.


References

Section VII