NotifyFlux
Multi-tenant notification stack: Express and Socket.IO, MongoDB change streams, Redis-backed socket fan-out, JWT-scoped tenancy, and a React admin console for delivery visibility.
Case briefing
NotifyFlux was built as a full-stack exercise in SaaS-style notifications: multiple tenants, live delivery feedback, and an admin surface that stays usable when throughput grows. Constraints included keeping isolation provable (no accidental cross-tenant reads on the socket layer), making local and Docker-based runs reproducible, and exposing health and metrics for operations. The trigger was the gap between “we stored a notification” and “subscribers saw it quickly, in the right tenant context.”
Initial signal
What broke first
The failure mode was not only latency; it was inconsistent visibility. Admins polling REST lists could believe the system was quiet while events were still mid-flight, and scaling sockets without a shared adapter risked users missing updates or joining the wrong logical room. Without change-driven updates, the API either over-fetched on a polling interval or let its real-time view drift from the database.
Evidence collected
Stream-driven updates
Change streams anchor pushes to database commits instead of ad hoc polling loops.
Redis-backed Socket.IO
The adapter path supports multiple API instances without splitting room state per process.
Operator-facing web console
The React admin is part of the architecture, not an afterthought, so delivery behaviour is visible during development and review.
Investigation path
Invisible mid-flight work
REST polling can make the system look quiet while events are still moving through sockets; operators need a UI that reflects commits and delivery state, not only the last fetch.
Room coherence across processes
Without a shared adapter, horizontal scaling splits socket state per node and risks missed or mis-scoped broadcasts; tenant-scoped rooms must stay aligned with JWT claims.
Persisted write
Notifications land in MongoDB with tenant scope; the database remains the source of truth before anything is broadcast.
- Log 01
Mapped the lifecycle: write to Mongo → propagate to interested subscribers → confirm delivery state. Polling the collection on an interval was ruled out for hot paths because it duplicates work and still races with concurrent writers.
- Log 02
Adopted MongoDB change streams so the API reacts to data the database already committed, then evaluated Socket.IO with a Redis adapter so multiple API instances share room state—tradeoff: operational dependency on Redis versus single-node socket limits.
- Log 03
Separated tenant and user concerns in the socket layer (room naming, JWT claims carrying `tenantId`) so connection setup failures show up as auth problems, not silent drops.
- Log 04
Treated the React/Vite admin app as part of the system: if operators cannot see stream health and recent events, the backend observability work does not reach its audience.
Tradeoff 01: Operational dependency on Redis
The Redis adapter trades an extra moving part for horizontal socket scale. Single-node Socket.IO is simpler to run locally but caps fan-out when the API replicas grow.
Tradeoff 02: Replica-set Mongo for streams
Change streams expect replica-set semantics. Local and compose setups document that constraint so stream wiring fails clearly in dev instead of silently in prod.
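The room-coherence tradeoff can be illustrated with a toy model in plain JavaScript (this is not the real `@socket.io/redis-adapter` API; all names here are invented for illustration). A shared store plays the role Redis plays in production: both "API nodes" resolve room membership through it, so a broadcast from either node reaches sockets that connected through the other.

```javascript
// Toy model of adapter-backed fan-out (illustrative only; the real
// system uses Socket.IO's Redis adapter). A shared store stands in
// for Redis, so no node holds a private view of room membership.
class SharedRoomStore {
  constructor() {
    this.rooms = new Map(); // room name -> Set of socket ids
  }
  join(room, socketId) {
    if (!this.rooms.has(room)) this.rooms.set(room, new Set());
    this.rooms.get(room).add(socketId);
  }
  members(room) {
    return [...(this.rooms.get(room) || [])];
  }
}

class ApiNode {
  constructor(name, store) {
    this.name = name;
    this.store = store;  // shared across nodes, like the Redis adapter
    this.delivered = []; // deliveries this node observed
  }
  connect(socketId, room) {
    this.store.join(room, socketId);
  }
  broadcast(room, payload) {
    for (const socketId of this.store.members(room)) {
      this.delivered.push({ socketId, payload });
    }
  }
}
```

With a shared store, a socket that connected through node A still receives a broadcast issued from node B; give each node its own store (the no-adapter case) and the same broadcast silently reaches nobody, which is exactly the missed-update risk described above.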
Resolution
Ingestion and persistence live in MongoDB; change streams feed the real-time layer so clients receive pushes tied to committed writes. Socket.IO uses a Redis adapter so room membership and broadcasts stay coherent across processes.
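A minimal sketch of that stream-driven path, assuming the official Node.js MongoDB driver; the collection name (`notifications`), field name (`tenantId`), and helper name are illustrative, not taken from the repo:

```javascript
// Hypothetical sketch of the change-stream path. The $match stage keeps
// only committed inserts scoped to one tenant, so every push is anchored
// to a write MongoDB has already accepted.
function buildNotificationPipeline(tenantId) {
  return [
    { $match: { operationType: "insert", "fullDocument.tenantId": tenantId } },
  ];
}

// Wiring sketch (requires a replica-set MongoDB and the official driver):
//
//   const stream = db.collection("notifications")
//     .watch(buildNotificationPipeline(tenantId));
//   stream.on("change", (event) =>
//     io.to(`tenant:${tenantId}`).emit("notification", event.fullDocument));
```

Filtering in the pipeline (rather than in the handler) means the server never even sees events for other tenants, which keeps the isolation argument simple.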
Tenancy is enforced at the data model and request boundary: JWTs carry tenant identity, and socket rooms map tenants and users to dedicated channels—delivery logic does not rely on a single global broadcast.
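The tenant/user room mapping can be sketched as a pure helper (function and claim names here are assumptions, not the repo's identifiers); in a Socket.IO auth middleware you would verify the JWT, call this helper, and `socket.join(...)` each room, so a bad token surfaces as an auth error rather than a silent drop:

```javascript
// Illustrative sketch: derive socket rooms from verified JWT claims.
// A connection without a tenantId claim fails loudly at handshake time
// instead of quietly landing in a global or wrong-tenant room.
function roomsForClaims(claims) {
  if (!claims || typeof claims.tenantId !== "string" || claims.tenantId === "") {
    throw new Error("socket auth failed: missing tenantId claim");
  }
  const rooms = [`tenant:${claims.tenantId}`];
  if (claims.sub) {
    // Per-user channel nested under the tenant, for targeted delivery.
    rooms.push(`tenant:${claims.tenantId}:user:${claims.sub}`);
  }
  return rooms;
}
```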
`NotifyFlux.Api` hosts HTTP, sockets, and stream processing; `NotifyFlux.Web` is the Vite + React operator UI consuming the same API and listening for live events. Production-style Docker Compose runs Nginx in front of the SPA with `try_files` for client routing. The repository’s `docs/` folder and packaged architecture diagram spell out scaling and integration in more detail than this summary.
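The SPA-serving piece can be sketched as Nginx config; the upstream service name, port, and paths are assumptions and the repo's actual config may differ. `try_files` sends unknown paths to `index.html` so client-side routes survive a hard refresh, and the `/socket.io/` block forwards the WebSocket upgrade:

```nginx
# Illustrative sketch, not the repository's config.
server {
  listen 80;
  root /usr/share/nginx/html;   # built Vite output

  location /api/ {
    proxy_pass http://api:3000/;          # assumed API service name/port
  }

  location /socket.io/ {
    proxy_pass http://api:3000;
    proxy_http_version 1.1;               # required for WebSocket upgrade
    proxy_set_header Upgrade $http_upgrade;
    proxy_set_header Connection "upgrade";
  }

  location / {
    try_files $uri $uri/ /index.html;     # SPA client-routing fallback
  }
}
```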
Outcome
The stack runs end-to-end from `docker compose up` with seeded demo data, and the documentation ties the moving parts together (streams, Redis adapter, SPA). The case is evidence of backend and real-time ownership plus a deliberate admin experience—not a headless demo without an operational front end.
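A compose sketch consistent with the constraints above (service names, images, and environment variables are assumptions, not the repository's actual file); the `--replSet` flag is the point, since change streams require replica-set semantics, and a single-node replica set still needs a one-time `rs.initiate()` before streams will open:

```yaml
# Illustrative docker-compose sketch, not the repo's file.
services:
  mongo:
    image: mongo:7
    command: ["mongod", "--replSet", "rs0"]  # change streams need a replica set
  redis:
    image: redis:7
  api:
    build: ./NotifyFlux.Api
    environment:
      MONGO_URL: mongodb://mongo:27017/notifyflux?replicaSet=rs0
      REDIS_URL: redis://redis:6379
    depends_on: [mongo, redis]
  web:
    build: ./NotifyFlux.Web                  # Nginx serving the built SPA
    ports: ["8080:80"]
    depends_on: [api]
```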