
Iheb Chatti

Full-stack product engineering, scalable APIs, async workflows, and cloud delivery

Published Mar 8, 2026 · Updated Mar 28, 2026 · 4 min read · Intermediate

Observability for Workflow Systems Means Explaining State

Good observability helps teams explain user-visible state, replay decisions, and workflow timelines instead of merely collecting more technical signals.

Backend · DevOps

Hook

The hard part of most incidents is not detection. It is explanation. Engineering knows something is wrong, support has a user asking for answers, and nobody can say with confidence what the system already did or what is safe to do next.

The Real Problem

Traditional observability stacks are good at telling you whether infrastructure is unhealthy. Workflow systems need more than that. They need to explain how a business action moved through requests, jobs, callbacks, and manual intervention. If telemetry stops at HTTP status, queue depth, and service error rates, teams still cannot reconstruct the user journey.

This matters because most production incidents are coordination incidents. The API may be healthy while the workflow is blocked. Workers may be running while the same operation is retrying indefinitely. The frontend may reflect a stale read model that is technically expected but product-visible enough to trigger support escalation. Observability has to connect those layers.

What Breaks in Practice

  • A dashboard shows no obvious platform failure, but users are trapped in a non-terminal state.
  • Request IDs exist for the API and nowhere else, so job and callback activity cannot be stitched together.
  • Alerts fire on transient dependency spikes but miss hours-long completion lag on critical flows.
  • Support teams rely on screenshots because the system has no readable workflow timeline.
  • Engineers know a message failed but cannot determine whether the side effect already happened remotely.

Key Decisions

1. Use one business correlation path end to end

I want a workflow or operation identifier that survives the request, the queue, the callback, and the operator action. Without that, incident debugging turns into manual log archaeology.
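As a minimal sketch of that idea, the same identifier can travel inside the job envelope and appear in every log line on both sides of the queue. The names here (`workflow_id`, `enqueue_job`, the in-memory queue) are illustrative assumptions, not a specific framework's API:

```python
import json
import logging

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("orders")

def enqueue_job(queue: list, job_type: str, payload: dict, workflow_id: str) -> None:
    """Attach the business correlation ID to the job envelope itself."""
    queue.append({"type": job_type, "workflow_id": workflow_id, "payload": payload})

def handle_request(queue: list, workflow_id: str) -> None:
    # The API logs and enqueues under the same identifier.
    log.info(json.dumps({"event": "request_accepted", "workflow_id": workflow_id}))
    enqueue_job(queue, "charge_card", {"amount": 4200}, workflow_id)

def run_worker(queue: list) -> str:
    job = queue.pop(0)
    # The worker logs under the identifier it received, never a fresh one,
    # so request and job activity can be stitched together later.
    log.info(json.dumps({"event": "job_started", "workflow_id": job["workflow_id"]}))
    return job["workflow_id"]

queue: list = []
handle_request(queue, "wf-123")
assert run_worker(queue) == "wf-123"
```

The same principle extends to callbacks and operator actions: whatever touches the workflow logs with the one ID it was handed.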

2. Emit business events, not only technical logs

Errors matter, but so do accepted transitions, rejected transitions, replay attempts, manual overrides, and completion milestones. Structured business events make it possible to answer product questions without reverse-engineering them from infrastructure noise.
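One way to picture this is an append-only stream of typed business events rather than free-form log strings. The event shape below is an assumption for illustration, not a standard schema:

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class BusinessEvent:
    workflow_id: str
    kind: str          # e.g. "transition_accepted", "replay_attempted"
    detail: dict
    at: datetime = field(default_factory=lambda: datetime.now(timezone.utc))

events: list[BusinessEvent] = []

def emit(workflow_id: str, kind: str, **detail) -> None:
    events.append(BusinessEvent(workflow_id, kind, detail))

# Accepted and rejected transitions, and manual intervention, all land
# in the same stream as first-class events.
emit("wf-123", "transition_accepted", frm="created", to="charging")
emit("wf-123", "transition_rejected", frm="charging", to="refunded",
     reason="refund not allowed before settlement")
emit("wf-123", "manual_override", actor="ops@example.com", to="settled")

# Product questions become queries over events, not log archaeology:
rejected = [e for e in events if e.kind == "transition_rejected"]
assert rejected[0].detail["reason"].startswith("refund")
```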

3. Give support readable state timelines

Not every question should require an engineer. For workflows that matter to users, I prefer operator and support tooling that shows the current status, the transition history, and the last failed step in plain terms.
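A support-facing timeline can be as simple as translating transition records into plain sentences. The record fields and templates here are hypothetical; the point is that the rendering is deterministic and readable without engineering help:

```python
# Map event kinds to plain-language templates a support agent can read.
TEMPLATES = {
    "transition_accepted": "Moved from {frm} to {to}",
    "transition_rejected": "Could not move from {frm} to {to}: {reason}",
    "manual_override": "Manually set to {to} by {actor}",
}

def render_timeline(records: list[dict]) -> list[str]:
    # Extra keys in a record (like "kind") are ignored by str.format.
    return [TEMPLATES[r["kind"]].format(**r) for r in records]

lines = render_timeline([
    {"kind": "transition_accepted", "frm": "created", "to": "charging"},
    {"kind": "transition_rejected", "frm": "charging", "to": "settled",
     "reason": "payment provider timed out"},
])
assert lines[1] == "Could not move from charging to settled: payment provider timed out"
```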

4. Alert on blocked business flow, not only unhealthy services

A system can be technically up and product-wise broken. Metrics such as stuck-state duration, completion lag, retry exhaustion, and callback backlog are often more useful than generic saturation alarms.
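A stuck-state check can be sketched in a few lines: flag any workflow sitting in a non-terminal state longer than a threshold, regardless of service health. The state names and threshold are assumptions:

```python
from datetime import datetime, timedelta, timezone

TERMINAL = {"completed", "cancelled"}

def stuck_workflows(rows: list[dict], now: datetime,
                    threshold: timedelta) -> list[str]:
    """Return IDs of workflows blocked in a non-terminal state too long."""
    return [r["workflow_id"] for r in rows
            if r["state"] not in TERMINAL
            and now - r["entered_state_at"] > threshold]

now = datetime(2026, 3, 8, 12, 0, tzinfo=timezone.utc)
rows = [
    {"workflow_id": "wf-1", "state": "charging",
     "entered_state_at": now - timedelta(hours=3)},
    {"workflow_id": "wf-2", "state": "completed",
     "entered_state_at": now - timedelta(hours=9)},
]
# wf-1 is blocked even though every service behind it may report healthy.
assert stuck_workflows(rows, now, timedelta(hours=1)) == ["wf-1"]
```

Feeding this number into an alert catches the "up but broken" case that saturation alarms miss.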

5. Preserve replay context

When a job fails, teams need to know what input it saw, what step it was performing, and whether the external effect is known, unknown, or suppressed. That context is crucial to safe recovery.
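One way to preserve that context is a dead-letter record that captures the input, the step, and an explicit three-way answer about the external effect. The field names below are illustrative assumptions:

```python
from dataclasses import dataclass
from typing import Literal

@dataclass
class DeadLetter:
    workflow_id: str
    step: str
    input_snapshot: dict
    # The three cases an operator must distinguish before recovery:
    external_effect: Literal["known_done", "known_not_done", "unknown"]
    error: str

def safe_to_replay(dl: DeadLetter) -> bool:
    # Replay is only safe when we know the remote effect did not happen;
    # "unknown" demands a manual check, "known_done" a compensating action.
    return dl.external_effect == "known_not_done"

dl = DeadLetter("wf-123", "charge_card", {"amount": 4200},
                "unknown", "connection reset mid-call")
assert not safe_to_replay(dl)
```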

Tradeoffs

  • Richer event models improve explanation but increase storage, event design work, and redaction requirements.
  • Support-facing timelines reduce engineering interruption while demanding stronger access control and product polish.
  • Fewer, better alerts increase signal quality, though they require more disciplined operational definitions.
  • End-to-end correlation makes incidents faster to debug but depends on consistent instrumentation across teams.

Production Patterns

  • Structured logs enriched with workflow ID, actor, transition, and dependency.
  • Distributed traces that extend beyond request/response into background execution.
  • Audit timelines for user-visible state changes and manual interventions.
  • Dashboards for completion latency, stuck operations, replay count, and callback delay.
  • Dead-letter tooling that records not just the exception, but the business meaning of the failed step.

What I'd Improve

I would start with support workflows instead of engineering dashboards. Teams often instrument what is easy to collect and only later discover they did not instrument what matters during a real customer-facing incident.
