Two posts argue that durable workflows can live inside Postgres. The pieces, "Workflows: Durable Execution with Just Postgres" and "Absurd Workflows: Durable Execution with Just Postgres", treat the database itself as the orchestration backbone [1][2].
Durable Execution with Postgres
Proponents say you can keep long-running tasks durable by looping inside Postgres rather than leaning on external schedulers. The thread even asks how this stacks up against DBOS, and one commenter quips, "I'm sure with time DBOS will be great," while noting DBOS's early SDKs and complexity [2].
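The core of the "durable execution in the database" idea is checkpointing: each workflow step records its result under a stable key before the workflow moves on, so a crashed run can be retried and replay completed steps from storage instead of redoing them. A minimal sketch, with a plain dict standing in for a Postgres table keyed by `(workflow_id, step_name)`; a real version would `INSERT ... ON CONFLICT DO NOTHING` and read the stored row back inside a transaction. The `step` helper and the table shape here are illustrative assumptions, not an API from either post.

```python
# Stand-in for a Postgres checkpoints table with a
# UNIQUE (workflow_id, step_name) constraint.
checkpoints: dict[tuple[str, str], object] = {}

def step(workflow_id: str, name: str, fn):
    """Run fn at most once per (workflow_id, name); replays return the saved result."""
    key = (workflow_id, name)
    if key in checkpoints:          # step already durable: skip re-execution
        return checkpoints[key]
    result = fn()                   # do the actual work
    checkpoints[key] = result       # "commit" the checkpoint
    return result

calls = []  # tracks real executions so replays are visible

def workflow(wf_id: str):
    a = step(wf_id, "fetch", lambda: calls.append("fetch") or 41)
    b = step(wf_id, "transform", lambda: calls.append("transform") or a + 1)
    return b

workflow("wf-1")            # first run executes both steps
result = workflow("wf-1")   # simulated crash-and-retry: both steps replay from checkpoints
```

Because the second invocation finds both checkpoints, `calls` still holds only one `"fetch"` and one `"transform"`: the retry is cheap and side-effect-free, which is the property that lets the loop live inside the database.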
Agent frameworks and reimplementation choices
If you want provider independence, the discussion asks about using existing agent frameworks such as Claude + MCP or OpenAI + tool calling, but the critique hits the same point: neither option has a built-in durable-execution solution, so you end up driving the loop yourself [2].
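"Driving the loop yourself" means owning the send/execute/append cycle: post the conversation to the model, run whatever tool calls come back, append their results, and repeat until the model stops asking for tools. A minimal sketch with a stub `call_model` standing in for any provider's chat API; the message and tool-call shapes are assumptions for illustration, not Claude's or OpenAI's actual wire format.

```python
def add(a: int, b: int) -> int:
    return a + b

TOOLS = {"add": add}  # the tools the loop is willing to execute

def call_model(messages):
    """Stub model: requests one add() call, then answers with its result."""
    tool_results = [m for m in messages if m["role"] == "tool"]
    if not tool_results:
        return {"tool_calls": [{"name": "add", "args": {"a": 40, "b": 2}}]}
    return {"content": f"The answer is {tool_results[-1]['content']}"}

def agent_loop(user_prompt: str) -> str:
    messages = [{"role": "user", "content": user_prompt}]
    while True:
        reply = call_model(messages)
        if not reply.get("tool_calls"):        # model is done calling tools
            return reply["content"]
        for call in reply["tool_calls"]:       # run each requested tool
            result = TOOLS[call["name"]](**call["args"])
            messages.append({"role": "tool", "content": result})

answer = agent_loop("What is 40 + 2?")
```

Once you own this loop, making it durable is a matter of checkpointing each model reply and tool result, which is exactly the point where the Postgres approach plugs in.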
When reimplementing makes sense
• Running your own models or wanting tighter retry control: some folks want a more hands-on grip on the loop than an off-the-shelf SDK provides [2].
• Provider independence: you may prefer staying in your own stack rather than tying yourself to a single vendor [2].
• Early-tooling trade-offs: the conversation flags complexity and early SDK quality in options like DBOS [2].
The debate isn’t settled yet, but the idea of keeping durable workflows close to the data layer is clearly gaining traction.
References
[1] Workflows: Durable Execution with Just Postgres. Discusses durable workflows built on PostgreSQL, evaluating the approaches and trade-offs of orchestrating with just Postgres.
[2] Absurd Workflows: Durable Execution with Just Postgres. Covers durable execution with Postgres, a comparison to DBOS, agent frameworks versus custom loop control, and when reimplementing beats existing tools.