DO blocks in PostgreSQL 18 are sparking a production-use debate. The real intrigue isn’t syntax alone—it’s whether pipelining and batching in PostgreSQL 18 and its go-to tool psql can trim latency enough for real apps [1].
Pipelines vs Interactive Transactions

Pipelining (batching statements on one connection) bounds the number of network roundtrips: you can send many reads at once, process the results, and follow up with a batch of writes [1]. That challenges the usual one-query-at-a-time pattern and nudges developers toward batch-friendly design.
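As a rough sketch of what this looks like in PostgreSQL 18's psql (table and column names here are placeholders; the meta-commands \startpipeline, \sendpipeline, and \endpipeline are the pipeline commands described for the 18 release):

```sql
-- Queue two reads on one connection and dispatch them together.
-- Inside a pipeline, queries are sent with \bind ... \sendpipeline
-- instead of executing immediately; results come back when the
-- pipeline is ended.
\startpipeline
SELECT id, status FROM orders WHERE status = $1 \bind 'pending' \sendpipeline
SELECT count(*) FROM audit_log \bind \sendpipeline
\endpipeline
```

The client pays one roundtrip pattern for the whole batch rather than one per statement, which is the latency win the discussion centers on.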
Concurrency Conundrum

Batching also makes concurrency harder to reason about. When you group multiple statements, you juggle locks, isolation levels, and potential conflicts in ways that don't arise with single-statement executions [1].
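One standard mitigation, sketched here against a hypothetical accounts table, is to take row locks in a deterministic order up front, so that two batched transactions touching the same rows queue behind each other instead of deadlocking:

```sql
BEGIN;
-- Lock both rows in a consistent (id) order before writing; a
-- concurrent transaction doing the same will wait rather than
-- deadlock mid-batch.
SELECT balance FROM accounts WHERE id IN (1, 2) ORDER BY id FOR UPDATE;
UPDATE accounts SET balance = balance - 100 WHERE id = 1;
UPDATE accounts SET balance = balance + 100 WHERE id = 2;
COMMIT;
```

This is exactly the kind of reasoning that single-statement workloads let you skip.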
Procedural vs Declarative Tension

There's an impedance mismatch between procedural and declarative styles. The discussion notes that the default unit of code in most application stacks is a procedure call, and the SQL-as-strings pattern can push you toward complex dynamic SQL and a greater risk of injection [1].
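Inside a DO block, the usual defenses against that injection risk are format() with %I for identifiers and EXECUTE ... USING for values; a minimal sketch with a placeholder table name:

```sql
DO $$
DECLARE
  tbl text := 'accounts';  -- hypothetical table name
  n   bigint;
BEGIN
  -- %I safely quotes the identifier; the value travels via USING,
  -- never by string concatenation into the SQL text.
  EXECUTE format('SELECT count(*) FROM %I WHERE balance > $1', tbl)
    INTO n
    USING 0;
  RAISE NOTICE 'rows: %', n;
END
$$;
```

Dynamic SQL built by concatenation, by contrast, reintroduces the injection surface the declarative style was supposed to avoid.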
What it means for DO blocks in production

Does this push DO blocks from ad-hoc scripting toward production logic? The discussion highlights the trade-offs: latency wins versus added complexity, and the need for careful concurrency handling [1].
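The latency argument for that shift is that a DO block runs several dependent steps server-side in a single statement; a sketch against a hypothetical jobs table:

```sql
-- One statement instead of a client-side read-then-write pair.
DO $$
BEGIN
  IF EXISTS (SELECT 1 FROM jobs WHERE state = 'queued') THEN
    UPDATE jobs
       SET state = 'running', started_at = now()
     WHERE state = 'queued';
  END IF;
END
$$;
```

The limitation that feeds the debate remains: DO blocks take no parameters and return no result set, so inputs must be inlined or fetched inside the block, which is part of why they have stayed in ad-hoc-scripting territory.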
Closing thought: the DO blocks question isn’t resolved. If batching tooling and safety around multi-statement edits improve, production use may follow [1].
References
[1] Pipelining in psql (PostgreSQL 18) — discusses batching, pipelining, and DO blocks in PostgreSQL; debates network latency, driver support, and practicality for applications.