Pipelining support in psql, new in PostgreSQL 18, is stirring a heated debate about latency, batching, and DO blocks. The core claim: network roundtrips are the biggest latency contributor in most apps, and batching can bound those trips by grouping reads together and deferring writes [1].
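As a sketch of how this looks at the psql prompt — assuming the pipeline meta-commands described for PostgreSQL 18 (`\startpipeline`, `\flushrequest`, `\getresults`, `\endpipeline`; exact syntax may differ in the released version) and hypothetical table names:

```sql
\startpipeline
-- Queue two reads without waiting for their results (hypothetical tables).
-- Queries in a pipeline go over the extended protocol, hence \bind \g.
SELECT id, total FROM orders WHERE status = 'open' \bind \g
SELECT id, email FROM customers WHERE active \bind \g
-- Ask the server to flush its output, then collect both result sets.
\flushrequest
\getresults
-- Sync point and exit pipeline mode: the whole exchange takes few roundtrips.
\endpipeline
```

The point of the sketch: nothing blocks between the two SELECTs, so the client pays for roughly one network exchange instead of one per statement.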
Latency math - Batching shines when you fetch all read-only data in one roundtrip and run a single write batch after processing. That approach can dramatically cut roundtrips, but it isn't free of trade-offs (concurrency gets messier) [1].
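The arithmetic behind that claim can be made concrete with a rough model: total latency ≈ roundtrips × RTT + server-side work. All numbers below are hypothetical, chosen only to illustrate the shape of the savings.

```python
# Rough latency model: total time ~= roundtrips * RTT + per-query server work.
# All constants are hypothetical, for illustration only.

RTT_MS = 5.0          # one client<->server network roundtrip
SERVER_WORK_MS = 0.5  # server-side cost per query
N_QUERIES = 40        # queries issued by one request handler

def naive_ms(n: int) -> float:
    """One roundtrip per query: latency scales linearly with n."""
    return n * (RTT_MS + SERVER_WORK_MS)

def batched_ms(n: int, batches: int = 2) -> float:
    """Group queries into a few batches (e.g. one read batch up front,
    one write batch after processing): roundtrips are bounded by the
    batch count, not the query count."""
    return batches * RTT_MS + n * SERVER_WORK_MS

print(naive_ms(N_QUERIES))    # 220.0 (ms)
print(batched_ms(N_QUERIES))  # 30.0 (ms)
```

With these (made-up) numbers, batching turns a 220 ms request into a 30 ms one, and the gap widens as RTT grows relative to per-query server work.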
Driver reality - In practice, many PostgreSQL drivers, especially in JavaScript, don’t yet support batching, which tempers the practical gains of pipelining in real apps [1].
Trade-offs - Where batching helps, it trims latency; where it doesn’t, it adds orchestration complexity and subtle timing concerns. The post points out the tension between a batch-oriented workflow and traditional per-call execution models [1].
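One way to see that orchestration cost: instead of issuing each write as it is produced, a batch-oriented handler accumulates writes and flushes them once. A minimal sketch, where `WriteBatch` and the counting executor are stand-ins invented here (not any driver's real API) for a driver that supports batched execution:

```python
from typing import Callable

class WriteBatch:
    """Collects (sql, params) pairs and flushes them in one roundtrip."""

    def __init__(self, execute_many: Callable[[list], int]):
        self._execute_many = execute_many
        self._pending: list[tuple[str, tuple]] = []

    def add(self, sql: str, params: tuple) -> None:
        # Deferred: nothing hits the network here.
        self._pending.append((sql, params))

    def flush(self) -> int:
        # One roundtrip for the whole batch; returns statements sent.
        sent = self._execute_many(self._pending)
        self._pending.clear()
        return sent

# Stand-in executor that just counts roundtrips (no real database).
roundtrips = 0
def fake_execute_many(stmts: list) -> int:
    global roundtrips
    roundtrips += 1
    return len(stmts)

batch = WriteBatch(fake_execute_many)
for order_id in (1, 2, 3):  # hypothetical processing loop
    batch.add("UPDATE orders SET status = 'done' WHERE id = %s", (order_id,))
sent = batch.flush()
print(sent, roundtrips)  # 3 statements sent, 1 roundtrip
```

The complexity shows up exactly where the post says it does: a failure halfway through the flush, or a read that needs to observe a not-yet-flushed write, is now the caller's problem rather than the driver's.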
Mindset shift - The discussion also flags a deeper issue: the default unit of expression in code is still a procedure call, making batch-oriented ideas feel foreign and harder to retrofit into existing code [1].
Bottom line: the batching/pipelining story in PostgreSQL 18 is evolving. As teams prototype and benchmarks land, the real winners will be apps that pair the right batching strategy with driver support and a clear concurrency model [1].
References
[1] Pipelining in psql (PostgreSQL 18) — discusses batching, pipelining, and DO blocks in PostgreSQL; debates network latency, driver support, and practicality for apps.