Pipelining in psql for PostgreSQL 18 is sparking a latency debate. Batching could slice network roundtrips, but driver support and architecture limits complicate the math. [1]
Why batching matters
Network roundtrips are the biggest latency culprit in many applications. Batching reads bounds the number of roundtrips and boosts throughput, and some teams go further and batch their writes after processing the fetched data, instead of firing one query at a time. The core idea is to forego interactive transactions: execute all read-only queries at once, process the results, then send the writes as a later batch. [1]
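A minimal sketch of that pattern, assuming the node-postgres (`pg`) driver and hypothetical `users` and `orders` tables: the reads go out together, processing happens in application code, and the writes land as one statement rather than one query per row. This is application-level batching, not the protocol-level pipelining psql in PostgreSQL 18 exposes.

```typescript
import { Pool } from "pg";

// Hypothetical connection string and table names, for illustration only.
const pool = new Pool({ connectionString: process.env.DATABASE_URL });

async function settleOrders(userIds: number[]): Promise<void> {
  // 1. Read batch: fire every read-only query up front instead of one at a time,
  //    so the roundtrips overlap instead of adding up.
  const [users, orders] = await Promise.all([
    pool.query("SELECT id, credit_limit FROM users WHERE id = ANY($1)", [userIds]),
    pool.query(
      "SELECT id, user_id, total FROM orders WHERE user_id = ANY($1) AND settled = false",
      [userIds],
    ),
  ]);

  // 2. Process entirely in application code, using only the data already fetched.
  const creditById = new Map<number, number>();
  for (const u of users.rows) creditById.set(u.id, Number(u.credit_limit));
  const approved = orders.rows
    .filter((o) => Number(o.total) <= (creditById.get(o.user_id) ?? 0))
    .map((o) => o.id);

  // 3. Write batch: one UPDATE for all approved orders, not one roundtrip per row.
  if (approved.length > 0) {
    await pool.query("UPDATE orders SET settled = true WHERE id = ANY($1)", [approved]);
  }
}
```

The constraint this sketch accepts is that no read can depend on the result of another read in the same batch; everything the write needs has to be decidable from data fetched up front.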
Where pipelining hits walls
Most Postgres drivers don't even support batching, at least in the JavaScript world. Concurrency also gets more complicated once you batch, with potential trade-offs in consistency and timing; a sketch after the list below illustrates one of them. [1]
• Batching read-only queries – execute all read-only queries at once, then a separate write batch after processing. [1]
• Driver support – most Postgres drivers don't support batching; in the JavaScript ecosystem this gap is especially visible. [1]
• Impedance between code and SQL – the default procedure-call mindset makes batching feel awkward and can slow adoption. [1]
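To make the consistency trade-off concrete, under the same assumed driver and hypothetical tables: on a pool, batched reads may land on different connections and therefore see different snapshots, so one option is to pin the read batch to a single connection inside a REPEATABLE READ transaction.

```typescript
import { Pool, PoolClient } from "pg";

const pool = new Pool({ connectionString: process.env.DATABASE_URL });

// Run a read batch against one snapshot instead of letting each query
// land on a different pooled connection with its own view of the data.
async function readBatchConsistently(userIds: number[]) {
  const client: PoolClient = await pool.connect();
  try {
    // Every statement inside this transaction sees the same snapshot.
    await client.query("BEGIN ISOLATION LEVEL REPEATABLE READ READ ONLY");

    // On a single client the queries run in order on one connection, so the
    // results are mutually consistent, but the roundtrips no longer overlap.
    const users = await client.query(
      "SELECT id, credit_limit FROM users WHERE id = ANY($1)",
      [userIds],
    );
    const orders = await client.query(
      "SELECT id, user_id, total FROM orders WHERE user_id = ANY($1)",
      [userIds],
    );

    await client.query("COMMIT");
    return { users: users.rows, orders: orders.rows };
  } catch (err) {
    await client.query("ROLLBACK");
    throw err;
  } finally {
    client.release();
  }
}
```

That is the timing cost in miniature: consistency here comes from serializing the batch onto one connection, whereas protocol-level pipelining aims to keep several queries in flight on a single connection so the two goals are less at odds.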
Closing thought: the payoff from pipelining hinges on your client stack and workload mix. If JavaScript tooling and your code architecture aren’t batching-friendly, the latency wins may stay limited. [1]
References
[1] Pipelining in psql (PostgreSQL 18)
Discusses batching, pipelining, and DO blocks in PostgreSQL; debates network latency, driver support, and practicality for applications.