| From: | Eduard Stepanov <crtxcz(at)gmail(dot)com> |
|---|---|
| To: | Induja Sreekanthan <indujas(at)google(dot)com> |
| Cc: | pgsql-hackers(at)postgresql(dot)org, Simhachala Sasikanth Gottapu <simhachala(at)google(dot)com>, Vishal Bagga <vishalbagga(at)google(dot)com>, Madhukar <madhukarprasad(at)google(dot)com>, Shihao Zhong <shihaozhong(at)google(dot)com>, Yi Ding <yidin(at)google(dot)com>, Hardik Singh Negi <hardiksnegi(at)google(dot)com> |
| Subject: | Re: BUG: ReadStream look-ahead exhausts local buffers when effective_io_concurrency>=64 |
| Date: | 2026-03-12 11:13:38 |
| Message-ID: | CADOio5KHu990BKOf3tiZOBbd8U0sHLHVS+ARajbYY0Wg_dLfKw@mail.gmail.com |
| Lists: | pgsql-hackers |
Hi Induja!
I've encountered the same issue, but in a different context, so I came
up with an alternative solution: a dynamic buffer_limit check in
read_stream_look_ahead().
In read_stream_look_ahead(), the patch dynamically checks the actual
remaining pin budget via GetAdditionalLocalPinLimit(). This adds a
third condition to the main while loop and re-checks the budget after
each pin operation. Additionally, when the budget is exhausted, the
pending read is started immediately rather than waiting for
io_combine_limit blocks to accumulate.
The attached patch modifies src/backend/storage/aio/read_stream.c and
adds two tests to temp.sql.
Best regards,
Eduard Stepanov
Tantor Labs LLC
On Tue, 10 Mar 2026 at 23:22, Induja Sreekanthan <indujas(at)google(dot)com> wrote:
>
> Hi,
>
> I found an issue where Postgres (with effective_io_concurrency of 64 or higher) runs out of local buffers during a sequential scan on a temporary table with TOAST data.
>
> The issue occurs because the ReadStream look-ahead pins all the local buffers. This results in the TOAST index look-up and TOAST page read being unable to find any available local buffers. The ReadStream's max_pinned_buffers can be as high as the num_temp_buffers, depending on the effective_io_concurrency.
>
> Here is a reproduction of the issue using the default temp_buffers setting and effective_io_concurrency=128:
>
> docker run --name my-postgres -e POSTGRES_PASSWORD=my-password -p 5432:5432 -d postgres:18 -c effective_io_concurrency=128
>
> postgres=# CREATE TEMPORARY TABLE tmp_tbl1 (
> s_suppkey NUMERIC NOT NULL,
> s_nationkey NUMERIC,
> s_comment VARCHAR(256),
> s_name CHAR(256),
> s_address VARCHAR(256),
> s_phone TEXT,
> s_acctbal NUMERIC,
> CONSTRAINT supplier_pk PRIMARY KEY (s_suppkey)
> );
> CREATE TABLE
> postgres=# INSERT INTO tmp_tbl1 (s_suppkey, s_nationkey, s_comment, s_name, s_address, s_phone, s_acctbal)
> SELECT
> ('1' || repeat('0', 2000) || i::text)::NUMERIC AS s_suppkey,
> ('5' || repeat('0', 2000) || floor(random() * 25)::text)::NUMERIC AS s_nationkey,
> md5(random()::text) || ' some comment' AS s_comment,
> 'Supplier#' || LPAD(i::text, 9, '0') AS s_name,
> 'Address-' || md5(i::text) AS s_address,
> repeat('P', 4096) || '-' || i::text || repeat('P', 2048) || 'fwoiefrr' ||
> repeat('fejwfelwkmfP', 4096) || '-' || i::text || repeat('fnwekjfmelkwf', 2048) AS s_phone,
> ('9' || repeat('9', 2000) || '.' || floor(random()*100)::text)::NUMERIC AS s_acctbal
> FROM generate_series(1, 8000) AS i;
> INSERT 0 8000
> postgres=# SELECT * FROM tmp_tbl1;
> ERROR: no empty local buffer available
>
> Attached is a patch that addresses this by limiting ReadStream's max_pinned_buffers for temp tables to 75% of the available local buffers. It also introduces a cap on max_ios for temp tables to DEFAULT_EFFECTIVE_IO_CONCURRENCY, to account for multiple sequential scan look-aheads happening simultaneously.
>
> Regards,
> Induja Sreekanthan
| Attachment | Content-Type | Size |
|---|---|---|
| 0001-Throttle-read-stream-look-ahead-against-local-buffer.patch | application/octet-stream | 8.8 KB |