Re: BUG #19363: PostgreSQL shared memory exhaustion during query execution involving views and parallel workers

From: Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us>
To: Richard Guo <guofenglinux(at)gmail(dot)com>
Cc: jinhui(dot)lai(at)qq(dot)com, pgsql-bugs(at)lists(dot)postgresql(dot)org
Subject: Re: BUG #19363: PostgreSQL shared memory exhaustion during query execution involving views and parallel workers
Date: 2025-12-27 17:02:17
Message-ID: 1712093.1766854937@sss.pgh.pa.us
Lists: pgsql-bugs

Richard Guo <guofenglinux(at)gmail(dot)com> writes:
> The executor needs to build a hash table for 9.7 billion rows, which
> requires a lot of memory. In practice, the executor splits the work
> into 8192 batches, but that still leaves about 1.2 million rows per
> batch. To manage that many rows, the executor allocated 4194304
> buckets, which means it needs 32 MB just for the bucket array,
> assuming you are on a 64-bit system. I guess your available shared
> memory was less than 32 MB at that moment.
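
[For reference, the arithmetic in Richard's description checks out; here
is a minimal back-of-the-envelope sketch. The constants simply restate
the numbers quoted above; this is not the actual nodeHash.c code.

    /* Back-of-the-envelope check of the figures quoted above;
     * illustrative only, not PostgreSQL source. */
    #include <stdio.h>

    int
    main(void)
    {
        double ntuples  = 9.7e9;    /* rows to hash */
        int    nbatches = 8192;     /* batches chosen by the executor */
        long   nbuckets = 4194304;  /* buckets per batch */

        /* ~1.2 million rows per batch */
        printf("rows per batch: %.1f million\n",
               ntuples / nbatches / 1e6);

        /* 4194304 pointers * 8 bytes = 32 MB on a 64-bit system */
        printf("bucket array: %ld MB\n",
               nbuckets * (long) sizeof(void *) / (1024L * 1024L));
        return 0;
    }
]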

On my machine, the query does complete, but it's ridiculously slow:
about 15 sec to compute an empty result. AFAICT nearly all of that
is being spent in hash table setup.

What I'm wondering is why this plan was accepted at all. Does PHJ
(parallel hash join) not consider work_mem or a similar limit on
hash table size?
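
[One would expect the hash sizing code to clamp the bucket count so
that the bucket array alone stays within the memory budget. A
hypothetical sketch of such a clamp follows; the function name, the
halving strategy, and hash_mem_limit are illustrative assumptions, not
the real ExecChooseHashTableSize logic.

    /* Hypothetical clamp of a bucket count to a memory budget;
     * illustrative only, not PostgreSQL source. */
    #include <stddef.h>

    static long
    clamp_nbuckets(long nbuckets, size_t hash_mem_limit)
    {
        /* Halve the bucket count until the pointer array fits. */
        while (nbuckets > 1 &&
               (size_t) nbuckets * sizeof(void *) > hash_mem_limit)
            nbuckets /= 2;
        return nbuckets;
    }

With the numbers above, a 4 MB budget would halve 4194304 buckets down
to 524288 (a 4 MB pointer array), at the cost of longer bucket chains.
]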

regards, tom lane
