From: Sergey Koposov <Sergey(dot)Koposov(at)ed(dot)ac(dot)uk>
To: "tgl(at)sss(dot)pgh(dot)pa(dot)us" <tgl(at)sss(dot)pgh(dot)pa(dot)us>
Cc: "pgsql-bugs(at)lists(dot)postgresql(dot)org" <pgsql-bugs(at)lists(dot)postgresql(dot)org>
Subject: Re: BUG #18909: Query creates millions of temporary files and stalls
Date: 2025-05-03 16:52:21
Message-ID: 80bab58be182e80e73b5d5f71664ede9ce58f957.camel@ed.ac.uk
Lists: pgsql-bugs
On Sat, 2025-05-03 at 12:27 -0400, Tom Lane wrote:
> Sergey Koposov <Sergey(dot)Koposov(at)ed(dot)ac(dot)uk> writes:
> > #8 0x00005615d84f6a59 in ExecHashTableInsert (hashtable=0x5615da85e5c0, slot=0x5615da823378, hashvalue=2415356794)
> > at nodeHash.c:1714
> > shouldFree = true
> > tuple = 0x5615da85f5e8
> > bucketno = 32992122
> > batchno = 3521863
>
> Yeah, this confirms the idea that the hashtable has exploded into an
> unreasonable number of buckets and batches. I don't know why a
> parallel hash join would be more prone to do that than a non-parallel
> one, though. I'm hoping some of the folks who worked on PHJ will
> look at this.
>
Thanks. (For what it's worth, batchno = 3521863 in the backtrace lines up with the "millions of temporary files" in the original report, since each batch beyond the first spills to its own temp files.)
> What have you got work_mem set to? I hope it's fairly large, if
> you need to join such large tables.
>
Here are my memory settings:
shared_buffers = 32GB
work_mem = 1GB
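
For reference, this is roughly how I check what the server is actually using and how the planner sizes the hash. It's a sketch only; big_a, big_b, and id below are placeholder names, not the actual tables from my query:

    -- Confirm the live settings:
    SHOW work_mem;
    SHOW shared_buffers;

    -- On a cut-down version of the join, EXPLAIN ANALYZE reports the
    -- Hash node's "Buckets: ... Batches: ..." line, which is where an
    -- explosion like the one in the backtrace would show up:
    EXPLAIN (ANALYZE, BUFFERS)
    SELECT count(*)
    FROM big_a a
    JOIN big_b b USING (id);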
S