| From: | Jeff Davis <pgsql(at)j-davis(dot)com> |
|---|---|
| To: | Tomas Vondra <tomas(dot)vondra(at)2ndquadrant(dot)com> |
| Cc: | pgsql-hackers(at)postgresql(dot)org |
| Subject: | Re: Trouble with hashagg spill I/O pattern and costing |
| Date: | 2020-05-20 04:15:40 |
| Message-ID: | acad6f2ba8d560b9dfa9a564c7b7d3162abf8347.camel@j-davis.com |
| Lists: | pgsql-hackers |
On Tue, 2020-05-19 at 19:53 +0200, Tomas Vondra wrote:
> And is there a way to pre-allocate larger chunks? Presumably we could
> assign the blocks to tapes in larger chunks (e.g. 128kB, i.e. 16 x 8kB)
> instead of just a single block. I haven't seen anything like that in
> tape.c, though ...
It turned out to be simple (at least as a POC), so I threw together a
patch. I just added a 32-element array of block numbers to each tape.
When we need a new block, we take a block number from that array; if
it's empty, we refill it by calling ltsGetFreeBlock() 32 times.
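To make that concrete, here's a minimal sketch of the idea. The field
names, ltsGetPreallocBlock(), and TAPE_PREALLOC_SIZE are illustrative
names for this email, not necessarily what the patch uses; only
ltsGetFreeBlock() is an existing logtape.c function:

```c
/*
 * Illustrative sketch of per-tape block preallocation. Names other
 * than ltsGetFreeBlock() are hypothetical.
 */
#define TAPE_PREALLOC_SIZE 32

typedef struct LogicalTape
{
    /* ... existing per-tape state ... */
    long    prealloc[TAPE_PREALLOC_SIZE];   /* batch of free block numbers */
    int     nprealloc;                      /* entries still unused */
} LogicalTape;

/* Return the next block number for this tape, refilling in batches. */
static long
ltsGetPreallocBlock(LogicalTapeSet *lts, LogicalTape *lt)
{
    if (lt->nprealloc == 0)
    {
        /* Batch is empty: grab 32 free blocks in one go. */
        for (int i = 0; i < TAPE_PREALLOC_SIZE; i++)
            lt->prealloc[i] = ltsGetFreeBlock(lts);
        lt->nprealloc = TAPE_PREALLOC_SIZE;
    }
    /* Hand blocks out in the order they were obtained. */
    return lt->prealloc[TAPE_PREALLOC_SIZE - lt->nprealloc--];
}
```

Because each batch is fetched in one go, each tape's writes land on
runs of consecutive blocks instead of interleaving with other tapes.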
I reproduced the problem at a smaller scale (330M groups, ~30GB of
memory on a 16GB box), with work_mem=64MB. The query is a simple
DISTINCT.
Unpatched master:
    Sort:    250s
    HashAgg: 310s

Patched master:
    Sort:    245s
    HashAgg: 262s
That's a nice improvement for such a simple patch. We could tweak the
number of blocks to preallocate, or do other things like double the
batch size from a small number up to a maximum (see the sketch below).
Also, a proper patch would probably release the preallocated blocks
back to the free list when the tape is rewound.
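The doubling idea might look something like this, assuming the batch
size is tracked in a prealloc_size field (hypothetical, like the other
names above; Min() is the existing macro from c.h):

```c
/*
 * Hypothetical refinement: start with a small batch and double it on
 * each refill, up to a cap, so tapes with little data don't hoard
 * free blocks. The prealloc array would be sized TAPE_PREALLOC_MAX.
 */
#define TAPE_PREALLOC_MIN   8
#define TAPE_PREALLOC_MAX   128

if (lt->nprealloc == 0)
{
    if (lt->prealloc_size == 0)
        lt->prealloc_size = TAPE_PREALLOC_MIN;      /* first refill */
    else
        lt->prealloc_size = Min(lt->prealloc_size * 2,
                                TAPE_PREALLOC_MAX); /* grow up to cap */

    for (int i = 0; i < lt->prealloc_size; i++)
        lt->prealloc[i] = ltsGetFreeBlock(lts);
    lt->nprealloc = lt->prealloc_size;
}
```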
As long as the number of block numbers to preallocate is not too large,
I don't think we need to change the API. It seems fine for sort to do
the same thing, even though there's not any benefit.
Regards,
Jeff Davis
| Attachment | Content-Type | Size |
|---|---|---|
| logtape-prealloc.patch | text/x-patch | 1.7 KB |