From: | "Hou, Zhijie" <houzj(dot)fnst(at)cn(dot)fujitsu(dot)com> |
---|---|
To: | PostgreSQL Developers <pgsql-hackers(at)lists(dot)postgresql(dot)org> |
Subject: | Some comment problem in nodeAgg.c |
Date: | 2020-09-30 05:14:33 |
Message-ID: | 451c212277b84af59e603865beab7ca5@G08CNEXMBPEKD05.g08.fujitsu.local |
Lists: | pgsql-hackers |
Hi,

While looking into the hash-spill-to-disk code, I found some comments in nodeAgg.c that may not have been updated.

1. The function lookup_hash_entry() has been deleted, but some comments still refer to it:
* and is packed/unpacked in lookup_hash_entry() / agg_retrieve_hash_table()
...
* GROUP BY columns. The per-group data is allocated in lookup_hash_entry(),
...
* Be aware that lookup_hash_entry can reset the tmpcontext.
2. Now that hash_mem_multiplier can be used to set hashagg's memory limit, the comment in hash_agg_set_limits() still says "work mem":
/*
* Don't set the limit below 3/4 of hash_mem. In that case, we are at the
* minimum number of partitions, so we aren't going to dramatically exceed
* ## work mem ## anyway.
Should this say hash_mem here?
Best regards,
houzj