Re: Optimize planner memory consumption for huge arrays

From: Andrei Lepikhov <a(dot)lepikhov(at)postgrespro(dot)ru>
To: Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us>, Tomas Vondra <tomas(dot)vondra(at)enterprisedb(dot)com>
Cc: Ashutosh Bapat <ashutosh(dot)bapat(dot)oss(at)gmail(dot)com>, pgsql-hackers(at)lists(dot)postgresql(dot)org, Евгений Бредня <e(dot)brednya(at)postgrespro(dot)ru>
Subject: Re: Optimize planner memory consumption for huge arrays
Date: 2024-02-20 04:41:17
Message-ID: 3f8fde0c-b4aa-4e36-9113-604ef6e20cb2@postgrespro.ru
Lists: pgsql-hackers

On 20/2/2024 04:51, Tom Lane wrote:
> Tomas Vondra <tomas(dot)vondra(at)enterprisedb(dot)com> writes:
>> On 2/19/24 16:45, Tom Lane wrote:
>>> Tomas Vondra <tomas(dot)vondra(at)enterprisedb(dot)com> writes:
>>>> For example, I don't think we expect selectivity functions to allocate
>>>> long-lived objects, right? So maybe we could run them in a dedicated
>>>> memory context, and reset it aggressively (after each call).
> Here's a quick and probably-incomplete implementation of that idea.
> I've not tried to study its effects on memory consumption, just made
> sure it passes check-world.
Thanks for the sketch. The trick with planner_tmp_cxt_depth looks
especially interesting.
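Just to check that I read it correctly, here is a minimal sketch of the
idea as I understand it (the wrapper function below is hypothetical; only
the planner_tmp_cxt / planner_tmp_cxt_depth names come from your patch):

#include "postgres.h"
#include "optimizer/optimizer.h"
#include "utils/memutils.h"

/*
 * Scratch context for selectivity estimation, reset after each top-level
 * call.  The depth counter prevents a recursive estimation call from
 * resetting memory that an outer level is still using.
 */
static MemoryContext planner_tmp_cxt = NULL;
static int planner_tmp_cxt_depth = 0;

static Selectivity
clause_selectivity_tmp(PlannerInfo *root, Node *clause, int varRelid,
                       JoinType jointype, SpecialJoinInfo *sjinfo)
{
    MemoryContext oldcxt;
    Selectivity result;

    if (planner_tmp_cxt == NULL)
        planner_tmp_cxt = AllocSetContextCreate(CurrentMemoryContext,
                                                "selectivity scratch",
                                                ALLOCSET_SMALL_SIZES);

    planner_tmp_cxt_depth++;
    oldcxt = MemoryContextSwitchTo(planner_tmp_cxt);

    result = clause_selectivity(root, clause, varRelid, jointype, sjinfo);

    MemoryContextSwitchTo(oldcxt);

    /* Only the outermost caller may throw the scratch memory away. */
    if (--planner_tmp_cxt_depth == 0)
        MemoryContextReset(planner_tmp_cxt);

    return result;
}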
I think we should design small memory contexts - one per scalable source
of memory consumption, such as selectivity estimation or partitioning
(Append planning?).
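Roughly like this (a hypothetical struct; the field names are only
placeholders):

/*
 * One short-lived scratch context per memory-hungry planner area, so
 * each can be reset independently at a well-defined point.
 */
typedef struct PlannerScratch
{
    MemoryContext selectivity_cxt;  /* reset after each estimation round */
    MemoryContext partprune_cxt;    /* reset after partition pruning */
    MemoryContext append_cxt;       /* reset after Append path building */
} PlannerScratch;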
My coding experience shows that the short-lived GEQO memory context forces
people to learn Postgres internals more thoroughly :).

--
regards,
Andrei Lepikhov
Postgres Professional
