Re: Optimize planner memory consumption for huge arrays

From: Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us>
To: Tomas Vondra <tomas(dot)vondra(at)enterprisedb(dot)com>
Cc: Lepikhov Andrei <a(dot)lepikhov(at)postgrespro(dot)ru>, Ashutosh Bapat <ashutosh(dot)bapat(dot)oss(at)gmail(dot)com>, pgsql-hackers(at)lists(dot)postgresql(dot)org, Евгений Бредня <e(dot)brednya(at)postgrespro(dot)ru>
Subject: Re: Optimize planner memory consumption for huge arrays
Date: 2024-02-19 15:45:12
Message-ID: 4095836.1708357512@sss.pgh.pa.us
Lists: pgsql-hackers

Tomas Vondra <tomas(dot)vondra(at)enterprisedb(dot)com> writes:
> Considering there are now multiple patches improving memory usage during
> planning with partitions, perhaps it's time to take a step back and
> think about how we manage (or rather don't manage) memory during query
> planning, and see if we could improve that instead of an infinite
> sequence of ad hoc patches?

+1, I've been getting an itchy feeling about that too. I don't have
any concrete proposals ATM, but I quite like your idea here:

> For example, I don't think we expect selectivity functions to allocate
> long-lived objects, right? So maybe we could run them in a dedicated
> memory context, and reset it aggressively (after each call).
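
Concretely, I'm imagining something like the sketch below, using the
existing MemoryContext API (the particular call shown and the context
placement are illustrative only, not a worked-out proposal):

    MemoryContext selcxt;
    MemoryContext oldcxt;
    Selectivity sel;

    /*
     * Run the estimator inside a short-lived context, so anything it
     * allocates along the way can be discarded wholesale afterwards.
     */
    selcxt = AllocSetContextCreate(CurrentMemoryContext,
                                   "selectivity scratch",
                                   ALLOCSET_SMALL_SIZES);
    oldcxt = MemoryContextSwitchTo(selcxt);

    sel = restriction_selectivity(root, operatorid, args,
                                  inputcollid, varRelid);

    MemoryContextSwitchTo(oldcxt);
    MemoryContextReset(selcxt);  /* drop whatever the estimator leaked */

Since Selectivity is pass-by-value, nothing ought to need to survive
the reset -- modulo any estimator that squirrels away pointers into
the current context, which we'd have to audit for.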

That could eliminate a whole lot of potential leaks. I'm not sure
though how much it moves the needle in terms of overall planner memory
consumption. I've always supposed that the big problem was data
structures associated with rejected Paths, but I might be wrong.
Is there some simple way we could get a handle on where the most
memory goes while planning?
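
The bluntest instrument I can think of is to measure context growth
around the expensive phases. For instance, spliced into
query_planner() around path construction (purely a sketch, and the
instrumentation shown is hypothetical, not anything in the tree):

    /* Hypothetical instrumentation: attribute memory to Path building. */
    Size    before = MemoryContextMemAllocated(CurrentMemoryContext, true);

    final_rel = make_one_rel(root, joinlist);

    elog(DEBUG1, "path construction allocated %zu bytes",
         MemoryContextMemAllocated(CurrentMemoryContext, true) - before);

That would at least tell us whether the rejected-Paths theory holds
water before anyone invests in redesigning the context structure.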

regards, tom lane
