Re: Limiting memory allocation

From: Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us>
To: Bruce Momjian <bruce(at)momjian(dot)us>
Cc: Robert Haas <robertmhaas(at)gmail(dot)com>, Tomas Vondra <tomas(dot)vondra(at)enterprisedb(dot)com>, Stephen Frost <sfrost(at)snowman(dot)net>, Oleksii Kliukin <alexk(at)hintbits(dot)com>, Álvaro Herrera <alvherre(at)alvh(dot)no-ip(dot)org>, Jan Wieck <jan(at)wi3ck(dot)info>, Postgres hackers <pgsql-hackers(at)lists(dot)postgresql(dot)org>
Subject: Re: Limiting memory allocation
Date: 2022-05-24 23:40:45
Message-ID: 1755814.1653435645@sss.pgh.pa.us
Lists: pgsql-hackers

Bruce Momjian <bruce(at)momjian(dot)us> writes:
> If the plan output is independent of work_mem,

... it isn't ...
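
To illustrate: the same query can get a different plan purely as a result
of changing work_mem, because the planner costs memory-hungry strategies
against that limit. A minimal sketch (the table names here are made up,
and the plans actually chosen will depend on your data and statistics):

    CREATE TABLE orders (id int PRIMARY KEY, customer_id int);
    CREATE TABLE customers (id int PRIMARY KEY, name text);

    -- With plenty of work_mem the planner may pick a hash join ...
    SET work_mem = '256MB';
    EXPLAIN SELECT * FROM orders o JOIN customers c ON o.customer_id = c.id;

    -- ... while with a tiny work_mem it may switch to a merge join or a
    -- nestloop + inner index scan, which needs little working memory.
    SET work_mem = '64kB';
    EXPLAIN SELECT * FROM orders o JOIN customers c ON o.customer_id = c.id;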

> I always wondered why we
> didn't just determine the number of simultaneous memory requests in the
> plan and just allocate accordingly, e.g. if there are four simultaneous
> memory requests in the plan, each gets work_mem/4.

(1) There is no predetermined number of allocations. For example,
if we do a given join as a nestloop + inner index scan, that doesn't
require any large amount of memory; but if we do it as a merge or hash
join then it will consume memory.

(2) They may not all need the same amount of memory, e.g. different
joins might be working on different amounts of data.
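
To see (2) concretely, EXPLAIN ANALYZE reports per-node memory use, and
the nodes in a single plan rarely want equal shares. A hedged sketch
using the same made-up tables as above:

    SET work_mem = '64MB';
    EXPLAIN (ANALYZE, BUFFERS)
    SELECT c.name, count(*)
    FROM orders o JOIN customers c ON o.customer_id = c.id
    GROUP BY c.name
    ORDER BY c.name;
    -- The Hash node (built over the smaller input), the aggregate, and
    -- any Sort node will typically report quite different memory figures,
    -- even though each works under the same per-operation limit rather
    -- than a share of a query-wide budget.

So dividing work_mem evenly across the memory-consuming nodes would
starve the big ones while wasting the small ones' shares.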

If this were an easy problem to solve, we'd have solved it decades
ago.

regards, tom lane
