Re: Add the ability to limit the amount of memory that can be allocated to backends.

From: David Rowley <dgrowleyml(at)gmail(dot)com>
To: reid(dot)thompson(at)crunchydata(dot)com, Thomas Munro <thomas(dot)munro(at)gmail(dot)com>
Cc: pgsql-hackers <pgsql-hackers(at)lists(dot)postgresql(dot)org>
Subject: Re: Add the ability to limit the amount of memory that can be allocated to backends.
Date: 2022-09-02 08:52:54
Message-ID: CAApHDvrzacGEA1ZRea2aio_LAi2fQcgoK74bZGfBddg4ymW-Ow@mail.gmail.com
Lists: pgsql-hackers

On Thu, 1 Sept 2022 at 04:52, Reid Thompson
<reid(dot)thompson(at)crunchydata(dot)com> wrote:
> Add the ability to limit the amount of memory that can be allocated to
> backends.

Are you aware that relcache entries are stored in backend-local memory,
and that once we've added a relcache entry for a relation, we currently
have no code which attempts to reduce the memory consumed by cache
entries when there's memory pressure?
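
As a rough illustration (assuming PostgreSQL 14 or later, where the
pg_backend_memory_contexts view exists, and using made-up table names),
you can watch CacheMemoryContext grow as a backend touches more
relations, and nothing tries to give that memory back under pressure:

CREATE TABLE t1 (a int);
CREATE TABLE t2 (a int);

-- Touching the tables creates relcache entries in this backend.
SELECT count(*) FROM t1;
SELECT count(*) FROM t2;

-- Backend-local cache memory; no code releases this when memory is tight.
SELECT name, used_bytes, total_bytes
FROM pg_backend_memory_contexts
WHERE name = 'CacheMemoryContext';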

It seems to me that if we had this feature as you propose, a backend
could hit the limit and stay there just from the memory requirements of
the relation cache after some number of tables have been accessed from
the given backend. It's not hard to imagine a situation where palloc()
would start to fail during parse, which might make it quite infuriating
for anyone trying to do something like:

SET max_total_backend_memory TO 0;

or

ALTER SYSTEM SET max_total_backend_memory TO 0;

I think a better solution to this problem would be to have "memory
grants", where we configure some amount of "pool" memory that backends
are allowed to use for queries. The planner would have to add up the
expected number of work_mems that the given query is expected to use,
and before that query starts, the executor would have to check out that
amount of memory from the pool and return it when finished. If there is
not enough memory in the pool, then the query would have to wait until
enough memory is available. This creates a deadlocking hazard that the
deadlock detector would need to be made aware of.

I know Thomas Munro has mentioned this "memory grant" or "memory pool"
feature to me previously, and I think he even has some work-in-progress
code for it. It's a very tricky problem, however, as aside from the
deadlocking issue, it requires working out how much memory a given plan
will use concurrently. That's not as simple as counting the nodes that
use work_mem and summing those up.
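
To give a sense of the accounting problem (just a hypothetical sketch
with made-up tables), a single query can easily contain more than one
work_mem consumer, and summing them is only a rough bound on what the
plan actually holds at any one moment:

CREATE TABLE customers (id int, name text);
CREATE TABLE orders (id int, customer_id int, total numeric);

SET work_mem = '64MB';

EXPLAIN
SELECT c.name, o.total
FROM orders o
JOIN customers c ON c.id = o.customer_id  -- may build an in-memory hash table
ORDER BY o.total DESC;                     -- may do an in-memory sort

-- A naive grant of 2 x work_mem over-reserves if the hash and the sort
-- don't hold their peak memory at the same time, and under-reserves if,
-- say, the plan goes parallel and each worker uses its own work_mem.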

There is some discussion about the feature in [1]. I was unable to
find what Thomas mentioned on the list about this. I've included him
here in case he has any extra information to share.

David

[1] https://www.postgresql.org/message-id/flat/20220713222342.GE18011%40telsasoft.com#b4f526aa8f2c893567c1ecf069f9e6c7
