Re: Add the ability to limit the amount of memory that can be allocated to backends.

From: Andrei Lepikhov <a(dot)lepikhov(at)postgrespro(dot)ru>
To: reid(dot)thompson(at)crunchydata(dot)com, Arne Roland <A(dot)Roland(at)index(dot)de>, Andres Freund <andres(at)anarazel(dot)de>
Cc: vignesh C <vignesh21(at)gmail(dot)com>, "pgsql-hackers(at)lists(dot)postgresql(dot)org" <pgsql-hackers(at)lists(dot)postgresql(dot)org>, Justin Pryzby <pryzby(at)telsasoft(dot)com>, Ibrar Ahmed <ibrar(dot)ahmad(at)gmail(dot)com>, "stephen(dot)frost" <stephen(dot)frost(at)crunchydata(dot)com>
Subject: Re: Add the ability to limit the amount of memory that can be allocated to backends.
Date: 2023-09-29 02:52:47
Message-ID: f62b4107-227d-49e7-a9be-104d1ceb03d9@postgrespro.ru
Lists: pgsql-hackers

On 22/5/2023 22:59, reid(dot)thompson(at)crunchydata(dot)com wrote:
> Attach patches updated to master.
> Pulled from patch 2 back to patch 1 a change that was also pertinent to patch 1.
+1 to the idea, but I have doubts about the implementation.

I have a question. I see that the feature raises an ERROR when the memory
limit is exceeded. The enclosing PG_CATCH() section will then handle the
error. As far as I can see, many such sections allocate memory themselves.
What if some routine, like CopyErrorData(), exceeds the limit too? In that
case, we could keep re-raising the error all the way up to the top-level
PG_CATCH(). Is this the intended behaviour? Maybe
exceeds_max_total_bkend_mem() should check for such recursion and allow
error handlers to slightly exceed this hard limit?
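
To illustrate the kind of guard I have in mind, here is a standalone
sketch; the names, the flag and the headroom figure are mine, not taken
from the patch:

#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

/*
 * Standalone model of the idea: while an error is being handled, allow
 * allocations to overrun the hard limit by a bounded headroom, so that
 * CopyErrorData() and friends in PG_CATCH() do not hit the limit again
 * and recurse.
 */

#define MEM_LIMIT       (4u * 1024 * 1024)   /* hypothetical hard limit */
#define ERROR_HEADROOM  (256u * 1024)        /* extra room for handlers */

static uint64_t total_allocated;    /* backend-wide total, per the patch's idea */
static bool     in_error_handler;   /* would be set while recovering from an error */

static bool
exceeds_limit(uint64_t request)
{
    uint64_t limit = MEM_LIMIT;

    /* Error handlers may slightly exceed the hard limit. */
    if (in_error_handler)
        limit += ERROR_HEADROOM;

    return total_allocated + request > limit;
}

int
main(void)
{
    total_allocated = MEM_LIMIT;        /* pretend we are exactly at the limit */

    in_error_handler = false;
    printf("normal path rejects 64kB: %d\n", exceeds_limit(64 * 1024));

    in_error_handler = true;            /* simulate being inside PG_CATCH() */
    printf("error path rejects 64kB:  %d\n", exceeds_limit(64 * 1024));

    return 0;
}

A real implementation would have to set and clear such a flag around
error recovery, of course; the sketch only shows the intent.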

Also, the patch needs to be rebased.

--
regards,
Andrey Lepikhov
Postgres Professional
